An increasing number of people are sharing medical images with AI chatbots like ChatGPT, Gemini, and Grok for health advice. Security experts warn this practice poses significant privacy risks.
While users are free to share their own medical data, uploaded information may be used to train AI models, often without clear disclosure of how it will be used or who can access it. Companies can also change their data policies at any time.
Private medical records have already been found in AI training datasets, potentially exposing sensitive information to future healthcare providers, employers, or government agencies. Most consumer AI apps are not covered by HIPAA, leaving uploaded medical data without that law's protections.
X’s owner Elon Musk has encouraged users to share medical scans with Grok to improve its accuracy. However, X’s privacy policy indicates user data may be shared with unspecified “related” companies.
Experts emphasize that information uploaded to the internet may remain accessible indefinitely and be used in ways users never anticipated.
The key advice remains: think carefully before sharing any private medical information with AI platforms.