Security Stop-Press: ChatGPT Health Brings New Data Security Risks

Written by: Paul

OpenAI has launched ChatGPT Health, a dedicated space for health and wellness conversations that allows users to link personal health data, raising fresh security and privacy concerns around highly sensitive information.

OpenAI says more than 230 million people ask health-related questions on ChatGPT each week, prompting the creation of a separate Health environment with additional protections. Health conversations are isolated from standard chats, encrypted, and excluded from model training, while users can connect data from apps such as Apple Health and other wellness platforms with explicit consent.

Despite these safeguards, ChatGPT Health concentrates medical history, lifestyle data, and behavioural context into a single AI account. If that account is compromised through phishing, a weak password, or reused credentials, attackers could gain access to deeply personal health information rather than just general chat content. OpenAI also stresses that Health is not intended for diagnosis or treatment, as large language models can still produce inaccurate or misleading responses.

For businesses, the risk lies in staff using AI tools to handle sensitive personal data on accounts that may not be properly secured. Strong password policies, mandatory multi-factor authentication, and clear guidance on linking personal data to AI services are essential steps to reduce exposure as consumer health features increasingly overlap with everyday work technology.