According to 9to5Mac, OpenAI has officially launched a new feature called ChatGPT Health, which integrates with Apple Health, MyFitnessPal, and other data partners. The company says over 230 million users worldwide ask ChatGPT health questions every single week, and this new dedicated space is meant to serve them. It was developed over two years with guidance from more than 260 physicians across 60 countries, who provided feedback on model outputs over 600,000 times. OpenAI is explicit that the feature is meant to “support, not replace, medical care” and is not for diagnosis or treatment. It’s rolling out to a limited set of users initially, and all data sharing with partners like Apple Health is opt-in.
How it works and why it matters
So here’s the basic idea. Instead of just asking ChatGPT a random question about a symptom, you can grant it permission to read data from your connected apps. Think Apple Health for your heart rate and sleep, or MyFitnessPal for your nutrition. The AI then uses that context to give you a more personalized, informed answer. It’s a logical step, really. A generic answer about exercise is less useful than one that knows you walked 10,000 steps yesterday.
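For a sense of what that plumbing might look like on the Apple side, here’s a minimal Swift sketch that requests read access to a single HealthKit metric (step count) and folds the result into a prompt. The HealthKit calls are Apple’s public API; everything about how ChatGPT Health actually packages and consumes the data is my assumption, not something OpenAI has documented.

```swift
import HealthKit

// Hypothetical client-side flow: ask permission, read one metric,
// attach it to the user's question as context.
let healthStore = HKHealthStore()
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

healthStore.requestAuthorization(toShare: nil, read: [stepType]) { granted, _ in
    guard granted else { return } // user declined; nothing is read or sent

    // Sum yesterday's steps.
    let calendar = Calendar.current
    let end = calendar.startOfDay(for: Date())
    let start = calendar.date(byAdding: .day, value: -1, to: end)!
    let predicate = HKQuery.predicateForSamples(withStart: start, end: end,
                                                options: .strictStartDate)

    let query = HKStatisticsQuery(quantityType: stepType,
                                  quantitySamplePredicate: predicate,
                                  options: .cumulativeSum) { _, stats, _ in
        let steps = stats?.sumQuantity()?.doubleValue(for: .count()) ?? 0
        // Fold the reading into the prompt as plain-text context.
        let prompt = """
        Context: the user walked \(Int(steps)) steps yesterday.
        Question: how much should I exercise today?
        """
        print(prompt) // a real client would send this along to the model
    }
    healthStore.execute(query)
}
```

The key point the sketch captures is that nothing is read until the user grants access, which mirrors the opt-in model OpenAI describes.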
The privacy tightrope
The elephant in the room, of course, is privacy. Handing your health data to an AI company is a massive ask. OpenAI seems to know this, which is why they’re putting this feature in a walled-off section with “unique privacy enhancements.” They claim chats in ChatGPT Health won’t be used to train their models, and that a “dedicated team” will review conversations for safety. The proof will be in the pudding: users will have to decide if they trust OpenAI’s safeguards, especially when the core value proposition is sharing your most sensitive data. Making the sharing opt-in for each individual partner is the right call. But will people understand what they’re opting into?
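For what it’s worth, the per-partner opt-in OpenAI describes maps onto a pretty simple deny-by-default consent model. Here’s a toy sketch; the type names are invented for illustration, since OpenAI hasn’t published how consent is actually stored.

```swift
// Hypothetical consent model: every partner starts disconnected,
// and there is no "share all" switch.
enum HealthDataPartner: String, CaseIterable {
    case appleHealth = "Apple Health"
    case myFitnessPal = "MyFitnessPal"
}

struct SharingConsent {
    private var enabled: Set<HealthDataPartner> = []

    mutating func optIn(to partner: HealthDataPartner) { enabled.insert(partner) }
    mutating func revoke(_ partner: HealthDataPartner) { enabled.remove(partner) }

    func mayRead(from partner: HealthDataPartner) -> Bool {
        enabled.contains(partner)
    }
}

var consent = SharingConsent()
consent.optIn(to: .appleHealth)       // explicit, per-partner action
_ = consent.mayRead(from: .myFitnessPal) // false: never opted in
```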
Not a doctor, but a research assistant
The most critical line in the whole announcement is that this is “not intended for diagnosis or treatment.” OpenAI is walking a very fine line. They want the feature to be helpful enough that people rely on it, but not so authoritative that they skip seeing a real doctor. That’s why they spent two years working with physicians on how to phrase things, like when to urgently tell someone to seek help. Basically, they’re trying to build the world’s most advanced WebMD that can read your biometrics. It’s a powerful concept, but the risk of misinterpretation, or of users ignoring the disclaimers, is huge. Can an AI truly know when a combination of symptoms is serious? I’m skeptical, but the 260-plus physicians they consulted probably helped set some hard boundaries.
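To make the escalation idea concrete, here’s a toy version of the kind of guardrail they might run before the model answers. The red-flag list, the wording, and the answerWithHealthContext placeholder are all invented; a real system would presumably use classifiers shaped by that physician feedback, not a keyword match.

```swift
// Toy escalation guardrail: screen the message for red-flag symptoms
// and put the urgent advice first, before any general information.
let redFlags = ["chest pain", "shortness of breath", "slurred speech"]

func answerWithHealthContext(_ message: String) -> String {
    // Placeholder for the normal, non-diagnostic model reply.
    "General information, with a reminder to consult a clinician."
}

func respond(to message: String) -> String {
    let lowered = message.lowercased()
    if redFlags.contains(where: { lowered.contains($0) }) {
        return "These symptoms can be serious. Please seek medical care now."
    }
    return answerWithHealthContext(message)
}
```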
The bigger picture
This move isn’t just about health. It’s a blueprint for how OpenAI wants to handle sensitive, vertical-specific data in the future. A dedicated, privacy-focused space for your health info today could become a similar space for your legal or financial data tomorrow. By partnering with established data platforms like Apple Health, they get instant credibility and a huge trove of structured information. The limited rollout is smart: they need to see how real people use this without causing a PR disaster. If it works, it could become a killer app for ChatGPT Plus. If it stumbles on privacy or gives bad advice, the backlash will be severe. You can read OpenAI’s full announcement for the details. What do you think? Would you connect your health data?
