According to TechCrunch, OpenAI announced a new product called ChatGPT Health on Wednesday. The company says over 230 million users ask health and wellness questions on its platform each week, which prompted the creation of this dedicated space. The feature will silo health chats away from general conversations, though it can reference context from them. It will also integrate with personal data from apps like Apple Health and MyFitnessPal. OpenAI’s CEO of Applications, Fidji Simo, framed it as a response to healthcare cost, access, and continuity issues. The feature is expected to roll out in the coming weeks.
The huge gamble
Here’s the thing: this is an absolutely massive, and frankly risky, bet. OpenAI is essentially saying, “We know you’re already using our product for this incredibly sensitive, high-stakes topic, so here’s a slightly better version.” They’re leaning into a behavior that already exists. The promise of not using Health data for training and keeping chats separate is a necessary first step for trust. But let’s be real: it doesn’t solve the fundamental problem. As OpenAI admits in its own terms, the feature is “not intended for use in the diagnosis or treatment of any health condition.” So what is it for? Basically, the gray area of “wellness” and general information that people will inevitably use to try to self-diagnose. That’s a dangerous line to walk.
Winners, losers, and hallucinations
In terms of market impact, this immediately puts pressure on every other “health AI” startup and even big telehealth players. If 230 million people a week are already going to ChatGPT, that’s a user base nobody can ignore. The potential losers? Maybe symptom checker websites and lower-tier wellness apps. The winners could be data integrators like Apple Health, which become essential pipes for this new AI layer. But the elephant in the room is, and always will be, hallucinations. An LLM doesn’t know truth; it predicts plausible text. When that text is about your chest pain or a weird rash, “plausible” isn’t good enough. It’s a core limitation they can’t currently engineer away. So they’re building a sleek, integrated experience on top of a fundamentally unpredictable engine. That’s a tough sell in healthcare.
The regulatory mountain ahead
And then there’s the FDA, HIPAA, and a whole world of medical device regulation. Fidji Simo’s blog post is careful to position this as an assistant, not a provider. But you can bet regulators are watching this launch like hawks. Every cautious disclaimer in OpenAI’s announcement is a pre-emptive shield against the liability storm that will come after the first high-profile case of bad AI health advice. They’re trying to have it both ways: solve real healthcare problems without taking on any of the legal responsibility of being a healthcare entity. I think that position will become unsustainable very quickly. Either they pull back, or they get dragged into the regulatory fray. There’s no middle ground in medicine.
