Dr. Sina Bari, a practicing surgeon and AI healthcare leader at data company iMerit, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice.
“I recently had a patient come in, and when I recommended a medication, they had a dialogue printed out from ChatGPT that said this medication has a 45% chance of pulmonary embolism,” Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic was from a paper about the impact of that medication in a niche subgroup of people with tuberculosis, which didn’t apply to his patient.
And yet, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will roll out in the coming weeks, allows users to talk to the chatbot about their health in a more private setting, where their messages won’t be used as training data for the underlying AI model.
“I think it’s great,” Dr. Bari said. “It is something that’s already happening, so formalizing it so as to protect patient information and put some safeguards around it […] is going to make it all the more powerful for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
“All of a sudden there’s medical data transferring from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. “So I’m curious to see how the regulators would approach this.”
But the way some industry professionals see it, the cat is already out of the bag. Instead of Googling cold symptoms, people now talk to AI chatbots: more than 230 million people already discuss their health with ChatGPT each week.
“This was one of the biggest use cases of ChatGPT,” Andrew Brackin, a partner at Gradient who invests in health tech, told TechCrunch. “So it makes a lot of sense that they would want to build a more kind of private, secure, optimized version of ChatGPT for these health care questions.”
AI chatbots have a persistent problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucinations than many Google and Anthropic models. But AI companies see the potential to rectify inefficiencies in the healthcare space (Anthropic also announced a health product this week).
For Dr. Nigam Shah, a medicine professor at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more urgent than the threat of ChatGPT dispensing poor advice.
“Right now, you go to any health system and you want to meet the primary care doctor – the wait time will be three to six months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or talk to something that is not a doctor but can do some things for you, which would you pick?”
Dr. Shah sees a clearer route for introducing AI into healthcare on the provider side, rather than the patient side.
Medical journals have repeatedly reported that administrative tasks can consume about half of a primary care physician’s time, cutting into the number of patients they can see in a given day. If that work could be automated, doctors could see more patients, perhaps reducing the need for people to turn to tools like ChatGPT Health without a doctor’s input.
Dr. Shah leads a team at Stanford that is developing ChatEHR, software built into the electronic health record (EHR) system that lets clinicians interact with a patient’s medical records in a more streamlined, efficient way.
“Making the electronic medical record more user friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” Dr. Sneha Jain, an early tester of ChatEHR, said in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what matters — talking to patients and figuring out what’s going on.”
Beyond its public-facing Claude chatbot, Anthropic is also building AI products for the clinician and insurer sides. This week, the company announced Claude for Healthcare, pitching it as a way to cut the time spent on tedious administrative tasks, like submitting prior authorization requests to insurance providers.
“Some of you see hundreds, thousands of these prior authorization cases a week,” said Anthropic CPO Mike Krieger in a recent presentation at J.P. Morgan’s Healthcare Conference. “So imagine cutting twenty, thirty minutes out of each of them – it’s a dramatic time savings.”
As AI and medicine become more intertwined, there’s an inescapable tension between the two worlds: a doctor’s primary incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even if their intentions are noble.
“I think that tension is an important one,” Dr. Bari said. “Patients rely on us to be cynical and conservative in order to protect them.”