ChatGPT Health lets you connect medical records to an AI that makes things up

AI doctor

But despite OpenAI’s talk of supporting health goals, the company’s terms of service directly state that ChatGPT and other OpenAI services are “not for use in the diagnosis or treatment of any health condition.”

It appears the policy is not changing with ChatGPT Health. “Health is designed to support, not replace, medical care. It is not intended to diagnose or treat. Instead, it helps you navigate everyday questions and understand patterns over time – not just moments of illness – so you can feel more informed and prepared for important medical interactions,” OpenAI writes in its announcement.

A cautionary tale

The SFGate report on Sam Nelson’s death explains why maintaining that disclaimer makes sense legally. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosages in November 2023. The AI assistant initially refused and referred him to health care professionals. But ChatGPT’s responses reportedly changed over 18 months of conversations. Eventually, the chatbot told him things like “Hell yes—let’s go full trippy mode” and advised him to double his cough syrup intake. The day after he entered treatment for drug addiction, his mother found him dead of an overdose.

While Nelson’s case did not involve AI analysis of doctor-provided medical records of the kind ChatGPT Health will link, his case is not unique: many people have been misled by chatbots that provide false information or encourage dangerous behavior, as we’ve covered in the past.

This is because AI language models can easily generate misleading information, and some users find it difficult to distinguish fact from fiction in their outputs. The AI models behind services like ChatGPT use statistical relationships in their training data (such as books, YouTube transcripts, and text from websites) to generate plausible-sounding responses rather than necessarily accurate ones. Additionally, ChatGPT’s output can vary widely depending on who is using the chatbot and what has previously occurred in the user’s chat history (including stored notes about previous chats).
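To make that mechanism concrete, here is a minimal Python sketch of temperature-based next-token sampling, the basic operation a language model performs when writing a reply. The prompt, vocabulary, and probabilities below are invented for illustration and have nothing to do with OpenAI’s actual models or data; the point is only that the sampler rewards statistical plausibility, not truth, and that repeated runs can produce different answers.

```python
# A toy sketch (not OpenAI's implementation) of how a language model picks
# its next word: by sampling from a probability distribution learned from
# training text, favoring plausible continuations over accurate ones.
import random

# Hypothetical next-token probabilities after the prompt
# "The recommended dose is" -- illustrative numbers, not real model weights.
next_token_probs = {
    "one": 0.40,      # statistically common continuation
    "two": 0.30,
    "double": 0.20,   # plausible-sounding but potentially dangerous
    "unknown": 0.10,  # the honest answer is rarely the likeliest token
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Temperature sampling: higher temperatures flatten the distribution,
    making unlikely (and possibly wrong) tokens more probable."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The same prompt can yield different completions on different runs, which
# is one reason identical health questions can get different answers.
for temp in (0.7, 1.5):
    print(temp, [sample_next_token(next_token_probs, temp) for _ in range(5)])
```

Nothing in this loop checks whether a sampled token is medically correct; “accuracy” never enters the calculation, only learned word frequencies.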


