“Talking to AI should be like talking to a lawyer or a doctor. I hope society figures this out soon,” Altman posted to X.
The OpenAI CEO has repeatedly advocated for strong privacy protections for chatbots’ interactions with users, even as states crack down on AI bots advertised as medical or legal experts.
But user privacy isn’t the only reason people like Altman are pushing for a strict shield between chatbot conversations and the courts, legal experts tell Mashable; there’s also a self-serving motivation. Keeping LLM conversations out of the courtroom protects not only AI users but also the companies behind the chatbots. In fact, Altman’s comments may have been inspired by OpenAI’s own legal troubles: courts have sought to compel the AI giant to hand over its user chat logs in legal discovery, an action that might have been blocked had the courts viewed the AI as sufficiently similar to a therapist, doctor, or lawyer.
One way to accomplish that? Push for a cultural shift that treats AI guidance with the same deference as human professionals, starting with our health.
What exactly is “AI privilege”?
“Privilege has a certain meaning for lawyers and in the legal context,” explained Melody Dinser, senior staff attorney at the Tech Justice Law Project. For example, there is the standard attorney-client privilege, as well as psychotherapist-patient privilege and spousal privilege. Courts also recognize protections for communications with clergy, political votes, and trade or state secrets. In all these cases, communications between the two parties are confidential and inadmissible in court proceedings.
States also have their own privilege rules, which govern cases in state courts. Some states privilege conversations with your general practitioner in addition to your psychiatrist, Dinser said, but many do not. Rule 501 of the Federal Rules of Evidence allows federal courts to broadly recognize privileges that state courts already recognize, Dinser explained.
For example, if you are being sued, the other side cannot introduce your therapist’s session notes as evidence, nor confidential conversations between you and your attorney or your spouse.
“The whole purpose of [privilege] is to be able to have clear and open discussions with these providers so they can give you the best advice,” Lily Lee, a data privacy and AI risk management attorney and founder of Metaverse Law, told Mashable. “And from a societal perspective, we want individuals to be candid and open and honest with their attorneys, physicians, and psychologists.”
But those are protections built around human relationships, not digital ones. If an AI chatbot can serve the role of a human therapist or legal advisor, should those communications be protected as well? Some AI developers, like Altman, say yes.
AI Chatbots: Tools or People?
“The OpenAI copyright litigation has brought this into sharp focus,” Lee said, referring to the recent consolidation of 16 copyright cases filed against OpenAI by publishers, artists, and authors over the past few years. The issues at hand, including questions of fair use and how to handle the data used to train LLMs, serve as a kind of temperature gauge for how courts perceive AI.
Because of this, legal experts are closely watching how courts classify AI developers, their products, and the user data they hold. More specifically, they’re tracking how the law treats LLMs in evidence and discovery, including their training data and chat logs.
In February, a federal judge ruled that legal strategy documents prepared with Anthropic’s Claude chatbot, and then sent by a client to their attorney, were not covered by attorney-client privilege. The decision made headlines. The judge relied in part on Anthropic’s own privacy policy to determine whether the chats were confidential. Because Anthropic’s terms do not promise complete confidentiality when using its public product, and because the communications were not made to a licensed attorney with an understanding of confidentiality, privilege did not apply. The documents were fair game.
But that same month, a different judge in a separate, though similar, case ruled the opposite way. There, attorney-client privilege did apply to the AI-generated material, because the output constituted “attorney work product,” according to the judge. In that use case, the chatbot was not a “person” but a tool used by the attorney and client. The distinction matters: if the chatbot were viewed as a third-party entity, the client would be voluntarily sharing confidential information with it in a way that waives privilege.
These are among the earliest federal district court rulings on the matter, including what are called cases of “first impression.” Essentially, no one has asked these questions before, and we’re only in the beginning stages of answering them.
Meanwhile, copyright cases involving OpenAI have raised more questions about discovery and data. Shortly before the two decisions above, OpenAI successfully appealed a ruling that the company had waived its attorney-client privilege, which would have opened up previously privileged data. The company was separately ordered to hand over millions of anonymized ChatGPT conversation logs, as well as internal communications.
Companies like OpenAI have opposed such discovery, citing user privacy. Judges who have ruled in favor of admitting the data have reasoned that removing personally identifiable information, narrowing the scope of the logs, and barring external disclosure make such discovery permissible. The legal landscape is replete with these questions.
Overall, AI developers are pushing to keep their internal data out of discovery. And while user privacy is one of the most important issues of the AI age, figuring out how AI privilege fits into the legal landscape has become a puzzle: How do we protect users’ private data without making it impossible to hold AI’s creators accountable?
“We don’t want a situation where there is a pure liability shield,” Lee said.
A new Mashable series, AI + Health, examines how artificial intelligence is changing the medical and health landscape. We’ll find out how to keep your health data safe, dive into using AI to understand your blood tests, learn how two women are using AI to detect a dangerous form of heart disease, and more.
Health AI is big business
Earlier this year, OpenAI launched ChatGPT Health, a new consumer-facing “mode” for its flagship chatbot that aims to turn the AI into a personal health guru. The company encourages users to upload their medical history to better personalize the experience. That data is not currently protected under the Health Insurance Portability and Accountability Act (HIPAA), the nation’s leading health privacy regulation.
Other companies followed OpenAI’s lead, with Anthropic, Microsoft, and Amazon releasing their own health-oriented chatbot companions (some HIPAA compliant, some not) in the months that followed. OpenAI competitor Google has long invested in medical AI, primarily for physicians and researchers. Google-owned Fitbit offers personal health coaching using an integrated Gemini assistant, and the company is also building a “conversational diagnostic AI agent” known as the Articulate Medical Intelligence Explorer (AMIE).
Altman and his competitors are chasing the profit potential of the healthcare industry even as AI privilege regulation remains nowhere on the horizon. In January, OpenAI acquired health startup Torch, and the Altman-backed Merge Labs, a biotech company focused on brain-computer interfaces (BCIs), received an $850 million valuation.
According to a recent report by Menlo Ventures, $1.4 billion was spent in 2025 on healthcare-specific generative AI solutions, most of it flowing into AI startups. And those figures only include clinical-grade products, tools made by companies like OpenEvidence and Hippocratic AI for medical professionals, not spending on consumer products like ChatGPT Health.
A world of human-chatbot privilege?
For non-clinical-grade products, wellness devices, and non-HIPAA-compliant chatbots, the lack of regulation and legal clarity has many privacy experts concerned. Some believe the uncertain policy landscape could be a boon for AI developers, who may be launching health AI products as a strategic move to capture both profits and legal advantages.
As chatbots accumulate more “confidential” conversations, more privileges may come to be recognized under Rule 501. In states that shield communications with your physician, will AI “doctors” count too? Or consider a less obvious example offered by Dinser: if a user asks a chatbot how they contracted a sexually transmitted infection despite their spouse testing negative, could the prompt and response be introduced as evidence, or would they trigger some other form of protection, such as spousal privilege?
In a hypothetical world with broader AI privileges, or even one where chatbots are folded into existing privilege rules, AI companies could try to shield clear evidence of wrongdoing. For example, if an AI company were sued for misleading individuals about their health, the plaintiffs could not access internal records or chat analyses containing people’s health data.
Perhaps, Dinser suggests, the case for such protections will only grow as more users enter their personal medical records.
This may be part of the reason why, beyond revenue, companies are trying to instill the same kind of trust in AI assistants that we place in human professionals. With many people already consulting AI for their health needs, and companies like OpenAI already facing litigation, it’s no mystery why executives like Altman want to keep chatbot conversations away from the prying eyes of lawyers and judges.
The information contained in this article is for educational and informational purposes only and is not intended to be health or medical advice. Always consult a physician or other qualified health provider with any questions you may have regarding a medical condition or health objectives.
Disclosure: Mashable’s parent company Ziff Davis previously filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.