
Well, it may not be that simple. But Utah is the first state in the nation to test a pilot program that would allow a chatbot to renew prescriptions, including psychiatric drugs, without requiring approval from a doctor. The program will be run by the Y Combinator-backed startup Legion Health and will open for a 12-month pilot period starting this month.
Legion Health offers telehealth appointments for people seeking mental health support, but its role in Utah’s program will be limited compared to its standard offerings. It will charge people who join the program (according to The Verge, there is currently a waiting list) a $19-per-month membership that allows them to refill their prescriptions through an AI chatbot. Patients invited to the program must be considered “stable,” meaning they have had no medication changes and no psychiatric hospitalizations within the past year, and only 15 medications considered low-risk can be renewed through the chatbot, including drugs such as Prozac, Zoloft, Wellbutrin, and Lexapro. While Legion Health prescribes controlled substances like Adderall, those will not be eligible under the Utah pilot.
As for implementation, Utah has made the program opt-in. The first 250 prescriptions issued by the chatbot will be reviewed by a licensed physician, and the system must achieve a 98% approval rate before it can issue prescriptions without direct physician review.
It is this stage, and what comes after, that is cause for potential concern. It seems as if Utah’s intention with the program, if it is successful, is for widespread implementation. The state Department of Commerce notes that “Most of Utah’s counties lack mental health providers, leaving 500,000 residents without adequate access to behavioral health care.” This is undoubtedly a problem, but it’s not clear that Legion is a solution to it.
Legion is actually Utah’s second AI-powered prescription pilot program. The first, run by a company called Doctronic, launched late last year to renew commonly prescribed medications such as cholesterol and blood pressure drugs. It took safety researchers basically no time at all to get the chatbot to spew conspiratorial rhetoric about vaccines and do things like triple a patient’s opioid dosage. A study published last year found that large language models used in health care settings are extremely vulnerable to jailbreak attacks, which is not exactly what you want from a tool that can prescribe medications without human oversight.
AI may have a role as additional support in health care settings. Several studies have found that AI tools used as assistants, rather than working autonomously, can help reduce prescription error rates and cut wait times for medications to be filled. But that requires a person to stay on top of the system, not just act as a backstop, and there is still the risk that doctors will inadvertently offload work to it. Last year, a study found that doctors who used AI assistance to identify a patient’s cancer risk performed better with the tool on, but actually did worse than their pre-AI baseline when the tool was removed.
Expanding access to mental health services is a worthwhile endeavor. Expanding access to chatbots seems a very questionable way to achieve this.