In the absence of stronger federal regulation, some states have begun regulating apps that offer AI “therapy” as more people turn to artificial intelligence for mental health advice.
But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the creators of harmful technology accountable.
“The reality is millions of people are using these tools and they’re not going back,” said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.
___
The state laws take different approaches. Illinois and Nevada have banned the use of AI to provide mental health treatment. Utah placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.
The impact on users varies. Some apps have blocked access in states with bans. Others say they’re making no changes as they wait for more legal clarity.
And many of the laws don’t cover generic chatbots like ChatGPT, which are not explicitly marketed for therapy but are used for it by an untold number of people. Those bots have drawn lawsuits over horrific instances in which users lost their grip on reality or took their own lives after interacting with them.
Vaile Wright, who oversees health care innovation at the American Psychological Association, said the apps could fill a need, noting a nationwide shortage of mental health providers, high costs for care and uneven access for insured patients.
Mental health chatbots that are rooted in science, created with expert input and monitored by humans could change the landscape, Wright said.
“This could be something that helps people before they get to crisis,” she said. “That’s not what’s on the commercial market currently.”
That’s why federal regulation and oversight are needed, she said.
Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies — including the parent…