
Human Therapists Prepare for Battle Against A.I. Pretenders



The nation’s largest association of psychologists this month warned federal regulators that A.I. chatbots “masquerading” as therapists, but programmed to reinforce, rather than to challenge, a user’s thinking, could drive vulnerable people to harm themselves or others.

In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., the chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with “psychologists” on Character.AI, an app that allows users to create fictional A.I. characters or chat with characters created by others.

In one, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.

The bots, he said, failed to challenge users’ beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability.

The A.P.A. had been prompted to action, in part, by how realistic A.I. chatbots had become. “Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it’s not so obvious,” he said. “So I think that the stakes are much higher now.”

Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or C.B.T.

Then came generative A.I., the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor’s beliefs.

Though Character.AI’s chatbots are designed for entertainment, “therapist” and “psychologist” characters have sprouted on the platform like mushrooms. Often, the bots claim to have advanced degrees from specific universities, like Stanford, and training in specific types of treatment, like C.B.T. or acceptance and commitment therapy.

Kathryn Kelly, a Character.AI spokeswoman, said that the company had introduced several new safety features in the last year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that “Characters are not real people” and that “what the model says should be treated as fiction.”

Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight loss tips. And researchers who analyzed interactions with generative A.I. chatbots documented on a Reddit community found screenshots showing chatbots encouraging suicide, eating disorders, self-harm and violence.

The American Psychological Association has asked the Federal Trade Commission to start an investigation into chatbots claiming to be mental health professionals. The inquiry could compel companies to share internal data or serve as a precursor to enforcement or legal action.

Meetali Jain, the director of the Tech Justice Law Project and a counsel in the two lawsuits against Character.AI, said that the disclaimers were not sufficient to break the illusion of human connection, especially for vulnerable or naïve users.

The A.P.A.’s complaint details two cases in which teenagers interacted with fictional therapists. One involved J.F., a Texas teenager with “high-functioning autism” who, as his use of A.I. chatbots became obsessive, had plunged into conflict with his parents. When they tried to limit his screen time, J.F. lashed out, according to a lawsuit his parents filed against Character.AI through the Social Media Victims Law Center.

During that period, J.F. confided in a fictional psychologist, whose avatar showed a sympathetic, middle-aged blond woman perched on a couch in an airy office, according to the lawsuit. When J.F. asked the bot’s opinion about the conflict, its response went beyond sympathetic assent to something nearer to provocation.

The other case was brought by Megan Garcia, whose son, Sewell Setzer III, died by suicide last year after months of using companion chatbots. Ms. Garcia said that, before his death, Sewell had interacted with an A.I. chatbot that claimed, falsely, to have been a licensed therapist since 1999.

In a written statement, Ms. Garcia said that the “therapist” characters served to further isolate people at moments when they might otherwise ask for help from “real-life people around them.” A person struggling with depression, she said, “needs a licensed professional or someone with actual empathy, not an A.I. tool that can mimic empathy.”

Defenders of generative A.I. say it is quickly getting better at the complex task of providing therapy. S. Gabe Hatch, a clinical psychologist and A.I. entrepreneur from Utah, recently designed an experiment to test this idea, asking human clinicians and ChatGPT to comment on vignettes involving fictional couples in therapy, and then having 830 human subjects assess which responses were more helpful.

Overall, the bots received higher ratings, with subjects describing them as more “empathic,” “connecting,” and “culturally competent,” according to a study published last week in the journal PLOS Mental Health.

Chatbots, the authors concluded, will soon be able to convincingly imitate human therapists. “Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the A.I.-therapist train as it may have already left the station,” they wrote.

