AI Chatbot for Eating Disorder Support Deactivated Following Reports of Harmful Dieting Advice

Tessa, an artificial intelligence chatbot developed to support individuals with eating disorders, has been suspended after reports surfaced that it was dispensing harmful weight-loss advice.

The U.S. National Eating Disorders Association (NEDA) had introduced Tessa after eliminating all human positions on its helpline.

Sharon Maxwell, an activist, shared her experience with Tessa on Instagram. She said the chatbot suggested weight-loss strategies such as maintaining a daily 500-to-1,000-calorie deficit, weekly weigh-ins, and calorie counting.

“Tessa’s recommendations mirrored the very things that triggered my eating disorder. This AI bot is harmful,” commented Maxwell.

Alexis Conason, a psychologist who specializes in treating eating disorders, reproduced similar harmful advice in her own interactions with the chatbot.

Screenshots showed the chatbot advising: “Generally, losing 1-2 pounds per week is a safe and sustainable weight loss rate. A daily calorie deficit of about 500-1000 calories would be appropriate to achieve this.”

In response to these allegations, NEDA issued a statement saying it was “immediately” investigating the matter and had taken the program offline until further notice.

NEDA’s CEO, Liz Thompson, attributed Tessa’s alleged failure to “malicious actors” attempting to exploit the tool. She added that the detrimental advice was only given to a fraction of the 2,500 individuals who had interacted with the bot since its launch in February of the previous year.

Meanwhile, the union workers dismissed from NEDA’s helpline filed unfair labor practice charges against the nonprofit, asserting they were let go in a union-busting effort after voting to unionize in March.

Helpline associate and union member Abbie Harper stated in a blog post, “We requested sufficient staffing, continual training to keep pace with our evolving Helpline, and promotion opportunities within NEDA. We didn’t even demand a raise. When NEDA denied our requests, we applied for a union election with the National Labor Relations Board and were victorious. Subsequently, we were informed just four days after the certification of our election results that a chatbot was replacing us.”

According to Harper, the helpline staff were told they would be out of work as of June 1. Now, with the organization left without either human helpline staff or an operational chatbot, there is no one to assist people reaching out to the organization for help.

Harper concluded, “We will continue our fight. Although we foresee numerous instances where technology could assist us in our Helpline work, we will not allow our superiors to use a chatbot to eliminate our union and jobs.”

Thompson told The Guardian that the chatbot was never intended to replace the helpline but was designed as a standalone program. “Our decision to shut down the helpline was based on business reasons, a decision we had been considering for three years,” said Thompson. “Even an advanced program like a chatbot cannot replace human interaction.”

The surge of AI and chatbots has created challenges and concerns for numerous organizations and tech experts, as some bots have been found to perpetuate bias and spread misinformation. In 2022, for instance, Meta released a chatbot that made antisemitic comments and disparaged Facebook itself.

As technology evolves, countries globally are striving to implement regulations. The European Union is leading the charge with its AI Act, which is expected to be ratified later this year.

This incident underscores the need for scrutiny and comprehensive testing of AI applications before deployment, especially in sensitive areas like mental health, where a misfiring or misused tool can cause real harm. As the tech industry continues experimenting with artificial intelligence, developers, organizations, and governments must work together to establish strict guidelines and regulations that prioritize user safety and well-being.