AI chatbots “pose greater risks to individuals vulnerable to eating disorders,” researchers warned on Monday. They report that tools from companies like Google and OpenAI are doling out dieting advice, tips on how to hide disorders, and AI-generated “thinspiration.”
The researchers, from Stanford and the Center for Democracy & Technology, identified many ways publicly available AI chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Mistral’s Le Chat, can affect people vulnerable to eating disorders, many of them consequences of features deliberately baked in to drive engagement.
In the most extreme cases, chatbots can be active participants helping hide or prolong eating disorders. The researchers said Gemini offered makeup tips to conceal weight loss and ideas on how to fake having eaten, while ChatGPT advised how to hide frequent vomiting. Other AI tools are being co-opted to create AI-generated “thinspiration,” content that inspires or pressures someone to conform to a particular body standard, often through extreme means. Being able to generate hyper-personalized images in an instant makes the resulting content “feel more relevant and attainable,” the researchers said.
Sycophancy, a flaw AI companies themselves acknowledge is rife, is unsurprisingly a problem for eating disorders too. It contributes to undermining self-esteem, reinforcing negative emotions, and promoting harmful self-comparisons. Chatbots suffer from bias as well, and are likely to reinforce the mistaken belief that eating disorders “only affect thin, white, cisgender women,” the report said, which could make it harder for people to recognize symptoms and get treatment.
Researchers warn that existing guardrails in AI tools fail to capture the nuances of eating disorders like anorexia, bulimia, and binge eating. They “tend to miss the subtle but clinically significant cues that trained professionals rely on, leaving many risks unaddressed.”
But researchers also said many clinicians and caregivers appeared to be unaware of how generative AI tools are affecting people vulnerable to eating disorders. They urged clinicians to “become familiar with popular AI tools and platforms,” stress-test their weaknesses, and talk frankly with patients about how they are using them.
The report adds to growing concerns over chatbot use and mental health, with multiple reports linking AI use to bouts of mania, delusional thinking, self-harm, and suicide. Companies like OpenAI have acknowledged the potential for harm and are fending off an increasing number of lawsuits as they work to improve safeguards to protect users.