
Content provided by Vasanth Sarathy & Laura Hagopian. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Vasanth Sarathy & Laura Hagopian or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://uk.player.fm/legal.

#6 - AI Chatbots Gone Wrong

27:05
Manage episode 501626387 series 3678189

What if a chatbot designed to support recovery instead encouraged the very behaviors it was meant to prevent? In this episode, we unravel the cautionary saga of Tessa, a digital companion built by the National Eating Disorders Association to scale mental health support during the COVID-19 surge—only to take a troubling turn when powered by generative AI.

At first, Tessa was a straightforward rules-based helper, offering pre-vetted encouragement and resources. But after an AI upgrade, users began receiving rigid diet tips: restrict calories, aim for weekly weight-loss targets, and obsessively track measurements—precisely the advice no one battling an eating disorder should hear. What should have been a lifeline instead revealed the danger of unguarded algorithmic "help."

We trace this journey from the earliest chatbots—think ELIZA’s therapeutic mimicry in the 1960s—to today’s sophisticated large language models. Along the way, we highlight why shifting from scripted responses to free-form generation opens doors for innovation in healthcare and, simultaneously, for unintended harm. Crafting effective guardrails isn’t just a technical challenge; it’s a moral imperative when lives hang in the balance.

As providers eye AI to extend care, Tessa’s story offers vital lessons on rigorous testing, transparency around updates, and the irreplaceable role of human oversight. Despite the pitfalls, we close on a hopeful note: with the right safeguards, AI can amplify human expertise—transforming support for vulnerable patients without losing the empathy and nuance only people can provide.

References:

National Eating Disorders Association phases out human helpline, pivots to chatbot
Kate Wells
NPR, May 2023

An eating disorders chatbot offered dieting advice, raising fears about AI in health
Kate Wells
NPR, June 2023

The Unexpected Harms of Artificial Intelligence in Healthcare
Kerstin Denecke, Guillermo Lopez-Campos, Octavio Rivera-Romero, and Elia Gabarron
Studies in Health Technology and Informatics, May 2025

Credits:

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/


Chapters

1. The Tessa Chatbot Controversy (00:00:00)

2. History of AI Chatbots (00:04:08)

3. From Rules-Based to Generative AI (00:09:13)

4. When Chatbots Go Wrong (00:14:50)

5. Balancing Helpfulness and Safety (00:19:16)

6. Testing and Implementing AI in Healthcare (00:23:30)
