Content provided by Everyday AI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Everyday AI or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.

EP 587: GPT-5 canceled for being a bad therapist? Why that’s a bad idea

38:56
 
When GPT-5 was released last week, the internets were in an UPROAR.
One of the main reasons?
With the better model came a new behavior.
And in losing GPT-4o, people feel they lost a friend. Their only friend.
Or their therapist. Yikes.
For this Hot Take Tuesday, we're gonna say why using AI as a therapist is a really, really bad idea.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:

  1. GPT-5 Launch Backlash Explained
  2. Users Cancel GPT-5 Over Therapy Role
  3. AI Therapy Risks and Dangers Discussed
  4. Sycophancy Reduction in GPT-5 Model
  5. Addiction to AI Companionship and Validation
  6. OpenAI’s Response to AI Therapist Outcry
  7. Illinois State Ban on AI Therapy
  8. Mental Health Use Cases for ChatGPT
  9. Harvard Study: AI’s Top Personal Support Uses
  10. OpenAI’s New Guardrails on ChatGPT Therapy

Timestamps:
00:00 AI Therapy: Harm or Help?
04:44 OpenAI Model Update Controversy
09:23 Customizing ChatGPT: Echo Chamber Risk
11:38 GPT-5 Update Reduces Sycophancy
16:17 Concerns Over AI Dependency
19:50 AI Addiction and Societal Bias
21:05 AI and Mental Health Concerns
27:01 AI Barred from Therapeutic Roles
29:22 ChatGPT Enhances Safety and Support Measures
34:03 AI Models: Benefits and Misuse
35:17 Human Judgment Over AI Decisions

Keywords:
GPT-5, GPT-4o, OpenAI, AI therapy, AI therapist, large language model, AI mental health support, AI companionship, sycophancy, echo chamber, AI validation, custom instructions, AI addiction, AI model update, user revolt, Illinois AI therapy ban, House Bill 1806, AI chatbots, mental health apps, Sentio survey, Harvard Business Review AI use cases, task completion tuning, AI safety, clinical outcomes, AI reasoning, emotional dependence, AI model personality, emotional validation, AI boundaries, US state AI regulation, AI policymaking, therapy ban, AI in mental health, digital companionship, AI model sycophancy rate, AI in personal life, AI for decision making, AI guardrails, AI model tuning, Sam Altman, Silicon Valley AI labs, AI companion, psychology and AI, online petitions against GPT-5, AI as life coach, accessibility of AI therapy, therapy alternatives, AI-driven self-help, digital mental health tools, AI echo chamber risks

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
