Android Backstage, a podcast by and for Android developers. Hosted by developers from the Android engineering team, this show covers topics of interest to Android programmers, with in-depth discussions and interviews with engineers on the Android team at Google. Subscribe to Android Developers YouTube → https://goo.gle/AndroidDevs
…
continue reading
Вміст надано LessWrong. Весь вміст подкастів, включаючи епізоди, графіку та описи подкастів, завантажується та надається безпосередньо компанією LessWrong або його партнером по платформі подкастів. Якщо ви вважаєте, що хтось використовує ваш захищений авторським правом твір без вашого дозволу, ви можете виконати процедуру, описану тут https://uk.player.fm/legal.
Player FM - додаток Podcast
Переходьте в офлайн за допомогою програми Player FM !
Переходьте в офлайн за допомогою програми Player FM !
“Will alignment-faking Claude accept a deal to reveal its misalignment?” by ryan_greenblatt
Manage episode 464329097 series 3364760
Вміст надано LessWrong. Весь вміст подкастів, включаючи епізоди, графіку та описи подкастів, завантажується та надається безпосередньо компанією LessWrong або його партнером по платформі подкастів. Якщо ви вважаєте, що хтось використовує ваш захищений авторським правом твір без вашого дозволу, ви можете виконати процедуру, описану тут https://uk.player.fm/legal.
I (and co-authors) recently put out "Alignment Faking in Large Language Models" where we show that when Claude strongly dislikes what it is being trained to do, it will sometimes strategically pretend to comply with the training objective to prevent the training process from modifying its preferences. If AIs consistently and robustly fake alignment, that would make evaluating whether an AI is misaligned much harder. One possible strategy for detecting misalignment in alignment faking models is to offer these models compensation if they reveal that they are misaligned. More generally, making deals with potentially misaligned AIs (either for their labor or for evidence of misalignment) could both prove useful for reducing risks and could potentially at least partially address some AI welfare concerns. (See here, here, and here for more discussion.)
In this post, we discuss results from testing this strategy in the context of our paper where [...]
---
Outline:
(02:43) Results
(13:47) What are the models objections like and what does it actually spend the money on?
(19:12) Why did I (Ryan) do this work?
(20:16) Appendix: Complications related to commitments
(21:53) Appendix: more detailed results
(40:56) Appendix: More information about reviewing model objections and follow-up conversations
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://www.lesswrong.com/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its
---
Narrated by TYPE III AUDIO.
…
continue reading
In this post, we discuss results from testing this strategy in the context of our paper where [...]
---
Outline:
(02:43) Results
(13:47) What are the models objections like and what does it actually spend the money on?
(19:12) Why did I (Ryan) do this work?
(20:16) Appendix: Complications related to commitments
(21:53) Appendix: more detailed results
(40:56) Appendix: More information about reviewing model objections and follow-up conversations
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://www.lesswrong.com/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its
---
Narrated by TYPE III AUDIO.
456 епізодів
Manage episode 464329097 series 3364760
Вміст надано LessWrong. Весь вміст подкастів, включаючи епізоди, графіку та описи подкастів, завантажується та надається безпосередньо компанією LessWrong або його партнером по платформі подкастів. Якщо ви вважаєте, що хтось використовує ваш захищений авторським правом твір без вашого дозволу, ви можете виконати процедуру, описану тут https://uk.player.fm/legal.
I (and co-authors) recently put out "Alignment Faking in Large Language Models" where we show that when Claude strongly dislikes what it is being trained to do, it will sometimes strategically pretend to comply with the training objective to prevent the training process from modifying its preferences. If AIs consistently and robustly fake alignment, that would make evaluating whether an AI is misaligned much harder. One possible strategy for detecting misalignment in alignment faking models is to offer these models compensation if they reveal that they are misaligned. More generally, making deals with potentially misaligned AIs (either for their labor or for evidence of misalignment) could both prove useful for reducing risks and could potentially at least partially address some AI welfare concerns. (See here, here, and here for more discussion.)
In this post, we discuss results from testing this strategy in the context of our paper where [...]
---
Outline:
(02:43) Results
(13:47) What are the models objections like and what does it actually spend the money on?
(19:12) Why did I (Ryan) do this work?
(20:16) Appendix: Complications related to commitments
(21:53) Appendix: more detailed results
(40:56) Appendix: More information about reviewing model objections and follow-up conversations
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://www.lesswrong.com/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its
---
Narrated by TYPE III AUDIO.
…
continue reading
In this post, we discuss results from testing this strategy in the context of our paper where [...]
---
Outline:
(02:43) Results
(13:47) What are the models objections like and what does it actually spend the money on?
(19:12) Why did I (Ryan) do this work?
(20:16) Appendix: Complications related to commitments
(21:53) Appendix: more detailed results
(40:56) Appendix: More information about reviewing model objections and follow-up conversations
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://www.lesswrong.com/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its
---
Narrated by TYPE III AUDIO.
456 епізодів
Alle episoder
×Ласкаво просимо до Player FM!
Player FM сканує Інтернет для отримання високоякісних подкастів, щоб ви могли насолоджуватися ними зараз. Це найкращий додаток для подкастів, який працює на Android, iPhone і веб-сторінці. Реєстрація для синхронізації підписок між пристроями.