“Shallow review of technical AI safety, 2024” by technicalities, Stag, Stephen McAleese, jordine, Dr. David Mathers
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.
[Image from aisafety.world]
The following is a list of live agendas in technical AI safety, updating our post from last year. It is “shallow” in the sense that 1) we are not specialists in almost any of it and that 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options; to help policy people know who to talk to for the actual information; and ideally to help funders see quickly what has already been funded and how much (but this proves to be hard).
“AI safety” means many things. We’re targeting work that intends to prevent very competent [...]
---
Outline:
(01:33) Editorial
(08:15) Agendas with public outputs
(08:19) 1. Understand existing models
(08:24) Evals
(14:49) Interpretability
(27:35) Understand learning
(31:49) 2. Control the thing
(40:31) Prevent deception and scheming
(46:30) Surgical model edits
(49:18) Goal robustness
(50:49) 3. Safety by design
(52:57) 4. Make AI solve it
(53:05) Scalable oversight
(01:00:14) Task decomp
(01:00:28) Adversarial
(01:04:36) 5. Theory
(01:07:27) Understanding agency
(01:15:47) Corrigibility
(01:17:29) Ontology Identification
(01:21:24) Understand cooperation
(01:26:32) 6. Miscellaneous
(01:50:40) Agendas without public outputs this year
(01:51:04) Graveyard (known to be inactive)
(01:52:00) Method
(01:55:09) Other reviews and taxonomies
(01:56:11) Acknowledgments
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
December 29th, 2024
Source:
https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
---
Narrated by TYPE III AUDIO.
---