Content provided by Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Future of Life Institute or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://uk.player.fm/legal.

Bart Selman on the Promises and Perils of Artificial Intelligence

Duration: 1:41:03
 
Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.

Topics discussed in this episode include:
- Negative and positive outcomes from AI in the short, medium, and long terms
- The perils and promises of AGI and superintelligence
- AI alignment and AI existential risk
- Lethal autonomous weapons
- AI governance and racing to powerful AI systems
- AI consciousness

You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
1:35 Futures that Bart is excited about
4:08 Positive futures in the short, medium, and long terms
7:23 AGI timelines
8:11 Bart’s research on “planning” through the game of Sokoban
13:10 If we don’t go extinct, is the creation of AGI and superintelligence inevitable?
15:28 What’s exciting about futures with AGI and superintelligence?
17:10 How long does it take for superintelligence to arise after AGI?
21:08 Would a superintelligence have something intelligent to say about income inequality?
23:24 Are there true or false answers to moral questions?
25:30 Can AGI and superintelligence assist with moral and philosophical issues?
28:07 Do you think superintelligences converge on ethics?
29:32 Are you most excited about the short- or long-term benefits of AI?
34:30 Is existential risk from AI a legitimate threat?
35:22 Is the AI alignment problem legitimate?
43:29 What are futures that you fear?
46:24 Do social media algorithms represent an instance of the alignment problem?
51:46 The importance of educating the public on AI
55:00 Income inequality, cybersecurity, and negative futures
1:00:06 Lethal autonomous weapons
1:01:50 Negative futures in the long term
1:03:26 How have your views of AI alignment evolved?
1:06:53 Bart’s plans and intentions for the Association for the Advancement of Artificial Intelligence
1:13:45 Policy recommendations for existing AIs and the AI ecosystem
1:15:35 Solving the parts of AI alignment that won’t be solved by industry incentives
1:18:17 Narratives of an international race to powerful AI systems
1:20:42 How does an international race to AI affect the chances of successful AI alignment?
1:23:20 Is AI a zero-sum game?
1:28:51 Lethal autonomous weapons governance
1:31:38 Does the governance of autonomous weapons affect outcomes from AGI?
1:33:00 AI consciousness
1:39:37 Alignment is important and the benefits of AI can be great

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.