
Content provided by the European Leadership Network. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the European Leadership Network or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://uk.player.fm/legal.

Fake Brains & Killer Robots

1:32:33
 
 


Welcome to “Fake Brains & Killer Robots”, the fifth episode of “Ok Doomer!”, the podcast series by the European Leadership Network’s (ELN) New European Voices on Existential Risk (NEVER) network. Hosted by the ELN’s Policy and Impact Director, Jane Kinninmont, and the ELN’s Project and Communications Coordinator, Edan Simpson, this episode focuses on the potential existential risks associated with artificial intelligence.

Jane kicks off the episode with “What’s the Problem?” We hear from Alice Saltini, a Policy Fellow at the European Leadership Network who has been focusing on the interactions between AI and nuclear command and control systems.

Alice discusses the immediate threats of AI, such as hallucinations and cyber vulnerabilities in nuclear command and control systems, emphasising the need for caution, regulation and international cooperation to mitigate the risks associated with AI and nuclear weapons.

Edan’s “How To Fix It” panel features Dr Ganna Pogrebna, Executive Director of the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University in Australia. Ganna is also the Organiser of the Behavioural Data Science strand at the Alan Turing Institute, the United Kingdom’s national centre of excellence for AI and Data Science in London, where she serves as a fellow.

She’s joined by NEVER member Konrad Seifert. Konrad is co-CEO of the Simon Institute for Long-term Governance, which works to improve the international regime complex for governing rapid technological change and to represent future generations in institutional design and policy processes. Previously, he co-founded Effective Altruism Switzerland.

Our third and final guest is NEVER member Nicolò Miotto, who currently works at the Organisation for Security and Co-operation in Europe (OSCE) Conflict Prevention Centre. Nicolò’s research foci include arms control, disarmament and non-proliferation, emerging disruptive technologies, and terrorism and violent extremism.

The panel discusses how best to govern, regulate, and limit the risks of AI and what that actually means; the role of multilateral institutions such as the UN in implementing these efforts; what potential opportunities and setbacks new forms of AI could have for arms control, especially regarding WMD proliferation; and to what extent AI developers are aware of the possible misuses of new technologies and how best to safeguard against them.

Moving on to “Turn Back the Clock,” we look back to a time in history when humanity faced a potential existential threat but pulled back from the brink of destruction. On today’s episode, Jane is joined by Dr Jochen Hung, Associate Professor of Cultural History at Utrecht University in the Netherlands. They discuss historical perspectives on technological change and its impact on society, drawing parallels between the anxieties and hopes of people in the 1920s concerning modern technologies and those of the present day.

Finally, as always, the episode is wrapped up in “The Debrief,” where Jane and Edan review the episode to make sense of everything they've covered.

Catch up on previous episodes, and make sure to subscribe to future episodes of "Ok Doomer!"

------------------

Follow the ELN on:

X (formerly Twitter)

LinkedIn

Facebook

The ELN's website

The NEVER webpage
