Player FM - Internet Radio Done Right
Added eight years ago
Content provided by Gus Docker and the Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Gus Docker and the Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://uk.player.fm/legal.
Imagine A World: What if new governance mechanisms helped us coordinate?
Manage episode 376136784 series 1334308
Are today's democratic systems well enough equipped to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together?

Imagine A World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of eight diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In this first episode of Imagine A World, we explore the fictional worldbuild titled 'Peace Through Prophecy', a second-place entry in FLI's Worldbuilding Contest created by Jackson Wagner, Diana Gurvich and Holly Oatley. Host Guillaume Riesen speaks to its makers; in the episode, Jackson and Holly discuss just a few of the many ideas bubbling around in their imagined future.

At its core, this world is arguably about community. It asks how technology might bring us closer together and allow us to reinvent our social systems. Many roads are explored: a whole garden of governance systems bolstered by artificial intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually come to see their emotional and creative potential realized. While progress is uneven, and littered with many human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future.

Please note: this episode explores ideas created as part of FLI's Worldbuilding Contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas presented in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this imagined world: https://worldbuild.ai/peace-through-prophecy

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this, we engage in policy advocacy, grantmaking, and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find out more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Prediction_market
https://forum.effectivealtruism.org/
'Veil of ignorance' thought experiment: https://en.wikipedia.org/wiki/Original_position
https://en.wikipedia.org/wiki/Isaac_Asimov
https://en.wikipedia.org/wiki/Liquid_democracy
https://en.wikipedia.org/wiki/The_Dispossessed
https://en.wikipedia.org/wiki/Terra_Ignota
https://equilibriabook.com/
https://en.wikipedia.org/wiki/John_Rawls
https://en.wikipedia.org/wiki/Radical_transparency
https://en.wikipedia.org/wiki/Audrey_Tang
https://en.wikipedia.org/wiki/Quadratic_voting#Quadratic_funding
224 episodes
All episodes
Future of Life Institute Podcast


Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish) (1:22:33)
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming

Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress


Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts. You can learn more about Ann's work here: https://www.wiseancestors.org

Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration


Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective (1:25:56)
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI. You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot

Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI


David Dalrymple on Safeguarded, Transformative AI (1:40:06)
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/

Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life


Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters (1:09:26)
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com

Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
1:04:30 Dream scenario for GiveDirectly


Nathan Labenz on the State of AI and Progress since GPT-4 (3:20:04)
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai

Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety


Connor Leahy on Why Humanity Risks Extinction from AGI (1:58:50)
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss in the episode: https://www.thecompendium.ai

Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?


Suzy Shepherd on Imagining Superintelligence and "Writing Doom" (1:03:08)
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4

Timestamps:
00:00 Writing Doom
08:23 Humor in Writing Doom
13:31 Concise writing
18:37 Getting feedback
27:02 Alternative characters
36:31 Popular video formats
46:53 AI in filmmaking
49:52 Meaning in the future


Andrea Miotti on a Narrow Path to Safe, Transformative AI (1:28:09)
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like. Here's the document we discuss in the episode: https://www.narrowpath.co

Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help


Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents (1:30:29)
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030

Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute


Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI (2:08:44)
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt

Timestamps:
00:00 AI control
09:35 Challenges to AI control
23:48 AI control as a bridge to alignment
26:54 Policy and coordination for AI safety
29:25 Slowing down around human-level AI
49:14 Scheming and misalignment
01:27:27 AI timelines and takeoff speeds
01:58:15 Human cognition versus AI cognition


Tom Barnes on How to Build a Resilient World (1:19:41)
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world. Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence

Timestamps:
00:00 Spending on safety vs capabilities
09:06 Racing dynamics - is the classic story true?
28:15 How are governments preparing for advanced AI?
49:06 US-China dialogues on AI
57:44 Coordination failures
1:04:26 Global resilience
1:13:09 Patient philanthropy

The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/


Samuel Hammond on Why AI Progress Is Accelerating - and How Governments Should Respond (2:16:11)
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open-source AI, how technical change leads to regime change, the economics of advanced AI, and much more. Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai

Timestamps:
00:00 Is AI plateauing or accelerating?
06:55 How do we get AI agents?
16:12 Do agency and reasoning emerge?
23:57 Compute thresholds in regulation
28:59 Superintelligence as an ideological goal
37:09 General progress vs superintelligence
44:22 Meta and open source AI
49:09 Technological change and regime change
01:03:06 How will governments react to AI?
01:07:50 Will the US nationalize AGI corporations?
01:17:05 Economics of an intelligence explosion
01:31:38 AI cognition vs human cognition
01:48:03 AI and future religions
01:56:40 Is consciousness functional?
02:05:30 AI and children


Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal (1:03:10)
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home

Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creating new markets
29:51 How far can prizes scale?
35:25 When are prizes successful?
46:06 100M dollar carbon removal prize
54:40 Upcoming prizes
59:52 Anousheh's time in space


Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org

Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change