851 subscribers
Checked 6d ago
Added nine years ago
Future of Life Institute Podcast
Content provided by Gus Docker and Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://uk.player.fm/legal.
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
243 episodes
All episodes

Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd) 1:27:00
Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene. Follow Benjamin's work at: https://benjamintodd.substack.com Timestamps: 00:00 What are reasoning models? 04:04 Reinforcement learning supercharges reasoning 05:06 Reasoning models vs. agents 10:04 Economic impact of automated math/code 12:14 Compute as a bottleneck 15:20 Shift from giant pre-training to post-training/agents 17:02 Three feedback loops: algorithms, chips, robots 20:33 How fast could an algorithmic loop run? 22:03 Chip design and production acceleration 23:42 Industrial/robotics loop and growth dynamics 29:52 Society’s slow reaction; “warning shots” 33:03 Robotics: software and hardware bottlenecks 35:05 Scaling robot production 38:12 Robots at ~$0.20/hour? 43:13 Regulation and humans-in-the-loop 49:06 Personal prep: why it still matters 52:04 Build an information network 55:01 Save more money 58:58 Land, real estate, and scarcity in an AI world 01:02:15 Valuable skills: get close to AI, or far from it 01:06:49 Fame, relationships, citizenship 01:10:01 Redistribution, welfare, and politics under AI 01:12:04 Try to become more resilient 01:14:36 Information hygiene 01:22:16 Seven-year horizon and scaling limits by ~2030…

From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace) 1:37:20
On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines. Learn more about Calum's work here: https://calumchace.com Timestamps: 00:00:00 Preview and intro 00:03:02 Past tech revolutions and AI-driven unemployment 00:05:43 Cognitive automation: from secretaries to every job 00:08:02 The “peak horse” analogy and avoiding human obsolescence 00:10:55 Infinite demand and lump of labor 00:18:30 Fully-automated luxury capitalism 00:23:31 Abundance economy and a potential employment cliff 00:29:37 Education reimagined with personalized AI tutors 00:36:22 Real-world uses of LLMs: memory, drafting, emotional insight 00:42:56 Meaning beyond jobs: aristocrats, retirees, and kids 00:49:51 Four futures of superintelligence 00:57:20 Conscious AI and empathy as a safety strategy 01:10:55 Verifying AI agents 01:25:20 Over-attributing vs under-attributing machine consciousness…

How AI Could Help Overthrow Governments (with Tom Davidson) 1:53:49
On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats. Learn more about Tom's work here: https://www.forethought.org Timestamps: 00:00:00 Preview: why preventing AI-enabled coups matters 00:01:24 What do we mean by an “AI-enabled coup”? 00:01:59 Capabilities AIs would need (persuasion, strategy, productivity) 00:02:36 Cyber-offense and the road to robotized militaries 00:05:32 Step-by-step example of an AI-enabled military coup 00:08:35 How AI-enabled coups would differ from historical coups 00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels) 00:12:38 Singular loyalties, secret loyalties, exclusive access 00:14:01 Secret-loyalty scenario: CEO with hidden control 00:18:10 From sleeper agents to sophisticated covert AIs 00:22:22 Exclusive-access threat: one project races ahead 00:29:03 Could one country outgrow the rest of the world? 00:40:00 Could a single company dominate global GDP? 00:47:01 Autocracies vs democracies 00:54:43 Mitigations for singular and secret loyalties 01:06:25 Guardrails, monitoring, and controlled-use APIs 01:12:38 Using AI itself to preserve checks-and-balances 01:24:53 Risk indicators to watch for AI-enabled coups 01:33:05 Tom’s risk estimates for the next 5 and 30 years 01:46:50 How you can help – research, policy, and careers…

What Happens After Superintelligence? (with Anders Sandberg) 1:44:54
Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks. Learn more about Anders's work here: https://mimircenter.org/anders-sandberg Timestamps: 00:00:00 Preview and intro 00:04:20 2030 superintelligence scenario 00:11:55 Status, post-scarcity, and reshaping human psychology 00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks 00:23:48 Technosphere vs biosphere 00:28:42 Culture and physics as long-run drivers of civilization 00:40:38 How superintelligence could upend markets and governments 00:50:01 State inertia: why governments lag behind companies 00:59:06 Value lock-in, censorship, and model alignment 01:08:32 Emergent AI ecosystems and coordination-failure risks 01:19:34 Predictability vs reliability: designing safe systems 01:30:32 Crossing the reliability threshold 01:38:25 Personal reflections on accelerating change…

Why the AI Race Ends in Disaster (with Daniel Kokotajlo) 1:10:26
On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory. You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org Timestamps: 00:00:00 Preview and intro 00:00:50 Why AI will eclipse the Industrial Revolution 00:09:48 How much can AI speed up AI research? 00:16:13 Automated coding and diffusion 00:27:37 Transparency in AI development 00:34:52 Deploying AI internally 00:40:24 Communication between AIs 00:49:23 Is AI inherently risky? 00:59:54 Iterative forecasting…

Preparing for an AI Economy (with Daniel Susskind) 1:03:37
On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education. You can learn more about Daniel's work here: https://www.danielsusskind.com Timestamps: 00:00:00 Preview and intro 00:03:19 AI researchers versus economists 00:10:39 Measuring AI's economic effects 00:16:19 Can AI be steered in positive directions? 00:22:10 Human values and economic outcomes 00:28:21 What will remain for people to do? 00:44:58 Commercial incentives in AI 00:50:38 Will education move towards general skills? 00:58:46 Lessons for parents…

Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex) 1:27:14
Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world. Learn more about Ed's work here: https://ed.newtonrex.com Timestamps: 00:00:00 Preview and intro 00:04:18 AI-generated music 00:12:15 Resigning from Stability AI 00:16:20 AI industry attitudes towards rights 00:26:22 Fairly Trained 00:37:16 Special kinds of training data 00:50:42 The longer-term future of AI 00:56:09 Will AI improve living standards? 01:03:10 AI versions of artists 01:13:28 Authenticity and art 01:18:45 Competitive pressures in AI 01:24:06 Priorities going forward…

AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse) 1:15:49
On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI’s development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies’ vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines. Timestamps: 00:00:00 Preview and intro 00:00:46 What do benchmarks measure? 00:08:08 Will AI develop like other tech? 00:14:13 Which tasks can AIs do? 00:23:00 Capability profiles of AIs 00:34:04 Timelines and social effects 00:42:01 Alignment by default? 00:50:36 Can vague AGI plans be useful? 00:54:36 The fast world and the slow world 01:08:02 Long-term projects and short timelines…

Could Powerful AI Break Our Fragile World? (with Michael Nielsen) 1:01:28
On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism. Timestamps: 00:00:00 Preview and intro 00:01:05 Understanding is dual-use 00:05:17 Can we handle AI like other tech? 00:12:08 Can institutions adapt to AI? 00:16:50 Recognizing signs of dangerous AI 00:22:45 Agents versus tools 00:25:43 Power is latent in the world 00:35:45 Widespread powerful hardware 00:42:09 Governance mechanisms for AI 00:53:55 Deep atheism and optimistic cosmism…

Facing Superintelligence (with Ben Goertzel) 1:32:33
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward. Timestamps: 00:00:00 Preview and intro 00:01:59 Thinking about AGI in the 1970s 00:07:28 What's different about this AI boom? 00:16:10 Former taboos about AGI 00:19:53 AI research worth revisiting 00:35:53 Will the first AGI be simple? 00:48:49 Is alignment achievable? 01:02:40 Benchmarks and economic impact 01:15:23 Bottlenecks to superintelligence 01:23:09 What should we do?…

Will Future AIs Be Conscious? (with Jeff Sebo) 1:34:27
On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively. You can follow Jeff’s work here: https://jeffsebo.net/ Timestamps: 00:00:00 Preview and intro 00:02:56 Imagining artificial consciousness 00:07:51 Substrate-independence? 00:11:26 Are we making progress? 00:18:03 Intuitions about explanations 00:24:43 AI risk and AI consciousness 00:40:01 Consciousness and cognitive complexity 00:51:20 Intuition versus intellect 00:58:48 AIs as companions 01:05:24 AI rights 01:13:00 Acting under time pressure 01:20:16 Measuring consciousness 01:32:11 How can you help?…

Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz) 1:35:09
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity’s uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI’s growing influence in financial trading. You can follow Zvi's excellent blog here: https://thezvi.substack.com Timestamps: 00:00:00 Preview and introduction 00:02:01 Sycophantic AIs 00:07:28 Bottlenecks for AI agents 00:21:26 Are benchmarks useful? 00:32:39 AI agent time horizons 00:44:18 Impact of automating research 00:53:00 Limits to scaling inference compute 01:02:51 Will the future go well for humanity? 01:12:22 A good plan for safe AI 01:26:03 What makes AI different? 01:31:29 AI in trading…

Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding) 1:02:32
On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps. You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io Timestamps: 00:00:00 Preview and introduction 00:01:36 A US-China AI arms race? 00:10:58 Attitudes to AI safety in China 00:17:53 Diffusion of AI 00:25:13 Innovation without diffusion 00:34:29 AI development concentration 00:41:40 Learning from the history of technology 00:47:48 Translating Chinese AI writings 00:55:36 Automating translation of AI writings…

How Will We Cooperate with AIs? (with Allison Duettmann) 1:36:02
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. You can learn more about Allison's work at: https://foresight.org Timestamps: 00:00:00 Preview 00:01:07 Centralized AI versus decentralized AI 00:13:02 Risks from decentralized AI 00:25:39 International AI governance 00:39:52 Cooperation with future AIs 00:53:51 AI for decision-making 01:05:58 Capital intensity of AI 01:09:11 Lessons from history 01:15:50 Future space law and property rights 01:27:28 Is technology invented or discovered? 01:32:34 Children in the age of AI…

Brain-like AGI and why it's Dangerous (with Steven Byrnes) 1:13:13
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies. You can learn more about Steven's work at: https://sjbyrnes.com/agi.html Timestamps: 00:00 Preview 00:54 Brain-like AGI Safety 13:16 Controlled AGI versus Social-instinct AGI 19:12 Learning from the brain 28:36 Why is brain-like AI the most likely path to AGI? 39:23 Honesty in AI models 44:02 How to help with brain-like AGI safety 53:36 AI traits with both positive and negative effects 01:02:44 Different AI safety strategies…