AXREM Insights: bringing you insights from within the industry. We'll be talking to our team and our members, and delving into the people behind the products and services.
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
AxR \ (•◡•) /(ᵔᴥᵔ)ʕ•ᴥ•ʔ Cover art photo provided by rawpixel on Unsplash: https://unsplash.com/@rawpixel
AI researchers often complain about the poor coverage of their work in the news media. But why is this happening, and how can it be fixed? In this episode, I speak with Shakeel Hashim about the resource constraints facing AI journalism, the disconnect between journalists' and AI researchers' views on transformative AI, and efforts to improve the st…
In this festive Christmas special of AXREM Insights, Melanie Johnson and Sally Edgington are joined by AXREM Chair Jeevan Gunaratnam and Vice-Chair Huw Shurmer to reflect on the highlights of 2024. The episode blends heartwarming personal stories with professional achievements, capturing the spirit of the season. From Jeevan’s volunteer work at Cris…
Lots of people in the AI safety space worry about models being able to make deliberate, multi-step plans. But can we already see this in existing neural nets? In this episode, I talk with Erik Jenner about his work looking at internal look-ahead within chess-playing neural networks. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.c…
39 - Evan Hubinger on Model Organisms of Misalignment
1:45:47
The 'model organisms of misalignment' line of research creates AI models that exhibit various types of misalignment, and studies them to try to understand how the misalignment occurs and whether it can be somehow removed. In this episode, Evan Hubinger talks about two papers he's worked on at Anthropic under this agenda: "Sleeper Agents" and "Sycop…
In this BMUS ASM podcast special, hosts Melanie Johnson and Sally Edgington are joined by Emma Tucker (COO of BMUS), Peter Cantin (incoming BMUS President), and Shaunna Smith (Chair of the Education Committee) to discuss the upcoming BMUS Annual Scientific Meeting. The event, scheduled for December in Coventry, boasts a diverse programme including …
You may have heard of singular learning theory, and its "local learning coefficient", or LLC - but have you heard of the refined LLC? In this episode, I chat with Jesse Hoogland about his work on SLT, and using the refined LLC to find a new circuit in language models. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast T…
Road lines, street lights, and licence plates are examples of infrastructure used to ensure that roads operate smoothly. In this episode, Alan Chan talks about using similar interventions to help avoid bad outcomes from the deployment of AI agents. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast The transcript: https…
Do language models understand the causal structure of the world, or do they merely note correlations? And what happens when you build a big AI society out of them? In this brief episode, recorded at the Bay Area Alignment Workshop, I chat with Zhijing Jin about her research on these questions. Patreon: https://www.patreon.com/axrpodcast Ko-fi: http…
In this podcast episode, AXREM's Melanie Johnson and Sally Edgington interview Andrew New, CEO, and Richard Evans, Executive Commercial Director of NHS Supply Chain, focusing on collaborative efforts to improve NHS operations and supply chain efficiency. The discussion covers recent organisational changes aimed at streamlining procurement processes…
In this episode, Melanie Johnson and Sally Edgington host Jacqui Rock, the Chief Commercial Officer of NHS England, who shares insights from her career in finance and government, her motivation for joining the NHS, and her commitment to driving change. Jacqui discusses the NHS reform plan, focusing on shifts from hospital to community care, analog …
In this episode of AXREM Insights, hosts Melanie Johnson and Sally Edgington interview Liberal Democrat MP Tim Farron, discussing his work advocating for improved healthcare access in rural areas, particularly in his Westmorland and Lonsdale constituency. Tim emphasises the challenges his constituents face with access to cancer treatment, such as l…
In this episode of AXREM Insights, host Melanie Johnson and co-host Sally Edgington sit down with David Lawson, Director of Medical Technology at the Department of Health and Social Care, to explore the future of MedTech in the UK. David shares insights into his career journey, from starting as an admin assistant to becoming one of the youngest hea…
37 - Jaime Sevilla on AI Forecasting
1:44:25
Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when the above trends might hit an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, an…
The latest episode of the AXREM Insights podcast dives into the upcoming International Imaging Congress (IIC) 2024, where hosts Melanie Johnson and Sally Edgington interview Dr. Ram Senasi, Chair of the IIC Advisory Board and Consultant Paediatric Radiologist. Dr. Senasi shares insights into his passion for education, the role of technology in healthcar…
36 - Adam Shai and Paul Riechers on Computational Mechanics
1:48:27
Sometimes, people talk about transformers as having "world models" as a result of being trained to predict text data on the internet. But what does this even mean? In this episode, I talk with Adam Shai and Paul Riechers about their work applying computational mechanics, a sub-field of physics studying how to predict random processes, to neural net…
Patreon: https://www.patreon.com/axrpodcast MATS: https://www.matsprogram.org Note: I'm employed by MATS, but they're not paying me to make this video.
In this episode of AXREM Insights, host Melanie Johnson and co-host Sally Edgington sit down with Jemimah Eve, Director of Policy and Impact at the Institute of Physics and Engineering in Medicine (IPEM). Jemimah discusses her career journey, starting from a background in chemistry and surface science to her current leadership role at IPEM. She exp…
S3E4 | Viral Encounters and Workforce Solutions: Richard Evans on Leading the Society of Radiographers
16:08
In this episode of AXREM Insights, Melanie Johnson and Sally Edgington sit down with Richard Evans, CEO of the Society of Radiographers, for a fascinating chat about his career journey—from hospital porter to radiography expert. Richard shares how a twist of fate led him into the world of radiography and how his passion for the profession has only …
The latest episode of the AXREM Insights podcast features a lively discussion with representatives of several healthcare trade associations: David Stockdale from the British Healthcare Trades Association (BHTA), Nikki from BAREMA, and Helen from BIVDA. The conversation, hosted by Melanie Johnson and Sally Edgington, focuses on the theme of partnerships in the hea…
In this episode of our Partnerships Podcast, Melanie and Sally sit down with Catherine Kirkpatrick, a seasoned professional in the ultrasound community. Catherine shares her journey and insights into the ultrasound field, detailing her multifaceted roles, including her work as a Consultant Sonographer at United Lincolnshire Hospitals and Developmen…
35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization
2:17:24
How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his re…
In this insightful episode, Melanie Johnson and Sally Edgington welcome Dr. Katherine Halliday, President of the Royal College of Radiologists (RCR). Dr. Halliday shares her inspiring journey from paediatric radiology to becoming a leader in the field. She delves into the challenges and opportunities within the radiology sector, focusing on workfor…
34 - AI Evaluations with Beth Barnes
2:14:02
How can we figure out if AIs are capable enough to pose a threat to humans? When should we make a big effort to mitigate risks of catastrophic AI misbehaviour? In this episode, I chat with Beth Barnes, founder of and head of research at METR, about these questions and more. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast The transcript:…
Welcome to AXREM Insights, where hosts Melanie Johnson and Sally Edgington explore advancements in healthcare through MedTech and innovation. In this special episode on the AXREM Patient Monitoring Manifesto, they interview Yasmeen Mahmoud, a business leader at Philips UKI. Yasmeen, who joined Philips through a graduate scheme, has extensive experi…
In this pre-election special episode of the podcast, Melanie Johnson and Sally Edgington discuss politics with Ila Dobson, AXREM's Government Affairs Director, and Daniel Laing, Senior Account Director at Tendo Consulting. Ila shares her extensive background in healthcare and long-term involvement with AXREM, while Daniel discusses his career in pu…
In this episode of AXREM Insights, hosts Melanie Johnson and Sally Edgington interview several key attendees live from the UKIO event. Dawn Phillips-Jarrett, with 20 years of experience in radiology, shares her journey from studying chemistry and working in energy and water conservation to her current role in healthcare imaging. She emphasises the i…
33 - RLHF Problems with Scott Emmons
1:41:24
Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott Emmons about his work categorizing the pro…
In the premiere of Season 2 of AXREM Insights, co-hosts Melanie Johnson and Sally Edgington dive into the world of diagnostic imaging and oncology with a special guest, Dr. Emma Hyde. As the President of UKIO and an Associate Professor of Diagnostic Imaging at the University of Derby, Dr. Hyde shares her journey from a student radiographer to a lea…
In this episode of AXREM Insights, Sarah Cowan and David Britton share their professional journeys and personal interests, illustrating the diverse paths within the medical technology industry. Sarah discusses her transition from marketing for a leisure centre to Siemens Medical, a company she has been with for 17 years, highlighting her role with AXREM …
32 - Understanding Agency with Jan Kulveit
2:22:29
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast The transcript: axrp.net/episode/2024/05/30/epi…
In this engaging episode of AXREM Insights, hosts Melanie Johnson and Sally Edgington sit down with Huw Shurmer, the strategic and government relationships manager for Fujifilm UK and current vice chair of AXREM. The conversation unfolds as Huw shares his fascinating career trajectory, starting from his academic background in theology to his pivota…
In this episode of "Meet the Team," Jeevan Gunaratnam, Head of Government Affairs at Philips and current AXREM Chair, shares his journey in the medical technology field. Inspired by his uncle, a radiographer, Jeevan's early curiosity was piqued by medical devices, leading him from using a pacemaker as a paperweight to pursuing a career in engineeri…
In the inaugural episode of the AXREM Insights Podcast, host Melanie Johnson interviews her co-host and AXREM CEO, Sally Edgington. Sally shares her remarkable journey from a diverse career background to her current role, driven by a lifelong interest in healthcare stemming from personal experiences as a patient. Despite facing challenges and setba…
31 - Singular Learning Theory with Daniel Murfet
2:32:07
What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells …
30 - AI Security with Jeffrey Ladish
2:15:44
Top labs use various forms of "safety training" on models before their release to make sure they don't do nasty stuff - but how robust is that? How can we ensure that the weights of powerful AIs don't get leaked or stolen? And what can AI even do these days? In this episode, I speak with Jeffrey Ladish about security and AI. Patreon: patreon.com/ax…
Welcome to AXREM Insights, where healthcare meets innovation! Join hosts Melanie Johnson and Sally Edgington as they dive into the world of MedTech with industry leaders and experts. From diagnostic imaging to patient monitoring, we're bringing you first-hand insights and intel straight from the heart of the industry. Get ready for Meet the Team, w…
29 - Science of Deep Learning with Vikrant Varma
2:13:46
In 2022, it was announced that a fairly simple method can be used to extract the true beliefs of a language model on any given topic, without having to actually understand the topic at hand. Earlier, in 2021, it was announced that neural networks sometimes 'grok': that is, when training them on certain tasks, they initially memorize their training …
28 - Suing Labs for AI Risk with Gabriel Weil
1:57:30
How should the law govern AI? Those concerned about existential risks often push either for bans or for regulations meant to ensure that AI is developed safely - but another approach is possible. In this episode, Gabriel Weil talks about his proposal to modify tort law to enable people to sue AI companies for disasters that are "nearly catastrophic…
27 - AI Control with Buck Shlegeris and Ryan Greenblatt
2:56:05
A lot of work to prevent AI existential risk takes the form of ensuring that AIs don't want to cause harm or take over the world---or in other words, ensuring that they're aligned. In this episode, I talk with Buck Shlegeris and Ryan Greenblatt about a different approach, called "AI control": ensuring that AI systems couldn't take over the world, e…
26 - AI Governance with Elizabeth Seger
1:57:13
The events of this year have highlighted important questions about the governance of artificial intelligence. For instance, what does it mean to democratize AI? And how should we balance benefits and dangers of open-sourcing powerful AI systems such as large language models? In this episode, I speak with Elizabeth Seger about her research on these …
25 - Cooperative AI with Caspar Oesterheld
3:02:09
Imagine a world where there are many powerful AI systems, working at cross purposes. You could suppose that different governments use AIs to manage their militaries, or simply that many powerful AIs have their own wills. At any rate, it seems valuable for them to be able to cooperatively work together and minimize pointless conflict. How do we ensu…
24 - Superalignment with Jan Leike
2:08:29
Recently, OpenAI made a splash by announcing a new "Superalignment" team. Led by Jan Leike and Ilya Sutskever, the team would consist of top researchers, attempting to solve alignment for superintelligent AIs in four years by figuring out how to build a trustworthy human-level AI alignment researcher, and then using it to solve the rest of the pro…
23 - Mechanistic Anomaly Detection with Mark Xu
2:05:52
Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that that will alert us of potential treacherous tur…
Very brief survey: bit.ly/axrpsurvey2023 Store is closing in a week! Link: store.axrp.net/ Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast
22 - Shard Theory with Quintin Pope
3:28:21
What can we learn about advanced deep learning systems by understanding how humans learn and form values over their lifetimes? Will superhuman AI look like ruthless coherent utility optimization, or more like a mishmash of contextually activated desires? This episode's guest, Quintin Pope, has been thinking about these questions as a leading resear…
21 - Interpretability for Engineers with Stephen Casper
1:56:02
Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benc…
20 - 'Reform' AI Alignment with Scott Aaronson
2:27:35
How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity the…
Store: https://store.axrp.net/ Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast Video: https://www.youtube.com/watch?v=kmPFjpEibu0
19 - Mechanistic Interpretability with Neel Nanda
3:52:47
How good are we at understanding the internal computation of advanced machine learning models, and do we have a hope at getting better? In this episode, Neel Nanda talks about the sub-field of mechanistic interpretability research, as well as papers he's contributed to that explore the basics of transformer circuits, induction heads, and grokking. …
I have a new podcast, where I interview whoever I want about whatever I want. It's called "The Filan Cabinet", and you can find it wherever you listen to podcasts. The first three episodes are about pandemic preparedness, God, and cryptocurrency. For more details, check out the podcast website (thefilancabinet.com), or search "The Filan Cabinet" in…