Soroush Pour
 
When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like & how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species. Hosted by Soroush Pour.
 
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on better unders…
 
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI: when certain capabilities will arise, what AI will look like, and how it will all go for humanity. We talk to Katja about: * How AI Impacts' latest rigorous survey of leading AI researchers shows …
 
We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there; he has 145,000 subscribers and his top video has ~600,000 views. He goes much deeper than most educational resources on alignment, covering important technical topics like the orthogo…
 
We speak with Thomas Larsen, Director for Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. It's a great way to quickly get a broad picture of the field. In 2022, Thomas spent ~75 hours putting together an overview of what everyone i…
 
We speak with Ryan Kidd, Co-Director of the ML Alignment & Theory Scholars (MATS) program, previously known as "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment. Prior to MATS, Ryan comp…
 
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. Their work spans adversarial robustness, interpretability, preference learning, & more. We t…
 
We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelor's (Physical Natural Sciences) and Master's (Physics) at the University of Cambridge and worked as an ML E…
 
In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI. Prof Richard Dazeley is the Dep…
 
In this episode, we have back on the show Hunter Jay, CEO of Ripe Robotics and our co-host on Ep 1. We synthesise everything we've heard on AGI timelines from experts in Eps 1-5, take in more data points, and use this to give our own forecasts for AGI, ASI (i.e. superintelligence), and an "intelligence explosion" (i.e. the singularity). Importantly, we have diff…
 
In this episode, we have back on our show Alex Browne, ML Engineer, whom we heard on Ep 2. He got in contact after watching recent developments in the 4 months since Ep 2, which have accelerated his timelines for AGI. Hear why, along with his latest prediction. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp Link…
 
In this episode, we speak with forecasting researcher & data scientist at Amazon AWS, Ryan Kupyn, about his timelines for the arrival of AGI. Ryan was recently ranked the #1 forecaster in Astral Codex Ten's 2022 Prediction Contest, beating out 500+ other entrants and proving himself to be a world-class forecaster. He has also done work in ML & w…
 
In this episode, we speak with Rain.AI CTO Jack Kendall about his timelines for the arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ Show links: Jack Kendall Bio: Jack i…
 
In this episode, we speak with ML Engineer Alex Browne about his forecasted timelines for the potential arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links =…
 
We speak with AGI alignment researcher Logan Riggs Smith about his timelines for AGI. He also speaks to how we might get there and some of the implications. Hosted by Hunter Jay and Soroush Pour. Show links: Further writings from Logan Riggs Smith; Cotra report on AGI timelines: Original report (very long); Scott Alexander analysis of this report…