Content provided by Ryan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ryan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.

Episode 11: AI Risk with Roman Yampolskiy

1:22:21
Archived series ("Inactive feed" status)

When? This feed was archived on March 18, 2022 02:55 (2y ago). Last successful fetch was on November 10, 2021 18:08 (2+ y ago)

Why? Inactive feed status. Our servers were unable to access the podcast's feed for an extended period of time.

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check that the publisher's feed link below is valid, then contact support to request that the feed be restored, or reach out with any other concerns.

For this episode I was delighted to be joined by Dr. Roman Yampolskiy, a professor of Computer Engineering and Computer Science at the University of Louisville. Few scholars have devoted as much time to seriously exploring the myriad threats potentially inherent in the development of highly intelligent artificial machinery as Dr. Yampolskiy, who established the field of AI Safety Engineering, also known simply as AI Safety. After a preliminary inquiry into his background, I asked Roman Yampolskiy to explain deep neural networks, or artificial neural networks as they are also known. One of the most important topics in AI research is what is referred to as the Alignment Problem, which my guest helped to clarify. We then moved on to his work on two other vitally significant issues in AI, namely understandability and explainability. I then asked him to provide a brief history of AI Safety, which, as he revealed, built on Yudkowsky’s ideas of Friendly AI.

We discussed whether there is increased interest among researchers in the risks attendant to AI, the perverse incentive among those in the industry to downplay the risks of their work, and how to ensure greater transparency, which, as you will hear, is worryingly far more difficult than many might assume, given the inherently opaque way deep neural networks perform their operations. I homed in on the issue of massive job losses that increasing AI capabilities could potentially engender, as well as my perception that many who discuss this topic downplay the socioeconomic context within which automation occurs.

After I asked my guest to define artificial general intelligence, or AGI, and superintelligence, we spent considerable time discussing the possibility of machines achieving human-level mental capabilities. This part of the interview was the most contentious and touched on neuroscience, the nature of consciousness, mind-body dualism, the dubious analogy between brains and computers that has been all too pervasive in the AI field since its inception, as well as a fascinating paper by Yampolskiy proposing to detect qualia in artificial systems that perceive the same visual illusions as humans.

In the final stretch of the interview, we discussed the impressive language-based system GPT-3, whether AlphaZero is the first truly intelligent artificial system, as Garry Kasparov claims, the prospects of quantum computing to potentially achieve AGI, and, lastly, what he considers to be the greatest AI risk factor, which according to my guest is “purposeful malevolent design.” While this far-ranging interview, with many concepts raised and names dropped, sometimes veered into weeds some might deem overly specialised and/or technical, I nevertheless think there is plenty to glean about a range of fascinating, not to mention pertinent, topics for those willing to stay the course.
Roman Yampolskiy’s page at the University of Louisville: http://cecs.louisville.edu/ry/

Yampolskiy’s papers: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en

AI Risk Skepticism paper: https://arxiv.org/abs/2105.02704

Roman’s book, Artificial Superintelligence: A Futuristic Approach: https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1482234432

A book edited by Yampolskiy, ‘Artificial Intelligence Safety and Security,’ featuring some major figures in the field: https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824

‘The Myth of AI’ by Jaron Lanier: https://www.edge.org/conversation/jaron_lanier-the-myth-of-ai

‘The Empty Brain’ by Robert Epstein: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

‘The Myth of a Superhuman AI’ by Kevin Kelly: https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/

Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1

Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious

16 episodes
