The Threat of AI Regulation with Brian Chau
Manage episode 407148034 series 2853093
Brian Chau writes and hosts a podcast at the From the New World Substack, and recently established a new think tank, the Alliance for the Future.
He joins the podcast to discuss why he’s not worried about the alignment problem, where he disagrees with “doomers,” the accomplishments of ChatGPT versus DALL-E, the danger that regulation could bring AI progress to a halt the way it did nuclear power, and more. Drawing on his background in computer science, Brian takes issue with many writers on this topic, arguing that they think in flawed analogies and know little about the underlying technology. The conversation touches on a previous CSPI discussion with Leopold Aschenbrenner, and the value of continuing to work on alignment.
Brian’s view is that AI doomers are making people needlessly pessimistic. He believes this technology has the potential to do great things for humanity, particularly in areas like software development and biotech. But the post-World War II era has seen many examples of government hindering progress, and AFF is dedicated to stopping that from happening with artificial intelligence.
Listen to the conversation here, or watch the video here.
Links
Brian on diminishing returns to machine learning, and discussing AI with Marc Andreessen
Vaswani et al. on transformers
Limits of current machine learning techniques
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.cspicenter.com