Red Teaming o1 Part 2/2 – Detecting Deception with Marius Hobbhahn of Apollo Research

1:01:51
 
Content provided by Turpentine, Erik Torenberg, and Nathan Labenz. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Turpentine, Erik Torenberg, and Nathan Labenz or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://uk.player.fm/legal.

In this Emergency Pod of The Cognitive Revolution, Nathan provides crucial insights into OpenAI's new o1 and o1-mini reasoning models. Featuring exclusive interviews with members of the o1 red team from Apollo Research and Haize Labs, we explore the models' capabilities, safety profile, and OpenAI's pre-release testing approach. Dive into the implications of these advanced AI systems, including their potential to match or exceed expert performance in many areas. Join us for an urgent and informative discussion of the latest developments in AI technology and their impact on the future.

o1 System Card

Endless Jailbreaks with Bijection Learning: a Powerful, Scale-Agnostic Attack Method

Apollo Research

Apollo Careers Page

Papers mentioned:

Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?

Exploring Scaling Trends in LLM Robustness

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/

SPONSORS:

Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

Brave: The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR

Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off at https://www.omneky.com/

Squad: Access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist.

RECOMMENDED PODCAST:

This Won't Last.

Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel. They unpack their hottest takes on the future of tech, business, venture, investing, and politics.

Apple Podcasts: https://podcasts.apple.com/us/podcast/id1765665937

Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz

YouTube: https://www.youtube.com/@ThisWontLastpodcast

CHAPTERS:

(00:00:00) About the Show

(00:00:22) About the Episode

(00:05:03) Introduction and Apollo Research Updates

(00:06:40) Focus on Deception in AI

(00:11:08) OpenAI's o1 Model and Testing

(00:15:54) Evaluating AI Models for Scheming (Part 1)

(00:19:32) Sponsors: Oracle | Brave

(00:21:36) Evaluating AI Models for Scheming (Part 2)

(00:25:55) Specific Benchmarks and Tasks (Part 1)

(00:35:03) Sponsors: Omneky | Squad

(00:36:29) Specific Benchmarks and Tasks (Part 2)

(00:37:21) Model Capabilities and Potential Risks

(00:44:11) Ethical Considerations and Future Concerns

(00:50:31) Competing Trends in AI Development

(00:53:30) System Card Quotes and Implications

(00:58:36) Outro


