Localizing and Editing Knowledge in LLMs with Peter Hase - #679
Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe a model's weight matrices, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.
The complete show notes for this episode can be found at twimlai.com/go/679.
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)