EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
Guest:
Dr. Gary McGraw, founder of the Berryville Institute of Machine Learning
Topics:
- Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?
- If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
- How would you threat model a system with ML in it or a new ML system you are building?
- What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
- What are the key differences between securing the AI you built and the AI you buy or subscribe to?
- Which security tools and frameworks will solve all of these problems for us?
Resources:
- EP135 AI and Security: The Good, the Bad, and the Magical
- “An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning” paper
- “What to think about when you’re thinking about securing AI”
- “Microsoft AI researchers accidentally leak 38TB of company data”
- Introducing Google’s Secure AI Framework