Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn
Manage episode 363717659 series 3461851
In this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager in Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services, shares her background and the motivations that led her to pursue a career in Responsible AI.
Diya discusses her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible AI offers a unique opportunity to merge that passion with what has always been her core focus: technology. She defines Responsible AI as an operating approach focused on minimizing unintended impact and maximizing benefits. The group also spends time discussing Generative AI and its potential to perpetuate biases and raise ethical concerns.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform