Challenges and Solutions in Managing Code Security for ML Developers - ML 175
Manage episode 451476040 series 2977446
Content provided by Charles M Wood. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Charles M Wood or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here https://uk.player.fm/legal.
Today, join Michael and Ben as they delve into crucial topics surrounding code security and the safe execution of machine learning models. This episode focuses on preventing accidental key leaks in notebooks, creating secure environments for code execution, and the pros and cons of various isolation methods like VMs, containers, and micro VMs.
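To make the key-leak risk concrete, here is a minimal, hypothetical sketch (not taken from the episode) of a common countermeasure: read the credential from the environment or a secrets manager instead of hard-coding it in a notebook cell. The variable name OPENAI_API_KEY is purely illustrative.

import os

# Hypothetical pattern: pull the credential from the environment rather than
# pasting it into a notebook cell, so the key is never saved in the .ipynb
# file or committed to version control.
api_key = os.environ.get("OPENAI_API_KEY")  # illustrative variable name

if api_key is None:
    raise RuntimeError(
        "Set OPENAI_API_KEY in the environment before running this notebook."
    )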
They explore the challenges of evaluating and executing generated code, highlighting the risks of running arbitrary Python code and the importance of secure evaluation processes. Ben shares his experiences and best practices, emphasizing human evaluation and secure virtual environments to mitigate risks.
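As a rough illustration of the kind of precaution being discussed (a sketch of a common pattern, not the hosts' own setup), the snippet below runs generated Python outside the current process with a timeout, an empty environment, and Python's isolated mode. For genuinely untrusted code this is only a first layer of defence; stronger isolation such as the containers or micro VMs mentioned above would still apply.

import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
    """Execute generated Python in a separate process.

    The child process gets a timeout, an empty environment (so no API keys
    leak in), and Python's isolated mode (-I), which ignores PYTHONPATH and
    user site-packages.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script_path = f.name

    return subprocess.run(
        [sys.executable, "-I", script_path],
        capture_output=True,
        text=True,
        timeout=timeout_s,
        env={},  # start from a clean environment
    )

# Example: run a trivial generated snippet and inspect its output.
result = run_generated_code("print(2 + 2)")
print(result.stdout)  # -> 4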
The episode also includes an in-depth discussion on developing new projects with a focus on proper engineering procedures, and the sophisticated efforts behind Databricks' Genie service and MLflow's RunLLM. Finally, Ben and Michael explore the potential of fine-tuning machine learning models, creating high-quality datasets, and the complexities of managing code execution with AI.
Tune in for all this and more as we navigate the secure pathways to responsible and effective machine learning development.
Socials
Become a supporter of this podcast: https://www.spreaker.com/podcast/adventures-in-machine-learning--6102041/support.
208 episodes