
Content provided by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://uk.player.fm/legal.

Preparing AI for the unexpected: Lessons from recent IT incidents

34:13
 

Manage episode 435145212 series 3475282

Can your AI models survive a big disaster? While the recent major IT incident involving CrowdStrike wasn't AI related, its magnitude and the reaction to it reminded us that no system, no matter how proven, is immune to failure. AI modeling systems are no different. Neglecting the best practices of building models can lead to unrecoverable failures. Discover how the three-tiered framework of robustness, resiliency, and antifragility can guide your approach to creating AI infrastructures that not only perform reliably under stress but also fail gracefully when the unexpected happens.
Show Notes

  • Model robustness (00:10:03)
    • Robustness is a very important but often overlooked component of building modeling systems. We suspect that part of the problem is due to:
      • The Kaggle-driven upbringing of data scientists
      • Assumed generalizability of modeling systems: models are optimized to perform well on their training data but do not generalize well enough to perform on unseen data.
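A minimal sketch of what a robustness check can look like in practice: compare a model's accuracy on clean inputs against inputs perturbed with increasing Gaussian noise. The model and data below are illustrative stand-ins (a nearest-centroid classifier on synthetic clusters), not anything from the episode.

```python
import random
import math

random.seed(0)

# Two synthetic 2-D clusters, labeled 0 and 1
data = [([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(50)]
data += [([random.gauss(4, 1), random.gauss(4, 1)], 1) for _ in range(50)]

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

c0 = centroid([x for x, y in data if y == 0])
c1 = centroid([x for x, y in data if y == 1])

def predict(x):
    # Assign the point to the nearest class centroid
    return 0 if math.dist(x, c0) < math.dist(x, c1) else 1

def accuracy(noise_scale=0.0):
    # Evaluate on inputs perturbed with Gaussian noise of the given scale
    correct = 0
    for x, y in data:
        noisy = [v + random.gauss(0, noise_scale) for v in x]
        correct += predict(noisy) == y
    return correct / len(data)

# A robust model's accuracy should degrade gracefully, not collapse,
# as the perturbation grows.
for scale in (0.0, 0.5, 1.0, 2.0):
    print(f"noise={scale}: accuracy={accuracy(scale):.2f}")
```

The same pattern extends to realistic stressors: missing features, distribution shift, or adversarial inputs, with the degradation curve telling you how far the model's generalization actually reaches.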

  • Model resilience (00:16:10)
    • Resiliency is the ability to absorb adverse stimuli without destruction and return to its pre-event state.
    • In practice, robustness and resiliency testing and planning are often the easiest components to leave out. This is where risks and threats are exposed.
    • See also, Episode 8. Model validation: Robustness and resilience
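One common way resiliency shows up at serving time is graceful degradation: absorbing a failure and returning to the pre-event state once the primary path recovers. The sketch below is purely illustrative; `PrimaryModel`, `resilient_predict`, and the constant fallback are hypothetical names, not from the episode.

```python
class PrimaryModel:
    """Stand-in model that can be toggled into a failing state."""
    def __init__(self):
        self.healthy = True

    def predict(self, x):
        if not self.healthy:
            raise RuntimeError("model backend unavailable")
        return x * 2  # placeholder inference

BASELINE = 0  # a safe constant fallback, e.g. the training-set mode

def resilient_predict(model, x):
    try:
        return model.predict(x), "primary"
    except RuntimeError:
        # Absorb the failure and degrade gracefully instead of crashing
        return BASELINE, "fallback"

model = PrimaryModel()
print(resilient_predict(model, 3))  # primary path
model.healthy = False
print(resilient_predict(model, 3))  # fallback path during the incident
model.healthy = True
print(resilient_predict(model, 3))  # pre-event state restored
```

The key property is that the outage is bounded: the system never crashes, and no manual intervention is needed to return to normal operation once the primary model is healthy again.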

  • Models and antifragility (00:25:04)
    • Unlike resiliency, which is the ability to absorb damaging inputs without breaking, antifragility is the ability of a system to improve from challenging stimuli (e.g., the human body).
    • A key question we need to ask ourselves: if we are not actively building our AI systems to be antifragile, why are we using AI systems at all?
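To make the distinction from resiliency concrete, here is a minimal sketch of an antifragile feedback loop: misclassified inputs are not merely tolerated but fed back as training signal, so the system is measurably better after the stressor than before it. The classifier and data are hypothetical illustrations, assumed for this example only.

```python
class OnlineMeanClassifier:
    """Toy 1-D classifier that assigns inputs to the nearest class mean."""
    def __init__(self):
        self.means = {0: 0.0, 1: 10.0}   # crude initial class means
        self.counts = {0: 1, 1: 1}

    def predict(self, x):
        return min(self.means, key=lambda c: abs(x - self.means[c]))

    def learn(self, x, label):
        # Fold the hard example into the running mean for its true class
        self.counts[label] += 1
        self.means[label] += (x - self.means[label]) / self.counts[label]

clf = OnlineMeanClassifier()
# Class-0 points near the decision boundary that the initial model gets wrong
hard_cases = [(6.0, 0), (6.4, 0), (6.2, 0)]

before = sum(clf.predict(x) == y for x, y in hard_cases)
for x, y in hard_cases:
    if clf.predict(x) != y:
        clf.learn(x, y)   # the stressor becomes training signal
after = sum(clf.predict(x) == y for x, y in hard_cases)
print(before, after)  # → 0 3: the system improved from the challenging inputs
```

A resilient system would merely survive these hard cases; the antifragile version exits the episode with an updated decision boundary it did not have going in.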

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Preparing AI for the unexpected: Lessons from recent IT incidents (00:00:00)

2. Intro: Technology, incidents, and why? (00:00:03)

3. The "7P's" (00:09:05)

4. Model robustness (00:10:03)

5. Model resilience (00:16:10)

6. Models and antifragility (00:25:04)


