
Sasha Luccioni: Connecting the Dots Between AI's Environmental and Social Impacts

1:03:07
 

In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.

Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and serving as a mentor to under-represented minorities within the AI community.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:43) Sasha’s background

* (01:52) How Sasha became interested in sociotechnical work

* (03:08) Larger models and theory of change for AI/climate work

* (07:18) Quantifying emissions for ML systems

* (09:40) Aggregate inference vs. training costs

* (10:22) Hardware and data center locations

* (15:10) More efficient hardware vs. bigger models — Jevons paradox

* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports

* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs

* (28:22) General vs. task-specific models

* (31:20) Architectures and efficiency

* (33:45) Sequence-to-sequence architectures vs. decoder-only

* (36:35) Hardware efficiency/utilization

* (37:52) Estimating the carbon footprint of BLOOM and lifecycle assessment

* (40:50) Stable Bias

* (46:45) Understanding model biases and representations

* (52:07) Future work

* (53:45) Metaethical perspectives on benchmarking for AI ethics

* (54:30) “Moral benchmarks”

* (56:50) Reflecting on “ethicality” of systems

* (59:00) Transparency and ethics

* (1:00:05) Advice for picking research directions

* (1:02:58) Outro

Links:

* Sasha’s homepage and Twitter

* Papers read/discussed

* Climate Change / Carbon Emissions of AI Models

* Quantifying the Carbon Emissions of Machine Learning

* Power Hungry Processing: Watts Driving the Cost of AI Deployment?

* Tackling Climate Change with Machine Learning

* CodeCarbon (see the usage sketch after this list)

* Responsible AI

* Stable Bias: Analyzing Societal Representations in Diffusion Models

* Metaethical Perspectives on ‘Benchmarking’ AI Ethics

* Measuring Data

* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice
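
CodeCarbon, linked above, is an open-source Python package for estimating the energy use and carbon emissions of running code. Below is a minimal usage sketch: the `EmissionsTracker` start/stop pattern is the library's documented entry point, but the `project_name` label and the placeholder workload are illustrative assumptions, and exact defaults may vary by version.

```python
# Minimal sketch: estimating the emissions of a training run with CodeCarbon.
# Install with `pip install codecarbon`.
from codecarbon import EmissionsTracker

# project_name is an illustrative label, not a required value.
tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    # ... run the training or inference workload here ...
    pass
finally:
    # stop() returns the estimated emissions in kg of CO2-equivalent.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```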


Get full access to The Gradient at thegradientpub.substack.com/subscribe

