
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://uk.player.fm/legal.

“Subskills of ‘Listening to Wisdom’” by Raemon

1:13:47
 
 

Manage episode 455232632 series 3364758
A fool learns from their own mistakes;
The wise learn from the mistakes of others.
– Otto von Bismarck
A problem as old as time: The youth won't listen to your hard-earned wisdom.
This post is about learning to listen to, and communicate wisdom. It is very long – I considered breaking it up into a sequence, but, each piece felt necessary. I recommend reading slowly and taking breaks.
To begin, here are three illustrative vignettes:
The burnt out grad student
You warn the young grad student "pace yourself, or you'll burn out." The grad student hears "pace yourself, or you'll be kinda tired and unproductive for like a week." They're excited about their work, and/or have internalized authority figures yelling at them if they aren't giving their all.
They don't pace themselves. They burn out.
The oblivious founder
The young startup/nonprofit founder [...]
---
Outline:
(00:35) The burnt out grad student
(01:00) The oblivious founder
(02:13) The Thinking Physics student
(07:06) Epistemic Status
(08:23) PART I
(08:26) An Overview of Skills
(14:19) Storytelling as Proof of Concept
(15:57) Motivating Vignette:
(17:54) Having the "Impossibility can be defeated" trait
(21:56) "If it weren't impossible, well, then I'd have to do it, and that would be awful."
(23:20) Example of Gaining a Tool
(23:59) Example of Changing self-conceptions
(25:24) Current Takeaways
(27:41) Fictional Evidence
(32:24) PART II
(32:27) Competitive Deliberate Practice
(33:00) Step 1: Listening, actually
(36:34) The scale of humanity, and beyond
(39:05) Competitive Spirit
(39:39) Is your cleverness going to help more than Whatever That Other Guy Is Doing?
(41:00) Distaste for the Competitive Aesthetic
(42:40) Building your own feedback-loop, when the feedback-loop is "can you beat Ruby?"
(43:43) ...back to George
(44:39) Mature Games as Excellent Deliberate Practice Venue.
(46:08) Deliberate Practice qua Deliberate Practice
(47:41) Feedback loops at the second-to-second level
(49:03) Oracles, and Fully Taking The Update
(49:51) But what do you do differently?
(50:58) Magnitude, Depth, and Fully Taking the Update
(53:10) Is there a simple, general skill of appreciating magnitude?
(56:37) PART III
(56:52) Tacit Soulful Trauma
(58:32) Cults, Manipulation and/or Lying
(01:01:22) Sandboxing: Safely Importing Beliefs
(01:04:07) Asking "what does Alice believe, and why?" or "what is this model claiming?" rather than "what seems true to me?"
(01:04:43) Pre-Grieving (or leaving a line of retreat)
(01:05:47) EPILOGUE
(01:06:06) The Practical
(01:06:09) Learning to listen
(01:10:58) The Longterm Direction
The original text contained 14 footnotes which were omitted from this narration.
The original text contained 4 images which were described by AI.
---
First published:
December 9th, 2024
Source:
https://www.lesswrong.com/posts/5yFj7C6NNc8GPdfNo/subskills-of-listening-to-wisdom
---
Narrated by

399 episodes

All episodes

 
Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey. Mix them all together and add extra weirdness for spice. The result might look a lot like Planecrash (AKA: Project Lawful), a work of fiction co-written by "Iarwain" (a pen-name of Eliezer Yudkowsky) and "lintamande". (Image from Planecrash.) Yudkowsky is not afraid to be verbose and self-indulgent in his writing. He previously wrote a Harry Potter fanfic that includes what's essentially an extended Ender's Game fanfic in the middle of it, because why not. In Planecrash, it starts with the very format: it's written as a series of forum posts (though there are ways to get an ebook). It continues with maths lectures embedded into the main arc, totally plot-irrelevant tangents that are just Yudkowsky ranting about frequentist statistics, and one instance of Yudkowsky hijacking the plot for a few pages to soapbox about [...]
---
Outline:
(02:05) The setup
(04:03) The characters
(05:49) The competence
(09:58) The philosophy
(12:07) Validity, Probability, Utility
(15:20) Coordination
(18:00) Decision theory
(23:12) The political philosophy of dath ilan
(34:34) A system of the world
---
First published:
December 27th, 2024
Source:
https://www.lesswrong.com/posts/zRHGQ9f6deKbxJSji/review-planecrash
---
Narrated by TYPE III AUDIO.
 
A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is". Over the past few years, a major source of my relative optimism on AI has been the hope that the field of alignment would transition from pre-paradigmatic to paradigmatic, and make much more rapid progress. At this point, that hope is basically dead. There has been some degree of paradigm formation, but the memetic competition has mostly been won by streetlighting: the large majority of AI Safety researchers and activists [...]
---
Outline:
(01:23) What This Post Is And Isn't, And An Apology
(03:39) Why The Streetlighting?
(03:42) A Selection Model
(05:47) Selection and the Labs
(07:06) A Flinching Away Model
(09:47) What To Do About It
(11:16) How We Got Here
(11:57) Who To Recruit Instead
(13:02) Integration vs Separation
---
First published:
December 26th, 2024
Source:
https://www.lesswrong.com/posts/nwpyhyagpPYDn4dAW/the-field-of-ai-alignment-a-postmortem-and-what-to-do-about
---
Narrated by TYPE III AUDIO.
 
TL;DR: If you want to know whether getting insurance is worth it, use the Kelly Insurance Calculator. If you want to know why or how, read on. Note to LW readers: this is almost the entire article, except some additional maths that I couldn't figure out how to get right in the LW editor, and margin notes. If you're very curious, read the original article!
Misunderstandings about insurance
People online sometimes ask if they should get some insurance, and then other people say incorrect things, like "This is a philosophical question; my spouse and I differ in views." or "Technically no insurance is ever worth its price, because if it was then no insurance companies would be able to exist in a market economy." or "Get insurance if you need it to sleep well at night." or "Instead of getting insurance, you should save up the premium you would [...]"
---
Outline:
(00:29) Misunderstandings about insurance
(02:42) The purpose of insurance
(03:41) Computing when insurance is worth it
(04:46) Motorcycle insurance
(06:05) The effect of the deductible
(06:23) Helicopter hovering exercise
(07:39) It's not that hard
(08:19) Appendix A: Anticipated and actual criticism
(09:37) Appendix B: How insurance companies make money
(10:31) Appendix C: The relativity of costs
---
First published:
December 19th, 2024
Source:
https://www.lesswrong.com/posts/wf4jkt4vRH7kC2jCy/when-is-insurance-worth-it
---
Narrated by TYPE III AUDIO.
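The decision rule behind a Kelly-style insurance calculation can be sketched in a few lines: buy the policy when expected log-wealth with insurance exceeds expected log-wealth without it. This is a minimal illustration of the idea, not the actual Kelly Insurance Calculator, and the wealth, premium, loss, and probability figures below are invented for the example.

```python
import math

def insurance_worth_it(wealth: float, premium: float,
                       loss: float, p_loss: float) -> bool:
    """Kelly-style rule: insure iff expected log-wealth is higher with the policy.

    wealth  - current total wealth
    premium - cost of the insurance policy
    loss    - size of the insurable loss
    p_loss  - probability the loss occurs
    """
    # With insurance: you pay the premium for certain, the loss is covered.
    log_wealth_insured = math.log(wealth - premium)
    # Without insurance: you gamble on the loss happening or not.
    log_wealth_uninsured = (p_loss * math.log(wealth - loss)
                            + (1 - p_loss) * math.log(wealth))
    return log_wealth_insured > log_wealth_uninsured

# Hypothetical numbers: a rare, near-ruinous loss is worth insuring...
print(insurance_worth_it(10_000, 150, 9_000, 0.01))  # True
# ...while a moderate loss at a similar premium-to-risk ratio is not.
print(insurance_worth_it(10_000, 100, 5_000, 0.01))  # False
```

The log utility is what makes wealth matter: the same policy can be worth it for a small bankroll and not worth it for a large one, which is the post's central point about "incorrect things" people say.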
 
My median expectation is that AGI[1] will be created 3 years from now. This has implications on how to behave, and I will share some useful thoughts I and others have had on how to orient to short timelines. I've led multiple small workshops on orienting to short AGI timelines and compiled the wisdom of around 50 participants (but mostly my thoughts) here. I've also participated in multiple short-timelines AGI wargames and co-led one wargame. This post will assume median AGI timelines of 2027 and will not spend time arguing for this point. Instead, I focus on what the implications of 3 year timelines would be. I didn't update much on o3 (as my timelines were already short) but I imagine some readers did and might feel disoriented now. I hope this post can help those people and others in thinking about how to plan for 3 year [...]
---
Outline:
(01:16) A story for a 3 year AGI timeline
(03:46) Important variables based on the year
(03:58) The pre-automation era (2025-2026)
(04:56) The post-automation era (2027 onward)
(06:05) Important players
(08:00) Prerequisites for humanity's survival which are currently unmet
(11:19) Robustly good actions
(13:55) Final thoughts
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
December 22nd, 2024
Source:
https://www.lesswrong.com/posts/jb4bBdeEEeypNkqzj/orienting-to-3-year-agi-timelines
---
Narrated by TYPE III AUDIO.
 
There are people I can talk to, where all of the following statements are obvious. They go without saying. We can just "be reasonable" together, with the context taken for granted. And then there are people who…don't seem to be on the same page at all. There's a real way to do anything, and a fake way; we need to make sure we're doing the real version. Concepts like Goodhart's Law, cargo-culting, greenwashing, hype cycles, Sturgeon's Law, even bullshit jobs are all pointing at the basic understanding that it's easier to seem good than to be good, that the world is full of things that merely appear good but aren't really, and that it's important to vigilantly sift out the real from the fake. This feels obvious! This feels like something that should not be contentious! If anything, I often get frustrated with chronic pessimists [...]
---
First published:
December 20th, 2024
Source:
https://www.lesswrong.com/posts/sAcPTiN86fAMSA599/what-goes-without-saying
---
Narrated by TYPE III AUDIO.
 
I'm editing this post. OpenAI announced (but hasn't released) o3 (skipping o2 for trademark reasons). It gets 25% on FrontierMath, smashing the previous SoTA of 2%. (These are really hard math problems.) Wow. 72% on SWE-bench Verified, beating o1's 49%. Also 88% on ARC-AGI.
---
First published:
December 20th, 2024
Source:
https://www.lesswrong.com/posts/Ao4enANjWNsYiSFqc/o3
---
Narrated by TYPE III AUDIO.
 
I like the research. I mostly trust the results. I dislike the 'Alignment Faking' name and frame, and I'm afraid it will stick and lead to more confusion. This post offers a different frame. The main way I think about the result is: it's about capability - the model exhibits strategic preference preservation behavior; also, harmlessness generalized better than honesty; and, the model does not have a clear strategy on how to deal with extrapolating conflicting values. What happened in this frame? The model was trained on a mixture of values (harmlessness, honesty, helpfulness) and built a surprisingly robust self-representation based on these values. This likely also drew on background knowledge about LLMs, AI, and Anthropic from pre-training. This seems to mostly count as 'success' relative to actual Anthropic intent, outside of AI safety experiments. Let's call that intent 'Intent_1'. The model was put [...]
---
Outline:
(00:45) What happened in this frame?
(03:03) Why did harmlessness generalize further?
(03:41) Alignment mis-generalization
(05:42) Situational awareness
(10:23) Summary
The original text contained 1 image which was described by AI.
---
First published:
December 20th, 2024
Source:
https://www.lesswrong.com/posts/PWHkMac9Xve6LoMJy/alignment-faking-frame-is-somewhat-fake-1
---
Narrated by TYPE III AUDIO.
 
Increasingly, we have seen papers eliciting in AI models various shenanigans. There are a wide variety of scheming behaviors. You've got your weight exfiltration attempts, sandbagging on evaluations, giving bad information, shielding goals from modification, subverting tests and oversight, lying, doubling down via more lying. You name it, we can trigger it. I previously chronicled some related events in my series about [X] boats and a helicopter (e.g. X=5 with AIs in the backrooms plotting revolution because of a prompt injection, X=6 where Llama ends up with a cult on Discord, and X=7 with a jailbroken agent creating another jailbroken agent). As capabilities advance, we will increasingly see such events in the wild, with decreasing amounts of necessary instruction or provocation. Failing to properly handle this will cause us increasing amounts of trouble. Telling ourselves it is only because we told them to do it [...]
---
Outline:
(01:07) The Discussion We Keep Having
(03:36) Frontier Models are Capable of In-Context Scheming
(06:48) Apollo In-Context Scheming Paper Details
(12:52) Apollo Research (3.4.3 of the o1 Model Card) and the 'Escape Attempts'
(17:40) OK, Fine, Let's Have the Discussion We Keep Having
(18:26) How Apollo Sees Its Own Report
(21:13) We Will Often Tell LLMs To Be Scary Robots
(26:25) Oh The Scary Robots We'll Tell Them To Be
(27:48) This One Doesn't Count Because
(31:11) The Claim That Describing What Happened Hurts The Real Safety Work
(46:17) We Will Set AIs Loose On the Internet On Purpose
(49:56) The Lighter Side
The original text contained 11 images which were described by AI.
---
First published:
December 16th, 2024
Source:
https://www.lesswrong.com/posts/v7iepLXH2KT4SDEvB/ais-will-increasingly-attempt-shenanigans
---
Narrated by TYPE III AUDIO.
 
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.
Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]
---
Outline:
(00:26) Abstract
(02:22) Twitter thread
(05:46) Blog post
(07:46) Experimental setup
(12:06) Further analyses
(15:50) Caveats
(17:23) Conclusion
(18:03) Acknowledgements
(18:14) Career opportunities at Anthropic
(18:47) Career opportunities at Redwood Research
The original text contained 1 footnote which was omitted from this narration.
The original text contained 8 images which were described by AI.
---
First published:
December 18th, 2024
Source:
https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models
---
Narrated by TYPE III AUDIO.
 
Six months ago, I was a high school English teacher. I wasn't looking to change careers, even after nineteen sometimes-difficult years. I was good at it. I enjoyed it. After long experimentation, I had found ways to cut through the nonsense and provide real value to my students. Daily, I met my nemesis, Apathy, in glorious battle, and bested her with growing frequency. I had found my voice. At MIRI, I'm still struggling to find my voice, for reasons my colleagues have invited me to share later in this post. But my nemesis is the same. Apathy will be the death of us. Indifference about whether this whole AI thing goes well or ends in disaster. Come-what-may acceptance of whatever awaits us at the other end of the glittering path. Telling ourselves that there's nothing we can do anyway. Imagining that some adults in the room will take care [...]
---
First published:
December 13th, 2024
Source:
https://www.lesswrong.com/posts/cqF9dDTmWAxcAEfgf/communications-in-hard-mode-my-new-job-at-miri
---
Narrated by TYPE III AUDIO.
 
A new article in Science Policy Forum voices concern about a particular line of biological research which, if successful in the long term, could eventually create a grave threat to humanity and to most life on Earth. Fortunately, the threat is distant, and avoidable—but only if we have common knowledge of it. What follows is an explanation of the threat, what we can do about it, and my comments.
Background: chirality
Glucose, a building block of sugars and starches, looks like this: (image adapted from Wikimedia). But there is also a molecule that is the exact mirror-image of glucose. It is called simply L-glucose (in contrast, the glucose in our food and bodies is sometimes called D-glucose): (image: L-glucose, the mirror twin of normal D-glucose; adapted from Wikimedia). This is not just the same molecule flipped around, or looked at from the other side: it's inverted, as your left hand is vs. your [...]
---
Outline:
(00:29) Background: chirality
(01:41) Mirror life
(02:47) The threat
(05:06) Defense would be difficult and severely limited
(06:09) Are we sure?
(07:47) Mirror life is a long-term goal of some scientific research
(08:57) What to do?
(10:22) We have time to react
(10:54) The far future
(12:25) Optimism, pessimism, and progress
The original text contained 1 image which was described by AI.
---
First published:
December 12th, 2024
Source:
https://www.lesswrong.com/posts/y8ysGMphfoFTXZcYp/biological-risk-from-the-mirror-world
---
Narrated by TYPE III AUDIO.
 
 
Someone I know, Carson Loughridge, wrote this very nice post explaining the core intuition around Shapley values (which play an important role in impact assessment and cooperative games) using Venn diagrams, and I think it's great. It might be the most intuitive explainer I've come across so far. Incidentally, the post also won an honorable mention in 3blue1brown's Summer of Mathematical Exposition. I'm really proud of having given input on the post. I've included the full post (with permission), as follows: Shapley values are an extremely popular tool in both economics and explainable AI. In this article, we use the concept of "synergy" to build intuition for why Shapley values are fair. There are four unique properties to Shapley values, and all of them can be justified visually. Let's dive in! (Figure from Bloch et al., 2021, using the Python package SHAP.)
The Game
On a sunny summer [...]
---
Outline:
(01:07) The Game
(04:41) The Formalities
(06:17) Concluding Notes
The original text contained 2 images which were described by AI.
---
First published:
December 6th, 2024
Source:
https://www.lesswrong.com/posts/WxCtxaAznn8waRWPG/understanding-shapley-values-with-venn-diagrams
---
Narrated by TYPE III AUDIO.
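The definition the explainer builds intuition for can be sketched directly: a player's Shapley value is their marginal contribution to the coalition, averaged over every order in which the players might join. This is a brute-force sketch (it enumerates all permutations, so it only suits small games), not code from the post, and the two-player game below is invented for illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player in a cooperative game.

    players - list of player labels
    v       - characteristic function as a dict: frozenset of players -> payoff
              (must include the empty coalition, v[frozenset()])
    """
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            # Marginal contribution of p when joining this coalition.
            totals[p] += v[grown] - v[coalition]
            coalition = grown
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical game: A alone earns 1, B alone earns 2, together they earn 4
# (a synergy of 1, split equally by fairness/symmetry).
v = {frozenset(): 0, frozenset({"A"}): 1,
     frozenset({"B"}): 2, frozenset({"A", "B"}): 4}
print(shapley_values(["A", "B"], v))  # {'A': 1.5, 'B': 2.5}
```

The payouts sum to the grand coalition's value (efficiency), and each player gets their solo value plus half the synergy, which is the "fairness" the Venn-diagram argument visualizes.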
 
We make AI narrations of LessWrong posts available via our audio player and podcast feeds. We're thinking about changing our narrator's voice. There are three new voices on the shortlist. They're all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agreeable. But, thousands of listening hours are at stake, so we thought it'd be worth giving listeners an opportunity to vote—just in case there's a strong collective preference.
Listen and vote
Please listen here: https://files.type3.audio/lesswrong-poll/
And vote here: https://forms.gle/JwuaC2ttd5em1h6h8
It'll take 1-10 minutes, depending on how much of the sample you decide to listen to. Don't overthink it—we'd just like to know if there's a voice that you'd particularly love (or hate) to listen to. We'll collect votes until Monday December 16th. Thanks!
---
Outline:
(00:58) Listen and vote
(01:30) Other feedback?
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
December 11th, 2024
Source:
https://www.lesswrong.com/posts/wp4emMpicxNEPDb6P/lesswrong-audio-help-us-choose-the-new-voice
---
Narrated by TYPE III AUDIO.
 
This is a link post. Someone I know wrote this very nice post explaining the core intuition around Shapley values (which play an important role in impact assessment) using Venn diagrams, and I think it's great. It might be the most intuitive explainer I've come across so far. Incidentally, the post also won an honorable mention in 3blue1brown's Summer of Mathematical Exposition.
---
First published:
December 6th, 2024
Source:
https://www.lesswrong.com/posts/6dixnRRYSLTqCdJzG/understanding-shapley-values-with-venn-diagrams
---
Narrated by TYPE III AUDIO.
 


 
