“the case for CoT unfaithfulness is overstated” by nostalgebraist
[Meta note: quickly written, unpolished. Also, it's possible that there's some more convincing work on this topic that I'm unaware of – if so, let me know]
In research discussions about LLMs, I often pick up a vibe of casual, generalized skepticism about model-generated CoT (chain-of-thought) explanations.
CoTs (people say) are not trustworthy in general. They don't always reflect what the model is "actually" thinking or how it has "actually" solved a given problem.
This claim is true as far as it goes. But people sometimes act like it goes much further than (IMO) it really does.
Sometimes it seems to license an attitude of "oh, it's no use reading what the model says in the CoT, you're a chump if you trust that stuff." Or, more insidiously, a failure to even ask the question "what, if anything, can we learn about the model's reasoning process by reading the [...]
The original text contained 1 footnote which was omitted from this narration.
---
First published:
September 29th, 2024
Source:
https://www.lesswrong.com/posts/HQyWGE2BummDCc2Cx/the-case-for-cot-unfaithfulness-is-overstated
---
Narrated by TYPE III AUDIO.