Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.
“Shallow review of technical AI safety, 2024” by technicalities, Stag, Stephen McAleese, jordine, Dr. David Mathers
from aisafety.world
The following is a list of live agendas in technical AI safety, updating our post from last year. It is "shallow" in the sense that (1) we are not specialists in almost any of it and (2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options; to help policy people know who to talk to for the actual information; and ideally to help funders see quickly what has already been funded and how much (though this proved to be hard).
“AI safety” means many things. We’re targeting work that intends to prevent very competent [...]
---
Outline:
(01:33) Editorial
(08:15) Agendas with public outputs
(08:19) 1. Understand existing models
(08:24) Evals
(14:49) Interpretability
(27:35) Understand learning
(31:49) 2. Control the thing
(40:31) Prevent deception and scheming
(46:30) Surgical model edits
(49:18) Goal robustness
(50:49) 3. Safety by design
(52:57) 4. Make AI solve it
(53:05) Scalable oversight
(01:00:14) Task decomp
(01:00:28) Adversarial
(01:04:36) 5. Theory
(01:07:27) Understanding agency
(01:15:47) Corrigibility
(01:17:29) Ontology Identification
(01:21:24) Understand cooperation
(01:26:32) 6. Miscellaneous
(01:50:40) Agendas without public outputs this year
(01:51:04) Graveyard (known to be inactive)
(01:52:00) Method
(01:55:09) Other reviews and taxonomies
(01:56:11) Acknowledgments
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
December 29th, 2024
Source:
https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
---
Narrated by TYPE III AUDIO.
---