
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

“What’s the short timeline plan?” by Marius Hobbhahn

44:21
 
This is a low-effort post. I mostly want to get other people's takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikita Balesni, Jérémy Scheurer, and Cody Rushing for comments and discussion.
I think short timelines, e.g. AIs that can replace a top researcher at an AGI lab without losses in capabilities by 2027, are plausible. Some people have posted ideas on what a reasonable plan to reduce AI risk for such timelines might look like (e.g. Sam Bowman's checklist, or Holden Karnofsky's list in his 2022 nearcast), but I find them insufficient for the magnitude of the stakes (to be clear, I don’t think these example lists were intended to be an [...]
---
Outline:
(02:36) Short timelines are plausible
(07:10) What do we need to achieve at a minimum?
(10:50) Making conservative assumptions for safety progress
(12:33) So what's the plan?
(14:31) Layer 1
(15:41) Keep a paradigm with faithful and human-legible CoT
(18:15) Significantly better (CoT, action and white-box) monitoring
(21:19) Control (that doesn't assume human-legible CoT)
(24:16) Much deeper understanding of scheming
(26:43) Evals
(29:56) Security
(31:52) Layer 2
(32:02) Improved near-term alignment strategies
(34:06) Continued work on interpretability, scalable oversight, superalignment and co
(36:12) Reasoning transparency
(38:36) Safety first culture
(41:49) Known limitations and open questions
---
First published:
January 2nd, 2025
Source:
https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan
---
Narrated by TYPE III AUDIO.
