Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Siméon Campos is the founder of EffiSciences and SaferAI, focusing mostly on alignment field building and AI governance. More recently, he started the newsletter Navigating AI Risk, on AI governance, with a first post on slowing down AI. Note: this episode was recorded in October 2022, so much of the content discussed reflects what was known at the time, in particular the references to GPT-3 (instead of GPT-4) and ACT-1 (instead of more recent systems like AutoGPT).
Transcript: https://theinsideview.ai/simeon
Host: https://twitter.com/MichaelTrazzi
Simeon: https://twitter.com/Simeon_Cps

OUTLINE
(00:00) Introduction
(01:12) EffiSciences, SaferAI
(02:31) Concrete AI Auditing Proposals
(04:56) We Need 10K People Working On Alignment
(11:08) What's AI Alignment
(13:07) GPT-3 Is Already Decent At Reasoning
(17:11) AI Regulation Is Easier In Short Timelines
(24:33) Why Is Awareness About Alignment Not Widespread?
(32:02) Coding AIs Enable Feedback Loops In AI Research
(36:08) Technical Talent Is The Bottleneck In AI Research
(37:58) 'Fast Takeoff' Is Asymptotic Improvement In AI Capabilities
(43:52) Bear Market Can Somewhat Delay The Arrival Of AGI
(45:55) AGI Need Not Require Much Intelligence To Do Damage
(49:38) Putting Numbers On Confidence
(54:36) RL On Top Of Coding AIs
(58:21) Betting On Arrival Of AGI
(01:01:47) Power-Seeking AIs Are The Objects Of Concern
(01:06:43) Scenarios & Probability Of Longer Timelines
(01:12:43) Coordination
(01:22:49) Compute Governance Seems Relatively Feasible
(01:32:32) The Recent Ban On Chips Export To China
(01:38:20) AI Governance & Fieldbuilding Were Very Neglected
(01:44:42) Students Are More Likely To Change Their Minds About Things
(01:53:04) Bootcamps Are Better Medium Of Outreach
(02:01:33) Concluding Thoughts