Content provided by Rob Wiblin and Keiran Harris and The 80000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Rob Wiblin and Keiran Harris and The 80000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.
Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

22:54
Manage episode 440570506 series 3320433

This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:

Ezra Karger on what superforecasters and experts think about existential risks

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights:

  • Luisa’s intro (00:00:00)
  • Why we need forecasts about existential risks (00:00:26)
  • Headline estimates of existential and catastrophic risks (00:02:43)
  • What explains disagreements about AI risks? (00:06:18)
  • Learning more doesn't resolve disagreements about AI risks (00:08:59)
  • A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
  • Cruxes about AI risks (00:15:17)
  • Is forecasting actually useful in the real world? (00:18:24)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong


90 episodes

