Episode #35 – Brundage on the Case for Conditional Optimism about AI

 
In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford’s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes

  • 0:00 – Introduction
  • 1:00 – Why did Miles write the conditional case for AI optimism?
  • 5:07 – What is AI anyway?
  • 8:26 – The difference between broad and narrow forms of AI
  • 12:00 – Is the current excitement around AI hype or reality?
  • 16:13 – What is the conditional case for AI conditional upon?
  • 22:00 – The First Argument: The Value of Task Expedition
  • 29:30 – The downsides of task expedition and the problem of speed mismatches
  • 33:28 – How AI changes our cognitive ecology
  • 36:00 – The Second Argument: The Value of Improved Coordination
  • 40:50 – Wouldn’t AI be used for malicious purposes too?
  • 45:00 – Can we create safe AI in the absence of global coordination?
  • 48:03 – The Third Argument: The Value of a Leisure Society
  • 52:30 – Would a leisure society really be utopian?
  • 56:24 – How were Miles’s arguments received when presented at the European Parliament?

Relevant Links
