
Content provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

How to Think About AI Consciousness With Anil Seth

47:58


Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

RECOMMENDED MEDIA

Frankenstein by Mary Shelley

A free, plain-text version of Shelley's classic of Gothic literature.

OpenAI’s GPT-4o Demo

A video from OpenAI demonstrating GPT-4o’s remarkable ability to mimic human sentience.

You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

The 2023 New York Times op-ed by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma.

What It’s Like to Be a Bat

Thomas Nagel’s essay on the nature of consciousness.

Are You Living in a Computer Simulation?

Philosopher Nick Bostrom’s essay on the simulation hypothesis.

Anthropic’s Golden Gate Claude

A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

RECOMMENDED YUA EPISODES

Esther Perel on Artificial Intimacy

Talking With Animals... Using AI

Synthetic Humanity: AI & What’s At Stake


116 episodes

