
Eric Schwitzgebel on user perception of the moral status of AI

57:47
 
Content provided by Sentience Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Sentience Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://sv.player.fm/legal.

I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so that they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will become so attached to, and think is sentient, that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.

  • Eric Schwitzgebel

Why should AI systems be designed so as not to confuse users about their moral status? What would make an AI system's sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it's not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI, if we do?

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.
Topics discussed in the episode:

  • Introduction (0:00)
  • "AI systems must not confuse users about their sentience or moral status": introduction (3:14)
  • Not confusing experts (5:30)
  • Not confusing general users (9:12)
  • What would make an AI system's sentience or moral standing clear? (13:21)
  • Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
  • How would we implement this solution at a policy level? (25:19)
  • What happens when some theories of consciousness disagree about AI consciousness? (28:24)
  • How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
  • "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness": introduction (36:38)
  • How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
  • What was the process for determining what indicator properties to include? (42:58)
  • Advantages of the indicator properties approach (44:49)
  • Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
  • Where does Eric think we will see sentience first in AI, if we do? (50:17)
  • Are things like grounding or embodiment essential for understanding and consciousness? (53:35)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show


23 episodes
