
Content provided by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

Explainable AI Explained

25:49
Manage episode 328623399 series 2487640

As the field of artificial intelligence (AI) has matured, increasingly complex opaque models have been developed and deployed to solve hard problems. Unlike many predecessor models, these models, by the nature of their architecture, are harder to understand and oversee. When such models fail or do not behave as expected or hoped, it can be hard for developers and end-users to pinpoint why or determine methods for addressing the problem. Explainable AI (XAI) meets the emerging demands of AI engineering by providing insight into the inner workings of these opaque models. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Violet Turri and Rachel Dzombak, both with the SEI's AI Division, discuss explainable AI, which encompasses all the techniques that make the decision-making processes of AI systems understandable to humans.
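The episode describes XAI at a conceptual level and does not name specific techniques. As a purely illustrative sketch (not taken from the episode), one common post-hoc approach to explaining an opaque model is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The model, data, and function names below are hypothetical examples.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Score each feature by how much shuffling it degrades accuracy.

    A large drop means the model relies on that feature; a near-zero
    drop means the feature is irrelevant to the model's decisions.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - np.mean(model_fn(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Toy "opaque" model: in reality this would be a trained black-box
# predictor; here it secretly depends only on feature 0.
X = np.random.default_rng(1).random((200, 3))
y = (X[:, 0] > 0.5).astype(int)
model_fn = lambda X: (X[:, 0] > 0.5).astype(int)

scores = permutation_importance(model_fn, X, y)
# scores[0] should be large; scores[1] and scores[2] near zero.
```

Because it only queries the model's predictions, this kind of explanation works regardless of the model's internal architecture, which is why model-agnostic methods like this are a common starting point for XAI.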


418 episodes


Välkommen till Player FM

Player FM scannar webben för högkvalitativa podcasts för dig att njuta av nu direkt. Den är den bästa podcast-appen och den fungerar med Android, Iphone och webben. Bli medlem för att synka prenumerationer mellan enheter.

 

Snabbguide