Content provided by Dev and Doc. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dev and Doc or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

#22 Explaining Explainable AI (for healthcare) with Dr Annabelle Painter (RSM digital health section Podcast)

58:40
 
 

Manage episode 434385253 series 3585389

Dev and Doc are joined by Dr Annabelle Painter: doctor, CMO, and host of the Royal Society of Medicine Digital Health Podcast. We deep-dive into explainability and interpretability, with concrete healthcare examples.

Check out Dr Painter's podcast here; she has some amazing guests and great insights into AI in healthcare! - https://spotify.link/pzSgxmpD5yb

👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/

🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

LinkedIn Newsletter

YouTube Channel

Spotify

Apple Podcasts

Substack

For enquiries - 📧 [email protected]

🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/

🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

Timestamps:

  • 00:00 - Start + highlights
  • 03:47 - Intro
  • 08:16 - Does all AI in healthcare need to be explainable?
  • 15:56 - History and explanation of Explainable/Interpretable AI
  • 20:43 - Gradient-based saliency and heat maps
  • 24:14 - LIME - Local Interpretable Model-agnostic Explanations
  • 30:09 - Nonsensical correlations - When explainability goes wrong
  • 33:57 - Modern explainability - Anthropic
  • 37:15 - Comparing LLMs with the human brain
  • 40:02 - Clinician-AI interaction
  • 47:11 - Where is this all going? Aligning models to ground truth and teaching them to say "I don't know"

References:


28 episodes

