
Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

AI Audits: Uncovering Risks in ML Systems; With Guest: Shea Brown, PhD

41:02
 

Send us a text

Shea Brown, PhD, joins us to explore the “W’s” and security practices related to AI and algorithm audits.

What is included in an AI audit?

Who is requesting AI audits and, conversely, who isn’t requesting them but should be?

When should organizations request a third-party audit of their AI/ML systems and machine learning algorithms?

Why should they do so? What are some organizational risks and potential public harms that could result from not auditing AI/ML systems?

What are some next steps to take if the results of your audit are unsatisfactory or noncompliant?

Shea Brown, PhD, is the Founder and CEO of BABL AI and a faculty member in the Department of Physics & Astronomy at the University of Iowa.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform
