Best Practices and Lessons Learned in Standing Up an AISIRT

38:29
 

In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. Drawing on nearly four decades of experience in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized the need for an entity that would identify, research, and develop mitigation strategies for AI vulnerabilities to protect national assets against traditional cybersecurity attacks, adversarial machine learning attacks, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI’s CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).


