
Content provided by Anton Chuvakin. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Anton Chuvakin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://sv.player.fm/legal.

EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models

29:04
 

Guest:

  • Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security

Topics:

  • Could you give our audience the quick version of what an LLM is, and what it can and cannot do? Is this a “baby AGI” or a glorified “autocomplete”?

  • Let’s talk about the different ways to tune the models. When we think about tuning, what are the ways attackers might influence or steal our data?

  • Can you help our security-leader listeners get the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one?

  • How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they’re a) talking nonsense or b) actually keeping their output safe?

  • Are hallucinations inherent to LLMs and can they ever be fixed?

  • So there are risks to data, new opportunities for attacks, and hallucinations. Given those risks, how do we identify the good opportunities in this area?

Resources:

