
P1 Adversarial robustness in Neural Networks, Quantization and working at DeepMind | David Stutz

1:32:28
 

Part 1 of my podcast with David Stutz. (Part 2: https://youtu.be/IumJcB7bE20)

David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a Ph.D. student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on machine learning and graduate life that young researchers will find insightful.

Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023

Timestamps:
00:00:00 Highlights and sponsors
00:01:22 Intro
00:02:14 Interest in AI
00:12:26 Finding research interests
00:22:41 Robustness vs. generalization in deep neural networks
00:28:03 Generalization vs. model performance trade-off
00:37:30 On-manifold adversarial examples for better generalization
00:48:20 Vision transformers
00:49:45 Confidence-calibrated adversarial training
00:59:25 Improving hardware architecture for deep neural networks
01:08:45 What's the trade-off in quantization?
01:19:07 Amazing aspects of working at DeepMind
01:27:38 Learning the skill of abstraction when collaborating

David's homepage: https://davidstutz.de/
His blog: https://davidstutz.de/category/blog/
Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en

About the host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)

Stay tuned for upcoming webinars!

***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
