Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

MULTI AGENT LEARNING - LANCELOT DA COSTA

49:56
 

Please support us https://www.patreon.com/mlst

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk

Lance Da Costa aims to advance our understanding of intelligent systems by modelling cognitive systems and improving artificial systems.

He's a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. He completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto.

Summary:

Lance originally trained in pure mathematics but became interested in the brain and AI. He began working with Karl Friston on the free energy principle, which claims that all intelligent agents minimize free energy in perception, action, and decision-making. Lance's work provides mathematical foundations and proofs for the principle, starting from basic assumptions about agents interacting with their environment, with the aim of justifying it from first principles in physics.

Dr. Scarfe and Da Costa discuss different approaches to AI: the free energy / active inference approach, which focuses on mimicking human intelligence, versus capability-maximizing approaches such as deep reinforcement learning. Lance argues that active inference offers advantages in explainability and safety over black-box AI systems, because it gives a simple, sparse description of intelligence in terms of a generative model and free energy minimization. They also discuss the need for structure learning and the acquisition of core knowledge to achieve more human-like intelligence; Lance highlights work from Josh Tenenbaum's lab showing learning trajectories similar to humans' in a simple Atari-like environment.
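For orientation (this formalism is standard in the active inference literature rather than something spelled out in the episode), the free energy being minimized is the variational free energy of a generative model p(o, s) over observations o and hidden states s, evaluated under an approximate posterior q(s):

F(q, o) = E_{q(s)}[ ln q(s) - ln p(o, s) ]
        = D_KL[ q(s) || p(s | o) ] - ln p(o)

Because F upper-bounds surprise (-ln p(o)), minimizing it with respect to q implements approximate Bayesian perception; active inference extends the same objective to action by selecting policies that minimize expected free energy.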

Incorporating core knowledge constrains the space of possible generative models the agent can use to represent the world, making learning more sample-efficient. Lance argues that active inference agents equipped with core knowledge can match human learning capabilities.

They discuss how to make generative models interpretable, for example by expressing them as factor graphs. The goal is to understand the representations and the message passing in the model that lead to its decisions.
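As a toy illustration (not code discussed in the episode), the appeal of exposing a model's factors is that every quantity entering a decision can be read off. Below is a minimal sketch, assuming a hypothetical two-state, two-observation discrete generative model; the names A (likelihood) and D (state prior) follow a common convention in this literature, but the numbers are invented:

import numpy as np

# Hypothetical discrete generative model with its factors made explicit.
A = np.array([[0.9, 0.2],   # A[o, s] = p(o | s): likelihood factor
              [0.1, 0.8]])
D = np.array([0.5, 0.5])    # D[s] = p(s): prior factor over hidden states

def posterior(o):
    """Exact belief update for this tiny model.

    Because the factors are explicit, the likelihood message A[o] and the
    prior message D can each be inspected before they are combined, which
    is the kind of transparency a factor-graph formulation provides.
    """
    joint = A[o] * D              # combine messages elementwise
    return joint / joint.sum()    # normalize to obtain q(s | o)

print(posterior(0))  # approx [0.818, 0.182] after observing o = 0

In richer models the same factorization becomes a factor graph proper, and the messages passed between factor and variable nodes remain individually inspectable, which is the interpretability property referred to above.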

In summary, Lance argues that active inference provides a principled approach to AI, with advantages for explainability, safety, and human-like learning. Combining it with core knowledge and structure learning aims to achieve more human-like artificial intelligence.

https://www.lancelotdacosta.com/

https://twitter.com/lancelotdacosta

Interviewer: Dr. Tim Scarfe

TOC

00:00:00 - Start

00:09:27 - Intelligence

00:12:37 - Priors / structure learning

00:17:21 - Core knowledge

00:29:05 - Intelligence is specialised

00:33:21 - The magic of agents

00:39:30 - Intelligibility of structure learning

#artificialintelligence #activeinference
