The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

38:44
 
Content provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://sv.player.fm/legal.

As AI development races forward, a fierce debate has emerged over open-source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.

RECOMMENDED MEDIA

Open-Sourcing Highly Capable Foundation Models

This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of the risks and benefits of open-sourcing AI.

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining its general capabilities.

Centre for the Governance of AI

Supports governments, technology companies, and other key institutions by producing relevant research and guidance on how to respond to the challenges posed by AI.

AI: Futures and Responsibility (AI:FAR)

Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity.

Palisade Research

Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever.

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

No One is Immune to AI Harms with Dr. Joy Buolamwini

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
