Interviewing Riley Goodside on the science of prompting

1:08:39
 

More information: https://www.interconnects.ai/p/riley-goodside-on-science-of-prompting

Riley Goodside is a staff prompt engineer at Scale AI. Previously working in data science, he is often seen as the default example of the new role of “prompt engineer.” He regularly posts incisive prompts that elicit notable behavior from the most popular AI models.

This saying from Anthropic’s recent podcast on prompt engineering really resonated with me: “now we write essays and treat them as code.” To be good at prompting, you need to understand that natural language now operates the way our code used to.
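
To make the “essays as code” framing concrete, here is a minimal sketch (mine, not from the episode) of composing a prompt the way you would compose a function call: the instructions live in a reusable template, the variable parts are explicit parameters, and the assembled prompt can be inspected and tested before it is sent. All names here, including the call_model stand-in, are hypothetical and not tied to any particular provider's API.

```python
# A minimal sketch of treating a prompt like code: the instructions are a
# reusable template, the variable parts are explicit parameters, and the
# assembled prompt can be reviewed before it is ever sent to a model.

PROMPT_TEMPLATE = """You are a careful assistant.

Task: {task}

Rules:
- Answer in at most {max_sentences} sentences.
- If you are unsure, say so explicitly.

Input:
{user_input}
"""

def build_prompt(task: str, user_input: str, max_sentences: int = 3) -> str:
    """Assemble the prompt the way you would assemble a function call."""
    return PROMPT_TEMPLATE.format(
        task=task,
        user_input=user_input,
        max_sentences=max_sentences,
    )

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("swap in your model provider's client here")

if __name__ == "__main__":
    prompt = build_prompt(
        task="Summarize the text below.",
        user_input="Small wording changes in a prompt can shift model outputs.",
    )
    print(prompt)  # review the assembled 'source' before calling call_model(prompt)
```

Treating prompts this way makes the sensitivities discussed later in the episode, where small wording changes shift outputs, easier to isolate and test.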

This episode is a masterclass on why you should care about prompting and how it impacts results. Of course, there’s plenty of great discussion of recent models that reflect the need for different and/or better prompting. Enjoy it!

00:00:09 Introduction
00:02:40 Riley's path to LLMs
00:07:54 Impact of ChatGPT on prompt engineering
00:12:03 OpenAI's o1
00:18:21 Autoregressive inference and prompting sensitivities
00:24:48 Reflection 70B model and its implications
00:28:00 Impact of prompting on evaluation
00:32:43 Prompting vs. Google search
00:46:55 Prompting and RLHF/post-training
00:56:57 Prompting of AI agents
01:01:20 Importance of hands-on experience with language models
01:05:00 Importance and challenges of AI model evaluation
