Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
“Parkinson’s Law and the Ideology of Statistics” by Benquo
14:50
The anonymous review of The Anti-Politics Machine published on Astral Codex Ten focuses on a case study of a World Bank intervention in Lesotho, and tells a story about it: The World Bank staff drew reasonable-seeming conclusions from sparse data, and made well-intentioned recommendations on that basis. However, the recommended programs failed, due t…
“Capital Ownership Will Not Prevent Human Disempowerment” by beren
25:11
Crossposted from my personal blog. I was inspired to cross-post this here given the discussion that this post on the role of capital in an AI future elicited. When discussing the future of AI, I semi-often hear an argument along the lines that in a slow takeoff world, despite AIs automating increasingly more of the economy, humanity will remain in …
“Activation space interpretability may be doomed” by bilalchughtai, Lucius Bushnaq
15:56
TL;DR: There may be a fundamental problem with interpretability work that attempts to understand neural networks by decomposing their individual activation spaces in isolation: It seems likely to find features of the activations - features that help explain the statistical structure of activation spaces, rather than features of the model - the feat…
“What o3 Becomes by 2028” by Vladimir_Nesov
8:40
Funding for $150bn training systems just turned less speculative, with OpenAI o3 reaching 25% on FrontierMath, 70% on SWE-Verified, 2700 on Codeforces, and 80% on ARC-AGI. These systems will be built in 2026-2027 and enable pretraining models for 5e28 FLOPs, while o3 itself is plausibly based on an LLM pretrained only for 8e25-4e26 FLOPs. The natur…
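The mapping from a headline dollar figure to a pretraining compute budget is simple multiplication. A back-of-envelope sketch, in which every per-accelerator number (unit cost, peak throughput, utilization, run length) is my own illustrative assumption rather than a figure from the post:

```python
# All numbers below are illustrative assumptions, not figures from the post.
system_budget_usd = 150e9      # headline training-system cost from the post
cost_per_accelerator = 30_000  # assumed all-in cost per accelerator (hypothetical)
peak_flops = 2.5e15            # assumed dense BF16 peak per accelerator, FLOP/s
utilization = 0.4              # assumed sustained model-FLOPs utilization
run_seconds = 1e7              # roughly a four-month training run

n_accelerators = system_budget_usd / cost_per_accelerator
total_flops = n_accelerators * peak_flops * utilization * run_seconds
print(f"{total_flops:.0e} FLOPs")  # ~5e28, the same order as the post's figure
```

Under these assumptions the $150bn system buys about five million accelerators and delivers roughly 5e28 FLOPs per run, consistent with the pretraining figure quoted above.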
“What Indicators Should We Watch to Disambiguate AGI Timelines?” by snewman
25:26
(Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited for LessWrong. The original has a lengthier introduction and a bit more explanation of jargon.) No one seems to know whether transformational AGI is coming within a few short years. Or rather, everyone seems to know, but they all have conflicting opini…
“How will we update about scheming?” by ryan_greenblatt
1:18:48
I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their creators such as by faking alignment). Recently, I (and co-authors) released "Alignment Faking in Large Language Models", which provides empirical evidence for some components of the scheming threat model. One question that's really important is how…
This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There's a bunch of good and interesting answers in the interview about past events that I won’t mention or have to condense a lot here, such as his going over his calendar and all the meetings he constantly has, so consider reading the whole thing. Table of Co…
“Maximizing Communication, not Traffic” by jefftk
2:15
As someone who writes for fun, I don't need to get people onto my site: If I write a post and some people are able to get the core idea just from the title or a tweet-length summary, great! I can include the full contents of my posts in my RSS feed and on FB, because so what if people read the whole post there and never click through to my site? It wou…
“What’s the short timeline plan?” by Marius Hobbhahn
44:21
This is a low-effort post. I mostly want to get other people's takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikit…
“Shallow review of technical AI safety, 2024” by technicalities, Stag, Stephen McAleese, jordine, Dr. David Mathers
1:57:07
from aisafety.world The following is a list of live agendas in technical AI safety, updating our post from last year. It is “shallow” in the sense that 1) we are not specialists in almost any of it and that 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor. The poi…
“By default, capital will matter more than ever after AGI” by L Rudolf L
28:44
I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect. First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the p…
“Review: Planecrash” by L Rudolf L
39:20
Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey. Mix them all together and add extra weirdness for spice. The result might look a lot like Planecrash (AKA: Project Lawful), a work of fiction co-written by "Iarwain" (a pen-name of Eliezer Yudkowsky) and "lintamande". (image from Planecrash) Yudkowsky is…
“The Field of AI Alignment: A Postmortem, and What To Do About It” by johnswentworth
14:03
A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is sear…
“When Is Insurance Worth It?” by kqr
11:20
TL;DR: If you want to know whether getting insurance is worth it, use the Kelly Insurance Calculator. If you want to know why or how, read on. Note to LW readers: this is almost the entire article, except some additional maths that I couldn't figure out how to get right in the LW editor, and margin notes. If you're very curious, read the original a…
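For the gist of the Kelly approach without leaving the feed: the decision rule is to buy insurance exactly when it raises your expected log wealth. A minimal sketch with illustrative numbers of my own, not the article's:

```python
from math import log

# Toy Kelly-style insurance check (all numbers are my own illustrative picks):
# insure iff expected log wealth is higher with the policy than without.
wealth = 100_000   # current bankroll
premium = 2_500    # annual cost of the policy
loss = 80_000      # size of the insured loss
p_loss = 0.02      # assumed probability of the loss this year

log_without = p_loss * log(wealth - loss) + (1 - p_loss) * log(wealth)
log_with = log(wealth - premium)  # premium paid up front; loss fully covered

print("buy insurance" if log_with > log_without else "self-insure")
```

Note that here the premium (2,500) exceeds the expected loss (0.02 * 80,000 = 1,600), yet insurance still wins, because the uninsured loss is large relative to total wealth. That asymmetry is the article's central point.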
“Orienting to 3 year AGI timelines” by Nikola Jurkovic
14:58
My median expectation is that AGI[1] will be created 3 years from now. This has implications for how to behave, and I will share some useful thoughts I and others have had on how to orient to short timelines. I’ve led multiple small workshops on orienting to short AGI timelines and compiled the wisdom of around 50 participants (but mostly my thought…
“What Goes Without Saying” by sarahconstantin
9:26
There are people I can talk to, where all of the following statements are obvious. They go without saying. We can just “be reasonable” together, with the context taken for granted. And then there are people who…don’t seem to be on the same page at all. There's a real way to do anything, and a fake way; we need to make sure we’re doing the real vers…
I'm editing this post. OpenAI announced (but hasn't released) o3 (skipping o2 for trademark reasons). It gets 25% on FrontierMath, smashing the previous SoTA of 2%. (These are really hard math problems.) Wow. 72% on SWE-bench Verified, beating o1's 49%. Also 88% on ARC-AGI. --- First published: December 20th, 2024 Source: https://www.lesswrong.com/…
“‘Alignment Faking’ frame is somewhat fake” by Jan_Kulveit
11:40
I like the research. I mostly trust the results. I dislike the 'Alignment Faking' name and frame, and I'm afraid it will stick and lead to more confusion. This post offers a different frame. The main way I think about the result is: it's about capability - the model exhibits strategic preference preservation behavior; also, harmlessness generalized…
“AIs Will Increasingly Attempt Shenanigans” by Zvi
51:06
Increasingly, we have seen papers eliciting in AI models various shenanigans. There are a wide variety of scheming behaviors. You’ve got your weight exfiltration attempts, sandbagging on evaluations, giving bad information, shielding goals from modification, subverting tests and oversight, lying, doubling down via more lying. You name it, we can tr…
“Alignment Faking in Large Language Models” by ryan_greenblatt, evhub, Carson Denison, Benjamin Wright, Fabien Roger, Monte M, Sam Marks, Johannes Treutlein, Sam Bowman, Buck
19:35
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences. Abstract We present a …
“Communications in Hard Mode (My new job at MIRI)” by tanagrabeast
10:24
Six months ago, I was a high school English teacher. I wasn’t looking to change careers, even after nineteen sometimes-difficult years. I was good at it. I enjoyed it. After long experimentation, I had found ways to cut through the nonsense and provide real value to my students. Daily, I met my nemesis, Apathy, in glorious battle, and bested her wi…
“Biological risk from the mirror world” by jasoncrawford
14:01
A new article in Science Policy Forum voices concern about a particular line of biological research which, if successful in the long term, could eventually create a grave threat to humanity and to most life on Earth. Fortunately, the threat is distant, and avoidable—but only if we have common knowledge of it. What follows is an explanation of the t…
“Subskills of ‘Listening to Wisdom’” by Raemon
1:13:47
A fool learns from their own mistakes; the wise learn from the mistakes of others. – Otto von Bismarck A problem as old as time: The youth won't listen to your hard-earned wisdom. This post is about learning to listen to, and to communicate, wisdom. It is very long – I considered breaking it up into a sequence, but each piece felt necessary. I recommend…
“Understanding Shapley Values with Venn Diagrams” by Carson L
7:46
Someone I know, Carson Loughridge, wrote this very nice post explaining the core intuition around Shapley values (which play an important role in impact assessment and cooperative games) using Venn diagrams, and I think it's great. It might be the most intuitive explainer I've come across so far. Incidentally, the post also won an honorable mention…
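For those who want the definition alongside the diagrams: a player's Shapley value is their marginal contribution averaged over every order in which the coalition could assemble. A minimal brute-force sketch with a toy two-player game of my own, not an example from the post:

```python
from itertools import permutations

# Toy Shapley computation: average each player's marginal contribution
# over all orderings in which the coalition could be built up.
def shapley(players, value):
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Example game: each player is worth 1 alone, but 3 together (synergy of 1).
v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("AB"): 3}
print(shapley("AB", lambda s: v[s]))  # {'A': 1.5, 'B': 1.5}
```

With two symmetric players the synergy is split evenly; the Venn-diagram framing in the post extends the same intuition to overlapping contributions among more players.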
“LessWrong audio: help us choose the new voice” by PeterH
1:43
We make AI narrations of LessWrong posts available via our audio player and podcast feeds. We’re thinking about changing our narrator's voice. There are three new voices on the shortlist. They’re all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agree…