• "On today’s panel with Bernie Sanders" by David Scott Krueger
    May 1 2026
    It's sort of easy to forget how close Bernie Sanders came to becoming the most powerful person in the world. The world we live in now feels so unlike that place.

    I’m in Washington DC for the next week, and I’ve just finished a public appearance with Senator Sanders (should I call him Bernie? Or Sanders? or…) You won’t often see me so dressed up and polished. But this is important!

    There are politicians who have principles and character, who really believe in doing what's right. I think you have to respect them whether you agree with their views or not, and I think Senator Bernie Sanders is one of them.

    Never has my belief been so validated as when I saw him start to speak, loudly, CLEARLY, publicly about the risk of human extinction from AI. It's the latest in a long line of “well, I’m clearly living in a simulation” moments.

    In retrospect, it's not surprising that Sanders would take a stance here. You don’t have to be an expert to understand the risk from AI. You just need to care enough to spend the time looking into it, and to speak out even [...]

    ---

    First published:
    April 29th, 2026

    Source:
    https://www.lesswrong.com/posts/zWfaSnxM3n5wsX9vh/on-today-s-panel-with-bernie-sanders

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • "Not a Paper: “Frontier Lab CEOs are Capable of In-Context Scheming”" by LawrenceC
    Apr 29 2026
    (Fragments from a research paper that will never be written)

    Extended Abstract.

    The frontier AI developers are becoming increasingly powerful and wealthy, significantly increasing the risks they pose. One concern is executive misalignment: when the CEO has different incentives and goals than those of the board of directors, or of humanity as a whole. Our work proposes three different threat models under which executive misalignment can lead to concrete harm.

    We perform two evaluations to understand the capabilities and propensities of current humans in relation to executive misalignment. First, we developed a variant of the standard SAD dataset, SAD-Executive Reasoning (SAD-ER), in order to assess the situational awareness of human CEOs on a range of behavioral tests. We find that n=6 current CEOs can (i) recognize their previous public statements, (ii) understand their roles and responsibilities, (iii) determine if an interviewer is friendly or hostile, and (iv) follow instructions that depend on self-knowledge. Second, we stress-tested the same 6 leading AI developers in hypothetical corporate environments to identify potentially risky behaviors before they cause real harm. We find that, even without explicit instructions, all 6 developers are willing to engage in strategic behavior (such as [...]

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    April 28th, 2026

    Source:
    https://www.lesswrong.com/posts/FuauQjjbTCS5QFLk8/not-a-paper-frontier-lab-ceos-are-capable-of-in-context

    ---



    Narrated by TYPE III AUDIO.

    15 mins
  • "llm assistant personas seem increasingly incoherent (some subjective observations)" by nostalgebraist
    Apr 29 2026
    (This was originally going to be a "quick take" but then it got a bit long. Just FYI.)

    There's this weird trend I perceive with the personas of LLM assistants over time. It feels like they're getting less "coherent" in a certain sense, even as the models get more capable.

    When I read samples from older chat-tuned models, it's striking how "mode-collapsed" they feel relative to recent models like Claude Opus 4.6 or GPT-5.4.[1]

    This is most straightforwardly obvious when it comes to textual style and structure: outputs from older models feel more templated and generic, with less variability in sentence/paragraph length, and have a tendency to feel as though they were written by someone who's "merely going through the motions" of conversation rather than deeply engaging with the material. There are a lot fewer of the sudden pivots you'll often see with recent models, the "wait"s and "a-ha"s and "actually, I want to try something completely different"s.[2]

    And I think this generalizes beyond mere style: there's a similar quality to the personality I see in the outputs. The older models can display a surprising behavioral range (relative to naive expectations based on default-assistant-basin behavior), but even across that [...]

    The original text contained 7 footnotes which were omitted from this narration.

    ---

    First published:
    April 28th, 2026

    Source:
    https://www.lesswrong.com/posts/f5DKLsTsRRhbipH4r/llm-assistant-personas-seem-increasingly-incoherent-some

    ---



    Narrated by TYPE III AUDIO.

    16 mins
  • "LessWrong Shows You Social Signals Before the Comment" by TurnTrout
    Apr 28 2026
    When reading comments, you see what other people think before you read the comment itself. As shown in an RCT, that information anchors your opinion, reducing your ability to form your own judgment and making the site's karma rankings less related to a comment's true value. I think the problem is fixable, and I float some ideas for consideration.

    The LessWrong interface prioritizes social information

    You read a comment. What information is presented, and in what order?

    The order of information:

    1. Who wrote the comment (in bold);
    2. How much other people like this comment (as shown by the karma indicator);
    3. How much other people agree with this comment (as shown by the agreement score);
    4. The actual content.
    This is unwise design for a website which emphasizes truth-seeking. You don't have a chance to read the comment and form your own opinion first. However, you can opt in to hiding usernames (until moused over) via your account settings page.

    A 2013 RCT supports the upvote-anchoring concern

    From Social Influence Bias: A Randomized Experiment (Muchnik et al., 2013):[1]

    We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates [...]

    ---

    Outline:

    (00:30) The LessWrong interface prioritizes social information

    [... 6 more sections]

    ---

    First published:
    April 27th, 2026

    Source:
    https://www.lesswrong.com/posts/YSsp9x8qrBucLoiWT/lesswrong-shows-you-social-signals-before-the-comment

    ---



    Narrated by TYPE III AUDIO.

    9 mins
  • "Update on the Alex Bores campaign" by Eric Neyman
    Apr 27 2026
    In October, I wrote a post arguing that donating to Alex Bores's campaign for Congress was among the most cost-effective opportunities that I'd ever encountered.

    (A bit of context: Bores is a state legislator in New York who championed the RAISE Act, which was signed into law last December.[1] He's now running for Congress in New York's 12th Congressional district, which runs from about 17th Street to 100th Street in Manhattan. If elected to Congress, I think he'd be a strong champion for AI safety legislation, with a focus on catastrophic and existential risk.)

    It's been six months since then, and the election is just two months away (June 23rd), so I thought I'd revisit that post and give an update on my view of how things are going.




    How is Alex Bores doing?

    When I wrote my post, I expected Bores to talk little about AI during the campaign, just because it wasn't a high-salience issue to voters. But that changed in November, when Leading the Future (the AI accelerationist super PAC) declared Bores their #1 target. Since then, they've spent about $2.5 million on attack ads against him.

    LTF's theory of change isn't actually to [...]



    ---

    Outline:

    (00:54) How is Alex Bores doing?

    (04:02) How to help

    (06:02) A quick note about other opportunities

    The original text contained 9 footnotes which were omitted from this narration.

    ---

    First published:
    April 27th, 2026

    Source:
    https://www.lesswrong.com/posts/pjSKdcBjfvjGexr6A/update-on-the-alex-bores-campaign

    ---



    Narrated by TYPE III AUDIO.

    7 mins
  • "Community misconduct disputes are not about facts" by mingyuan
    Apr 27 2026
    In criminal law, the prosecution and the defense each try to establish a timeline — what happened, where, when, who was involved — and thereby determine whether the defendant is actually guilty of a crime.[1]

    Community misconduct disputes are nothing like this.

    There is only rarely disagreement over facts, and even when there is, it is not the crux of the matter. Community disputes are not for litigating facts. What they are for[2] is litigating three things:

    1. The character of the accused
    2. The character of the accuser
    3. The importance of the accusation, in light of points 1 & 2
    I think basically all the terrible things that happen in community disputes are a result of this.

    When what's being ruled on is a person — their place in their community, their continued access to resources, their worth as a human being — the situation feels all-or-nothing, and often escalates out of control.

    This dynamic:

    • discourages people from speaking out about their experiences, both because they may be reluctant to ‘ruin the person's life’ over something non-catastrophic, and because they know that they will be opening themselves up to a punishing level of scrutiny and criticism, and may [...]
    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    April 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/cekDpXqjugt5Q3JnC/community-misconduct-disputes-are-not-about-facts

    ---



    Narrated by TYPE III AUDIO.

    3 mins
  • "The paper that killed deep learning theory" by LawrenceC
    Apr 27 2026
    Around 10 years ago, a paper came out that arguably killed classical deep learning theory: Zhang et al.'s aptly titled Understanding deep learning requires rethinking generalization.

    Of course, this is a bit of an exaggeration. No single paper ever kills a field of research on its own, and deep learning theory was not exactly the most productive and healthy field at the time this was published. But if I had to point to a single paper that shattered the feeling of optimism at the time, it would be Zhang et al. 2016.[1]

    Caption: believe it or not, this unassuming table rocked the field of deep learning theory back in 2016, despite probably involving fewer computational resources than what Claude 4.7 Opus consumed when I clicked the “Claude” button embedded into the LessWrong editor.



    Let's start by answering a question: what, exactly, do I mean by deep learning theory?

    At least in 2016, the answer was: “extending statistical learning theory to deep neural networks trained with SGD, in order to derive generalization bounds that would explain their behavior in practice”.



    Since its conception in the mid-1980s, statistical learning theory had been the dominant approach for [...]

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    April 25th, 2026

    Source:
    https://www.lesswrong.com/posts/ZvQfcLbcNHYqmvWyo/the-paper-that-killed-deep-learning-theory

    ---



    Narrated by TYPE III AUDIO.

    11 mins
  • "Forecasting is Way Overrated, and We Should Stop Funding It" by mabramov
    Apr 26 2026
    Summary

    EA and rationalists got enamoured with forecasting and prediction markets and made them part of the culture. This hasn't proven very useful, yet it continues to receive substantial EA funding. We should cut it off.

    My Experience with Forecasting

    For a while, I was the number one forecaster on Manifold. This lasted for about a year until I stopped just over 2 years ago. To this day, despite quitting, I’m still #8 on the platform. Additionally, I have done well on real-money prediction markets (Polymarket), earning mid-5 figures and winning a few AI bets. I say this to suggest that I would gain status from forecasting being seen as useful, but I think, to the contrary, that the EA community should stop funding it.

    I’ve written a few comments throughout the years that I didn’t think forecasting was worth funding. You can see some of these here and here. Finally, I have gotten around to making this full post.

    Solution Seeking a Problem

    When talking about forecasting, people often ask questions like “How can we leverage forecasting into better decisions?” This is the wrong way to go about solving problems. You solve problems by starting with [...]

    ---

    First published:
    April 25th, 2026

    Source:
    https://www.lesswrong.com/posts/WCutvyr9rr3cpF6hx/forecasting-is-way-overrated-and-we-should-stop-funding-it

    ---



    Narrated by TYPE III AUDIO.

    9 mins