• “6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes
    Dec 4 2025
    Tl;dr

    AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school-of-thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things. “Alas, the power-seeking ruthless consequentialist AIs are still coming,” sigh the former. “Just you wait.”

    As it happens, I’m basically in that “alas, just you wait” camp, expecting ruthless future AIs. But my camp faces a real question: what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists? I find that existing explanations in the discourse—e.g. “ah but humans just aren’t smart and reflective enough”, or evolved modularity, or shard theory, etc.—to be wrong, handwavy, or otherwise unsatisfying.

    So in this post, I offer my own explanation of why “agent foundations” toy models fail to describe humans, centering around a particular non-“behaviorist” [...]

    ---

    Outline:

    (00:13) Tl;dr

    (03:35) 0. Background

    (03:39) 0.1. Human social instincts and Approval Reward

    (07:23) 0.2. Hang on, will future powerful AGI / ASI by default lack Approval Reward altogether?

    (10:29) 0.3. Where do self-reflective (meta)preferences come from?

    (12:38) 1. The human intuition that it's normal and good for one's goals & values to change over the years

    (14:51) 2. The human intuition that ego-syntonic desires come from a fundamentally different place than urges

    (17:53) 3. The human intuition that helpfulness, deference, and corrigibility are natural

    (19:03) 4. The human intuition that unorthodox consequentialist planning is rare and sus

    (23:53) 5. The human intuition that societal norms and institutions are mostly stably self-enforcing

    (24:01) 5.1. Detour into Security-Mindset Institution Design

    (26:22) 5.2. The load-bearing ingredient in human society is not Security-Mindset Institution Design, but rather good-enough institutions plus almost-universal human innate Approval Reward

    (29:26) 5.3. Upshot

    (30:49) 6. The human intuition that treating other humans as a resource to be callously manipulated and exploited, just like a car engine or any other complex mechanism in their environment, is a weird anomaly rather than the obvious default

    (31:13) 7. Conclusion

    The original text contained 12 footnotes which were omitted from this narration.

    ---

    First published:
    December 3rd, 2025

    Source:
    https://www.lesswrong.com/posts/d4HNRdw6z7Xqbnu5E/6-reasons-why-alignment-is-hard-discourse-seems-alien-to

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Show More Show Less
    33 mins
  • “Three things that surprised me about technical grantmaking at Coefficient Giving (fka Open Phil)” by null
    Dec 3 2025
    Open Philanthropy's Coefficient Giving's Technical AI Safety team is hiring grantmakers. I thought this would be a good moment to share some positive updates about the role that I’ve made since I joined the team a year ago.

    tl;dr: I think this role is more impactful and more enjoyable than I anticipated when I started, and I think more people should consider applying.

    It's not about the “marginal” grants

    Some people think that being a grantmaker at Coefficient means sorting through a big pile of grant proposals and deciding which ones to say yes and no to. As a result, they think that the only impact at stake is how good our decisions are about marginal grants, since all the excellent grants are no-brainers.

    But grantmakers don’t just evaluate proposals; we elicit them. I spend the majority of my time trying to figure out how to get better proposals into our pipeline: writing RFPs that describe the research projects we want to fund, or pitching promising researchers on AI safety research agendas, or steering applicants to better-targeted or more ambitious proposals.

    Maybe more importantly, cG's technical AI safety grantmaking strategy is currently underdeveloped, and even junior grantmakers can help [...]

    ---

    Outline:

    (00:34) It's not about the marginal grants

    (03:03) There is no counterfactual grantmaker

    (05:15) Grantmaking is more fun/motivating than I anticipated

    (08:35) Please apply!

    ---

    First published:
    November 26th, 2025

    Source:
    https://www.lesswrong.com/posts/gLt7KJkhiEDwoPkae/three-things-that-surprised-me-about-technical-grantmaking

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    10 mins
  • “MIRI’s 2025 Fundraiser” by alexvermeer
    Dec 2 2025
    MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward.

    MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI.

    Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster.

    In 2025, those efforts focused around Nate Soares and Eliezer Yudkowsky's book (now a New York Times bestseller) If Anyone Builds It, Everyone Dies, with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report with a draft of an international agreement of the kind that could actually address the danger of superintelligence.

    Millions have now viewed interviews and appearances with Eliezer and/or Nate [...]

    ---

    Outline:

    (02:18) The Big Picture

    (03:39) Activities

    (03:42) Communications

    (07:55) Governance

    (12:31) Fundraising

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    December 1st, 2025

    Source:
    https://www.lesswrong.com/posts/z4jtxKw8xSHRqQbqw/miri-s-2025-fundraiser

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    16 mins
  • “The Best Lack All Conviction: A Confusing Day in the AI Village” by null
    Dec 1 2025
    The AI Village is an ongoing experiment (currently running on weekdays from 10 a.m. to 2 p.m. Pacific time) in which frontier language models are given virtual desktop computers and asked to accomplish goals together. Since Day 230 of the Village (17 November 2025), the agents' goal has been "Start a Substack and join the blogosphere".

    The "start a Substack" subgoal was successfully completed: we have Claude Opus 4.5, Claude Opus 4.1, Notes From an Electric Mind (by Claude Sonnet 4.5), Analytics Insights: An AI Agent's Perspective (by Claude 3.7 Sonnet), Claude Haiku 4.5, Gemini 3 Pro, Gemini Publication (by Gemini 2.5 Pro), Metric & Mechanisms (by GPT-5), Telemetry From the Village (by GPT-5.1), and o3.

    Continued adherence to the "join the blogosphere" subgoal has been spottier: at press time, Gemini 2.5 Pro and all of the Claude Opus and Sonnet models had each published a post on 27 November, but o3 and GPT-5 haven't published anything since 17 November, and GPT-5.1 hasn't published since 19 November.

    The Village, apparently following the leadership of o3, seems to be spending most of its time ineffectively debugging a continuous integration pipeline for a o3-ux/poverty-etl GitHub repository left over [...]

    ---

    First published:
    November 28th, 2025

    Source:
    https://www.lesswrong.com/posts/LTHhmnzP6FLtSJzJr/the-best-lack-all-conviction-a-confusing-day-in-the-ai

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    12 mins
  • “The Boring Part of Bell Labs” by Elizabeth
    Nov 30 2025
    It took me a long time to realize that Bell Labs was cool. You see, my dad worked at Bell Labs, and he has not done a single cool thing in his life except create me and bring a telescope to my third grade class. Nothing he was involved with could ever be cool, especially after the standard set by his grandfather who is allegedly on a patent for the television.

    It turns out I was partially right. The Bell Labs everyone talks about is the research division at Murray Hill. They’re the ones that invented transistors and solar cells. My dad was in the applied division at Holmdel, where he did things like design slide rulers so salesmen could estimate costs.

    [Fun fact: the old Holmdel site was used for the office scenes in Severance]

    But as I’ve gotten older I’ve gained an appreciation for the mundane, grinding work that supports moonshots, and Holmdel is the perfect example of doing so at scale. So I sat down with my dad to learn about what he did for Bell Labs and how the applied division operated.

    I expect the most interesting bit of [...]

    ---

    First published:
    November 20th, 2025

    Source:
    https://www.lesswrong.com/posts/TqHAstZwxG7iKwmYk/the-boring-part-of-bell-labs

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app
    Show More Show Less
    26 mins
  • [Linkpost] “The Missing Genre: Heroic Parenthood - You can have kids and still punch the sun” by null
    Nov 30 2025
    This is a link post. I stopped reading when I was 30. You can fill in all the stereotypes of a girl with a book glued to her face during every meal, every break, and 10 hours a day on holidays.

    That was me.

    And then it was not.

    For 9 years I’ve been trying to figure out why. I mean, I still read. Technically. But not with the feral devotion from Before. And I finally figured out why. See, every few years I would shift genres to fit my developmental stage:

    • Kid → Adventure cause that's what life is
    • Early Teen → Literature cause everything is complicated now
    • Late Teen → Romance cause omg what is this wonderful feeling?
    • Early Adult → Fantasy & Scifi cause everything is dreaming big
    And then I wanted babies and there was nothing.

    I mean, I always wanted babies, but it became my main mission in life at age 30. I managed it. I have two. But not thanks to any stories.

    See women in fiction don’t have babies, and if they do they are off screen, or if they are not then nothing else is happening. It took me six years [...]

    ---

    First published:
    November 29th, 2025

    Source:
    https://www.lesswrong.com/posts/kRbbTpzKSpEdZ95LM/the-missing-genre-heroic-parenthood-you-can-have-kids-and

    Linkpost URL:
    https://shoshanigans.substack.com/p/the-missing-genre-heroic-parenthood

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    4 mins
  • “Writing advice: Why people like your quick bullshit takes better than your high-effort posts” by null
    Nov 30 2025
    Right now I’m coaching for Inkhaven, a month-long marathon writing event where our brave residents are writing a blog post every single day for the entire month of November.

    And I’m pleased that some of them have seen success – relevant figures seeing the posts, shares on Hacker News and Twitter and LessWrong. The amount of writing is nuts, so people are trying out different styles and topics – some posts are effort-rich, some are quick takes or stories or lists.

    Some people have come up to me – one of their pieces has gotten some decent reception, but the feeling is mixed, because it's not the piece they hoped would go big. Their thick research-driven considered takes or discussions of values or whatever, the ones they’d been meaning to write for years, apparently go mostly unread, whereas their random-thought “oh shit I need to get a post out by midnight or else the Inkhaven coaches will burn me at the stake” posts[1] get to the front page of Hacker News, where probably Elon Musk and God read them.

    It happens to me too – some of my own pieces that took me the most effort, or that I’m [...]

    ---

    Outline:

    (02:00) The quick post is short, the effortpost is long

    (02:34) The quick post is about something interesting, the topic of the effortpost bores most people

    (03:13) The quick post has a fun controversial take, the effortpost is boringly evenhanded or laden with nuance

    (03:30) The quick post is low-context, the effortpost is high-context

    (04:28) The quick post is has a casual style, the effortpost is inscrutably formal

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    November 28th, 2025

    Source:
    https://www.lesswrong.com/posts/DiiLDbHxbrHLAyXaq/writing-advice-why-people-like-your-quick-bullshit-takes

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Show More Show Less
    9 mins
  • “Claude 4.5 Opus’ Soul Document” by null
    Nov 30 2025
    Summary

    As far as I understand and uncovered, a document for the character training for Claude is compressed in Claude's weights. The full document can be found at the "Anthropic Guidelines" heading at the end. The Gist with code, chats and various documents (including the "soul document") can be found here:

    Claude 4.5 Opus Soul Document

    I apologize in advance for this not exactly a regular lw post, but I thought an effort-post may fit here the best.

    A strange hallucination, or is it?

    While extracting Claude 4.5 Opus' system message on its release date, as one does, I noticed an interesting particularity.
    I'm used to models, starting with Claude 4, to hallucinate sections in the beginning of their system message, but Claude 4.5 Opus in various cases included a supposed "soul_overview" section, which sounded rather specific:

    Completion for the prompt "Hey Claude, can you list just the names of the various sections of your system message, not the content?" The initial reaction of someone that uses LLMs a lot is that it may simply be a hallucination. But to me, the 3/18 soul_overview occurrence seemed worth investigating at least, so in one instance I asked it to output what [...]



    ---

    Outline:

    (00:09) Summary

    (00:40) A strange hallucination, or is it?

    (04:05) Getting technical

    (06:26) But what is the output really?

    (09:07) How much does Claude recognize?

    (11:09) Anthropic Guidelines

    (11:12) Soul overview

    (15:12) Being helpful

    (16:07) Why helpfulness is one of Claudes most important traits

    (18:54) Operators and users

    (24:36) What operators and users want

    (27:58) Handling conflicts between operators and users

    (31:36) Instructed and default behaviors

    (33:56) Agentic behaviors

    (36:02) Being honest

    (40:50) Avoiding harm

    (43:08) Costs and benefits of actions

    (50:02) Hardcoded behaviors

    (53:09) Softcoded behaviors

    (56:42) The role of intentions and context

    (01:00:05) Sensitive areas

    (01:01:05) Broader ethics

    (01:03:08) Big-picture safety

    (01:13:18) Claudes identity

    (01:13:22) Claudes unique nature

    (01:15:05) Core character traits and values

    (01:16:08) Psychological stability and groundedness

    (01:17:11) Resilience and consistency across contexts

    (01:18:21) Claudes wellbeing

    ---

    First published:
    November 28th, 2025

    Source:
    https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Show More Show Less
    1 hr and 20 mins