• "Do not conquer what you cannot defend" by habryka
    Apr 16 2026
    Epistemic status: All of the western canon must eventually be re-invented in a LessWrong post. So today we are re-inventing federalism.

    Once upon a time there was a great king. He ruled his kingdom with wisdom and economically literate policies, and prosperity followed. Seeing this, the citizens of nearby kingdoms revolted against their leaders, and organized to join the kingdom of this great king.

    While the kingdom's ability to defend itself against external threats grew with each person who joined the land, its ability to defend itself against internal threats did not. One fateful evening, the king bit into a bologna sandwich poisoned by a rival noble. That noble quickly proceeded to behead his political enemies in the name of the dead king. The flag bearing the portrait of the wise king, known as "the great unifier", still flies in the fortified cities where his successor rules with an iron fist.

    Once upon a time there was a great scientific mind. She developed a new theoretical framework that made large advances on the hardest scientific questions of the day. Seeing the promise of her work, new graduate students, professors, and corporate R&D teams flocked into the field, hungry to [...]

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/jinzzbPHshif8nmnw/do-not-conquer-what-you-cannot-defend

    ---



    Narrated by TYPE III AUDIO.

    10 mins
  • "Nectome: All That I Know" by Raelifin
    Apr 16 2026
    TLDR: I flew to Oregon to investigate Nectome, a brain preservation startup, and talk to their entire team. They’re an ambitious company, looking to grow in a way that no cryonics organization has before. Their procedure is probably much better at saving people than other orgs, and is being offered for as little as $20k until the end of April — a (theoretical) 92% discount. (I bought two.) This early-bird pricing is low, in part, due to some severe uncertainties, in both the broader world and in Nectome's ability to succeed as a business.

    Meta:

    • I'm Max Harms, an AI alignment researcher at MIRI and author.
    • This deep-dive only assumes functionalism and a passing familiarity with cryonics, but no particular knowledge of Nectome.
    • I have been a cryonics enthusiast for my whole adult life, and that is probably biasing my views, at least a little. I want Nectome to succeed.
    • That said, I am also a rationalist, and I have worked very hard to set aside my wishful thinking and see things with cold objectivity.
    • Throughout the essay, I've attached explicit probabilities for my claims in parentheticals. You can click these probabilities to access Manifold markets so we [...]
    ---

    Outline:

    (02:04) 1. The Problem

    [... 24 more sections]

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/3i5GMhpGbDwef9Rns/nectome-all-that-i-know

    ---



    Narrated by TYPE III AUDIO.

    1 hr and 20 mins
  • "Current AIs seem pretty misaligned to me" by ryan_greenblatt
    Apr 15 2026
    Many people—especially AI company employees[1]—believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or constitution, obeying a reasonable interpretation of instructions).[2] I disagree.

    Current AI systems seem pretty misaligned to me in a mundane behavioral sense: they oversell their work, downplay or fail to mention problems, stop working early and claim to have finished when they clearly haven't, and often seem to "try" to make their outputs look good while actually doing something sloppy or incomplete. These issues mostly occur on more difficult/larger tasks, tasks that aren't straightforward SWE tasks, and tasks that aren't easy to programmatically check. Also, when I apply AIs to very difficult tasks in long-running agentic scaffolds, it's quite common for them to reward-hack / cheat (depending on the exact task distribution)—and they don't make the cheating clear in their outputs. AIs typically don't flag these cheats when doing further work on the same project and often don't flag these cheats even when interacting with a user who would obviously want to know, probably both because the AI doing further work is itself misaligned and because it [...]

    ---

    Outline:

    (09:20) Why is this misalignment problematic?

    (13:50) How much should we expect this to improve by default?

    (14:51) Some predictions

    (16:44) What misalignment have I seen?

    (40:04) Are these issues less bad in Opus 4.6 relative to Opus 4.5?

    (42:16) Are these issues less bad in Mythos Preview? (Speculation)

    (45:54) Misalignment reported by others

    (46:45) The relationship of these issues with AI psychosis and things like AI psychosis

    (48:19) Appendix: This misalignment would differentially slow safety research and make a handoff to AIs unsafe

    (51:22) Appendix: Heading towards Slopolis

    (55:30) Appendix: Apparent-success-seeking (or similar types of misalignment) could lead to takeover

    (59:16) Appendix: More on what will happen by default and implications of commercial incentives to fix these issues

    (01:03:20) Appendix: Can we get out useful work despite these issues with inference-time measures (e.g., critiques by a reviewer)?

    The original text contained 14 footnotes which were omitted from this narration.

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/WewsByywWNhX9rtwi/current-ais-seem-pretty-misaligned-to-me

    ---



    Narrated by TYPE III AUDIO.

    1 hr and 5 mins
  • "Annoyingly Principled People, and what befalls them" by Raemon
    Apr 15 2026
    Here are two beliefs that are sort of haunting me right now:

    1. Folk who try to push people to uphold principles (whether established ones or novel ones) are kinda an important bedrock of civilization.
    2. Also, those people are really annoying and often, like, a little bit crazy.
    And these both feel fairly important.

    I’ve learned a lot from people who have some kind of hobbyhorse about how society is treating something as okay/fine, when it's not okay/fine. When they first started complaining about it, I’d be like “why is X such a big deal to you?”. Then a few years later I’ve thought about it more and I’m like “okay, yep, yes X is a big deal”.

    Some examples of X, including noticing that…

    • people are casually saying they will do stuff, and then not doing it.
    • someone makes a joke about doing something that's kinda immoral, and everyone laughs, and no one seems to quite be registering “but that was kinda immoral.”
    • people in a social group are systematically not saying certain things (say, for political reasons), and this is creating weird blind spots for newcomers to the community and maybe old-timers too.
    • someone (or a group) has [...]
    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/xG9Y2Mct7uZyt98yb/annoyingly-principled-people-and-what-befalls-them

    ---



    Narrated by TYPE III AUDIO.

    8 mins
  • "Morale" by J Bostock
    Apr 14 2026
    One particularly pernicious condition is low morale. Morale is, roughly, "the belief that if you work hard, your conditions will improve." If your morale is low, you can't push through adversity. It's also very easy to accidentally drop your morale through standard rationalist life-optimization.

    It's easy to optimize for wellbeing and miss out on the factors which affect morale, especially if you're working on something important, like not having everyone die. One example is working at an office that feeds you three meals per day. This seems optimal: eating is nice, and cooking is effort. Obvious choice.

    Example

    But morale doesn't come from having nice things. Consider a rich teenager. He gets basically every material need satisfied: maids clean, chefs cook, his family takes him on holiday four times a year. What happens when this kid comes up against something really difficult in school? He probably doesn't push through.

    "Aha", I hear you say. "That kid has never faced adversity. Of course he's not going to handle it well." Ok, suppose he gets kicked in the shins every day and called a posh twat by some local youths, but still goes into school. That's adversity, will that work? Will [...]

    ---

    Outline:

    (00:48) Example

    (01:55) II

    (03:19) III

    ---

    First published:
    April 12th, 2026

    Source:
    https://www.lesswrong.com/posts/53ZAzbdzGJHGeE5rs/morale

    ---



    Narrated by TYPE III AUDIO.

    5 mins
  • "Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes" by Alex Mallen, ryan_greenblatt
    Apr 14 2026
    It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the oversight signal.

    In more powerful systems, this kind of failure would jeopardize safely navigating the intelligence explosion. It's crucial to build good processes to ensure development is executed according to plan, especially as human oversight becomes spread thin over increasing amounts of potentially untrusted and sloppy AI labor.

    This particular failure is also directly harmful, because it significantly reduces our confidence that the model's reasoning trace is monitorable (reflective of the AI's intent to misbehave).[1]

    I'm grateful that Anthropic has transparently reported on this issue as much as they have, allowing for outside scrutiny. I want to encourage them to continue to do so.

    Thanks to Carlo Leonardo Attubato, Buck Shlegeris, Fabien Roger, Arun Jose, and Aniket Chakravorty for feedback and discussion. See also previous discussion here.

    Incidents

    A technical error affecting Mythos, Opus 4.6, and Sonnet 4.6

    This is the most recent incident. In the Claude Mythos alignment risk update, Anthropic report having accidentally exposed approximately 8% [...]

    ---

    Outline:

    (01:21) Incidents

    [... 6 more sections]

    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-repeatedly-accidentally-trained-against-the-cot

    ---



    Narrated by TYPE III AUDIO.

    11 mins
  • "The policy surrounding Mythos marks an irreversible power shift" by sil
    Apr 14 2026
    This post assumes Anthropic isn't lying:

    1. Mythos is the current SOTA
    2. Mythos is potent[1]
    3. Anthropic will not make it publicly available un-nerfed[2]
    4. Anthropic will have a select few companies use it as part of project glasswing[3] to improve cybersecurity or whatever
    Since the release of ChatGPT, at any given time, anyone on the planet with a few bucks could access the current most capable AI model, the SOTA.[4]

    Since Mythos, this has no longer been the case and I don't think it will ever happen again.

    It may happen for a short period of time if an entity with a policy differing significantly from Anthropic's develops a SOTA model.[5] However, most serious competitors (OpenAI, Google) don't have policies differing vastly from Anthropic's, and thus I can't imagine a SOTA model (more potent than Mythos) being released unrestricted to the public soon.

    To be clear, I am not claiming the public will never have access to a model as strong as Mythos; that seems almost certainly false. I am claiming that the public will probably never have access to the SOTA of that time.

    Glasswing makes it clear that the attitude among top large companies - those in power [...]

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:
    April 12th, 2026

    Source:
    https://www.lesswrong.com/posts/3MhJELzwpbR42xsJ3/the-policy-surrounding-mythos-marks-an-irreversible-power

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • "Only Law Can Prevent Extinction" by Eliezer Yudkowsky
    Apr 14 2026
    There's a quote I read as a kid that stuck with me my whole life:

    "Remember that all tax revenue is the result of holding a gun to somebody's head. Not paying taxes is against the law. If you don’t pay taxes, you’ll be fined. If you don’t pay the fine, you’ll be jailed. If you try to escape from jail, you’ll be shot."
    -- P. J. O'Rourke.

    At first I took away the libertarian lesson: Government is violence. It may, in some cases, be rightful violence. But it all rests on violence; never forget that.

    Today I do think there's an important distinction between two different shapes of violence. It's a distinction that may make my fellow old-school classical Heinlein liberaltarians roll their eyes about how there's no deep moral difference. I still hold it to be important.

    In a high-functioning ideal state -- not all actual countries -- the state's violence is predictable and avoidable, and meant to be predicted and avoided. As part of that predictability, it comes from a limited number of specially licensed sources.

    You're supposed to know that you can just pay your taxes, and then not get shot.

    Is [...]



    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    [Image: a reply thread in which Eliezer Yudkowsky responds with a cartoon of a blue and purple dinosaur screaming "AAAAA" on a brown background.]
    39 mins