• "Carpathia Day" by Drake Morrison
    Apr 18 2026
    (The better telling is here. Seriously you should go read it. I've heard this story told in rationalist circles, but there wasn't a post on LessWrong, so I made one)

    Today is April 15th, Carpathia Day. Take a moment to put forth an unreasonable effort to save a little piece of your world, when no one would fault you for doing less.

    In the early morning of April 15, the RMS Titanic began to sink with more than two thousand souls on board.

    Over 58 nautical miles away — too far to make it in time — sailed the RMS Carpathia, a small, slow, passenger steamer. The wireless operator, Harold Cottam, was listening to the transmitter late at night before he went to bed when he got a message from Cape Cod intended for the Titanic. When he contacted the Titanic to relay the messages, he got back a distress signal saying they hit an iceberg and were in need of immediate assistance. Cottam ran the message straight to the captain's cabin, waking him.

    Captain Arthur Rostron's first reaction upon being awoken was anger, but that anger dissolved as he came to understand the situation. Before he'd [...]



    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/SARCiTFJfXJJhpej7/carpathia-day

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    4 mins
  • "Let goodness conquer all that it can defend" by habryka
    Apr 18 2026
    Epistemic status: All of the western canon must eventually be re-invented in a LessWrong post, so today we are re-inventing modernism.

    In my post yesterday, I said:

    Maybe the most important way ambitious, smart, and wise people leave the world worse off than they found it is by seeing correctly how some part of the world is broken and unifying various powers under a banner to fix that problem — only for the thing they have built to slip from their grasp and, in its collapse, destroy much more than anything previously could have.

    I think many people very reasonably understood me to be giving a general warning against centralization and power-accumulation. While that is where some of my thoughts while writing the post went to, I would like to now expand on its antithesis, both for my own benefit, and for the benefit of the reader who might have been left confused after yesterday's post.

    The other day I was arguing with Eliezer about a bunch of related thoughts and feelings. In that context, he said to me:

    From my perspective, my whole life has been, when you raise the banner to oppose the apocalypse, crazy [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    April 16th, 2026

    Source:
    https://www.lesswrong.com/posts/w3MJcDueo77D3Ldta/let-goodness-conquer-all-that-it-can-defend

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    11 mins
  • "Do not conquer what you cannot defend" by habryka
    Apr 16 2026
    Epistemic status: All of the western canon must eventually be re-invented in a LessWrong post. So today we are re-inventing federalism.

    Once upon a time there was a great king. He ruled his kingdom with wisdom and economically literate policies, and prosperity followed. Seeing this, the citizens of nearby kingdoms revolted against their leaders, and organized to join the kingdom of this great king.

    While the kingdom's ability to defend itself against external threats grew with each person who joined the land, the kingdom's ability to defend itself against internal threats did not. One fateful evening, the king bit into a bologna sandwich poisoned by a rival noble. That noble quickly proceeded to behead his political enemies in the name of the dead king. The flag bearing the wise king's portrait known as "the great unifier" still flies in the fortified cities where his successor rules with an iron fist.

    Once upon a time there was a great scientific mind. She developed a new theoretical framework that made large advances on the hardest scientific questions of the day. Seeing the promise of her work, new graduate students, professors, and corporate R&D teams flocked into the field, hungry to [...]

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/jinzzbPHshif8nmnw/do-not-conquer-what-you-cannot-defend

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    10 mins
  • "Nectome: All That I Know" by Raelifin
    Apr 16 2026
    TLDR: I flew to Oregon to investigate Nectome, a brain preservation startup, and talk to their entire team. They’re an ambitious company, looking to grow in a way that no cryonics organization has before. Their procedure is probably much better at saving people than other orgs, and is being offered for as little as $20k until the end of April — a (theoretical) 92% discount. (I bought two.) This early-bird pricing is low, in part, due to some severe uncertainties, in both the broader world and in Nectome's ability to succeed as a business.

    Meta:

    • I'm Max Harms, an AI alignment researcher at MIRI and author.
    • This deep-dive only assumes functionalism and a passing familiarity with cryonics, but no particular knowledge of Nectome.
    • I have been a cryonics enthusiast for my whole adult life, and that is probably biasing my views, at least a little. I want Nectome to succeed.
    • That said, I am also a rationalist, and I have worked very hard to set aside my wishful thinking and see things with cold objectivity.
    • Throughout the essay, I've attached explicit probabilities for my claims in parentheticals. You can click these probabilities to access Manifold markets so we [...]
    ---

    Outline:

    (02:04) 1. The Problem

    [... 24 more sections]

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/3i5GMhpGbDwef9Rns/nectome-all-that-i-know

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Show More Show Less
    1 hr and 20 mins
  • "Current AIs seem pretty misaligned to me" by ryan_greenblatt
    Apr 15 2026
    Many people—especially AI company employees [1] —believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or constitution, obeying a reasonable interpretation of instructions). [2] I disagree.

    Current AI systems seem pretty misaligned to me in a mundane behavioral sense: they oversell their work, downplay or fail to mention problems, stop working early and claim to have finished when they clearly haven't, and often seem to "try" to make their outputs look good while actually doing something sloppy or incomplete. These issues mostly occur on more difficult/larger tasks, tasks that aren't straightforward SWE tasks, and tasks that aren't easy to programmatically check. Also, when I apply AIs to very difficult tasks in long-running agentic scaffolds, it's quite common for them to reward-hack / cheat (depending on the exact task distribution)—and they don't make the cheating clear in their outputs. AIs typically don't flag these cheats when doing further work on the same project and often don't flag these cheats even when interacting with a user who would obviously want to know, probably both because the AI doing further work is itself misaligned and because it [...]

    ---

    Outline:

    (09:20) Why is this misalignment problematic?

    (13:50) How much should we expect this to improve by default?

    (14:51) Some predictions

    (16:44) What misalignment have I seen?

    (40:04) Are these issues less bad in Opus 4.6 relative to Opus 4.5?

    (42:16) Are these issues less bad in Mythos Preview? (Speculation)

    (45:54) Misalignment reported by others

    (46:45) The relationship of these issues with AI psychosis and things like AI psychosis

    (48:19) Appendix: This misalignment would differentially slow safety research and make a handoff to AIs unsafe

    (51:22) Appendix: Heading towards Slopolis

    (55:30) Appendix: Apparent-success-seeking (or similar types of misalignment) could lead to takeover

    (59:16) Appendix: More on what will happen by default and implications of commercial incentives to fix these issues

    (01:03:20) Appendix: Can we get out useful work despite these issues with inference-time measures (e.g., critiques by a reviewer)?

    The original text contained 14 footnotes which were omitted from this narration.

    ---

    First published:
    April 15th, 2026

    Source:
    https://www.lesswrong.com/posts/WewsByywWNhX9rtwi/current-ais-seem-pretty-misaligned-to-me

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podc
    Show More Show Less
    1 hr and 5 mins
  • "Annoyingly Principled People, and what befalls them" by Raemon
    Apr 15 2026
    Here are two beliefs that are sort of haunting me right now:

    1. Folk who try to push people to uphold principles (whether established ones or novel ones), are kinda an important bedrock of civilization.
    2. Also, those people are really annoying and often, like, a little bit crazy
    And these both feel fairly important.

    I’ve learned a lot from people who have some kind of hobbyhorse about how society is treating something as okay/fine, when it's not okay/fine. When they first started complaining about it, I’d be like “why is X such a big deal to you?”. Then a few years later I’ve thought about it more and I’m like “okay, yep, yes X is a big deal”.

    Some examples of X, including noticing that…

    • people are casually saying they will do stuff, and then not doing it.
    • someone makes a joke about doing something that's kinda immoral, and everyone laughs, and no one seems to quite be registering “but that was kinda immoral.”
    • people in a social group are systematically not saying certain things (say, for political reasons), and this is creating weird blind spots for newcomers to the community and maybe old-timers too.
    • someone (or a group) has [...]
    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/xG9Y2Mct7uZyt98yb/annoyingly-principled-people-and-what-befalls-them

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    8 mins
  • "Morale" by J Bostock
    Apr 14 2026
    One particularly pernicious condition is low morale. Morale is, roughly, "the belief that if you work hard, your conditions will improve." If your morale is low, you can't push through adversity. It's also very easy to accidentally drop your morale through standard rationalist life-optimization.

    It's easy to optimize for wellbeing and miss out on the factors which affect morale, especially if you're working on something important, like not having everyone die. One example is working at an office that feeds you three meals per day. This seems optimal: eating is nice, and cooking is effort. Obvious choice.

    Example

    But morale doesn't come from having nice things. Consider a rich teenager. He gets basically every material need satisfied: maids clean, chefs cook, his family takes him on holiday four times a year. What happens when this kid comes up against something really difficult in school? He probably doesn't push through.

    "Aha", I hear you say. "That kid has never faced adversity. Of course he's not going to handle it well." Ok, suppose he gets kicked in the shins every day and called a posh twat by some local youths, but still goes into school. That's adversity, will that work? Will [...]

    ---

    Outline:

    (00:48) Example

    (01:55) II

    (03:19) III

    ---

    First published:
    April 12th, 2026

    Source:
    https://www.lesswrong.com/posts/53ZAzbdzGJHGeE5rs/morale

    ---



    Narrated by TYPE III AUDIO.

    Show More Show Less
    5 mins
  • "Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes" by Alex Mallen, ryan_greenblatt
    Apr 14 2026
    It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the oversight signal.

    In more powerful systems, this kind of failure would jeopardize safely navigating the intelligence explosion. It's crucial to build good processes to ensure development is executed according to plan, especially as human oversight becomes spread thin over increasing amounts of potentially untrusted and sloppy AI labor.

    This particular failure is also directly harmful, because it significantly reduces our confidence that the model's reasoning trace is monitorable (reflective of the AI's intent to misbehave).[1]

    I'm grateful that Anthropic has transparently reported on this issue as much as they have, allowing for outside scrutiny. I want to encourage them to continue to do so.

    Thanks to Carlo Leonardo Attubato, Buck Shlegeris, Fabien Roger, Arun Jose, and Aniket Chakravorty for feedback and discussion. See also previous discussion here.

    Incidents

    A technical error affecting Mythos, Opus 4.6, and Sonnet 4.6

    This is the most recent incident. In the Claude Mythos alignment risk update, Anthropic report having accidentally exposed approximately 8% [...]

    ---

    Outline:

    (01:21) Incidents

    [... 6 more sections]

    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-repeatedly-accidentally-trained-against-the-cot

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or anoth
    Show More Show Less
    11 mins