• "The policy surrounding Mythos marks an irreversible power shift" by sil
    Apr 14 2026
    This post assumes Anthropic isn't lying:

    1. Mythos is the current SOTA
    2. Mythos is potent[1]
    3. Anthropic will not make it publicly available un-nerfed[2]
    4. Anthropic will have a select few companies use it as part of project glasswing[3] to improve cybersecurity or whatever
    Since the release of ChatGPT, at any given time, anyone on the planet with a few bucks could access the most capable AI model of the moment, the SOTA.[4]

    Since Mythos, this has no longer been the case and I don't think it will ever happen again.

    It may happen for a short period of time if an entity with a policy differing significantly from Anthropic's develops a SOTA model.[5] However, most serious competitors (OpenAI, Google) don't have policies that differ vastly from Anthropic's, and thus I can't imagine a SOTA model (more potent than Mythos) being released to the public unrestricted any time soon.

    To be clear, I am not claiming the public will never have access to a model as strong as Mythos; that seems almost certainly false. I am claiming that the public will probably never again have access to the SOTA of its time.

    Glasswing makes it clear that the attitude among top large companies - those in power [...]

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:
    April 12th, 2026

    Source:
    https://www.lesswrong.com/posts/3MhJELzwpbR42xsJ3/the-policy-surrounding-mythos-marks-an-irreversible-power

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • "Only Law Can Prevent Extinction" by Eliezer Yudkowsky
    Apr 14 2026
    There's a quote I read as a kid that stuck with me my whole life:

    "Remember that all tax revenue is the result of holding a gun to somebody's head. Not paying taxes is against the law. If you don’t pay taxes, you’ll be fined. If you don’t pay the fine, you’ll be jailed. If you try to escape from jail, you’ll be shot."
    -- P. J. O'Rourke.

    At first I took away the libertarian lesson: Government is violence. It may, in some cases, be rightful violence. But it all rests on violence; never forget that.

    Today I do think there's an important distinction between two different shapes of violence. It's a distinction that may make my fellow old-school classical Heinlein libertarians roll their eyes about how there's no deep moral difference. I still hold it to be important.

    In a high-functioning ideal state -- not all actual countries -- the state's violence is predictable and avoidable, and meant to be predicted and avoided. As part of that predictability, it comes from a limited number of specially licensed sources.

    You're supposed to know that you can just pay your taxes, and then not get shot.

    Is [...]



    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    A truncated image description: a comment reading in part "do you start sawing off your own leg ... that's not how this works bro", to which Eliezer Yudkowsky replies with an image of a blue and purple cartoon dinosaur screaming "AAAAA" on a brown background.
    39 mins
  • "Dario probably doesn’t believe in superintelligence" by RobertM
    Apr 13 2026
    Epistemic status: I think this is true but don't think this post is a very strong argument for the case, or particularly interesting to read. But I had to get 500 words out! I think the 2013 conversation is interesting reading as a piece of history, separate from the top-level question, and recommend reading that.

    I think many people have a relationship with Anthropic that is premised on a false belief: that Dario Amodei believes in superintelligence.

    What do I mean by "believes" in superintelligence? Roughly speaking, that the returns to intelligence past the human level are large, in terms of the additional affordances they would grant for steering the world, and that it is practical to get that additional intelligence into a system.

    There are many pieces of evidence which suggest this, going quite far back.

    In 2013, Dario was one of two science advisors (along with Jacob Steinhardt) that Holden brought along to a discussion with Eliezer and Luke about MIRI strategy. A transcript of the conversation is here. It is the first piece of public communication I can find from Dario on the subject. Read end-to-end, I don't think it strongly supports my titular claim. However [...]

    ---

    First published:
    April 10th, 2026

    Source:
    https://www.lesswrong.com/posts/Fnty2JpQ6WBD9FWo5/dario-probably-doesn-t-believe-in-superintelligence

    ---



    Narrated by TYPE III AUDIO.

    13 mins
  • "Daycare illnesses" by Nina Panickssery
    Apr 13 2026
    Before I had a baby I was pretty agnostic about the idea of daycare. I could imagine various pros and cons but I didn’t have a strong overall opinion. Then I started mentioning the idea to various people. Every parent I spoke to brought up a consideration I hadn’t thought about before—the illnesses.

    A number of parents, including family members, told me they had sent their baby to daycare only for them to become constantly ill, sometimes severely, until they decided to take them out. This worried me, so I asked around some more. Invariably, every single parent who had tried sending their baby or toddler to daycare, or who had a child in daycare right now, told me that the child was ill more often than not.

    One mother strongly advised me never to send my baby to daycare. She regretted sending her (normal and healthy) first son to daycare when he was one—he ended up hospitalized with severe pneumonia after a few months of constant illnesses and infections. She told me that after that she didn’t send her other kids to daycare and they had much healthier childhoods.

    I also started paying more attention to the kids I [...]

    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses

    ---



    Narrated by TYPE III AUDIO.

    10 mins
  • "If Mythos actually made Anthropic employees 4x more productive, I would radically shorten my timelines" by ryan_greenblatt
    Apr 12 2026
    Anthropic's system card for Mythos Preview says:

    It's unclear how we should interpret this. What do they mean by productivity uplift? To what extent is Anthropic's institutional view that the uplift is 4x? (Like, what do they mean by "We take this seriously and it is consistent with our own internal experience of the model.")

    One straightforward interpretation is: AI systems improve the productivity of Anthropic so much that Anthropic would be indifferent between the current situation and a situation where all of their technical employees magically work 4 hours for every 1 hour (at equal productivity without burnout) but they get zero AI assistance. In other words, AI assistance is as useful as having their employees operate at 4x faster speeds for all activities (meetings, coding, thinking, writing, etc.). I'll call this "4x serial labor acceleration"[1] (see here for more discussion of this idea[2]).
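    The "4x serial labor acceleration" concept invites a back-of-the-envelope question: how much does overall AI progress speed up if only the labor-bound part accelerates? Here is an Amdahl's-law-style sketch of my own, not the author's appendix model; `labor_share` is an illustrative assumed parameter.

```python
# Illustrative Amdahl's-law-style estimate (a sketch, not the author's
# appendix model): if a fraction `labor_share` of AI progress is
# bottlenecked on researcher serial labor and the rest on compute,
# experiments, etc., then a serial labor acceleration of `accel` speeds
# up overall progress by less than `accel`.

def progress_speedup(accel: float, labor_share: float) -> float:
    """Overall speedup when only the labor-bound share is accelerated."""
    return 1.0 / ((1.0 - labor_share) + labor_share / accel)

# With 4x serial labor acceleration and progress assumed 60% labor-bound:
progress_speedup(4.0, 0.6)  # ≈ 1.82x
```

    The non-labor bottlenecks dominate as `accel` grows, which is one reason a large measured productivity uplift would be such strong evidence about timelines.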

    I currently think it's very unlikely that Anthropic's AIs are yielding 4x serial labor acceleration, but if I did come to believe it was true, I would update towards radically shorter timelines. (I tentatively think my median to Automated Coder would go from 4 years from now to [...]

    ---

    Outline:

    (08:21) Appendix: Estimating AI progress speed up from serial labor acceleration

    (11:00) Appendix: Different notions of uplift

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    April 10th, 2026

    Source:
    https://www.lesswrong.com/posts/Jga7PHMzfZf4fbdyo/if-mythos-actually-made-anthropic-employees-4x-more

    ---



    Narrated by TYPE III AUDIO.

    13 mins
  • "Do not be surprised if LessWrong gets hacked" by RobertM
    Apr 9 2026
    Or, for that matter, anything else.

    This post is meant to be two things:

    1. a PSA about LessWrong's current security posture, from a LessWrong admin[1]
    2. an attempt to establish common knowledge of the security situation it looks like the world (and, by extension, you) will shortly be in
    Claude Mythos was announced yesterday. That announcement came with a blog post from Anthropic's Frontier Red Team, detailing the large number of zero-days (and other security vulnerabilities) discovered by Mythos.

    This should not be a surprise if you were paying attention - LLMs being trained on coding first was a big hint, the labs putting cybersecurity as a top-level item in their threat models and evals was another, and frankly this blog post maybe could've been written a couple months ago (either this or this might've been sufficient). But it seems quite overdetermined now.

    LessWrong's security posture

    In the past, I have tried to communicate that LessWrong should not be treated as a platform with a hardened security posture. LessWrong is run by a small team. Our operational philosophy is similar to that of many early-stage startups. We treat some LessWrong data as private in a social sense, but do [...]

    ---

    Outline:

    (01:04) LessWrong's security posture

    (02:03) LessWrong is not a high-value target

    (04:11) FAQ

    (04:29) The Broader Situation

    The original text contained 6 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/2wi5mCLSkZo2ky32p/do-not-be-surprised-if-lesswrong-gets-hacked

    ---



    Narrated by TYPE III AUDIO.

    8 mins
  • "My picture of the present in AI" by ryan_greenblatt
    Apr 9 2026
    In this post, I'll go through some of my best guesses for the current situation in AI as of the start of April 2026. You can think of this as a scenario forecast, but for the present (which is already uncertain!) rather than the future. I will generally state my best guess without argumentation and without explaining my level of confidence: some of these claims are highly speculative while others are better grounded; certainly, some will be wrong. I tried to make it clear which claims are relatively speculative by saying something like "I guess", "I expect", etc. (but I may have missed some).

    You can think of this post as more like a list of my current views rather than a structured post with a thesis, but I think it may be informative nonetheless.

    In a future post, I'll go beyond the present and talk about my predictions for the future.

    (I was originally working on writing up some predictions, but the "predictions" about today ended up being extensive enough that a separate post seemed warranted.)

    AI R&D acceleration (and software acceleration more generally)

    Right now, AI companies are heavily integrating and deploying [...]

    ---

    Outline:

    (01:07) AI R&D acceleration (and software acceleration more generally)

    (05:28) AI engineering capabilities and qualitative abilities

    (10:38) Misalignment and misalignment-related properties

    (15:59) Cyber

    (18:07) Bioweapons

    (18:52) Economic effects

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published:
    April 7th, 2026

    Source:
    https://www.lesswrong.com/posts/WjaGAA4xCAXeFpyWm/my-picture-of-the-present-in-ai

    ---



    Narrated by TYPE III AUDIO.

    21 mins
  • "The effects of caffeine consumption do not decay with a ~5 hour half-life" by kman
    Apr 9 2026
    epistemic status: confident in the overall picture, substantial quantitative uncertainty about the relative potency of caffeine and paraxanthine

    tldr: The effects of caffeine consumption last longer than many assume. Paraxanthine is sort of like caffeine that behaves the way many mistakenly believe caffeine behaves.




    You've probably heard that caffeine exerts its psychostimulatory effects by blocking adenosine receptors. That matches my understanding, having dug into this. I'd also guess that, insofar as you've thought about the duration of caffeine's effects, you've thought of them as decaying with a ~5 hour half-life. I used to think this, and every effect duration calculator I've seen assumes it (even this fancy one based on a complicated model that includes circadian effects). But this part is probably wrong.

    Very little circulating caffeine is directly excreted.[1] Instead, it's converted (metabolized) into other similar molecules (primary metabolites), which themselves undergo further steps of metabolism (into secondary, tertiary, etc. metabolites) before reaching a form where they're efficiently excreted.

    Importantly, the primary metabolites also block adenosine receptors. In particular, more than 80% of circulating caffeine is metabolized into paraxanthine, which has a comparable[2] binding affinity at adenosine receptors to caffeine itself. Paraxanthine then has its own [...]
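    The two-step metabolism described above can be sketched as a first-order compartment model. This is an illustrative sketch, not the author's model; the half-lives, the 80% conversion fraction, and the equal receptor potency are assumed round numbers, and real values vary substantially between individuals.

```python
import math

# Illustrative two-compartment first-order model: caffeine is eliminated
# with an assumed ~5 h half-life, but ~80% of it is converted to
# paraxanthine, which also blocks adenosine receptors. Paraxanthine's own
# half-life only starts counting once it is produced, so the total
# receptor-blocking load decays more slowly than caffeine alone.

CAFFEINE_HALFLIFE_H = 5.0  # assumed
PARAX_HALFLIFE_H = 3.5     # assumed; varies widely between individuals
PARAX_FRACTION = 0.8       # share of caffeine metabolized to paraxanthine
PARAX_POTENCY = 1.0        # assume comparable receptor binding affinity

kc = math.log(2) / CAFFEINE_HALFLIFE_H  # caffeine elimination rate
kp = math.log(2) / PARAX_HALFLIFE_H     # paraxanthine elimination rate

def blocking_load(t: float, dose: float = 1.0) -> float:
    """Caffeine plus potency-weighted paraxanthine remaining at t hours."""
    caffeine = dose * math.exp(-kc * t)
    # Analytic solution of dP/dt = PARAX_FRACTION*kc*C - kp*P with P(0)=0:
    parax = dose * PARAX_FRACTION * kc / (kp - kc) * (
        math.exp(-kc * t) - math.exp(-kp * t)
    )
    return caffeine + PARAX_POTENCY * parax
```

    Under these assumptions, five hours after a dose roughly three quarters of the original receptor-blocking load is still active, not the one half a naive single-half-life picture predicts.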



    ---

    Outline:

    (02:43) Paraxanthine supplements

    (05:13) Exactly how potent is paraxanthine compared to caffeine?

    (08:41) Concluding thoughts

    The original text contained 9 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/vefsxkGWkEMmDcZ7v/the-effects-of-caffeine-consumption-do-not-decay-with-a-5

    ---



    Narrated by TYPE III AUDIO.

    10 mins