• "Daycare illnesses" by Nina Panickssery
    Apr 13 2026
    Before I had a baby I was pretty agnostic about the idea of daycare. I could imagine various pros and cons but I didn’t have a strong overall opinion. Then I started mentioning the idea to various people. Every parent I spoke to brought up a consideration I hadn’t thought about before—the illnesses.

    A number of parents, including family members, told me they had sent their baby to daycare only for the child to become constantly ill, sometimes severely, until they decided to take them out. This worried me, so I asked around some more. Invariably, every parent who had tried sending their babies or toddlers to daycare, or who had babies in daycare right now, told me that the kids were ill more often than not.

    One mother strongly advised me never to send my baby to daycare. She regretted sending her (normal and healthy) first son to daycare when he was one—he ended up hospitalized with severe pneumonia after a few months of constant illnesses and infections. She told me that after that she didn’t send her other kids to daycare and they had much healthier childhoods.

    I also started paying more attention to the kids I [...]

    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses

    ---



    Narrated by TYPE III AUDIO.

    ---

    10 mins
  • "If Mythos actually made Anthropic employees 4x more productive, I would radically shorten my timelines" by ryan_greenblatt
    Apr 12 2026
    Anthropic's system card for Mythos Preview says:

    It's unclear how we should interpret this. What do they mean by productivity uplift? To what extent is Anthropic's institutional view that the uplift is 4x? (Like, what do they mean by "We take this seriously and it is consistent with our own internal experience of the model.")

    One straightforward interpretation is: AI systems improve Anthropic's productivity so much that Anthropic would be indifferent between the current situation and one where all of their technical employees magically work 4 hours for every 1 hour (at equal productivity, without burnout) but get zero AI assistance. In other words, AI assistance is as useful as having their employees operate at 4x faster speeds for all activities (meetings, coding, thinking, writing, etc.). I'll call this "4x serial labor acceleration"[1] (see here for more discussion of this idea[2]).
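    One way to build intuition for how strong a claim 4x serial labor acceleration is: it applies across all activities, so if AI only sped up a fraction of an employee's time (say, coding but not meetings or thinking), Amdahl's law would cap the overall acceleration. A minimal sketch (the fractions and speedup factors below are illustrative, not from the system card or the post):

    ```python
    def overall_speedup(frac_accelerated: float, factor: float) -> float:
        """Amdahl's law: overall serial speedup when only `frac_accelerated`
        of an employee's work-time benefits from a per-activity speedup
        of `factor`; the remaining time proceeds at normal speed."""
        return 1.0 / ((1.0 - frac_accelerated) + frac_accelerated / factor)

    # If coding is half of work-time, even a 10x coding speedup yields
    # only ~1.8x overall, and an unbounded coding speedup caps out at 2x.
    # Reaching 4x overall requires large speedups across nearly everything.
    ```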

    I currently think it's very unlikely that Anthropic's AIs are yielding 4x serial labor acceleration, but if I did come to believe it was true, I would update towards radically shorter timelines. (I tentatively think my median to Automated Coder would go from 4 years from now to [...]

    ---

    Outline:

    (08:21) Appendix: Estimating AI progress speed up from serial labor acceleration

    (11:00) Appendix: Different notions of uplift

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    April 10th, 2026

    Source:
    https://www.lesswrong.com/posts/Jga7PHMzfZf4fbdyo/if-mythos-actually-made-anthropic-employees-4x-more

    ---



    Narrated by TYPE III AUDIO.

    ---

    13 mins
  • "Do not be surprised if LessWrong gets hacked" by RobertM
    Apr 9 2026
    Or, for that matter, anything else.

    This post is meant to be two things:

    1. a PSA about LessWrong's current security posture, from a LessWrong admin[1]
    2. an attempt to establish common knowledge of the security situation it looks like the world (and, by extension, you) will shortly be in
    Claude Mythos was announced yesterday. That announcement came with a blog post from Anthropic's Frontier Red Team, detailing the large number of zero-days (and other security vulnerabilities) discovered by Mythos.

    This should not be a surprise if you were paying attention - LLMs being trained on coding first was a big hint, the labs putting cybersecurity as a top-level item in their threat models and evals was another, and frankly this blog post maybe could've been written a couple months ago (either this or this might've been sufficient). But it seems quite overdetermined now.

    LessWrong's security posture

    In the past, I have tried to communicate that LessWrong should not be treated as a platform with a hardened security posture. LessWrong is run by a small team. Our operational philosophy is similar to that of many early-stage startups. We treat some LessWrong data as private in a social sense, but do [...]

    ---

    Outline:

    (01:04) LessWrong's security posture

    (02:03) LessWrong is not a high-value target

    (04:11) FAQ

    (04:29) The Broader Situation

    The original text contained 6 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/2wi5mCLSkZo2ky32p/do-not-be-surprised-if-lesswrong-gets-hacked

    ---



    Narrated by TYPE III AUDIO.

    8 mins
  • "My picture of the present in AI" by ryan_greenblatt
    Apr 9 2026
    In this post, I'll go through some of my best guesses for the current situation in AI as of the start of April 2026. You can think of this as a scenario forecast, but for the present (which is already uncertain!) rather than the future. I will generally state my best guess without argumentation and without explaining my level of confidence: some of these claims are highly speculative while others are better grounded, certainly some will be wrong. I tried to make it clear which claims are relatively speculative by saying something like "I guess", "I expect", etc. (but I may have missed some).

    You can think of this post as more like a list of my current views rather than a structured post with a thesis, but I think it may be informative nonetheless.

    In a future post, I'll go beyond the present and talk about my predictions for the future.

    (I was originally working on writing up some predictions, but the "predictions" about today ended up being extensive enough that a separate post seemed warranted.)

    AI R&D acceleration (and software acceleration more generally)

    Right now, AI companies are heavily integrating and deploying [...]

    ---

    Outline:

    (01:07) AI R&D acceleration (and software acceleration more generally)

    (05:28) AI engineering capabilities and qualitative abilities

    (10:38) Misalignment and misalignment-related properties

    (15:59) Cyber

    (18:07) Bioweapons

    (18:52) Economic effects

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published:
    April 7th, 2026

    Source:
    https://www.lesswrong.com/posts/WjaGAA4xCAXeFpyWm/my-picture-of-the-present-in-ai

    ---



    Narrated by TYPE III AUDIO.

    21 mins
  • "The effects of caffeine consumption do not decay with a ~5 hour half-life" by kman
    Apr 9 2026
    epistemic status: confident in the overall picture, substantial quantitative uncertainty about the relative potency of caffeine and paraxanthine

    tldr: The effects of caffeine consumption last longer than many assume. Paraxanthine is sort of like caffeine that behaves the way many mistakenly believe caffeine behaves.




    You've probably heard that caffeine exerts its psychostimulatory effects by blocking adenosine receptors. That matches my understanding, having dug into this. I'd also guess that, insofar as you've thought about the duration of caffeine's effects, you've thought of them as decaying with a ~5 hour half-life. I used to think this, and every effect duration calculator I've seen assumes it (even this fancy one based on a complicated model that includes circadian effects). But this part is probably wrong.

    Very little circulating caffeine is directly excreted.[1] Instead, it's converted (metabolized) into other similar molecules (primary metabolites), which themselves undergo further steps of metabolism (into secondary, tertiary, etc. metabolites) before reaching a form where they're efficiently excreted.

    Importantly, the primary metabolites also block adenosine receptors. In particular, more than 80% of circulating caffeine is metabolized into paraxanthine, which has a comparable[2] binding affinity at adenosine receptors to caffeine itself. Paraxanthine then has its own [...]
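    The parent-metabolite picture above can be made quantitative with the standard Bateman equation for a two-step first-order cascade. A minimal sketch, assuming a ~5 hour caffeine half-life, a ~4 hour paraxanthine half-life, an 80% conversion fraction, and equal potency at adenosine receptors (all illustrative round numbers, not values from the post):

    ```python
    import math

    def active_xanthine_load(t_hours: float,
                             caffeine_half_life: float = 5.0,
                             paraxanthine_half_life: float = 4.0,
                             conversion_fraction: float = 0.8,
                             relative_potency: float = 1.0) -> float:
        """Total adenosine-receptor-blocking load at time t, for an initial
        caffeine dose of 1.0, using the Bateman equation for a
        parent -> metabolite cascade with first-order elimination."""
        kc = math.log(2) / caffeine_half_life      # caffeine elimination rate
        kp = math.log(2) / paraxanthine_half_life  # paraxanthine elimination rate
        caffeine = math.exp(-kc * t_hours)
        paraxanthine = (conversion_fraction * kc / (kp - kc)
                        * (math.exp(-kc * t_hours) - math.exp(-kp * t_hours)))
        return caffeine + relative_potency * paraxanthine
    ```

    Under these assumptions, roughly 75% of the original blocking load remains at the 5-hour mark (versus the 50% the naive half-life picture predicts), and the load doesn't fall to half until around 10 hours.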



    ---

    Outline:

    (02:43) Paraxanthine supplements

    (05:13) Exactly how potent is paraxanthine compared to caffeine?

    (08:41) Concluding thoughts

    The original text contained 9 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/vefsxkGWkEMmDcZ7v/the-effects-of-caffeine-consumption-do-not-decay-with-a-5

    ---



    Narrated by TYPE III AUDIO.

    10 mins
  • "AIs can now often do massive easy-to-verify SWE tasks and I’ve updated towards shorter timelines" by ryan_greenblatt
    Apr 6 2026
    I've recently updated towards substantially shorter AI timelines and much faster progress in some areas.[1] The largest updates I've made are (1) an almost 2x higher probability of full AI R&D automation by EOY 2028 (I'm now a bit below 30%[2] while I was previously expecting around 15%; my guesses are pretty reflectively unstable) and (2) I expect much stronger short-term performance on massive and pretty difficult but easy-and-cheap-to-verify software engineering (SWE) tasks that don't require that much novel ideation.[3] For instance, I expect that by EOY 2026, AIs will have a 50%-reliability[4] time horizon of years to decades on reasonably difficult easy-and-cheap-to-verify SWE tasks that don't require much ideation (while the high-reliability—for instance, 90%—time horizon will be much lower, more like hours or days than months, though this will be very sensitive to the task distribution). In this post, I'll explain why I've made these updates, what I now expect, and the implications of this update.

    I'll refer to "Easy-and-cheap-to-verify SWE tasks" as ES tasks and to "ES tasks that don't require much ideation (as in, don't require 'new' ideas)" as ESNI tasks for brevity.

    Here are the main drivers of [...]

    ---

    Outline:

    (04:58) What's going on with these easy-and-cheap-to-verify tasks?

    (08:17) Some evidence against shorter timelines I've gotten in the same period

    (10:46) Why does high performance on ESNI tasks shorten my timelines?

    (13:15) How much does extremely high performance on ESNI tasks help with AI R&D?

    (18:22) My experience trying to automate safety research with current models

    (19:58) My experience seeing if my setup can automate massive ES tasks

    (21:08) SWE tasks

    (23:29) AI R&D task

    (24:20) Cyber

    [... 1 more section]

    ---

    First published:
    April 6th, 2026

    Source:
    https://www.lesswrong.com/posts/dKpC6wHFqDrGZwnah/ais-can-now-often-do-massive-easy-to-verify-swe-tasks-and-i

    ---



    Narrated by TYPE III AUDIO.

    ---

    30 mins
  • "dark ilan" by ozymandias
    Apr 6 2026
    The second time Vellam uncovers the conspiracy underlying all of society, he approaches a Keeper.

    Some of the difference is convenience. Since Vellam reported that he’d found out about the first conspiracy, he's lived in the secret AI research laboratory at the Basement of the World, and Keepers are much easier to come by than when he was a quality control inspector for cheese.

    But Vellam is honest with himself. If he were making progress, he’d never tell the Keepers no matter how convenient they were, not even if they lined his front walkway every morning to beg him for a scrap of his current intellectual project. He’d sat on his insight about artificial general intelligence for two years before he decided that he preferred isolation to another day of cheese inspection.

    No, the only reason he's telling a Keeper is that he's stuck.

    Vellam is exactly as smart as the average human, a fact he has almost stopped feeling bad about. But the average person can only work twenty hours a week, and Vellam can work eighty-- a hundred, if he's particularly interested-- and raw thinkoomph can be compensated for with bloody-mindedness. Once he's found a loose end [...]

    ---

    First published:
    April 4th, 2026

    Source:
    https://www.lesswrong.com/posts/Fvm4AzLnoZHqNEBqf/dark-ilan

    ---



    Narrated by TYPE III AUDIO.

    20 mins
  • "Dispatch from Anthropic v. Department of War Preliminary Injunction Motion Hearing" by Zack_M_Davis
    Apr 6 2026
    Dateline SAN FRANCISCO, Ca., 24 March 2026— A hearing was held on a motion for a preliminary injunction in the case of Anthropic PBC v. U.S. Department of War et al. in Courtroom 12 on the 19th floor of the Phillip Burton Federal Building, the Hon. Judge Rita F. Lin presiding. About 35 spectators in the gallery (journalists and other members of the public, including the present writer) looked on as Michael Mongan of WilmerHale (lead counsel for the plaintiff) and Deputy Assistant Attorney General Eric Hamilton (lead counsel for the defendant) argued before the judge. (The defendant also had another lawyer at their counsel table on the left, and the plaintiff had six more at theirs on the right, but none of those people said anything.)

    For some dumb reason, recording court proceedings is banned and the official transcript won't be available online for three months, so I'm relying on my handwritten live notes to tell you what happened. I'd say that any errors are my responsibility, but actually, it's kind of the government's fault for not letting me just take a recording.

    The case concerns the fallout of a contract dispute between Anthropic (makers of [...]

    ---

    First published:
    March 25th, 2026

    Source:
    https://www.lesswrong.com/posts/CCDQ7PdYHXsJAE5bi/dispatch-from-anthropic-v-department-of-war-preliminary

    ---



    Narrated by TYPE III AUDIO.

    12 mins