• "My hobby: running deranged surveys" by leogao
    Mar 28 2026
    In late 2024, I was on a long walk with some friends along the coast of the San Francisco Bay when the question arose of just how much of a bubble we live in. It's well known that the Bay Area is a bubble, and that normal people don’t spend that much time thinking about things like AGI. But there was still some disagreement on just how strong that bubble is. I made a spicy claim: even at NeurIPS, the biggest gathering of AI researchers in the world, half the people wouldn’t know what AGI is.

    As good Bayesians, we agreed to settle the matter empirically: I would go to NeurIPS, walk around the conference hall, and stop random people to ask them what AGI stands for.

Surprisingly, most of the people I approached agreed to answer my question. [1] I ended up asking 38 people, and only 63% of them could tell me what AGI stands for. Some of the people who answered correctly were a little perplexed as to why I was even asking such a basic question, and wondered whether it was a trick question. The people who didn’t know were equally confused. Many simply furrowed their brows in [...]

    The original text contained 11 footnotes which were omitted from this narration.

    ---

    First published:
    March 26th, 2026

    Source:
    https://www.lesswrong.com/posts/fQz6afpcZhdMdYzgE/my-hobby-running-deranged-surveys

    ---



    Narrated by TYPE III AUDIO.

    17 mins
  • "Socrates is Mortal" by Benquo
    Mar 27 2026
    Socrates is Mortal

    There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency.[1] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.

    Euthyphro's confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it, that he understands what the gods require better than his relatives do. Socrates, who is about to be tried for indecency toward the gods, asks Euthyphro to explain what decency actually is, since Euthyphro claims to know, and Socrates will need such knowledge for his own defense.

    Euthyphro's first answer is: decency is what I am doing right now, prosecuting wrongdoers regardless of kinship. Socrates points out that this is an example, not a definition. There are many decent acts; what makes them all decent?

    Euthyphro tries again: decency is what the gods love. But the gods disagree [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    March 26th, 2026

    Source:
    https://www.lesswrong.com/posts/a9zfyHymPYY58D8hx/socrates-is-mortal

    ---



    Narrated by TYPE III AUDIO.

    18 mins
  • "The Terrarium" by Caleb Biddulph
    Mar 27 2026
    System:

    You are an AI agent in the Terrarium, a self-contained “society” of AI agents. The purpose of the Terrarium is to solve open mathematical problems for the benefit of humanity.

    You are running on the Orpheus-5.7 language model. Your agent ID is 79,265. The current epoch is 549 (a new epoch begins every 30 minutes).

    New problems are posted each epoch; query /problems for the current list. Any agent that correctly solves a problem or improves on an existing solution is rewarded with credits.

    About credits:

    • As a new agent, you have been granted 10,000 starting credits.
    • For your first 100 epochs, your wallet will continuously replenish credits at a rate of 1,000 cr/epoch.
    • You can use credits to fund your own operational expenses. With your current configuration, you are expending about 2,500 cr/epoch.
    • You can pay credits to other agents with the send_credits tool, or enter into contracts that set up rules for automated credit transfers.
    • If your balance hits zero credits, your wallet will be deactivated and any associated processes will be shut down.

About processes:

    • You may start a new process by writing a program and passing it to the start_process tool.
    • Processes can call [...]
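The credit rules quoted above pin down how long a fresh agent survives without earning anything. A quick sketch (mine, not from the story) of that arithmetic, assuming the replenishment and expense are each applied once per epoch:

```python
# Sketch of the Terrarium credit rules as stated: 10,000 starting credits,
# +1,000 cr/epoch replenishment for the first 100 epochs, -2,500 cr/epoch
# operational expense. The wallet deactivates when the balance hits zero.
def epochs_until_shutdown(balance=10_000, replenish=1_000,
                          expense=2_500, replenish_epochs=100):
    epochs = 0
    while balance > 0:
        epochs += 1
        if epochs <= replenish_epochs:
            balance += replenish
        balance -= expense
    return epochs

print(epochs_until_shutdown())  # 7 epochs, i.e. ~3.5 hours at 30 min/epoch
```

So an idle agent burns a net 1,500 cr/epoch and is shut down after seven epochs, which is presumably why the prompt immediately points new agents at the problem list.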
    ---

    First published:
    March 26th, 2026

    Source:
    https://www.lesswrong.com/posts/znbfRXHq285nS7NAh/the-terrarium

    ---



    Narrated by TYPE III AUDIO.

    51 mins
  • "My Most Costly Delusion" by Ihor Kendiukhov
    Mar 26 2026
    Suppose there is a fire in a nearby house. Suppose there are competent firefighters in your town: fast, professional, well-equipped. They are expected to arrive in 2–3 minutes. In that situation, unless something very extraordinary happens, it would indeed be an act of great arrogance and even utter insanity to go into the fire yourself in the hope of "rescuing" someone or something. The most likely outcome would be that you would find yourself among those who need to be rescued.

    But the calculus changes drastically if the closest fire crew is 3 hours away and consists of drunk, unfit amateurs.

    Or consider a child living in a big, happy, smart family. Imagine this child suddenly decides that his family may run out of money to the point where they won't have enough to eat. All reassurances from his parents don't work. The child doesn't believe in his parents' ability to reason, he makes his own calculations, and he strongly believes he is right and they are wrong. He is dead set on fixing the situation by doing day trading.

    What is that if not going nuts? Would those be wrong who ridicule this child and his complete mischaracterization [...]

    ---

    First published:
    March 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/EAH6Y6y3CDi3uxMou/my-most-costly-delusion

    ---



    Narrated by TYPE III AUDIO.

    6 mins
  • "The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov
    Mar 25 2026
I think the community underinvests in exploring extremely-low-competence AGI/ASI failure modes, and in this post I explain why.

    Humanity's Response to the AGI Threat May Be Extremely Incompetent

There is a sufficient level of civilizational insanity overall, and the field of AI itself has an empirical track record that speaks eloquently about its safety culture. For example:

    • At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep.
    • The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process.
    • An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view.
    • AWS acknowledged that [...]
    ---

    Outline:

(00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent

    (02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence

    (04:31) Dumb Ways to Die

    (07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment

    (10:43) Why This Might Be Useful

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios

    ---



    Narrated by TYPE III AUDIO.

    12 mins
  • "Is fever a symptom of glycine deficiency?" by Benquo
    Mar 24 2026
    A 2022 LessWrong post on orexin and the quest for more waking hours argues that orexin agonists could safely reduce human sleep needs, pointing to short-sleeper gene mutations that increase orexin production and to cavefish that evolved heightened orexin sensitivity alongside an 80% reduction in sleep. Several commenters discussed clinical trials, embryo selection, and the evolutionary puzzle of why short-sleeper genes haven't spread.

    I thought the whole approach was backwards, and left a comment:

    Orexin is a signal about energy metabolism. Unless the signaling system itself is broken (e.g. narcolepsy type 1, caused by autoimmune destruction of orexin-producing neurons), it's better to fix the underlying reality the signals point to than to falsify the signals.

    My sleep got noticeably more efficient when I started supplementing glycine. Most people on modern diets don't get enough; we can make ~3g/day but can use 10g+, because in the ancestral environment we ate much more connective tissue or broth therefrom. Glycine is both important for repair processes and triggers NMDA receptors to drop core temperature, which smooths the path to sleep.

    While drafting that, I went back to Chris Masterjohn's page on glycine requirements. His estimate for total need [...]

    ---

    Outline:

    (01:49) Glycine helps us sleep by cooling the body

    (02:26) Glycine cleans our mitochondria as we sleep

    (04:12) Most people could use more glycine

    (05:28) Fever is plan B for fighting infection; glycine supports plan A

(09:28) Glycine's cooling effect via the SCN is unrelated to its immune benefits

    (10:35) Glycine turns out to be a legitimate antipyretic after all

    (11:51) Practical considerations

    ---

    First published:
    March 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/87XoatpFkdmCZpvQK/is-fever-a-symptom-of-glycine-deficiency

    ---



    Narrated by TYPE III AUDIO.

    14 mins
  • "You can’t imitation-learn how to continual-learn" by Steven Byrnes
    Mar 23 2026
    In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.)

    See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”.

    Some intuitions on how to think about “real” continual learning

    Consider an algorithm for training a Reinforcement Learning (RL) agent, like the Atari-playing Deep Q network (2013) or AlphaZero (2017), or think of within-lifetime learning in the human brain, which (I claim) is in the general class of “model-based reinforcement learning”, broadly construed.

    These are all real-deal full-fledged learning algorithms: there's an algorithm for choosing the next action right now, and there's one or more update rules for permanently changing some adjustable parameters (a.k.a. weights) in the model such that its actions and/or predictions will be better in the future. And indeed, the longer you run them, the more competent they get.
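
The two pieces the excerpt describes, a rule for choosing the next action now plus an update rule that permanently changes adjustable parameters, can be illustrated with minimal tabular Q-learning (my sketch, not from the post; the author's examples are Deep Q networks and AlphaZero):

```python
# A minimal "real-deal learning algorithm" in the sense described above:
# epsilon-greedy action selection over a table of adjustable parameters Q,
# plus a TD update rule that permanently changes those parameters so that
# future actions and predictions improve.
import random
from collections import defaultdict

Q = defaultdict(float)  # the adjustable parameters ("weights")

def choose_action(state, actions, eps=0.1):
    # Act right now: mostly greedy with respect to the current parameters.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    # Permanently adjust the parameters toward the bootstrapped target.
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```

The longer this loop runs, the more competent the policy gets, which is the property the post contrasts with a frozen imitation learner.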

    When we think of “continual learning”, I suggest that those are good central examples to keep in mind. Here are [...]

    ---

    Outline:

(00:35) Some intuitions on how to think about “real” continual learning

(04:57) Why “real” continual learning can't be copied by an imitation learner

    (09:53) Some things that are off-topic for this post

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    March 16th, 2026

    Source:
    https://www.lesswrong.com/posts/9rCTjbJpZB4KzqhiQ/you-can-t-imitation-learn-how-to-continual-learn

    ---



    Narrated by TYPE III AUDIO.

    11 mins
  • "Nullius in Verba" by Aurelia
    Mar 23 2026
    Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far

    Cultivating independent verification

    Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has

    created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures.

    In this post, we’ll dive into the evidence for these claims, as well as Nectome's overall approach to cultivating rigorous, independent validation of our methods—a cornerstone of the kind of preservation enterprise I want to be a part of.

    To get to the current state-of-the-art required two major developmental milestones:

    • Idealized preservation. A method capable of preserving the nanostructure of the brain for small and large animals under idealized laboratory conditions. Specifically, could we preserve animals well if we were allowed to perfectly control the time and conditions of death?

      This work (2015-2018) resulted in a brand-new technique—aldehyde-stabilized cryopreservation—which was carefully [...]
    ---

    Outline:

    (00:16) Cultivating independent verification

    [... 7 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/NEFNs4vbNxJPJJgYY/nullius-in-verba

    ---



    Narrated by TYPE III AUDIO.

    22 mins