• “Which side of the AI safety community are you in?” by Max Tegmark
    Oct 23 2025
    In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps:

    Camp A) "Race to superintelligence safely”: People in this group typically argue that "superintelligence is inevitable because of X”, and it's therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Molloch”, “lack of regulation” and “China”.

    Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.

    Whereas the 2023 extinction statement was widely signed by both Camp B and Camp A (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally offered all US Frontier AI CEOs to sign, and none chose [...]

    ---

    First published:
    October 22nd, 2025

    Source:
    https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • “Doomers were right” by Algon
    Oct 23 2025
    There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this:

    'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order", "machines will put textile workers out of work". Heck, Socrates argued that books would harm people's ability to memorize things. So many prophets of doom, and yet the world has not only survived, it has thrived. Innovation is a boon. So we should be extremely wary when someone cries out "halt" in response to a new technology, as that path is lined with skulls of would-be doomsayers.'

    Lest you think this is a straw man, Yann LeCun compared fears about AI doom to fears about coffee. Now, I don't want to criticize [...]

    ---

    First published:
    October 22nd, 2025

    Source:
    https://www.lesswrong.com/posts/cAmBfjQDj6eaic95M/doomers-were-right

    ---



    Narrated by TYPE III AUDIO.

    5 mins
  • “Do One New Thing A Day To Solve Your Problems” by Algon
    Oct 22 2025
    People don't explore enough. They rely on cached thoughts and actions to get through their day. Unfortunately, this doesn't lead to them making progress on their problems. The solution is simple. Just do one new thing a day to solve one of your problems.

    Intellectually, I've always known that annoying, persistent problems often require just 5 seconds of actual thought. But seeing a number of annoying problems that made my life worse, some even major ones, just yield to the repeated application of a brief burst of thought each day still surprised me.

    For example, I had a wobbly chair. It was wobbling more as time went on, and I worried it would break. Eventually, I decided to try actually solving the issue. 1 minute and 10 turns of an allen key later, it was fixed.

    Another example: I have a shot attention span. I kept [...]

    ---

    First published:
    October 3rd, 2025

    Source:
    https://www.lesswrong.com/posts/gtk2KqEtedMi7ehxN/do-one-new-thing-a-day-to-solve-your-problems

    ---



    Narrated by TYPE III AUDIO.

    3 mins
  • “That Mad Olympiad” by Tomás B.
    Oct 19 2025
    "I heard Chen started distilling the day after he was born. He's only four years old, if you can believe it. He's written 18 novels. His first words were, "I'm so here for it!" Adrian said.

    He's my little brother. Mom was busy in her world model. She says her character is like a "villainess" or something - I kinda worry it's a sex thing. It's for sure a sex thing. Anyway, she was busy getting seduced or seducing or whatever villainesses do in world models, so I had to escort Adrian to Oak Central for the Lit Olympiad. Mom doesn't like supervision drones for some reason. Thinks they're creepy. But a gangly older sister looming over him and witnessing those precious adolescent memories for her - that's just family, I guess.

    "That sounds more like a liability to me," I said. "Bad data, old models."

    Chen waddled [...]

    ---

    First published:
    October 15th, 2025

    Source:
    https://www.lesswrong.com/posts/LPiBBn2tqpDv76w87/that-mad-olympiad-1

    ---



    Narrated by TYPE III AUDIO.

    27 mins
  • “The ‘Length’ of ‘Horizons’” by Adam Scholl
    Oct 17 2025
    Current AI models are strange. They can speak—often coherently, sometimes even eloquently—which is wild. They can predict the structure of proteins, beat the best humans at many games, recall more facts in most domains than human experts; yet they also struggle to perform simple tasks, like using computer cursors, maintaining basic logical consistency, or explaining what they know without wholesale fabrication.

    Perhaps someday we will discover a deep science of intelligence, and this will teach us how to properly describe such strangeness. But for now we have nothing of the sort, so we are left merely gesturing in vague, heuristical terms; lately people have started referring to this odd mixture of impressiveness and idiocy as “spikiness,” for example, though there isn’t much agreement about the nature of the spikes.

    Of course it would be nice to measure AI progress anyway, at least in some sense sufficient to help us [...]

    ---

    Outline:

    (03:48) Conceptual Coherence

    (07:12) Benchmark Bias

    (10:39) Predictive Value

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    October 14th, 2025

    Source:
    https://www.lesswrong.com/posts/PzLSuaT6WGLQGJJJD/the-length-of-horizons

    ---



    Narrated by TYPE III AUDIO.

    14 mins
  • “Don’t Mock Yourself” by Algon
    Oct 15 2025
    About half a year ago, I decided to try to stop insulting myself for two weeks. No more self-deprecating humour, calling myself a fool, or thinking I'm pathetic. Why? Because it felt vaguely corrosive. Let me tell you how it went. Spoiler: it went well.

    The first thing I noticed was how often I caught myself about to insult myself. It happened like multiple times an hour. I would lie in bed at night thinking, "you mor- wait, I can't insult myself, I've still got 11 days to go. Dagnabbit." The negative space sent a glaring message: I insulted myself a lot. Like, way more than I realized.

    The next thing I noticed was that I was the butt of half of my jokes. I'd keep thinking of zingers which made me out to be a loser, a moron, a scrub in some way. Sometimes, I could re-work [...]

    ---

    First published:
    October 12th, 2025

    Source:
    https://www.lesswrong.com/posts/8prPryf3ranfALBBp/don-t-mock-yourself

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • “If Anyone Builds It Everyone Dies, a semi-outsider review” by dvd
    Oct 14 2025
    About me and this review: I don’t identify as a member of the rationalist community, and I haven’t thought much about AI risk. I read AstralCodexTen and used to read Zvi Mowshowitz before he switched his blog to covering AI. Thus, I’ve long had a peripheral familiarity with LessWrong. I picked up IABIED in response to Scott Alexander's review, and ended up looking here to see what reactions were like. After encountering a number of posts wondering how outsiders were responding to the book, I thought it might be valuable for me to write mine down. This is a “semi-outsider” review in that I don’t identify as a member of this community, but I’m not a true outsider in that I was familiar enough with it to post here. My own background is in academic social science and national security, for whatever that's worth. My review presumes you’re already [...]

    ---

    Outline:

    (01:07) My loose priors going in:

    (02:29) To skip ahead to my posteriors:

    (03:45) On to the Review:

    (08:14) My questions and concerns

    (08:33) Concern #1 Why should we assume the AI wants to survive? If it does, then what exactly wants to survive?

    (12:44) Concern #2 Why should we assume that the AI has boundless, coherent drives?

    (17:57) #3: Why should we assume there will be no in between?

    (21:53) The Solution

    (23:35) Closing Thoughts

    ---

    First published:
    October 13th, 2025

    Source:
    https://www.lesswrong.com/posts/ex3fmgePWhBQEvy7F/if-anyone-builds-it-everyone-dies-a-semi-outsider-review

    ---



    Narrated by TYPE III AUDIO.

    26 mins
  • “The Most Common Bad Argument In These Parts” by J Bostock
    Oct 12 2025
    I've noticed an antipattern. It's definitely on the dark Pareto frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now. I call it Exhaustive Free Association.

    Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.

    Since I've most commonly encountered this amongst rat/EA types, I'm going to have to talk about people in our community as examples of this.

    Examples

    Here's a few examples. These are mostly for illustrative purposes, and my case does not rely on me having found [...]

    ---

    Outline:

    (00:55) Examples

    (01:08) Security Mindset

    (01:25) Superforecasters and AI Doom

    (02:14) With Apologies to Rethink Priorities

    (02:45) The Fatima Sun Miracle

    (03:14) Bad Reasoning is Almost Good Reasoning

    (05:09) Arguments as Soldiers

    (06:29) Conclusion

    (07:04) The Counter-Counter Spell

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    October 11th, 2025

    Source:
    https://www.lesswrong.com/posts/arwATwCTscahYwTzD/the-most-common-bad-argument-in-these-parts

    ---



    Narrated by TYPE III AUDIO.

    8 mins