LessWrong (Curated & Popular)

By: LessWrong
  • Summary

  • Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.

    If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

    © 2025 LessWrong (Curated & Popular)
Episodes
  • “A Bear Case: My Predictions Regarding AI Progress” by Thane Ruthenis
    Mar 6 2025
    This isn't really a "timeline", as such – I don't know the timings – but this is my current, fairly optimistic take on where we're heading.

    I'm not fully committed to this model yet: I'm still on the lookout for more agents and inference-time scaling later this year. But Deep Research, Claude 3.7, Claude Code, Grok 3, and GPT-4.5 have turned out largely in line with these expectations[1], and this is my current baseline prediction.

    The Current Paradigm: I'm Tucking In to Sleep

    I expect that none of the currently known avenues of capability advancement are sufficient to get us to AGI[2].

    • I don't want to say that pretraining will "plateau", as such; I do expect continued progress. But the dimensions along which that progress happens are going to decouple from the intuitive "getting generally smarter" metric, and will face steep diminishing returns.
      • Grok 3 and GPT-4.5 [...]
    ---

    Outline:

    (00:35) The Current Paradigm: I'm Tucking In to Sleep

    (10:24) Real-World Predictions

    (15:25) Closing Thoughts

    The original text contained 7 footnotes which were omitted from this narration.

    ---

    First published:
    March 5th, 2025

    Source:
    https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress

    ---

    Narrated by TYPE III AUDIO.

    19 mins
  • “Statistical Challenges with Making Super IQ babies” by Jan Christian Refsgaard
    Mar 5 2025
    This is a critique of How to Make Superbabies on LessWrong.

    Disclaimer: I am not a geneticist[1], and I've tried to use as little jargon as possible, so I use the word mutation as a stand-in for SNP (single nucleotide polymorphism, a common type of genetic variation).

    Background

    The Superbabies article has 3 sections, in which the authors show:

    • Why: We should do this, because the effects of editing will be big
    • How: An explanation of how embryo editing could work, if academia were not mind-killed (hampered by institutional constraints)
    • Other: Legal issues and technical details.
    Here is a quick summary of the "why" part of the original article's argument; the rest is not relevant to understanding my critique.

    1. We can already make (slightly) superbabies by selecting embryos with "good" mutations, but this does not scale, as there are diminishing returns and almost no gain past "best [...] (see the simulation sketch below)
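    A quick numeric illustration of that diminishing-returns point (my own sketch, not from the original post; the trait SD and embryo counts are placeholder assumptions): if embryos' polygenic scores are roughly normal, the expected gain from keeping the best of N embryos grows only like sqrt(2 ln N), so each extra embryo buys less and less.

    ```python
    # Minimal Monte Carlo sketch of embryo-selection gains under a purely
    # additive polygenic trait. sigma and the embryo counts are illustrative
    # assumptions, not figures from the post.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 4.0      # assumed SD of embryos' polygenic scores, in IQ points
    trials = 5_000   # Monte Carlo repetitions

    for n in (1, 2, 5, 10, 100, 1000):
        scores = rng.normal(0.0, sigma, size=(trials, n))
        gain = scores.max(axis=1).mean()  # average gain from keeping the best
        print(f"best of {n:>4} embryos: expected gain ≈ {gain:5.2f} points")

    # The expected maximum of n normals grows roughly like sigma*sqrt(2*ln(n)),
    # so going from 10 to 1000 embryos buys far less than going from 1 to 10.
    ```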
    ---

    Outline:

    (00:25) Background

    (02:25) My Position

    (04:03) Correlation vs. Causation

    (06:33) The Additive Effect of Genetics

    (10:36) Regression towards the null part 1

    (12:55) Optional: Regression towards the null part 2

    (16:11) Final Note

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    March 2nd, 2025

    Source:
    https://www.lesswrong.com/posts/DbT4awLGyBRFbWugh/statistical-challenges-with-making-super-iq-babies

    ---

    Narrated by TYPE III AUDIO.

    18 mins
  • “Self-fulfilling misalignment data might be poisoning our AI models” by TurnTrout
    Mar 4 2025
    This is a link post. Your AI's training data might make it more “evil” and more able to circumvent your security, monitoring, and control measures. Evidence suggests that when you pretrain a powerful model to predict a blog post about how powerful models will probably have bad goals, the model is more likely to adopt bad goals. I discuss ways to test for and mitigate these potential mechanisms. If tests confirm the mechanisms, then frontier labs should act quickly to break the self-fulfilling prophecy.

    Research I want to see

    Each of the following experiments assumes positive signals from the previous ones:

    1. Create a dataset and use it to measure existing models (a toy sketch of this step follows below)
    2. Compare mitigations at a small scale
    3. Have an industry lab run large-scale mitigations
    Let us avoid the dark irony of creating evil AI because some folks worried that AI would be evil. If self-fulfilling misalignment has a strong [...]
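    To make step 1 concrete, here is a toy sketch (entirely illustrative; the regex patterns and threshold are my assumptions, and a real study would use a trained classifier with human labeling) of measuring how much of a corpus asserts that powerful AI will have bad goals:

    ```python
    # Toy prevalence measurement for "self-fulfilling misalignment" narratives.
    # The regex list is a crude placeholder for a real classifier.
    import re

    PATTERNS = [
        re.compile(r"\bAIs? will (probably )?(deceive|betray|overpower)\b", re.I),
        re.compile(r"\bpowerful models? (will|would) have bad goals\b", re.I),
        re.compile(r"\bsuperintelligence (will|would) turn against\b", re.I),
    ]

    def misalignment_hits(doc: str) -> int:
        """Count pattern matches asserting AI misalignment as a prediction."""
        return sum(len(p.findall(doc)) for p in PATTERNS)

    def prevalence(docs: list[str], threshold: int = 1) -> float:
        """Fraction of documents flagged as carrying the narrative."""
        flagged = sum(1 for d in docs if misalignment_hits(d) >= threshold)
        return flagged / max(len(docs), 1)

    docs = [
        "Powerful models will have bad goals unless we intervene.",
        "Here is a recipe for sourdough bread.",
    ]
    print(f"narrative prevalence ≈ {prevalence(docs):.2f}")  # -> 0.50
    ```

    A high prevalence here, paired with models parroting the narrative, would motivate step 2's mitigation comparisons, e.g. filtering or down-weighting flagged documents.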

    The original text contained 1 image which was described by AI.

    ---

    First published:
    March 2nd, 2025

    Source:
    https://www.lesswrong.com/posts/QkEyry3Mqo8umbhoK/self-fulfilling-misalignment-data-might-be-poisoning-our-ai

    ---

    Narrated by TYPE III AUDIO.


    2 mins

What listeners say about LessWrong (Curated & Popular)

Average Customer Ratings (1 rating)
Overall: 5 out of 5 stars
Performance: 5 out of 5 stars
Story: 5 out of 5 stars

