LessWrong (Curated & Popular)

By: LessWrong
  • Summary

  • Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.

    If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

    © 2025 LessWrong (Curated & Popular)
Episodes
  • [Linkpost] “Playing in the Creek” by Hastings
    Apr 11 2025
    This is a link post. When I was a really small kid, one of my favorite activities was to try and dam up the creek in my backyard. I would carefully move rocks into high walls, pile up leaves, or try patching the holes with sand. The goal was just to see how high I could get the lake, knowing that if I plugged every hole, eventually the water would always rise and defeat my efforts. Beaver behaviour.

    One day, I had the realization that there was a simpler approach. I could just go get a big 5 foot long shovel, and instead of intricately locking together rocks and leaves and sticks, I could collapse the sides of the riverbank down and really build a proper big dam. I went to ask my dad for the shovel to try this out, and he told me, very heavily paraphrasing, 'Congratulations. You've [...]

    ---

    First published:
    April 10th, 2025

    Source:
    https://www.lesswrong.com/posts/rLucLvwKoLdHSBTAn/playing-in-the-creek

    Linkpost URL:
    https://hgreer.com/PlayingInTheCreek

    ---

    Narrated by TYPE III AUDIO.

    4 mins
  • “Thoughts on AI 2027” by Max Harms
    Apr 10 2025
    This is part of the MIRI Single Author Series. Pieces in this series represent the beliefs and opinions of their named authors, and do not claim to speak for all of MIRI.

    Okay, I'm annoyed at people covering AI 2027 burying the lede, so I'm going to try not to do that. The authors predict a strong chance that all humans will be (effectively) dead in 6 years, and this agrees with my best guess about the future. (My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.) Their timeline to transformative AI also seems pretty close to the perspective of frontier lab CEOs (at least Dario Amodei, and probably Sam Altman) and the aggregate market opinion of both Metaculus and Manifold!

    If you look on those market platforms you get graphs like this:

    Both [...]

    ---

    Outline:

    (02:23) Mode ≠ Median

    (04:50) There's a Decent Chance of Having Decades

    (06:44) More Thoughts

    (08:55) Mid 2025

    (09:01) Late 2025

    (10:42) Early 2026

    (11:18) Mid 2026

    (12:58) Late 2026

    (13:04) January 2027

    (13:26) February 2027

    (14:53) March 2027

    (16:32) April 2027

    (16:50) May 2027

    (18:41) June 2027

    (19:03) July 2027

    (20:27) August 2027

    (22:45) September 2027

    (24:37) October 2027

    (26:14) November 2027 (Race)

    (29:08) December 2027 (Race)

    (30:53) 2028 and Beyond (Race)

    (34:42) Thoughts on Slowdown

    (38:27) Final Thoughts

    ---

    First published:
    April 9th, 2025

    Source:
    https://www.lesswrong.com/posts/Yzcb5mQ7iq4DFfXHx/thoughts-on-ai-2027

    ---

    Narrated by TYPE III AUDIO.


    40 mins
  • “Short Timelines don’t Devalue Long Horizon Research” by Vladimir_Nesov
    Apr 9 2025
    Short AI takeoff timelines seem to leave no time for some lines of alignment research to become impactful. But any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. So even hopelessly incomplete research agendas could still be used to prompt future capable AI to focus on them, while in the absence of such incomplete research agendas we'd need to rely on AI's judgment more completely. This doesn't crucially depend on giving significant probability to long AI takeoff timelines, or on expected value in such scenarios driving the priorities.

    Potential for AI to take up the torch makes it reasonable to still prioritize things that have no hope at all of becoming practical for decades (with human effort). How well AIs can be directed to advance a line of research [...]

    ---

    First published:
    April 9th, 2025

    Source:
    https://www.lesswrong.com/posts/3NdpbA6M5AM2gHvTW/short-timelines-don-t-devalue-long-horizon-research

    ---

    Narrated by TYPE III AUDIO.

    2 mins

What listeners say about LessWrong (Curated & Popular)

Average Customer Ratings
Overall
  • 5 out of 5 stars
  • 5 Stars
    1
  • 4 Stars
    0
  • 3 Stars
    0
  • 2 Stars
    0
  • 1 Stars
    0
Performance
  • 5 out of 5 stars
  • 5 Stars
    1
  • 4 Stars
    0
  • 3 Stars
    0
  • 2 Stars
    0
  • 1 Stars
    0
Story
  • 5 out of 5 stars
  • 5 Stars
    1
  • 4 Stars
    0
  • 3 Stars
    0
  • 2 Stars
    0
  • 1 Stars
    0

Reviews
