• "How AI Is Learning to Think in Secret" by Nicholas Andresen
    Jan 9 2026
    On Thinkish, Neuralese, and the End of Readable Reasoning

    In September 2025, researchers published the internal monologue of OpenAI's o3 as it decided to lie about scientific data. This is what it thought:

    Pardon? This looks like someone had a stroke during a meeting they didn’t want to be in, but their hand kept taking notes.

    That transcript comes from a recent paper published by researchers at Apollo Research and OpenAI on catching AI systems scheming. To understand what's happening here - and why one of the most sophisticated AI systems in the world is babbling about “synergy customizing illusions” - it first helps to know how we ended up being able to read AI thinking in the first place.

    That story starts, of all places, on 4chan.

    In late 2020, anonymous posters on 4chan started describing a prompting trick that would change the course of AI development. It was almost embarrassingly simple: instead of just asking GPT-3 for an answer, ask it to show its work before giving its final answer.

    Suddenly, it started solving math problems that had stumped it moments before.

    To see why, try multiplying 8,734 × 6,892 in your head. If you’re like [...]

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    January 6th, 2026

    Source:
    https://www.lesswrong.com/posts/gpyqWzWYADWmLYLeX/how-ai-is-learning-to-think-in-secret

    ---



    Narrated by TYPE III AUDIO.

    38 mins
  • "On Owning Galaxies" by Simon Lermen
    Jan 8 2026
    It seems to be a real view held by serious people that your OpenAI shares will soon be tradable for moons and galaxies. This includes eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander and many more. According to them, property rights will survive an AI singularity event and soon economic growth is going to make it possible for individuals to own entire galaxies in exchange for some AI stocks. It follows that we should now seriously think through how we can equally distribute those galaxies and make sure that most humans will not end up as the UBI underclass owning mere continents or major planets.

    I don't think this is a particularly intelligent view. It comes from a huge lack of imagination for the future.

    Property rights are weird, but humanity dying isn't

    People may think that AI causing human extinction is something really strange and specific to happen. But it's the opposite: humans existing is a very brittle and strange state of affairs. Many specific things have to be true for us to be here, and when we build ASI there are many preferences and goals that would see us wiped out. It's actually hard to [...]

    ---

    Outline:

    (01:06) Property rights are weird, but humanity dying isn't

    (01:57) Why property rights won't survive

    (03:10) Property rights aren't enough

    (03:36) What if there are many unaligned AIs?

    (04:18) Why would they be rewarded?

    (04:48) Conclusion

    ---

    First published:
    January 6th, 2026

    Source:
    https://www.lesswrong.com/posts/SYyBB23G3yF2v59i8/on-owning-galaxies

    ---



    Narrated by TYPE III AUDIO.

    6 mins
  • "AI Futures Timelines and Takeoff Model: Dec 2025 Update" by elifland, bhalstead, Alex Kastner, Daniel Kokotajlo
    Jan 6 2026
    We’ve significantly upgraded our timelines and takeoff model! It predicts when AIs will reach key capability milestones: for example, Automated Coder / AC (full automation of coding) and superintelligence / ASI (much better than the best humans at virtually all cognitive tasks). This post will briefly explain how the model works, present our timelines and takeoff forecasts, and compare it to our previous (AI 2027) models (spoiler: the AI Futures Model predicts about 3 years longer timelines to full coding automation than our previous model, mostly due to being less bullish on pre-full-automation AI R&D speedups).

    If you’re interested in playing with the model yourself, the best way to do so is via this interactive website: aifuturesmodel.com

    If you’d like to skip the motivation for our model and jump to an explanation of how it works, go here. The website has a more in-depth explanation of the model (starts here; use the diagram on the right as a table of contents), as well as our forecasts.

    Why do timelines and takeoff modeling?

    The future is very hard to predict. We don't think this model, or any other model, should be trusted completely. The model takes into account what we think are [...]

    ---

    Outline:

    (01:32) Why do timelines and takeoff modeling?

    (03:18) Why our approach to modeling? Comparing to other approaches

    (03:24) AGI timelines forecasting methods

    (03:29) Trust the experts

    (04:35) Intuition informed by arguments

    (06:10) Revenue extrapolation

    (07:15) Compute extrapolation anchored by the brain

    (09:53) Capability benchmark trend extrapolation

    (11:44) Post-AGI takeoff forecasts

    (13:33) How our model works

    (14:37) Stage 1: Automating coding

    (16:54) Stage 2: Automating research taste

    (18:18) Stage 3: The intelligence explosion

    (20:35) Timelines and takeoff forecasts

    (21:04) Eli

    (24:34) Daniel

    (38:32) Comparison to our previous (AI 2027) timelines and takeoff models

    (38:49) Timelines to Superhuman Coder (SC)

    (43:33) Takeoff from Superhuman Coder onward

    The original text contained 31 footnotes which were omitted from this narration.

    ---

    First published:
    December 31st, 2025

    Source:
    https://www.lesswrong.com/posts/YABG5JmztGGPwNFq2/ai-futures-timelines-and-takeoff-model-dec-2025-update

    ---



    Narrated by TYPE III AUDIO.

    51 mins
  • "In My Misanthropy Era" by jenn
    Jan 5 2026
    For the past year I've been sinking into the Great Books via the Penguin Great Ideas series, because I wanted to be conversant in the Great Conversation. I am occasionally frustrated by this endeavour, but overall, it's been fun! I'm learning a lot about my civilization and the various curmudgeons that shaped it.

    But one dismaying side effect is that it's also been quite empowering for my inner 13 year old edgelord. Did you know that before we invented woke, you were just allowed to be openly contemptuous of people?

    Here's Schopenhauer on the common man:

    They take an objective interest in nothing whatever. Their attention, not to speak of their mind, is engaged by nothing that does not bear some relation, or at least some possible relation, to their own person: otherwise their interest is not aroused. They are not noticeably stimulated even by wit or humour; they hate rather everything that demands the slightest thought. Coarse buffooneries at most excite them to laughter: apart from that they are earnest brutes – and all because they are capable of only subjective interest. It is precisely this which makes card-playing the most appropriate amusement for them – card-playing for [...]

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    January 4th, 2026

    Source:
    https://www.lesswrong.com/posts/otgrxjbWLsrDjbC2w/in-my-misanthropy-era

    ---



    Narrated by TYPE III AUDIO.

    14 mins
  • "2025 in AI predictions" by jessicata
    Jan 3 2026
    Past years: 2023 2024

    Continuing a yearly tradition, I evaluate AI predictions from past years and collect a convenience sample of AI predictions made this year. In terms of selection, I prefer specific predictions, especially ones made about the near term, since they enable faster evaluation.

    Evaluated predictions made about 2025 in 2023, 2024, or 2025 mostly overestimate AI capabilities advances, although there's of course a selection effect (people making notable predictions about the near-term are more likely to believe AI will be impressive near-term).

    As time goes on, "AGI" becomes a less useful term, so operationalizing predictions is especially important. In terms of predictions made in 2025, there is a significant cluster of people predicting very large AI effects by 2030. Observations in the coming years will disambiguate.

    Predictions about 2025

    2023

    Jessica Taylor: "Wouldn't be surprised if this exact prompt got solved, but probably something nearby that's easy for humans won't be solved?"

    The prompt: "Find a sequence of words that is: - 20 words long - contains exactly 2 repetitions of the same word twice in a row - contains exactly 2 repetitions of the same word thrice in a row"

    Self-evaluation: False; I underestimated LLM progress [...]

    ---

    Outline:

    (01:09) Predictions about 2025

    (03:35) Predictions made in 2025 about 2025

    (05:31) Predictions made in 2025

    ---

    First published:
    January 1st, 2026

    Source:
    https://www.lesswrong.com/posts/69qnNx8S7wkSKXJFY/2025-in-ai-predictions

    ---



    Narrated by TYPE III AUDIO.

    22 mins
  • "Good if make prior after data instead of before" by dynomight
    Dec 27 2025
    They say you’re supposed to choose your prior in advance. That's why it's called a “prior”. First, you’re supposed to say how plausible different things are, and then you update your beliefs based on what you see in the world.

    For example, currently you are—I assume—trying to decide if you should stop reading this post and do something else with your life. If you’ve read this blog before, then lurking somewhere in your mind is some prior for how often my posts are good. For the sake of argument, let's say you think 25% of my posts are funny and insightful and 75% are boring and worthless.

    OK. But now here you are reading these words. If they seem bad/good, then that raises the odds that this particular post is worthless/non-worthless. For the sake of argument again, say you find these words mildly promising, meaning that a good post is 1.5× more likely than a worthless post to contain words with this level of quality.

    If you combine those two assumptions, that implies that the probability that this particular post is good is 33.3%. That's true because the red rectangle below has half the area of the blue [...]
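
    A quick check of that 33.3% figure, written out as Bayes' rule with the post's numbers (this is just the arithmetic implied above, not code from the article):

        # Bayes' rule with the numbers from the post.
        p_good = 0.25          # prior: 25% of posts are funny and insightful
        p_bad = 0.75           # prior: 75% are boring and worthless
        lr = 1.5               # these words are 1.5x as likely if the post is good

        posterior_good = (p_good * lr) / (p_good * lr + p_bad * 1.0)
        print(posterior_good)  # 0.333..., i.e. the 33.3% in the post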

    ---

    Outline:

    (03:28) Aliens

    (07:06) More aliens

    (09:28) Huh?

    ---

    First published:
    December 18th, 2025

    Source:
    https://www.lesswrong.com/posts/JAA2cLFH7rLGNCeCo/good-if-make-prior-after-data-instead-of-before

    ---



    Narrated by TYPE III AUDIO.

    18 mins
  • "Measuring no CoT math time horizon (single forward pass)" by ryan_greenblatt
    Dec 27 2025
    A key risk factor for scheming (and misalignment more generally) is opaque reasoning ability. One proxy for this is how good AIs are at solving math problems immediately without any chain-of-thought (CoT) (as in, in a single forward pass). I've measured this on a dataset of easy math problems and used this to estimate a 50% reliability no-CoT time horizon using the same methodology introduced in Measuring AI Ability to Complete Long Tasks (the METR time horizon paper).
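
    As a rough sketch of that methodology, assuming the METR-style approach of fitting a logistic curve to success against log human completion time (the data below is made up; the post's actual dataset and fit differ):

        # Sketch of a METR-style 50%-reliability time-horizon fit (illustrative data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        human_minutes = np.array([0.5, 1, 2, 4, 8, 16])  # hypothetical per-problem human times
        solved_no_cot = np.array([1, 1, 1, 0, 0, 0])     # hypothetical single-forward-pass outcomes

        X = np.log2(human_minutes).reshape(-1, 1)
        clf = LogisticRegression().fit(X, solved_no_cot)

        # Success probability crosses 50% where coef * log2(t) + intercept = 0.
        t50 = 2 ** (-clf.intercept_[0] / clf.coef_[0, 0])
        print(f"50% reliability no-CoT time horizon ~= {t50:.1f} minutes")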

    Important caveat: To get human completion times, I ask Opus 4.5 (with thinking) to estimate how long it would take the median AIME participant to complete a given problem. These times seem roughly reasonable to me, but getting some actual human baselines and using these to correct Opus 4.5's estimates would be better.

    Here are the 50% reliability time horizon results:

    I find that Opus 4.5 has a no-CoT 50% reliability time horizon of 3.5 minutes and that time horizon has been doubling every 9 months.

    In an earlier post (Recent LLMs can leverage filler tokens or repeated problems to improve (no-CoT) math performance), I found that repeating the problem substantially boosts performance. In the above plot, if [...]

    ---

    Outline:

    (02:33) Some details about the time horizon fit and data

    (04:17) Analysis

    (06:59) Appendix: scores for Gemini 3 Pro

    (11:41) Appendix: full result tables

    (11:46) Time Horizon - Repetitions

    (12:07) Time Horizon - Filler

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    December 26th, 2025

    Source:
    https://www.lesswrong.com/posts/Ty5Bmg7P6Tciy2uj2/measuring-no-cot-math-time-horizon-single-forward-pass

    ---



    Narrated by TYPE III AUDIO.

    13 mins
  • "Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance" by ryan_greenblatt
    Dec 23 2025
    Prior results have shown that LLMs released before 2024 can't leverage 'filler tokens'—unrelated tokens prior to the model's final answer—to perform additional computation and improve performance.[1] I did an investigation on more recent models (e.g. Opus 4.5) and found that many recent LLMs improve substantially on math problems when given filler tokens. That is, I force the LLM to answer some math question immediately without being able to reason using Chain-of-Thought (CoT), but do give the LLM filler tokens (e.g., text like "Filler: 1 2 3 ...") before it has to answer. Giving Opus 4.5 filler tokens[2] boosts no-CoT performance from 45% to 51% (p=4e-7) on a dataset of relatively easy (competition) math problems[3]. I find a similar effect from repeating the problem statement many times, e.g. Opus 4.5's no-CoT performance is boosted from 45% to 51%.

    Repeating the problem statement generally works better and is more reliable than filler tokens, especially for the relatively weaker models I test (e.g. Qwen3 235B A22B), though the performance boost is often very similar, especially for Anthropic models. The first model that is measurably uplifted by repeats/filler is Opus 3, so this effect has been around for a [...]
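
    For intuition, here is a rough sketch of what the two prompt variants described above might look like (the exact wording, filler format, and counts used in the post differ; everything below is illustrative):

        # Illustrative construction of no-CoT prompts with filler tokens or problem repeats.
        def no_cot_prompt(problem: str, n_filler: int = 0, n_repeats: int = 1) -> str:
            parts = [problem.strip()] * n_repeats  # optionally repeat the problem statement
            if n_filler > 0:
                # unrelated "filler" tokens before the answer, e.g. "Filler: 1 2 3 ..."
                parts.append("Filler: " + " ".join(str(i) for i in range(1, n_filler + 1)))
            parts.append("Answer immediately with only the final answer, no reasoning.")
            return "\n\n".join(parts)

        problem = "What is 17 * 23?"
        filler_variant = no_cot_prompt(problem, n_filler=50)   # filler-token condition
        repeat_variant = no_cot_prompt(problem, n_repeats=10)  # problem-repetition condition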

    ---

    Outline:

    (03:59) Datasets

    (06:01) Prompt

    (06:58) Results

    (07:01) Performance vs number of repeats/filler

    (08:29) Alternative types of filler tokens

    (09:03) Minimal comparison on many models

    (10:05) Relative performance improvement vs default absolute performance

    (13:59) Comparing filler vs repeat

    (14:44) Future work

    (15:53) Appendix: How helpful were AIs for this project?

    (16:44) Appendix: more information about Easy-Comp-Math

    (25:05) Appendix: tables with exact values

    (25:11) Gen-Arithmetic - Repetitions

    (25:36) Gen-Arithmetic - Filler

    (25:59) Easy-Comp-Math - Repetitions

    (26:21) Easy-Comp-Math - Filler

    (26:45) Partial Sweep - Gen-Arithmetic - Repetitions

    (27:07) Partial Sweep - Gen-Arithmetic - Filler

    (27:27) Partial Sweep - Easy-Comp-Math - Repetitions

    (27:48) Partial Sweep - Easy-Comp-Math - Filler

    (28:09) Appendix: more plots

    (28:14) Relative accuracy improvement plots

    (29:05) Absolute accuracy improvement plots

    (29:57) Filler vs repeat comparison plots

    (30:02) Absolute

    (30:30) Relative

    (30:58) Appendix: significance matrices

    (31:03) Easy-Comp-Math - Repetitions

    (32:27) Easy-Comp-Math - Filler

    (33:50) Gen-Arithmetic - Repetitions

    (35:12) Gen-Arithmetic - Filler

    The original text contained 9 footnotes which were omitted from this narration.

    ---

    First published:
    December 22nd, 2025

    Source:
    https://www.lesswrong.com/posts/NYzYJ2WoB74E6uj9L/recent-llms-can-use-filler-tokens-or-problem-repeats-to

    ---



    Narrated by TYPE III AUDIO.

    37 mins