• “Where is the Capital? An Overview” by johnswentworth
    Nov 17 2025
    When a new dollar goes into the capital markets, after being bundled and securitized and lent several times over, where does it end up? When society's total savings increase, what capital assets do those savings end up invested in?

    When economists talk about “capital assets”, they mean things like roads, buildings and machines. When I read through a company's annual reports, lots of their assets are instead things like stocks and bonds, short-term debt, and other “financial” assets - i.e. claims on other people's stuff. In theory, for every financial asset, there's a financial liability somewhere. For every bond asset, there's some payer for whom that bond is a liability. Across the economy, they all add up to zero. What's left is the economists’ notion of capital, the nonfinancial assets: the roads, buildings, machines and so forth.

    Very roughly speaking, when there's a net increase in savings, that's where it has to end up - in the nonfinancial assets.
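
    (A minimal Python sketch, not from the article, of the netting-out idea: every financial asset on one balance sheet is someone else's liability, so summing across all balance sheets leaves only the nonfinancial assets. The entities and numbers below are made up for illustration.)

        # Hypothetical balance sheets; financial claims cancel in aggregate.
        balance_sheets = {
            "household": {"bank_deposit": +100, "house": +300},
            "bank":      {"bank_deposit": -100, "corporate_bond": +100},
            "firm":      {"corporate_bond": -100, "factory": +500},
        }
        FINANCIAL = {"bank_deposit", "corporate_bond"}   # claims on someone else
        NONFINANCIAL = {"house", "factory"}              # real capital assets

        net_financial = sum(v for bs in balance_sheets.values()
                            for k, v in bs.items() if k in FINANCIAL)
        capital = sum(v for bs in balance_sheets.values()
                      for k, v in bs.items() if k in NONFINANCIAL)

        assert net_financial == 0   # financial assets and liabilities net to zero
        print(capital)              # 800: the buildings, machines, etc. that remain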

    I wanted to get a more tangible sense of what nonfinancial assets look like, of where my savings are going in the physical world. So, back in 2017 I pulled fundamentals data on ~2100 publicly-held US companies. I looked at [...]

    ---

    Outline:

    (02:01) Disclaimers

    (04:10) Overview (With Numbers!)

    (05:01) Oil - 25%

    (06:26) Power Grid - 16%

    (07:07) Consumer - 13%

    (08:12) Telecoms - 8%

    (09:26) Railroads - 8%

    (10:47) Healthcare - 8%

    (12:03) Tech - 6%

    (12:51) Industrial - 5%

    (13:49) Mining - 3%

    (14:34) Real Estate - 3%

    (14:49) Automotive - 2%

    (15:32) Logistics - 1%

    (16:12) Miscellaneous

    (16:55) Learnings

    ---

    First published:
    November 16th, 2025

    Source:
    https://www.lesswrong.com/posts/HpBhpRQCFLX9tx62Z/where-is-the-capital-an-overview

    ---



    Narrated by TYPE III AUDIO.

    18 mins
  • “Problems I’ve Tried to Legibilize” by Wei Dai
    Nov 17 2025
    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI researchers, funders, company leaders, government policymakers, their advisors (including future AI advisors), and the general public.

    1. Philosophical problems
      1. Probability theory
      2. Decision theory
      3. Beyond astronomical waste (possibility of influencing vastly larger universes beyond our own)
      4. Interaction between bargaining and logical uncertainty
      5. Metaethics
      6. Metaphilosophy: 1, 2
    2. Problems with specific philosophical and alignment ideas
      1. Utilitarianism: 1, 2
      2. Solomonoff induction
      3. "Provable" safety
      4. CEV
      5. Corrigibility
      6. IDA (and many scattered comments)
      7. UDASSA
      8. UDT
    3. Human-AI safety (x- and s-risks arising from the interaction between human nature and AI design)
      1. Value differences/conflicts between humans
      2. “Morality is scary” (human morality is often the result of status games amplifying random aspects of human value, with frightening results)
      3. [...]
    ---

    First published:
    November 9th, 2025

    Source:
    https://www.lesswrong.com/posts/7XGdkATAvCTvn4FGu/problems-i-ve-tried-to-legibilize

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • “Do not hand off what you cannot pick up” by habryka
    Nov 17 2025
    Delegation is good! Delegation is the foundation of civilization! But in the depths of delegation, madness breeds and evil rises.

    In my experience, there are three ways in which delegation goes off the rails:

    1. You delegate without knowing what good performance on a task looks like

    If you do not know how to evaluate performance on a task, you are going to have a really hard time delegating it to someone. Most likely, you will choose someone incompetent for the task at hand.

    But even if you manage to avoid that specific error mode, it is most likely that your delegee will notice that you do not have a standard, and so will use this opportunity to be lazy and do bad work, which they know you won't be able to notice.

    Or even worse, in an attempt to make sure your delegee puts in proper effort, you set an impossibly high standard, to which the delegee can only respond by quitting, or lying about their performance. This can tank a whole project if you discover it too late.

    2. You assign responsibility for a crucial task to an external party

    Frequently some task will [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    November 12th, 2025

    Source:
    https://www.lesswrong.com/posts/rSCxviHtiWrG5pudv/do-not-hand-off-what-you-cannot-pick-up

    ---



    Narrated by TYPE III AUDIO.

    7 mins
  • “7 Vicious Vices of Rationalists” by Ben Pace
    Nov 17 2025
    Vices aren't behaviors that one should never do. Rather, vices are behaviors that are fine and pleasurable to do in moderation, but tempting to do in excess. The classical vices are actually good in part. A moderate amount of gluttony is just eating food, which is important. A moderate amount of envy is just "wanting things", which is a motivator of much of our economy.

    What are some things that rationalists are wont to do, and often to good effect, but that can grow pathological?

    1. Contrarianism

    There are a whole host of unaligned forces producing the arguments and positions you hear. People often hold beliefs out of convenience, defend positions that they are aligned with politically, or just don't give much thought to what they're saying one way or another.

    A good way to find out whether people have any good reasons for their positions is to take a contrarian stance and seek out the best arguments for unpopular positions. This also helps you to explore arguments around positions that others aren't investigating.

    However, this can be taken to the extreme.

    While it is hard to know for sure what is going on inside others' heads, I know [...]

    ---

    Outline:

    (00:40) 1. Contrarianism

    (01:57) 2. Pedantry

    (03:35) 3. Elaboration

    (03:52) 4. Social Obliviousness

    (05:21) 5. Assuming Good Faith

    (06:33) 6. Undercutting Social Momentum

    (08:00) 7. Digging Your Heels In

    ---

    First published:
    November 16th, 2025

    Source:
    https://www.lesswrong.com/posts/r6xSmbJRK9KKLcXTM/7-vicious-vices-of-rationalists-1

    ---



    Narrated by TYPE III AUDIO.

    10 mins
  • “Tell people as early as possible it’s not going to work out” by habryka
    Nov 17 2025
    Context: Post #4 in my sequence of private Lightcone Infrastructure memos edited for public consumption

    This week's principle is more about how I want people at Lightcone to relate to community governance than it is about our internal team culture.

    As part of our jobs at Lightcone, we are often in charge of determining access to some resource, or membership in some group (ranging from LessWrong to the AI Alignment Forum to the Lightcone Offices). Through that, I have learned that one of the most important things to do when building things like this is to tell people as early as possible if you think they are not a good fit for the community, both for the sake of trust within the group and for the integrity and success of the group itself.

    E.g. when you spot a LessWrong commenter who seems clearly not on track to ever become a good long-term contributor, or someone in the Lightcone Slack who clearly seems like a poor fit, you should aim to off-ramp them as soon as possible, and generally put marginal resources into finding out early whether someone is a good long-term fit, before they invest substantially [...]

    ---

    First published:
    November 14th, 2025

    Source:
    https://www.lesswrong.com/posts/Hun4EaiSQnNmB9xkd/tell-people-as-early-as-possible-it-s-not-going-to-work-out

    ---



    Narrated by TYPE III AUDIO.

    3 mins
  • “Everyone has a plan until they get lied to the face” by Screwtape
    Nov 16 2025
    "Everyone has a plan until they get punched in the face."

    - Mike Tyson

    (The exact phrasing of that quote changes; this is my favourite.)

    I think there is an open, important weakness in many people. We assume those we communicate with are basically trustworthy. Further, I think there is an important flaw in the current rationality community. We spend a lot of time focusing on subtle epistemic mistakes, teasing apart flaws in methodology and practicing the principle of charity. This creates a vulnerability to someone willing to just say outright false things. We’re kinda slow about reacting to that.

    Suggested reading: Might People on the Internet Sometimes Lie, People Will Sometimes Just Lie About You. Epistemic status: My Best Guess.

    I.

    Getting punched in the face is an odd experience. I'm not sure I recommend it, but people have done weirder things in the name of experiencing novel psychological states. If it happens in a somewhat safety-negligent sparring ring, or if you and a buddy go out in the back yard tomorrow night to try it, I expect the punch gets pulled and it's still weird. There's a jerk of motion your eyes try to catch up [...]

    ---

    Outline:

    (01:03) I.

    (03:30) II.

    (07:33) III.

    (09:55) IV.

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    November 14th, 2025

    Source:
    https://www.lesswrong.com/posts/5LFjo6TBorkrgFGqN/everyone-has-a-plan-until-they-get-lied-to-the-face

    ---



    Narrated by TYPE III AUDIO.

    13 mins
  • “Please, Don’t Roll Your Own Metaethics” by Wei Dai
    Nov 14 2025
    One day, when I was interning at the cryptography research department of a large software company, my boss handed me an assignment to break a pseudorandom number generator passed to us for review. Someone in another department had invented it and planned to use it in their product, and wanted us to take a look first. This person must have had a lot of political clout or was especially confident in himself, because he refused the standard advice that anything an amateur comes up with is very likely to be insecure, and that he should instead use one of the established, off-the-shelf cryptographic algorithms that have survived extensive cryptanalysis (code-breaking) attempts.

    My boss thought he had to demonstrate the insecurity of the PRNG by coming up with a practical attack (i.e., a way to predict its future output based only on its past output, without knowing the secret key/seed). There were three permanent, full-time professional cryptographers working in the research department, but none of them specialized in cryptanalysis of symmetric cryptography (which covers such PRNGs), so it might have taken them some time to figure out an attack. My time was obviously less valuable and my [...]
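
    (As an aside, here is a minimal Python sketch, not from the article and not the PRNG in question, of what such a practical attack can look like against a weak generator. The toy target is a linear congruential generator with made-up parameters; an observer who sees a few consecutive outputs can recover the parameters and predict the rest of the stream, assuming the modulus is known and prime.)

        # Toy LCG: x_{n+1} = (a*x_n + c) mod M. All parameters are illustrative.
        M = 2**31 - 1  # a prime modulus, assumed known to the attacker

        def lcg(seed, a, c):
            x = seed
            while True:
                x = (a * x + c) % M
                yield x

        def recover_params(x0, x1, x2):
            # Solve x1 = a*x0 + c and x2 = a*x1 + c (mod M) for a and c.
            a = (x2 - x1) * pow(x1 - x0, -1, M) % M   # modular inverse (Python 3.8+)
            c = (x1 - a * x0) % M
            return a, c

        gen = lcg(seed=123456789, a=1103515245, c=12345)
        x0, x1, x2, x3 = (next(gen) for _ in range(4))

        a, c = recover_params(x0, x1, x2)
        assert (a * x2 + c) % M == x3   # future output predicted from past output alone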

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    November 12th, 2025

    Source:
    https://www.lesswrong.com/posts/KCSmZsQzwvBxYNNaT/please-don-t-roll-your-own-metaethics

    ---



    Narrated by TYPE III AUDIO.

    4 mins
  • “Paranoia rules everything around me” by habryka
    Nov 14 2025
    People sometimes make mistakes [citation needed].

    The obvious explanation for most of those mistakes is that decision makers do not have access to the information necessary to avoid the mistake, or are not smart/competent enough to think through the consequences of their actions.

    This predicts that as decision-makers get access to more information, or are replaced with smarter people, their decisions will get better.

    And this is substantially true! Markets seem more efficient today than they were before the onset of the internet, and in general decision-making across the board has improved on many dimensions.

    But in many domains, I posit, decision-making has gotten worse, despite access to more information, and despite much larger labor markets, better education, the removal of lead from gasoline, and many other things that should generally cause decision-makers to be more competent and intelligent. There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.

    I currently believe that the factor that explains most of this remaining variance is "paranoia", in particular the kind of paranoia that becomes more adaptive as your environment gets [...]

    ---

    Outline:

    (01:31) A market for lemons

    (05:02) It's lemons all the way down

    (06:15) Fighter jets and OODA loops

    (08:23) The first thing you try is to blind yourself

    (13:37) The second thing you try is to purge the untrustworthy

    (20:55) The third thing to try is to become unpredictable and vindictive

    ---

    First published:
    November 13th, 2025

    Source:
    https://www.lesswrong.com/posts/yXSKGm4txgbC3gvNs/paranoia-rules-everything-around-me

    ---



    Narrated by TYPE III AUDIO.

    23 mins