• “My theory of change for working in AI healthtech” by Andrew_Critch
    Oct 15 2024
    This post starts out pretty gloomy but ends up with some points that I feel pretty positive about. Day to day, I'm more focused on the positive points, but awareness of the negative has been crucial to forming my priorities, so I'm going to start with those. It's mostly addressed to the EA community, but is hopefully somewhat of interest to LessWrong and the Alignment Forum as well.

    My main concerns

    I think AGI is going to be developed soon, and quickly. Possibly (20%) that's next year, and most likely (80%) before the end of 2029. These are not things you need to believe for yourself in order to understand my view, so no worries if you're not personally convinced of this.

    (For what it's worth, I did arrive at this view through years of study and research in AI, combined with over a decade of private forecasting practice [...]

    ---

    Outline:

    (00:28) My main concerns

    (03:41) Extinction by industrial dehumanization

    (06:00) Successionism as a driver of industrial dehumanization

    (11:08) My theory of change: confronting successionism with human-specific industries

    (15:53) How I identified healthcare as the industry most relevant to caring for humans

    (20:00) But why not just do safety work with big AI labs or governments?

    (23:22) Conclusion

    The original text contained 1 image which was described by AI.

    ---

    First published:
    October 12th, 2024

    Source:
    https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-theory-of-change-for-working-in-ai-healthtech

    ---

    Narrated by TYPE III AUDIO.

    25 mins
  • “Why I’m not a Bayesian” by Richard_Ngo
    Oct 15 2024
    This post focuses on philosophical objections to Bayesianism as an epistemology. I first explain Bayesianism and some standard objections to it, then lay out my two main objections (inspired by ideas in philosophy of science). A follow-up post will speculate about how to formalize an alternative.

    Degrees of belief

    The core idea of Bayesianism: we should ideally reason by assigning credences to propositions which represent our degrees of belief that those propositions are true.

    If that seems like a sufficient characterization to you, you can go ahead and skip to the next section, where I explain my objections to it. But for those who want a more precise description of Bayesianism, and some existing objections to it, I’ll more specifically characterize it in terms of five subclaims. Bayesianism says that we should ideally reason in terms of:

    1. Propositions which are either true or false (classical logic)
    2. Each of [...]
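
    As an illustrative aside (not from the post, and using made-up numbers): the "credences" framing becomes concrete once you update a degree of belief with Bayes' rule, e.g. in Python:

        # Illustrative only: one Bayes-rule update of a credence (degree of belief).
        # The proposition, prior, and likelihoods are made-up numbers, not from the post.
        def bayes_update(prior, p_e_given_h, p_e_given_not_h):
            """Return the posterior credence P(H | E) from a prior P(H) and likelihoods."""
            p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
            return p_e_given_h * prior / p_e

        # Credence that "it rained last night", updated on observing a wet pavement.
        posterior = bayes_update(prior=0.3, p_e_given_h=0.9, p_e_given_not_h=0.2)
        print(f"posterior credence: {posterior:.2f}")  # ~0.66
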
    ---

    Outline:

    (00:22) Degrees of belief

    (04:06) Degrees of truth

    (08:05) Model-based reasoning

    (13:43) The role of Bayesianism

    The original text contained 1 image which was described by AI.

    ---

    First published:
    October 6th, 2024

    Source:
    https://www.lesswrong.com/posts/TyusAoBMjYzGN3eZS/why-i-m-not-a-bayesian

    ---

    Narrated by TYPE III AUDIO.

    18 mins
  • “The AGI Entente Delusion” by Max Tegmark
    Oct 14 2024
    As humanity gets closer to Artificial General Intelligence (AGI), a new geopolitical strategy is gaining traction in US and allied circles, in the NatSec, AI safety and tech communities. Anthropic CEO Dario Amodei and RAND Corporation call it the “entente”, while others privately refer to it as “hegemony” or “crush China”. I will argue that, irrespective of one's ethical or geopolitical preferences, it is fundamentally flawed and against US national security interests.

    If the US fights China in an AGI race, the only winners will be machines

    The entente strategy

    Amodei articulates key elements of this strategy as follows:

    "a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust [...]

    ---

    Outline:

    (00:51) The entente strategy

    (02:22) Why it's a suicide race

    (09:19) Loss-of-control

    (11:32) A better strategy: tool AI

    The original text contained 1 image which was described by AI.

    ---

    First published:
    October 13th, 2024

    Source:
    https://www.lesswrong.com/posts/oJQnRDbgSS8i6DwNu/the-agi-entente-delusion

    ---

    Narrated by TYPE III AUDIO.

    18 mins
  • “Momentum of Light in Glass” by Ben
    Oct 14 2024
    I think that most people underestimate how many scientific mysteries remain, even on questions that sound basic.

    My favourite candidate for "the most basic thing that is still unknown" is the momentum carried by light, when it is in a medium (for example, a flash of light in glass or water).

    If a block of glass has a refractive index of _n_, then the light inside that block travels _n_ times slower than the light would in vacuum. But what is the momentum of that light wave in the glass relative to the momentum it would have in vacuum?

    In 1908 Abraham proposed that the light's momentum would be reduced by a factor of _n_. This makes sense on the surface: _n_ times slower means _n_ times less momentum. This gives a single photon a momentum of ℏω/(nc). For _ω_ the angular frequency, _c_ the [...]
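
    As an illustrative aside (not part of the excerpt; the wavelength and refractive index below are made-up example values), here is a quick numerical check of the Abraham proposal in Python:

        # Illustrative sketch (not from the post): photon momentum in vacuum versus the
        # Abraham proposal p = hbar*omega/(n*c) inside a medium of refractive index n.
        import math

        hbar = 1.054571817e-34   # reduced Planck constant, J*s
        c = 299792458.0          # speed of light in vacuum, m/s

        wavelength = 500e-9      # made-up example: 500 nm green light (vacuum wavelength)
        n = 1.5                  # made-up example: typical glass

        omega = 2 * math.pi * c / wavelength   # angular frequency
        p_vacuum = hbar * omega / c            # photon momentum in vacuum
        p_abraham = p_vacuum / n               # Abraham: reduced by a factor of n

        print(f"vacuum:  {p_vacuum:.3e} kg*m/s")   # ~1.33e-27
        print(f"Abraham: {p_abraham:.3e} kg*m/s")  # ~8.8e-28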

    The original text contained 13 footnotes which were omitted from this narration.

    The original text contained 2 images which were described by AI.

    ---

    First published:
    October 9th, 2024

    Source:
    https://www.lesswrong.com/posts/njBRhELvfMtjytYeH/momentum-of-light-in-glass

    ---

    Narrated by TYPE III AUDIO.

    19 mins
  • “Overview of strong human intelligence amplification methods” by TsviBT
    Oct 9 2024
    How can we make many humans who are very good at solving difficult problems?

    Summary (table of made-up numbers)

    I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

    Call to action

    If you have a shitload of money, there are some projects you can give money to that would make supergenius humans on demand happen faster. If you have a fuckton of money, there are projects whose creation you could fund that would greatly accelerate this technology.

    If you're young and smart, or are already an expert in either stem cell / reproductive biology, biotech, or anything related to brain-computer interfaces, there are some projects you could work on.

    If neither, think hard, maybe I missed something.

    You can DM me or gmail [...]

    ---

    Outline:

    (00:12) Summary (table of made-up numbers)

    (00:45) Call to action

    (01:22) Context

    (01:25) The goal

    (02:56) Constraint: Algernon's law

    (04:30) How to know what makes a smart brain

    (04:35) Figure it out ourselves

    (04:53) Copy nature's work

    (05:18) Brain emulation

    (05:21) The approach

    (06:07) Problems

    (07:52) Genomic approaches

    (08:34) Adult brain gene editing

    (08:38) The approach

    (08:53) Problems

    (09:26) Germline engineering

    (09:32) The approach

    (11:37) Problems

    (12:11) Signaling molecules for creative brains

    (12:15) The approach

    (13:30) Problems

    (13:45) Brain-brain electrical interface approaches

    (14:41) Problems with all electrical brain interface approaches

    (15:11) Massive cerebral prosthetic connectivity

    (17:03) Human / human interface

    (17:59) Interface with brain tissue in a vat

    (18:30) Massive neural transplantation

    (18:35) The approach

    (19:01) Problems

    (19:39) Support for thinking

    (19:53) The approaches

    (21:04) Problems

    (21:58) FAQ

    (22:01) What about weak amplification

    (22:14) What about ...

    (24:04) The real intelligence enhancement is ...

    The original text contained 3 images which were described by AI.

    ---

    First published:
    October 8th, 2024

    Source:
    https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods

    ---

    Narrated by TYPE III AUDIO.

    25 mins
  • “Struggling like a Shadowmoth” by Raemon
    Oct 3 2024
    This post is probably hazardous for one type of person in one particular growth stage, and necessary for people in a different growth stage, and I don't really know how to tell the difference in advance.

    If you read it and feel like it kinda wrecked you, send me a DM. I'll try to help bandage it.



    One of my favorite stories growing up was Star Wars: Traitor, by Matthew Stover.

    The book is short, if you want to read it. Spoilers follow. (I took a look at it again recently and I think it didn't obviously hold up as real adult fiction, although quite good if you haven't yet had your mind blown that many times)

    One anecdote from the story has stayed with me and permeates my worldview.

    The story begins with "Jacen Solo has been captured, and is being tortured."

    He is being [...]

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    September 24th, 2024

    Source:
    https://www.lesswrong.com/posts/hvj9NGodhva9pKGTj/struggling-like-a-shadowmoth

    ---

    Narrated by TYPE III AUDIO.

    12 mins
  • “Three Subtle Examples of Data Leakage” by abstractapplic
    Oct 3 2024
    This is a description of my work on some data science projects, lightly obfuscated and fictionalized to protect the confidentiality of the organizations I handled them for (and also to make it flow better). I focus on the high-level epistemic/mathematical issues, and the lived experience of working on intellectual problems, but gloss over the timelines and implementation details.

    The Upper Bound

    One time, I was working for a company which wanted to win some first-price sealed-bid auctions in a market they were thinking of joining, and asked me to model the price-to-beat in those auctions. There was a twist: they were aiming for the low end of the market, and didn't care about lots being sold for more than $1000.

    "Okay," I told them. "I'll filter out everything with a price above $1000 before building any models or calculating any performance metrics!"

    They approved of this, and told me [...]
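
    A minimal sketch of the filtering step described above (hypothetical file and column names, not code from the post):

        # Hypothetical illustration: drop lots above $1000 *before* any modelling
        # or metric calculation, as described in the excerpt.
        import pandas as pd

        auctions = pd.read_csv("auctions.csv")          # hypothetical historical auction data
        low_end = auctions[auctions["price"] <= 1000]   # keep only lots sold for <= $1000

        # Everything downstream -- train/test split, model fitting, performance
        # metrics -- then sees only this filtered subset.
        train = low_end.sample(frac=0.8, random_state=0)
        test = low_end.drop(train.index)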

    ---

    Outline:

    (00:27) The Upper Bound

    (02:58) The Time-Travelling Convention

    (05:56) The Tobit Problem

    (06:30) My Takeaways

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    October 1st, 2024

    Source:
    https://www.lesswrong.com/posts/rzyHbLZHuqHq6KM65/three-subtle-examples-of-data-leakage

    ---

    Narrated by TYPE III AUDIO.

    8 mins
  • “the case for CoT unfaithfulness is overstated” by nostalgebraist
    Sep 30 2024
    [Meta note: quickly written, unpolished. Also, it's possible that there's some more convincing work on this topic that I'm unaware of – if so, let me know]

    In research discussions about LLMs, I often pick up a vibe of casual, generalized skepticism about model-generated CoT (chain-of-thought) explanations.

    CoTs (people say) are not trustworthy in general. They don't always reflect what the model is "actually" thinking or how it has "actually" solved a given problem.

    This claim is true as far as it goes. But people sometimes act like it goes much further than (IMO) it really does.

    Sometimes it seems to license an attitude of "oh, it's no use reading what the model says in the CoT, you're a chump if you trust that stuff." Or, more insidiously, a failure to even ask the question "what, if anything, can we learn about the model's reasoning process by reading the [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    September 29th, 2024

    Source:
    https://www.lesswrong.com/posts/HQyWGE2BummDCc2Cx/the-case-for-cot-unfaithfulness-is-overstated

    ---

    Narrated by TYPE III AUDIO.

    22 mins