• The Man Who Predicted the Downfall of Thinking
    Mar 6 2025

    Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman’s insights feel eerily prophetic in our age of smartphones, social media, and AI.

    In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy - from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

    RECOMMENDED MEDIA

    “Amusing Ourselves to Death” by Neil Postman (PDF of full book)

    “Technopoly” by Neil Postman (PDF of full book)

    A lecture from Postman where he outlines his seven questions for any new technology.

    Sean’s podcast “The Gray Area” from Vox

    Sean’s interview with Chris Hayes on “The Gray Area”

    Further reading on mirror bacteria

    RECOMMENDED YUA EPISODES

    ’A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover

    This Moment in AI: How We Got Here and Where We’re Going

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    Future-proofing Democracy In the Age of AI with Audrey Tang

    CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.

    59 mins
  • Behind the DeepSeek Hype, AI is Learning to Reason
    Feb 20 2025

    When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

    In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks.

    These capabilities are a step toward a critical threshold - when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been “solved” in the game theory sense.

    Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

    RECOMMENDED MEDIA

    Further reading on DeepSeek’s R1 and the market reaction

    Further reading on the debate about the actual cost of DeepSeek’s R1 model

    The study that found training AIs to code also made them better writers

    More information on the AI coding company Cursor

    Further reading on Eric Schmidt’s threshold to “pull the plug” on AI

    Further reading on Move 37

    RECOMMENDED YUA EPISODES

    The Self-Preserving Machine: Why AI Learns to Deceive

    This Moment in AI: How We Got Here and Where We’re Going

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    32 mins
  • The Self-Preserving Machine: Why AI Learns to Deceive
    Jan 30 2025

    When engineers design AI systems, they don't just give them rules - they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

    In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team’s findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Subscribe to our YouTube channel

    And our brand new Substack!

    RECOMMENDED MEDIA

    Anthropic’s blog post on the Redwood Research paper

    Palisade Research’s thread on X about OpenAI’s o1 model autonomously cheating at chess

    Apollo Research’s paper on AI strategic deception

    RECOMMENDED YUA EPISODES

    ’We Have to Get It Right’: Gary Marcus On Untamed AI

    This Moment in AI: How We Got Here and Where We’re Going

    How to Think About AI Consciousness with Anil Seth

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    35 mins
  • Laughing at Power: A Troublemaker’s Guide to Changing Tech
    Jan 16 2025
    The status quo of tech today is untenable: we’re addicted to our devices, we’ve become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo? Our guest today, Srdja Popovic, has been working to answer this question his whole life. As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems.

    In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

    RECOMMENDED MEDIA

    “Pranksters vs. Autocrats” by Srdja Popovic and Sophia A. McClennen

    “Blueprint for Revolution” by Srdja Popovic

    The Center for Applied Non-Violent Actions and Strategies, Srdja’s organization promoting peaceful resistance around the globe

    Tactics4Change, a database of global dilemma actions created by CANVAS

    The Power of Laughtivism, Srdja’s viral TEDx talk from 2013

    Further reading on the dilemma action tactics used by Syrian rebels

    Further reading on the toy protest in Siberia

    More info on The Yes Men and their activism toolkit Beautiful Trouble

    “This is Not Propaganda” by Peter Pomerantsev

    “Machines of Loving Grace,” the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    The Race to Cooperation with David Sloan Wilson

    CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win an election in 2012 by 82%. Putin did win that election, but only by 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."
    46 mins
  • Ask Us Anything 2024
    Dec 19 2024

    2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

    We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

    And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    Earth Species Project, Aza’s organization working on inter-species communication

    Further reading on Gryphon Scientific’s White House AI Demo

    Further reading on the Australian social media ban for children under 16

    Further reading on the Sewell Setzer case

    Further reading on the Oviedo Convention, the international treaty that restricted germline editing

    Video of SpaceX’s successful capture of a rocket with “chopsticks”

    RECOMMENDED YUA EPISODES

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    AI Is Moving Fast. We Need Laws that Will Too.

    This Moment in AI: How We Got Here and Where We’re Going

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Talking With Animals... Using AI

    The Three Rules of Humane Tech

    40 mins
  • The Tech-God Complex: Why We Need to be Skeptics
    Nov 21 2024

    Silicon Valley's interest in AI is driven by more than just profit and innovation. There’s an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.

    If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.

    RECOMMENDED MEDIA

    “Tech Agnostic” by Greg Epstein

    Further reading on Avi Schiffmann’s “Friend” AI necklace

    Further reading on Blake Lemoine and LaMDA

    Blake Lemoine’s conversation with Greg at MIT

    Further reading on the Sewell Setzer case

    Further reading on Terminal of Truths

    Further reading on Ray Kurzweil’s attempt to create a digital recreation of his dad with AI

    The Drama of the Gifted Child by Alice Miller

    RECOMMENDED YUA EPISODES

    ’A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover

    How to Think About AI Consciousness with Anil Seth

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan

    47 mins
  • What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    Nov 7 2024

    CW: This episode features discussion of suicide and sexual abuse.

    In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?

    Sewell’s mother, Megan Garcia, has filed a major new lawsuit against Character.ai in Florida, which could force the company–and potentially the entire AI industry–to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT’s Policy Director.

    RECOMMENDED MEDIA

    Further reading on Sewell’s story

    Laurie Segall’s interview with Megan Garcia

    The full complaint filed by Megan against Character.AI

    Further reading on suicide bots

    Further reading on Noam Shazeer and Daniel De Freitas’ relationship with Google

    The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use

    Organizations mentioned:

    The Tech Justice Law Project

    The Social Media Victims Law Center

    Mothers Against Media Addiction

    Parents SOS

    Parents Together

    Common Sense Media

    RECOMMENDED YUA EPISODES

    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    AI Is Moving Fast. We Need Laws that Will Too.

    Corrections:

    Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18.

    Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that made carve-outs to that immunity for criminal acts such as sex trafficking and intellectual property theft.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.

    49 mins
  • When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    Oct 24 2024

    Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.

    Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she’s suing the company that made those chatbots. On today’s episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie’s full interview with Megan on her new show, Dear Tomorrow.

    Aza and Laurie discuss the profound implications of Sewell’s story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell’s story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.

    If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The first episode of Dear Tomorrow, from Mostly Human Media

    The CHT Framework for Incentivizing Responsible AI Development

    Further reading on Sewell’s case

    Character.ai’s “About Us” page

    Further reading on the addictive properties of AI

    RECOMMENDED YUA EPISODES

    AI Is Moving Fast. We Need Laws that Will Too.

    This Moment in AI: How We Got Here and Where We’re Going

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    The AI Dilemma

    49 mins