• Episode 45: Billionaires, Influencers, and Ed Tech (feat. Adrienne Williams), November 18 2024
    Nov 26 2024

    From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising ‘comprehensive AI tutors’ or ‘educator-informed’ tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- and then directing attention and resources to their favorite toys. Former educator and DAIR research fellow Adrienne Williams joins to explain the problems this tech-solutionist redirection fails to solve, and the new ones it creates.

    Adrienne Williams started organizing in 2018 while working as a junior high teacher for a tech-owned charter school. She expanded her organizing in 2020 after her work as an Amazon delivery driver, where many of the same issues she saw in charter schools were also in evidence. Adrienne is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with the MacArthur Foundation, as well as a Research Fellow at both DAIR and Just Tech.

    References:

    Funding Helps Teachers Build AI Tools

    Sal Khan's 2023 TED Talk: AI in the classroom can transform education

    Bill Gates: My trip to the frontier of AI education

    • Background: Cory Booker Hates Public Schools
    • Background: Cory Booker's track record on education

    Book: Access is Capture: How Edtech Reproduces Racial Inequality
    Book: Disruptive Fixation: School Reform and the Pitfalls of Techno-Idealism

    Previously on MAIHT3K: Episode 26, Universities Anxiously Buy Into the Hype (feat. Chris Gilliard)
    Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp)

    Fresh AI Hell:

    "Streamlining" teaching

    Google, Microsoft and Perplexity are promoting scientific racism in 'AI overviews'

    'Whisper' medical transcription tool used in hospitals is making things up

    X's AI bot can't tell the difference between a bad game and vandalism

    Prompting is not a substitute for probability measurements in large language models

    Yet another 'priestbot'

    Self-driving wheelchairs at Seattle-Tacoma International Airport


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min
  • Episode 44: OpenAI's Ridiculous 'Reasoning'
    Nov 13 2024

    The company behind ChatGPT is back with the bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever-faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance.

    References:

    OpenAI: Learning to reason with LLMs

    • How reasoning works
    • GPQA, a 'graduate-level' Q&A benchmark system

    Fresh AI Hell:

    MIT Technology Review's 'AI hype index'

    CFPB Takes Action to Curb Unchecked Worker Surveillance


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr
  • Episode 43: AI Companies Gamble with Everyone's Planet (feat. Paris Marx), October 21 2024
    Oct 31 2024

    Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in the US.

    Paris Marx is a tech journalist and host of the podcast Tech Won’t Save Us. He also recently launched Data Vampires, a four-part series (which features Alex) about the promises and pitfalls of data centers like the ones AI boosters rely on.

    References:

    Eric Schmidt says AI more important than climate goals

    Microsoft's sustainability report

    Sam Altman's “The Intelligence Age” promises AI will fix the climate crisis

    Previously on MAIHT3K: Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

    Fresh AI Hell:

    Rosetta to linguists: "Embrace AI or risk extinction" of endangered languages

    A talking collar that you can use to pretend to talk with your pets

    Google offers synthetic podcasts through NotebookLM

    An AI 'artist' claims he's losing millions of dollars from people stealing his work

    University hiring English professor to teach...prompt engineering



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min
  • Episode 42: Stop Trying to Make 'AI Scientist' Happen, September 30 2024
    Oct 10 2024

    Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”

    Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.

    Alex and Emily explain why so-called “fully automated, open-ended scientific discovery” can’t live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!

    References:

    Sakana.AI keeps trying to make 'AI Scientist' happen

    • The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

    Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers

    How should the advent of large language models affect the practice of science?

    Relevant research ethics policies:

    ACL Policy on Publication Ethics

    Committee on Publication Ethics (COPE)

    The Vancouver Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work

    Fresh AI Hell:

    Should journals allow LLMs as co-authors?

    Business Insider "asks ChatGPT"

    Otter.ai sends transcript of private after-meeting discussion to everyone

    "Could AI End Grief?"

    AI generated crime scene footage

    "The first college of nursing to offer an MSN in AI"

    FTC cracks down on "AI" claims



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr
  • Episode 41: Sweating into AI Fall, September 9 2024
    Sep 26 2024

    Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.

    Fresh AI Hell:

    Synthetic data for Hollywood test screenings

    NaNoWriMo's AI fail

    • AI is built on exploitation
    • NaNoWriMo sponsored by an AI writing company
    • NaNoWriMo's AI writing sponsor creates bad writing

    AI assistant rickrolls customers

    Programming LLMs with "fiduciary duty"

    Canva increasing prices thanks to "AI" features

    Ad spending by AI companies

    Clearview AI hit with largest GDPR fine yet

    'AI detection' in schools harms neurodivergent kids

    CS prof admits unethical ChatGPT use

    College recruiter chatbot can't discuss politics

    "The AI-powered nonprofits reimagining education"

    Teaching AI at art schools

    Professors' 'AI twins' as teaching assistants

    A teacherless AI classroom

    Another 'AI scientist'

    LLMs still biased against African American English

    AI "enhances" photo of Black people into white-appearing

    Eric Schmidt: Go ahead, steal data with ChatGPT

    The environmental cost of Google's "AI Overviews"

    Jeff Bezos' "Grand Challenge" for AI in environment

    What I found in an AI company's e-waste

    xAI accused of worsening smog with unauthorized gas turbines

    Smile surveillance of workers

    AI for "emotion recognition" of rail passengers

    Chatbot harassment scenario reveals real victim

    AI has hampered productivity

    "AI" in a product description turns off consumers

    Is tripe kosher? It depends on the religion of the cow.


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min
  • Episode 40: Elders Need Care, Not 'AI' Surveillance (feat. Clara Berridge), August 19 2024
    Sep 13 2024

    Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while also playing on ageist and ableist tropes.

    Dr. Clara Berridge is an associate professor at the University of Washington’s School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.

    References:

    Care.Coach's 'Avatar' chat program*

    For Older People Who Are Lonely, Is the Solution a Robot Friend?

    Care Providers’ Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots

    Socio-Digital Vulnerability

    *Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.

    Fresh AI Hell:

    Apple Intelligence hidden prompts include the command "don't hallucinate"

    The US wants to use facial recognition to identify migrant children as they age

    Family poisoned after following fake mushroom book

    It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi

    Dynamic pricing + surveillance hell at the grocery store

    Chinese social media's newest trend: imitating AI-generated videos


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min
  • Episode 39: Newsrooms Pivot to Bullshit (feat. Sam Cole), August 5 2024
    Aug 29 2024

    The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.

    References:

    The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
    Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog

    The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom

    Also: New Washington Post CTO comes from Uber

    The Washington Post debuts AI chatbot, will summarize climate articles.

    Media companies are making a huge mistake with AI

    When ChatGPT summarizes, it does nothing of the kind

    404 Media: 404 Media Now Has a Full Text RSS Feed

    404 Media: Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)


    Fresh AI Hell:

    "AI" Alan Turning

    • Our Opinions Are Correct: The Turing Test is Bullshit (w/Alex Hanna and Emily M. Bender)

    Google advertises Gemini for writing synthetic fan letters

    Dutch judge uses ChatGPT's answers to factual questions in ruling

    Is GenAI coming to your home appliances?

    AcademicGPT (Galactica redux)

    "AI" generated images in medical science, again (now retracted)


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 38: Deflating Zoom's 'Digital Twin,' July 29 2024
    Aug 14 2024

    Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.

    References:
    The CEO of Zoom wants AI clones in meetings

    All-knowing machines are a fantasy

    A reminder of some things chatbots are not good for

    Medical science shouldn't platform automating end-of-life care

    The grimy residue of the AI bubble

    On the phenomenon of bullshit jobs: a work rant

    Fresh AI Hell:
    LA schools' ed tech chatbot misusing student data

    AI "teaching assistants" at Morehouse

    "Diet-monitoring AI tracks your each and every spoonful"

    A teacher's perspective on dealing with students who "asked ChatGPT"

    Are Swiss researchers affiliated with Israeli military industrial complex? Swiss institution asks ChatGPT

    Using a chatbot to negotiate lower prices


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins