Episodes

  • Ravit Dotan: Rethinking AI Ethics
    Nov 6 2025

    Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity but organizational roadblocks. While companies often understand what they should do, organizational dynamics prevent execution: AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work.

    Ravit also discusses a recent shift in her orientation, from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode," in which users passively order finished outputs, undermining their skills and capacity for critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows.

    Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley, was named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch.

    Transcript


    My New Path in AI Ethics (October 2025)

    The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award)

    Responsible AI Maturity Framework

    34 mins
  • Trey Causey: Is Responsible AI Failing?
    Oct 30 2025

    Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible.

    Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with engineering teams. He also explores the idea of "epistemic overreach," where RAI groups have been tasked with an impossibly broad mandate they lack the product-market fit to fulfill. Drawing on his experience in the highly regulated employment sector at Indeed, he details the rigorous, science-based approach his team took to defining and measuring bias, emphasizing the need to move beyond simple heuristics and partner with legal and product teams before analysis even begins.

    Trey Causey is a data scientist who most recently served as the Head of Responsible AI for Indeed. His background is in computational sociology, where he used natural language processing to answer social questions.

    Transcript

    Responsible AI Is Dying. Long Live Responsible AI

    34 mins
  • Caroline Louveaux: Trust is Mission Critical
    Oct 23 2025

    Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards.

    The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly: from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. Caroline emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive.

    Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.

    Transcript

    How Mastercard Uses AI Strategically: A Case Study (Forbes 2024)

    Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)

    As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)

    33 mins
  • Cameron Kerry: From Gridlock to Governance?
    Oct 16 2025

    Cameron Kerry, Distinguished Visiting Fellow at the Brookings Institution and former Acting U.S. Secretary of Commerce, joins Kevin Werbach to explore the evolving landscape of AI governance, privacy, and global coordination. Kerry emphasizes the need for agile and networked approaches to AI regulation that reflect the technology's decentralized nature. He argues that effective oversight must be flexible enough to adapt to rapid innovation while grounded in clear baselines that can help organizations and governments learn together. Kerry revisits his long-standing push for comprehensive U.S. privacy legislation, lamenting the near-passage of the 2022 federal privacy bill that was derailed by partisan roadblocks. Despite setbacks, he remains hopeful that bottom-up experimentation and shared best practices can guide responsible AI use, even without sweeping laws.

    Cameron F. Kerry is the Ann R. and Andrew H. Tisch Distinguished Visiting Fellow in Governance Studies at the Brookings Institution and a global thought leader on privacy, technology, and AI governance. He served as General Counsel and Acting Secretary of the U.S. Department of Commerce, where he led work on privacy frameworks and digital policy. A senior advisor to the Aspen Institute and board member of several policy initiatives, Kerry focuses on building transatlantic and global approaches to digital governance that balance innovation with accountability.

    Transcript

    What to Make of the Trump Administration's AI Action Plan (Brookings, July 31, 2025)

    Network Architecture for Global AI Policy (Brookings, February 10, 2025)

    How Privacy Legislation Can Help Address AI (Brookings, July 7, 2023)

    33 mins
  • Derek Leben: All of Us are Going to Become Ethicists
    Oct 9 2025

    Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today's post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they're predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a system's purpose and qualifications. Getting practical about testing for AI fairness, he distinguishes blunt outcome checks from better metrics, and highlights counterfactual tools that reveal whether a feature actually drives decisions. With regulations uncertain, he urges companies to treat ethics as navigation, not mere compliance: make and explain principled choices (including how you mitigate models), accept that everything you do is controversial, and communicate trade-offs honestly to customers, investors, and regulators. In the end, Leben argues, we all must become ethicists to address the issues AI raises, whether we want to or not.

    Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business, Carnegie Mellon University, where he teaches courses such as "Ethics of Emerging Technologies," "Fairness in Business," and "Ethics & AI." Leben is the author of Ethics for Robots (Routledge, 2018) and AI Fairness (MIT Press, 2025). He founded the consulting group Ethical Algorithms, through which he advises governments and corporations on how to build fair, socially responsible frameworks for AI and autonomous systems.

    Transcript


    AI Fairness: Designing Equal Opportunity Algorithms (MIT Press 2025)

    Ethics for Robots: How to Design a Moral Algorithm (Routledge 2018)

    The Ethical Challenges of AI Agents (Blog post, 2025)

    35 mins
  • Heather Domin: From Principles to Practice
    Oct 2 2025

    Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts.

    Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI.

    Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.

    Transcript

    AI Governance in the Agentic Era

    Implementing Responsible AI in the Generative Age - A Study by HCLTech and MIT

    35 mins
  • Dean Ball: The World Is Going to Be Totally Different in 10 Years
    Sep 25 2025

    Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy.

    Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration's "AI Bill of Rights" approach, which he views as symbolic and wasteful, and the European Union's AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution.

    By contrast, he describes the Trump administration's AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users' behalf.

    Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass.

    Transcript

    https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view

    America's AI Action Plan

    Dean Ball's Hyperdimensional blog

    38 mins
  • David Hardoon: You Can't Outsource Accountability
    Sep 18 2025

    Kevin Werbach interviews David Hardoon, Global Head of AI Enablement at Standard Chartered Bank and former Chief Data Officer of the Monetary Authority of Singapore (MAS), about the evolving practice of responsible AI. Hardoon reflects on his perspective straddling both government and private-sector leadership roles, from designing the landmark FEAT principles at MAS to embedding AI enablement inside global financial institutions. Hardoon explains the importance of justifiability, a concept he sees as distinct from ethics or accountability: organizations must justify their AI use not only to themselves, but also to regulators and, ultimately, the public. At Standard Chartered, he focuses on integrating AI safety and AI talent into one discipline, arguing that governance is not a compliance burden but a driver of innovation and resilience. In the era of generative AI and black-box models, he stresses the need to train people in inquiry: interrogating outputs, cross-referencing, and, above all, exercising judgment. Hardoon concludes by reframing governance as a strategic advantage: not a cost center, but a revenue enabler. By embedding trust and transparency, organizations can create sustainable value while navigating the uncertainties of rapidly evolving AI risks.

    David Hardoon is the Global Head of AI Enablement at Standard Chartered Bank, with over 23 years of experience in Data and AI across government, finance, academia, and industry. He was previously the first Chief Data Officer at the Monetary Authority of Singapore and CEO of Aboitiz Data Innovation.

    MAS FEAT Principles on Responsible AI (2018)

    Veritas Initiative – MAS-backed consortium applying the FEAT principles in practice

    Can AI Save Us From the Losing War With Scammers? Perhaps (Business Times, 2024)

    Can Artificial Intelligence Be Moral? (Business Times, 2021)

    37 mins