Episodes

  • Var Shankar: AI Governance for Smaller Organizations
    May 7 2026

    Var Shankar makes the case that most AI governance guidance is built for large, sophisticated, multifunctional global enterprises — and that this leaves out the roughly half of American workers employed at organizations with fewer than 500 people. Through the Council on AI Governance, the nonprofit he leads with Alexis Cook, he is trying to fill that gap with open, current, and pragmatic resources, including an AI Governance Playbook organized around four focus areas: strategy, risk and compliance, workforce literacy, and operational management. He tells Kevin that the case for AI governance no longer needs to be made; what smaller organizations now need is help asking vendors the right questions and clarifying who owns what internally when a few people are doing many jobs.
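
    To make the playbook's structure concrete, here is a minimal sketch in Python of how a small organization might track ownership across the four focus areas; the area names come from the episode, while the owners and vendor questions are hypothetical illustrations, not drawn from the playbook itself:

      # Hypothetical sketch: the playbook's four focus areas mapped to an
      # internal owner and an example vendor question. Area names are from
      # the episode; owners and questions are illustrative only.
      PLAYBOOK = {
          "strategy": {
              "owner": "CEO",
              "vendor_question": "What business problem does this tool solve for us?",
          },
          "risk_and_compliance": {
              "owner": "General Counsel",
              "vendor_question": "Where does our customer data go, and who can access it?",
          },
          "workforce_literacy": {
              "owner": None,  # the area Var says is most often neglected
              "vendor_question": "What hands-on training do you provide for end users?",
          },
          "operational_management": {
              "owner": "IT Lead",
              "vendor_question": "How do we monitor the system after deployment?",
          },
      }

      def unowned_areas(playbook: dict) -> list[str]:
          """Surface the 'who owns what' gaps Var describes in small teams."""
          return [area for area, cfg in playbook.items() if cfg["owner"] is None]

      print(unowned_areas(PLAYBOOK))  # ['workforce_literacy']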

    The conversation then turns to the parts of the field Var thinks are most undercooked. Workforce literacy, he argues, is the focus area most often neglected because it functions as a vitamin rather than a painkiller — long-term, hard to resource, and easy to reduce to a training module when what is actually needed is hands-on involvement in pilots and documentation. He explains why healthcare offers an unusually strong foundation for AI assurance, with its existing regulatory architecture, comfort with use-case variability, and tradition of post-deployment monitoring, and he describes assurance itself as the connective tissue between an organization and the outside world — distinct from regulation and from internal governance, not a substitute for either. Drawing on a pilot he co-authored a paper on with the Standards Council of Canada, which tested system-level certification at a Canadian bank, he highlights two surprising lessons: that even simplified certification criteria get interpreted differently by different actors, and that even one of the world's most forward-thinking public standards bodies lacked the technical capacity to play standard-setter for something as dynamic as an AI system. He closes with practical advice for risk and compliance professionals: start with the positive vision of what the organization is trying to do with AI, observe how existing IT, data, and security governance already work, and identify which standards ecosystems the organization is already plugged into.

    Var Shankar is Executive Director of the Council on AI Governance, an independent nonprofit developing open AI governance resources for organizations of all sizes. He previously served as Executive Director of the Responsible AI Institute and as Chief AI and Privacy Officer at Enzai, a regtech AI compliance startup. An attorney by training and a graduate of Harvard Law School, he practiced law at Cravath, Swaine & Moore and earlier worked on the Clinton Global Initiative and with the government of British Columbia on digital government and COVID response. He teaches AI governance at Purdue, where he has helped develop a master's-level AI auditing program, and serves on the OECD Network of Experts on AI, the World Economic Forum's AI Governance Alliance, and the Brookings Forum for Cooperation on AI. He co-developed Kaggle's Intro to AI Ethics course with Alexis Cook.

    Transcript

    Council on AI Governance: AI Governance Playbook

    Context-specific certification of AI systems: a pilot in the financial industry (AI and Ethics, 2025)

    Standards Council of Canada AI accreditation pilot

    29 mins
  • Katie Fowler (Thomson Reuters Foundation): How 3,000 Companies Approach AI Governance
    Apr 30 2026

    Good data about how companies are implementing AI governance programs is essential both for organizations to benchmark their efforts and for observers to understand the state of development. In this episode, Katie Fowler, Director of Responsible Business at the Thomson Reuters Foundation, joins Kevin Werbach to discuss the findings of Responsible AI in Practice, a new report drawing on a global dataset of roughly 3,000 companies across 13 sectors.

    Fowler unpacks the report's central finding: an enormous gap between corporate AI ambition and operational governance, with 44 percent of companies reporting an AI strategy but only 13 percent publicly committing to a formal governance framework. She argues that the gap is structural rather than just a disclosure failure, noting that AI expertise often sits deep within technical teams rather than at the leadership levels responsible for organization-wide rollout. She points to striking regional variation in workforce protections and to the EU AI Act's emergence as a de facto global reference framework even outside Europe, and she pushes back on the narrative that regulation stifles innovation. Looking forward, she discusses how investors are using transparency as a proxy for risk management in the absence of mature responsible AI metrics, and outlines the long-term vision of building a dataset robust enough to support a responsible AI index tied to financial materiality.
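
    For concreteness, the headline gap is simple arithmetic over the dataset; here is a small sketch using the figures quoted above, treating the dataset as exactly 3,000 companies for illustration:

      # The report's ambition-governance gap, restated numerically using the
      # percentages quoted above; company count simplified to exactly 3,000.
      companies = 3_000
      with_strategy = round(0.44 * companies)   # 44% report an AI strategy
      with_framework = round(0.13 * companies)  # 13% publicly commit to a governance framework

      print(f"strategy: {with_strategy}, framework: {with_framework}, "
            f"gap: {with_strategy - with_framework} companies")
      # strategy: 1320, framework: 390, gap: 930 companies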

    Katie Fowler is Director of Responsible Business at the Thomson Reuters Foundation, the independent charity affiliated with Thomson Reuters. She leads initiatives including the Workforce Disclosure Initiative (a global platform collecting survey data on how companies treat workers across their direct operations and supply chains) and the AI Company Data Initiative, launched in partnership with UNESCO. Before joining the Foundation, Fowler held leadership roles at The Social Innovation Partnership and Chance for Childhood.

    Transcript

    Responsible AI in Practice: 2025 Global Insights from the AI Company Data Initiative

    Why a Companywide Effort Is Key to Responsible and Trustworthy AI Adoption (Katie Fowler, techUK guest blog, 2025)

    38 mins
  • Henry Ajder, Latent Space Advisory: Deepfakes and the Crisis of Digital Trust
    Apr 23 2026

    AI-generated deepfakes are exploding in volume and quality, posing frightening challenges for public discourse, security, safety, and more. My guest, Henry Ajder, has been mapping the deepfake landscape since before most people had heard the term. In this conversation, he describes the dramatic changes in realism, efficiency, accessibility, and functionality of synthetic media tools since he published the first comprehensive census of deepfakes in 2019. Ajder characterizes the current moment as one of "epistemic nihilism," where people cannot reliably distinguish real from synthetic content and the available technological responses cannot yet restore categorical trust. He introduces a framework of "deception, doubt, and degradation" for understanding deepfake harms, and draws a distinction between the clearly malicious, the clearly beneficial, and a vast unsettling middle ground of uses that society has not yet figured out how to evaluate.

    On the response side, Ajder warns that media literacy advice is not just outdated but actively harmful, because it gives people false confidence in their ability to spot fakes. Detection tools, watermarking, and content provenance standards like C2PA, while valuable, each have real limitations. Ajder's practical advice for organizations centers on red-teaming, understanding what your tool is actually for and who it serves, and recognizing that authenticity is a strategic asset in a synthetic age.
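
    As a rough illustration of the provenance idea behind standards like C2PA, the core primitive is binding content to a verifiable origin so that tampering is detectable. The sketch below is not the C2PA format (which uses certificate-based signatures); it is a generic keyed-hash stand-in with a hypothetical key:

      import hashlib
      import hmac

      # Hypothetical signing key; real provenance standards like C2PA use
      # public-key certificates rather than a shared secret.
      SECRET_KEY = b"publisher-signing-key"

      def sign_content(content: bytes) -> str:
          """Produce an origin tag: a keyed hash bound to the exact bytes."""
          return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

      def verify_content(content: bytes, tag: str) -> bool:
          """Fails if the content changed after signing -- the property
          provenance schemes rely on to make tampering detectable."""
          return hmac.compare_digest(sign_content(content), tag)

      original = b"...image bytes..."
      tag = sign_content(original)
      print(verify_content(original, tag))          # True
      print(verify_content(b"edited bytes", tag))   # False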

    Henry Ajder is the founder of Latent Space Advisory and one of the world's foremost experts on deepfakes and generative AI. He authored the landmark 2019 State of Deepfakes report, and has since advised organizations including Meta, Adobe, the UK Government, the EU Commission, the US FTC, and the World Economic Forum. He co-leads the University of Cambridge's Generative AI in Business programme, and sits on Meta's Reality Labs Advisory Council.

    Transcript

    Latent Space Advisory

    The State of Deepfakes: Landscape, Threats, and Impact (2019)

    The Future Will Be Synthesised (BBC Radio 4 Documentary Series, 2022)

    39 mins
  • Phil Dawson, Armilla AI: Insurance for AI Risks
    Apr 16 2026

    Could a private insurance market play a significant role in compensating for AI-related harms and incentivizing companies to engage in more effective AI governance?

    Phil Dawson of Armilla AI explains why AI insurance is emerging as a distinct product category, why traditional policies aren't effective at addressing AI risks, and what AI insurance actually covers. Dawson details Armilla's journey from AI testing and assurance provider to managing general agent for AI insurance policies, arguing that the company's AI audit experience gave it the risk data and evaluation capabilities needed to underwrite AI systems. A key turning point, he says, was realizing that as companies received reports showing how their models performed or underperformed, they became more concerned about risk, and insurance emerged as the next logical step to build trust.

    Dawson identifies the absence of claims data as the central challenge for AI underwriting, which forces insurers to rely on proxy signals. He argues that policymakers can help by incentivizing transparency, disclosure, and third-party assessment. Drawing on lessons from cyber insurance, Dawson contends that risk-based pricing must be grounded in system-level governance evaluation. He also describes Armilla's partnership program, which connects insured companies with AI governance platforms, auditing firms, and certification bodies, ultimately driving improved AI governance maturity across the sector.
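
    One way to picture underwriting on proxy signals rather than claims data is a governance-weighted premium. The sketch below is purely illustrative; the signals, weights, and pricing formula are hypothetical and are not Armilla's underwriting model:

      # Hypothetical premium pricing from governance proxy signals, in the
      # absence of claims data. All signals, weights, and rates are invented.
      BASE_RATE = 0.05  # 5% of the coverage limit before any adjustment

      signals = {   # each scored 0.0 (absent) to 1.0 (strong)
          "third_party_assessment": 1.0,
          "pre_deployment_testing": 0.8,
          "post_deployment_monitoring": 0.5,
          "incident_response_plan": 0.0,
      }
      weights = {
          "third_party_assessment": 0.4,
          "pre_deployment_testing": 0.3,
          "post_deployment_monitoring": 0.2,
          "incident_response_plan": 0.1,
      }

      maturity = sum(signals[k] * weights[k] for k in signals)  # 0.74 here
      rate = BASE_RATE * (1.5 - maturity)  # stronger governance, cheaper cover

      limit = 5_000_000
      print(f"maturity {maturity:.2f} -> premium ${limit * rate:,.0f}")
      # maturity 0.74 -> premium $190,000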

    Philip Dawson is Head of AI Policy and Partnerships at Armilla AI, an MGA and Lloyd's coverholder that provides dedicated AI insurance products. A lawyer and public policy adviser, he has spent nearly a decade working on AI governance, including early involvement in the drafting of the OECD AI Principles and roles at Element AI, the United Nations, and the Harvard Kennedy School's Carr Center for Human Rights Policy.

    Transcript

    Ready or Not: The Impact of Artificial Intelligence on Insurance Risks (Armilla AI and Lockton, February 2026)

    Armilla AI Raises Lloyd's-Backed Coverage to $25M as Traditional Insurers Retreat from AI Risk (Fintech Finance News, January 22, 2026)

    Gen AI Risks for Businesses: Exploring the Role for Insurance (Geneva Association, October 2, 2025)

    30 mins
  • Walter Haydock, StackAware: In Search of AI Governance Certification
    Apr 9 2026

    Walter Haydock draws a direct line from military risk management to the enterprise AI challenge. He argues that organizations need to stop doing "math with colors" and move toward quantitative assessment that assigns dollar values to potential AI failures. Much of the conversation in this episode focuses on ISO 42001, the global standard for AI management systems, which Haydock has championed and whose certification process his own firm has gone through. He lays out a three-part taxonomy of AI governance frameworks: legislation you either comply with or don't, voluntary self-attestable frameworks like the NIST AI RMF, and externally certifiable standards like ISO 42001 that bring independent verification.
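
    Haydock's "math with colors" point is the contrast between qualitative heat maps and quantitative estimates. A minimal sketch of the quantitative alternative is annualized loss expectancy, probability times dollar impact per failure scenario; the scenarios and figures below are hypothetical:

      # Quantitative AI risk assessment sketch: instead of red/yellow/green
      # ratings, each failure scenario gets an annualized loss expectancy
      # (ALE) = annual probability x dollar impact. Figures are invented.
      scenarios = [
          # (scenario, annual probability, impact in dollars)
          ("chatbot leaks customer PII",          0.10, 2_000_000),
          ("model drift degrades loan decisions", 0.25,   600_000),
          ("vendor outage halts an AI workflow",  0.40,   150_000),
      ]

      for name, prob, impact in scenarios:
          print(f"{name}: ALE ${prob * impact:,.0f}")

      total = sum(prob * impact for _, prob, impact in scenarios)
      print(f"total expected annual loss: ${total:,.0f}")  # $410,000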

    Haydock outlines a forward-looking vision in which certification, insurance, and legal safe harbors reinforce one another. Machine-readable audit data will eventually allow insurers to make informed underwriting decisions about AI risk, reducing uncertainty for both enterprises and their customers. As he acknowledges, though, we are still far from that environment: AI audits today remain roughly 90% manual.
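
    The machine-readable audit data he envisions might look like a structured record an insurer could ingest directly. The schema below is hypothetical; no such standard exists today:

      import json

      # Hypothetical machine-readable audit record; field names are invented
      # for illustration, since no standard schema exists yet.
      audit_record = {
          "system_id": "claims-triage-model-v3",
          "framework": "ISO/IEC 42001",
          "audit_date": "2026-04-01",
          "controls_assessed": 38,
          "controls_passing": 35,
          "open_findings": [
              {"id": "F-12", "severity": "medium", "area": "post-deployment monitoring"},
          ],
          "manual_review_share": 0.9,  # the roughly 90% manual figure cited above
      }

      print(json.dumps(audit_record, indent=2))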

    Walter Haydock is the founder of StackAware, which helps AI-powered companies manage security, compliance, and privacy risk. Before entering the private sector, he served as a reconnaissance and intelligence officer in the U.S. Marine Corps, as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, and as an analyst at the National Counterterrorism Center. He is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.

    Transcript

    Deploy Securely (Haydock's Substack)

    33 mins
  • Richa Kaul, Complyance: Asking the Right Questions
    Apr 2 2026

    Richa Kaul breaks down the AI risk landscape for enterprises and argues that the key to managing those risks is resisting the urge to sensationalize. Kaul offers a candid assessment of where enterprise AI governance committees are falling short, noting that many lack the technical fluency to ask vendors the right questions, such as where customer data goes, whether it trains other clients' models, and what specific steps reduce hallucination. She suggests that market-driven security standards like SOC 2 and ISO 27001 often matter more in practice than government regulation, creating a "beautiful ecosystem" where risk management runs ahead of the law. Looking forward, she addresses the growing challenge of agentic AI systems that make decisions autonomously, offering a deceptively simple prescription: map every action an agent can take, know where your highest risk sits, identify the critical decision points, and demand human sign-off at each one.
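
    Kaul's prescription maps naturally onto code: enumerate the agent's actions, mark where the highest risk sits, and gate the critical ones behind human approval. A minimal sketch, with hypothetical action names and risk tiers:

      # Sketch of Kaul's prescription for agentic systems: map every action,
      # know the risk of each, and demand human sign-off at the critical ones.
      # Action names and risk tiers are hypothetical.
      ACTION_RISK = {
          "draft_email": "low",
          "query_database": "low",
          "update_customer_record": "high",
          "issue_refund": "high",
      }

      def execute(action: str, payload: dict, approved_by_human) -> str:
          if action not in ACTION_RISK:
              raise ValueError(f"unmapped action: {action}")  # unmapped = ungoverned
          if ACTION_RISK[action] == "high" and not approved_by_human(action, payload):
              return f"{action}: blocked pending human sign-off"
          return f"{action}: executed"

      deny_all = lambda action, payload: False  # stand-in for a real approval flow
      print(execute("draft_email", {}, deny_all))                # executed
      print(execute("issue_refund", {"amount": 500}, deny_all))  # blocked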

    Richa Kaul is the founder and CEO of Complyance, an AI-native enterprise governance, risk, and compliance (GRC) platform. Before founding Complyance, she was Chief Strategy Officer at ContractPodAi, a legal technology company, and previously served as Managing Director at the Virginia Economic Development Partnership and as a management consultant at McKinsey.

    Transcript

    Complyance Raises $20M to Help Companies Manage Risk and Compliance (TechCrunch, February 11, 2026)

    33 mins
  • Michael Horowitz, UPenn: Governing AI That's Designed to Kill
    Mar 26 2026

    How AI is, could be, and shouldn't be used in military and other national security contexts is a topic of growing importance. Recent conflicts on the battlefield, and between the U.S. military and a major AI lab, are forcing conversations about legal, ethical, and appropriate business limitations for increasingly powerful AI tools. Michael Horowitz, a Political Science professor and Director of Perry World House at the University of Pennsylvania, is one of the world's leading experts on military AI and autonomous weapons. In this episode, drawing on his two stints in the U.S. Department of Defense, Horowitz walks through the major buckets of military AI use. He explains why militaries are, in some ways, more incentivized than any other institution to get AI governance right, even as genuine tensions among speed, effectiveness, and meaningful human control make responsible military AI difficult in practice. We cover Anthropic's recent dispute with the Pentagon as a case study in the fragile and increasingly consequential relationship between Silicon Valley and the defense establishment.

    Michael C. Horowitz is the Richard Perry Professor of Political Science and Director of Perry World House at the University of Pennsylvania, and a Senior Fellow for Technology and Innovation at the Council on Foreign Relations. From 2022 to 2024, he served as U.S. Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities, where he was the principal author of the U.S. Political Declaration on Responsible Military Use of AI and Autonomy. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics and co-author of Why Leaders Fight.

    Transcript

    Battles of Precise Mass: Technology Is Remaking War — and America Must Adapt (Foreign Affairs, 2024)

    The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons (Daedalus, 2016)

    Rules of Engagement (Penn Gazette, 2025)

    34 mins
  • Tanvi Singh, Ekta AI: The Case for Sovereign AI
    Mar 19 2026

    Tanvi Singh draws on over two decades of building and governing AI systems inside global banks to make a provocative case: you cannot be accountable for decisions you do not control. Enterprises are consuming intelligence through models they don't own, can't explain, and didn't train. Singh reframes sovereignty as extending beyond data center locations and infrastructure to control across the entire stack, so that an organization's AI reflects its own values, laws, and culture. While frontier LLMs will continue to dominate the consumer and retail market, she argues that domain-specific models will be important for enterprise and regulated use cases, offering better accuracy at dramatically lower cost. The conversation also touches on Singh's engagement with the Vatican's Pontifical Academy of Sciences around AI ethics, including work on benchmarks that reflect institutional values rather than defaulting to the cultural norms baked into large internet-trained models.

    Tanvi Singh is the Co-Founder and CEO of Ekta Inc., a sovereign AI platform company building domain-specific foundation models for governments and regulated industries. She previously served as Group Head of AI, Data & Analytics at UBS and held senior technology leadership roles at Credit Suisse, GE, and Monsanto. She is the founder and managing partner of Nirmata-ai Ventures, a Zurich-based deep-tech venture fund, and serves as a board member of the Global Blockchain Business Council and GirlsCanCode.

    Transcript

    Sovereign AI: Why States and Institutions Have to Take Back Their Digital Intelligence (HSToday, co-authored with Thomas Cellucci)

    Ekta AI

    33 mins