Var Shankar makes the case that most AI governance guidance is built for large, sophisticated, multifunctional global enterprises — and that this leaves out the roughly half of American workers employed at organizations with fewer than 500 people. Through the Council on AI Governance, the nonprofit he leads with Alexis Cook, he is trying to fill that gap with open, current, and pragmatic resources, including an AI Governance Playbook organized around four focus areas: strategy, risk and compliance, workforce literacy, and operational management. He tells Kevin that the case for AI governance no longer needs to be made; what smaller organizations now need is help asking vendors the right questions and clarifying who owns what internally when a few people are doing many jobs.
The conversation then turns to the parts of the field Var thinks are most undercooked. Workforce literacy, he argues, is the focus area most often neglected because it functions as a vitamin rather than a painkiller — long-term, hard to resource, and easy to reduce to a training module when what is actually needed is hands-on involvement in pilots and documentation. He explains why healthcare offers an unusually strong foundation for AI assurance, with its existing regulatory architecture, comfort with use-case variability, and tradition of post-deployment monitoring, and he describes assurance itself as the connective tissue between an organization and the outside world — distinct from regulation and from internal governance, not a substitute for either. Drawing on a pilot with the Standards Council of Canada that he co-authored a paper on, testing system-level certification at a Canadian bank, he highlights two surprising lessons: that even simplified certification criteria get interpreted differently by different actors, and that even one of the world's most forward-thinking public standards bodies lacked the technical capacity to play standard-setter for something as dynamic as an AI system. He closes with practical advice for risk and compliance professionals: start with the positive vision of what the organization is trying to do with AI, observe how existing IT, data, and security governance already work, and identify which standards ecosystems the organization is already plugged into.
Var Shankar is Executive Director of the Council on AI Governance, an independent nonprofit developing open AI governance resources for organizations of all sizes. He previously served as Executive Director of the Responsible AI Institute and as Chief AI and Privacy Officer at Enzai, a regtech AI compliance startup. An attorney by training and a graduate of Harvard Law School, he practiced law at Cravath, Swaine & Moore and earlier worked on the Clinton Global Initiative and with the government of British Columbia on digital government and COVID response. He teaches AI governance at Purdue, where he has helped develop a master's-level AI auditing program, and serves on the OECD Network of Experts on AI, the World Economic Forum's AI Governance Alliance, and the Brookings Forum for Cooperation on AI. He co-developed Kaggle's Intro to AI Ethics course with Alexis Cook.
Transcript
Council on AI Governance: AI Governance Playbook
Context-specific certification of AI systems: a pilot in the financial industry (AI and Ethics, 2025)
Standards Council of Canada AI accreditation pilot