Episodes

  • EP272 More Than Just Packets: Is NDR a "First-Class" Cloud Security Control?
    Apr 13 2026

    Guests:

    • Raja Mukerji, Co-Founder & Chief Scientist, ExtraHop
    • Rafal Los, VP of Client Relations and Strategic Initiatives, ExtraHop

    Topics:

    • Is Network Detection and Response (NDR) making a comeback after being shoved aside by EDR? Is this for real?
    • What's the value proposition of NDR in 2026? Some people still don't understand it. How does NDR apply to the world of WFH, cloud/SaaS, encryption, high bandwidth, etc.?
    • Is the value of NDR the same, or different, when it comes to public (or private) cloud?
    • How does NDR fill visibility gaps that identity and agent-based solutions cannot?
    • What does NDR offer that built-in cloud security tooling (as of right now) does not? Would you call NDR a key cloud security control?
    • Does NDR help with shadow AI?
    • The elephant in the room for NDR is sometimes cost. How does cost change the value prop compared to on-premise or physical infrastructure?

    Resources:

    • Video version
    • EP267 AI SOC or AI in a SOC? Cutting Through Hype, Pricing Models, and SIEM Detection Efficacy with Raffy Marty
    • EP113 Love it or Hate it, Network Security is Coming to the Cloud
    • EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
    • EP115 How to Approach Cloud in a Cloudy Way, not As Somebody Else's Computer?
    • EP263 SOC Refurbishing: Why New Tools Won't Fix Broken Processes (Even With AI)
    • "The GC+CISO Connection Book" book
    34 mins
  • EP271 Can AI-Native MDR Actually Fix Your Broken SOC Workflows or Just Automate the Mess?
    Apr 9 2026

    Guests:

    • Eric Foster, CEO, Tenex.AI
    • Bashar Abouseido, President, Tenex.AI

    Topics:

    • "10X SOC" sounds great. But for an organization stuck in "SIEM 1.0" with poor data quality and manual workflows, is "AI-native MDR" a "leapfrog" opportunity or a recipe for disaster?
    • We've seen the rise of "Decoupled SIEM" and security data lakes. Does a "Modern SIEM" even need to exist if an MDR platform has an agentic layer doing the heavy lifting?
    • You've argued for AI-native over AI-bolted-on. For an end user, what are the tangible differences of using "AI inside a legacy SIEM" versus using an "AI-native separate product"?
    • What is the one task you thought AI would handle by now that still requires a senior human analyst to step in?
    • If a CISO is using an AI MDR, "Mean Time to Detect" (MTTD) starts to look like a vanity metric because the machine is instant. What is the new golden metric for an AI-powered SOC? Is it "Time to Context," "Reduction in Human Toil," or something else?
    • How do you help a skeptical SOC Manager—who has been burned by false positives for a decade—trust an autonomous agent to perform a "containment" action at 3:00 AM?

    Resources:

    • EP227 AI-Native MDR: Betting on the Future of Security Operations?
    • EP10 SIEM Modernization? Is That a Thing?
    • The original "10X" paper "Autonomic Security Operations: 10X Transformation of the Security Operations Center"
    27 mins
  • EP270 The Convenience Tax: Why We Keep Failing at Supply Chain Security
    Apr 6 2026

    Guest:

    • Dan Lorenc, Founder / CEO, Chainguard

    Topics:

    • We just saw a security tool (Trivy) get used to pop an AI infrastructure tool (LiteLLM) to eventually pop end users. Have we reached the point where our security tooling is actually our largest unmanaged attack surface?
    • Why now? Software supply chain security had the perennial vibe of "not top concern" for most organizations, right?
    • TeamPCP pushed malicious code to existing GitHub tags. We've been screaming about pinning versions to SHAs for years, but clearly nobody is listening. Is it time to admit that 'convenience' is the primary enemy of supply chain security?
    • The Axios incident showed a victim compromised in under two minutes. In a world of auto-updating dependencies, is the concept of a human-in-the-loop for software updates officially dead, or do we need to look very hard at version pinning and such?
    • With the XZ Utils case, we saw a long-game social engineering attack. Beyond just 'watching npm closely,' what are the realistic architectural safeguards for an org that knows they can't audit every line of an update?
    • We've spent the last three years talking about SBOMs (Software Bill of Materials) like they were a pill for supply chain health. But if the scanner producing the SBOM is the one that's compromised, isn't the SBOM just a signed receipt for your own house being on fire?
    • What is the one practical thing teams can do to ensure their CI/CD isn't a credential-exfiltration-as-a-service platform?

    Resources:

    • Video version
    • North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package in Supply Chain Attack
    • EP100 2022 Accelerate State of DevOps Report and Software Supply Chain Security
    • EP116 SBOMs: A Step Towards a More Secure Software Supply Chain
    • EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
    • EP24 Linking Up The Pieces: Software Supply Chain Security at Google and Beyond
    • Matt Levine blog
    27 mins
  • EP269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI
    Mar 30 2026

    Guests:

    • No guests! Just Tim and Anton

    Topics:

    • Hard to believe we've been doing these since 2022, is that right?
    • What did we see this year at RSA, apart from AI? And more AI? And more AI?
    • What framework can we use to understand the approaches vendors take to AI and security? Just saying "AI washing" is not enough!
    • How to tell "AI washer" from "AI tourist"?
    • I sense that "securing AI" (and agents) is finally growing as fast as "using AI for security", do you agree?
    • Is the AI vulnerability apocalypse coming? Soon?
    • Have we seen any signs of AI backlash?

    Resources:

    • Video version
    • EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
    • RSA 2025: AI's Promise vs. Security's Past — A Reality Check blog
    • EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
    • EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About
    • EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future
    • EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
    • EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance
    33 mins
  • EP268 Weaponizing the Administrative Fabric: Cloud Identity and SaaS Compromise in M-Trends 2026
    Mar 23 2026

    Guests:

    • Kelli Vanderlee, Senior Manager, Threat Analysis, Mandiant, Google Cloud
    • Scott Runnels, Mandiant Incident Response, Google Cloud

    Topics:

    • Do we need to rethink "Mean Time to Respond" entirely, or are we just in deep trouble?
    • Why are threat groups collaborating so well, and are there actual lessons for defenders in their "business" model?
    • What is the scalable advice for teams worried about voice phishing and GenAI cloning?
    • What does "weaponizing the administrative fabric" actually mean in a world where identity is the perimeter?
    • Why is identity/SaaS compromise "news" in 2026 when cloud security folks have been shouting about it for years? What actually changed?
    • What's the latest in supply chain compromise, particularly regarding malicious open-source packages?
    • How do we defend against malware that is "lazy" enough to use the victim's own AI tools for reconnaissance?
    • What is the specific advice for Detection and Response (D&R) teams to handle "living off the land" (or "living off the cloud")?
    • How do you fix the situation when IT and Security departments genuinely hate each other?
    • Besides reading the report, what is the one book or piece of advice for a CISO to survive this year?

    Resources:

    • Video version
    • M-Trends 2026 Report
    • EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
    • EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation
    • EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality
    • EP147 Special: 2024 Security Forecast Report
    • "The Evolution of Cooperation" book
    34 mins
  • EP267 AI SOC or AI in a SOC? Cutting Through Hype, Pricing Models, and SIEM Detection Efficacy with Raffy Marty
    Mar 16 2026

    Guest:

    • Raffael Marty, Operating Advisor, a SIEM legend since 1992

    Topics:

    • You argue that declaring existing SIEMs obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point and the actual gap in traditional SIEMs, as opposed to the more sensational claims?
    • You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one?
    • You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a "swappable" component, and what should SIEM vendors have done differently years ago to prevent this market from existing?
    • This "AI SOC" thing, is this even real? Is AI in a SOC a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR?
    • If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges?
    • You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?

    Resources:

    • Video version
    • The SIEM Maturity Framework: A Practical Scoring Tool for Security Analytics Platforms
    • The Gaps That Created the New Wave of SIEM and AI SOC Vendors
    • How AI Impacts the Cyber Market and The Future of SIEM
    • Why Venture Capital Is Betting Against Traditional SIEMs
    • EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
    • EP234 The SIEM Paradox: Logs, Lies, and Failing to Detect
    • EP125 Will SIEM Ever Die: SIEM Lessons from the Past for the Future
    • Decoupled SIEM: Brilliant or Stupid?
    • Decoupled SIEM: Where I Think We Are Now?
    36 mins
  • EP266 Resetting the SOC for Code War: Allie Mellen on Detecting State Actors vs. Doing the Basics
    Mar 9 2026

    Guest:

    • Allie Mellen, Principal Analyst @ Forrester, author of "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield"

    Topics:

    • Your book focuses on the US, China, and Russia. When you were planning the book, did you also want to cover players like Israel, Iran, and North Korea?
    • Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their "digital battlefield" strategies, does the "shared responsibility model" actually hold up against a nation-state actor?
    • How does a company's detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats?
    • How afraid are you of "bad guy with AI" scenarios? Mild anxiety or apocalyptic fears?
    • Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models?
    • You've spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at (say) a mid-tier logistics company, should 'nation-state cyberattacks' even be on their threat model? Or is worrying about the spies just a form of security theater when they haven't even solved basic credential theft yet?

    Resources:

    • Video version
    • "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield" by Allie Mellen
    • Allie Mellen substack
    • The source for the original "air defense on the roof" argument (2008)
    • EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
    • EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance
    • EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
    • "Disrupting the first reported AI-orchestrated cyber espionage campaign" report
    33 mins
  • EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!
    Mar 2 2026

    Guest:

    • Alastair Paterson, CEO and co-founder @ Harmonic Security

    Topics:

    • Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
    • AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
    • If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
    • Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
    • The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), yet you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
    • Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
    • Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?

    Resources:

    • Video version
    • Harmonic Security research
    • Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI blog
    • Shadow Agents: A New Era of Shadow AI Risk in the Enterprise blog (RSA 2026 presentation coming!)
    • Spotlighting 'shadow AI': How to protect against risky AI practices blog
    • EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (aka "dirty bomb episode")
    • A Conversation with Alastair Paterson from Harmonic Security video
    29 mins