
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

By: Brian T. O’Neill from Designing for Analytics

About this listen

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be?

While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?

If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies.

I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData.

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS
https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL:
https://designingforanalytics.com/bio/

© 2019 Designing for Analytics, LLC
Art, Economics, Management & Leadership
Episodes
  • 178 - Designing Human-Friendly AI Tech in a World Moving Too Fast with Author and Speaker Kate O’Neill
    Sep 16 2025
    In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design in a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions. Kate stressed the importance of organizations having clear purpose statements and values from the outset, proxy metrics she uses to gauge human-friendliness, and applying a “harms of action vs. harms of inaction” lens for ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.

    Highlights/ Skip to:

    • How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way when building out AI solutions (1:03)
    • Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
    • FOMO and the “Solution in Search of a Problem” problem in Data (5:18)
    • Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
    • Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
    • How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
    • Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)

    Quotes from Today’s Episode

    "I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big word - kind of jargon-y kinds of discussions - that are easy to think aren't important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do." –Kate O’Neill

    "I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, seems as if it sits much more comfortably on the earlier side of the Now-Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously." –Kate O’Neill

    "The reason I chose human-friendly [as a term] over human-centered partly because I wanted to be very honest about the goal and not fall back into, you know, jargony kinds of language that, you know, you and I and the folks listening probably all understand in a certain way, but the CEOs and the folks that I'm necessarily trying to get reading this book and make their decisions in a different way based on it." –Kate O’Neill

    "We love coming up with new names for different things. Like whether something is “cloud,” or whether it’s like, you know, “SaaS,” or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness [to it]. That’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days, with the work that I do, is really trying to help businesses—business leaders, mostly, but a lot of those are non-tech leaders, and I think that’s where this really sticks is that you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background." –Kate O’Neill

    "My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders. There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even ...
    50 mins
  • 177 - Designing Effective Commercial AI Data Products for the Cold Chain with the CEO of Paxafe
    Sep 3 2025

    In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions.

    Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.

    Highlights/ Skip to:

    • Intro to PAXAFE (2:13)
    • How PAXAFE brings tons of cold chain data together in one user experience (2:33)
    • Innovation in cold chain analytics is up, but so is cold chain product loss. (4:42)
    • The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
    • Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
    • How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
    • Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
    • Pharma loses around $40 billion a year relying on ‘Bob’s intuition’ in the warehouse. How PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)
    • Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
    Quotes from Today’s Episode

    "Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time." –Ilya

    "As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions." –Ilya

    "With our own design, our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design." –Ilya

    "We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything." –Ilya

    "If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters; you need to focus entirely on understanding your customers and iterating your product around their needs." –Ilya

    "Don’t build anything on day one, probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems." –Ilya

    Links
    • PAXAFE: https://www.paxafe.com/
    • LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
    • LinkedIn for company: https://www.linkedin.com/company/paxafe/
    49 mins
  • 176 - (Part 2) The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
    Aug 19 2025

    This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up.

    In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers.

    Highlights / Skip to:

    • Introducing the MIRRR UX Framework (1:08)
    • Designing for trust and user adoption, plus the perspectives you should be including when designing systems (2:31)
    • Monitor and interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
    • Explaining “redirection” in the example context of use cases for claims adjusters working on insurance claims—so adjusters (users) can focus on important decisions (4:35)
    • Rerun controls: let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
    • Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
    • Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users—not from analytics (18:28)
    • Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)

    Quotes from Today’s Episode

    "The value of AI isn’t just about technical capability; it’s based in large part on whether the end-users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."

    "In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."

    "Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."

    "Reruns and rollbacks shouldn’t be seen as failures; they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."

    "You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence."

    "You’ll never learn the real reasons behind a team’s choices by only looking at analytics; you have to actually talk to them and watch them work."

    "Labels matter: what you call a button or an action can shape how people interpret and trust what will happen when they click it."

    Links
    • Part 1: The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
    30 mins