
Experiencing Data w/ Brian T. O’Neill

Brian T. O’Neill from Designing for Analytics

124 episodes


    193 - Faster…or Better? Creating Value with Blue Ocean Thinking and AI-Powered Product Development

    28/04/2026 | 24 mins.
    Speed is often confused with good product thinking. The idea is that if teams can ship prototypes, dashboards, and models faster, they will automatically learn faster. But execution speed alone doesn’t ensure a clearer understanding of what’s actually worth building.

    Instead, teams often fall into a loop driven by demo feedback. They present working prototypes, and users respond to what they can see in the form of interface design, visualizations, or surface-level data behavior. While this feedback feels positive, it’s often misleading. Teams can end up reacting to presentation (UI) feedback only to find it does not change propensity to buy or increase user adoption.

    The key idea today is that prototypes can either be used to clarify the problem space and user needs or to validate the solution presented. Where I see most teams fail is that every artifact or prototype is seen as a solution to validate, and they can miss the forest for the trees.

    Another approach borrows from blue ocean thinking, which focuses on creating value by looking for overlooked opportunities in the empty space—beyond the known “problem space” your customer knowingly lives in now. 

    Because AI lets us move so fast with prototyping, I think there is an exciting possibility to explore the blue-ocean spaces where your product could evolve to produce value. 

As always, we seek to go beyond building “technically right, effectively wrong”—which doesn’t make people buy, use, or refer your product to others. Today, we look at how AI can help us see even farther beyond the immediate problem space.

Highlights / Skip to:

    Where the idea for this episode came from (00:46)

    Why faster building of artifacts with AI doesn’t necessarily mean faster market validation (2:09)

    How understanding the problem space results in fewer prototypes being created (5:40)

    Using blue ocean strategy to arrive at new products worth paying for (08:23)

Finding missed market opportunities blocked by cost, tech limits, or risk (12:39)

    How AI-assisted user research fits into blue ocean thinking (14:33)

    The big picture: winners will figure out what’s worth building before they build it (20:42)

    Links

    Contact Designing for Analytics

192 - Product Usage Does Not = Value: Why “Adoption” Metrics Are Misleading You

    15/04/2026 | 46 mins.
I’ve seen this challenge again and again with teams building analytics and AI products: nobody can define what quality means to the end user, or how to measure it. The answer? “Adoption.” The problem is that “amount of usage” tells you nothing useful about your customer’s experience with your product beyond “it’s not zero.” So what should you be measuring instead, so your buyers don’t quickly abandon the product once the end users get their hands on the keyboard (or agent!)?

The answer is to understand, through qualitative measures, what users’ experiences are like now, so you have an objective baseline against which to compare future product investment. When you can define the quality of their current experience, it’s much easier to imagine their better future, and you now have a change you can measure. Measurable outcomes are the foundation of high-value, sticky B2B analytics and intelligence products—and when your end users’ lives are improved, the sales close and the renewals aren’t questioned. So today, I jump into “how do you measure UX?” so you aren’t blindsided when the sale doesn’t close or the renewal doesn’t come through.

    Highlights / Skip to:

Why I think product adoption metrics (i.e., product usage analytics) are misleading as a means to determine whether your solution is valuable to users (1:34)

Getting a better baseline reading of user experience so you can improve users’ lives and your sales/retention KPIs (4:56)

    How to measure, hypothesize, and observe if your product is working “well” (7:35)

    Discovering where your product is being appreciated (20:28)

    What about when AI is in the loop? (23:05)

    The risk of creating bigger messes with AI capabilities (28:20)

    How to gain useful insights from your customer exposure time (31:28)

The quantitative metrics you can use to help measure UX outcomes (36:17)

    Why "ship it and see if it gets used" isn't a product strategy  (40:52)

    Links

    More Resources 

Get 1x1 Help from me if you know your product’s value is opaque or the user experience is hindering your sales or adoption goals

    191 - Turning Agents into Software that Sells [Smarter!] with Zig.ai CEO Steve Ancheta

    01/04/2026 | 42 mins.
I'm talking with Steve Ancheta, CEO of Zig, a platform designed to free sales teams from repetitive, non-revenue-generating tasks. CRM and logistical tasks can consume up to 72% of a sales rep’s week, but Zig’s AI agents handle them so reps can focus on closing deals. Unlike tools built for managers, Zig follows a rep-first design—simple, intuitive, and aligned with the motivation to sell more—while also creating an intelligence layer that preserves institutional knowledge and accelerates onboarding for new hires.

    I wanted to chat with Steve about how he built a product that is both used—and worth paying for—with AI under the hood. Rather than relying on chat prompts, Zig surfaces prioritized tasks in panels and cards, integrates with CRMs and Slack, and builds confidence scores from user interactions. 

    Because Steve comes from the world of sales—and that’s the domain his product sits in—I wanted to explore his “problem clarity” and share that with you, since I often find data and technical founders to be more solution-oriented and lacking in this area.  Steve was an open book with me, and I’m hoping other founders trying to turn analytical complexity into commercial clarity can see how Steve is using AI and agents to make data work for end users—and worth paying for.

    Finally, I also challenge Steve to answer whether Zig.ai is a software company or a services company with a product behind the scenes—a question you might also ask yourself depending on your GTM model.

Highlights / Skip to:

    What is Zig.ai? (00:48)

    When managers see the value of a product but end-users don’t—and how product leaders need to react (5:20)

    What Zig’s UX is like and how it was designed (9:45)

    The sales process and risks salespeople face when demoing Zig (16:12)

    How Zig addressed their time-to-value challenge during the product experience (20:14)

    How Zig found a problem people were willing to pay to solve (24:16)

    We discuss whether an AI product company might be a services company with technology or a traditional software company (24:16)

    The Invisible Intelligence Gap Steve has observed within Zig’s business space (AI and analytics-powered sales tooling) (27:57)

Why Steve isn’t worried about the major CRMs building internal solutions to circumvent third-party tools like Zig (35:37)

    Steve Ancheta’s advice for trying to bring sophisticated data products to market (39:26)

    190 - Why Discovering Valuable Analytics Use Cases for Your Product Is So Hard (Even with AI)

    17/03/2026 | 43 mins.
I’ve seen this pattern repeatedly with teams building analytics and AI products: the issue usually isn’t the quality of the models or the sophistication of the data. The technology often works just fine. The real breakdown happens earlier—when teams begin with the data they already have and try to figure out what to build, instead of starting with the decisions their customers need to make.

That approach often produces polished dashboards and compelling features that generate interest but fail to drive real action. The missing piece is context. Decisions in the real world depend on incentives, habits, risk tolerance, and uncertainty—not just clean data. If your product doesn’t reflect that reality, it won’t meaningfully change behavior.

Another common trap is assuming all available data is *evidence* worth surfacing. This “more is better” mindset leads to cluttered analytics tools that offload interpretation onto users. Even conversational AI interfaces can fall into this, encouraging open-ended exploration without helping users reach decisions.

The analytics and AI products that succeed take a different approach. They’re designed around decision-making to reduce uncertainty, fit into real workflows, and guide users toward clear actions. In doing so, they bridge the gap between analytical capability and real-world value, making the product’s intelligence tangible, usable, and worth paying for.

Highlights / Skip to:

    The core mistake I see people making during the discovery process of building an insights product (2:07)

Improve your product strategy by working “backwards” and understanding what decisions customers are trying to make (6:06)

    Insights don’t equal decisions in the real world (7:39)

Designing with the goal of improving users’ lives (11:17)

    Prototypes as a means of discovery (vs. product/solution validation) (13:48)

    The bias of data availability (20:39)

    Using AI and LLMs for discovery and product UX (24:17)

    Why AI-assisted analytics products should shape UX around making structured decisions (31:03)

Overcoming the Invisible Intelligence Gap (34:57)

Final thoughts (37:21)

    Links

    CED: My UX Framework for Designing Analytics Tools That Drive Decision Making https://designingforanalytics.com/ced  

    Need my help finding the right use cases for your analytics or AI product?  Book a complimentary 1x1 discovery call with me: https://designingforanalytics.com/contact/

    189 - The Invisible Intelligence Gap

    05/03/2026 | 25 mins.
I’ve worked with a lot of teams building analytics and insights products and decision-support systems. The pattern I keep seeing isn’t that the math is wrong or the ML/AI models are weak. Much of the time, the technology is fine.

The challenge is that all that [not always artificial!] intelligence is not surfacing as value to your customer. Dashboards look impressive. AI features demo well. Pilots get strong reactions. And then… usage stalls. Sales cycles drag. Teams quietly revert to spreadsheets. Buyers, or rather, prospective buyers, say they “like the vision,” but deals don’t move into the “closed” stage.

If your gut tells you the primary blocker is not your sales process, pricing/packaging, procurement, data quality, or risk/compliance, then you may be suffering from what I call the Invisible Intelligence Gap.

Your product’s intelligence simply isn’t visible to them. Three forces tend to amplify this gap. First, the value translation gap: buyers and users can’t easily connect insights to their own goals. Second, the workflow alignment gap: the product doesn’t fit how work actually gets done. Third, the trust and control gap: users lack confidence in how the system reaches conclusions. My frameworks like CED, FOWA, and MIRRR are designed to close these gaps by making value obvious, workflows smoother, and AI more trustworthy.

Highlights / Skip to:

    The challenge of insights not providing value to buyers, end-users, and stakeholders (3:20)

    How the invisible intelligence gap manifests itself (6:42)

    Common symptoms of the invisible intelligence gap (8:10) 

Examples of how changes in human behavior cause the gap (10:00)

The three amplifiers of the invisible intelligence gap (11:47)

    The CED framework for addressing the intelligence gap problem (18:28)

    Addressing the invisible intelligence gap with FOWA (20:14)

    Using MIRRR to solve the invisible intelligence gap (21:25)


About Experiencing Data w/ Brian T. O’Neill

Does the value of your insights, analytics, or automated intelligence product sometimes feel invisible to buyers and users? Does your product have impressive analytics and AI technology, but user adoption and sales still are not where you want them to be? While it has never been easier to build data-driven products, why does it still seem so hard to build indispensable data products that users can't live without—and will gladly pay for? I’m Brian T. O’Neill, and on Experiencing Data — a Listen Notes top 2% global podcast — I help founders and B2B software product leaders close the Invisible Intelligence Gap through solo episodes and interviews with leaders at the intersection of product management, UX design, analytics, and AI. If you’re building analytics, BI, or automated intelligence (AI) products, this non-technical show will help you better connect your product to outcomes, value, and the human factors that still matter — even in the age of AI. Subscribe today on all major platforms or browse the episode archive.

Get 1-Page Episode Summaries: https://designingforanalytics.com/experiencing-data-podcast/

About the Host, Brian T. O'Neill: https://designingforanalytics.com/bio/