Chapter 2

The Experience Analytics Framework: Intent, Journey, Outcome

A new measurement model for AI products. Learn the three phases of every conversational experience and why funnels fail for dynamic interactions.

Brixo Team
7 min read

Why AI Products Need a Different Framework

Funnels were designed for deterministic flows. AI experiences are probabilistic and dynamic. The measurement model has to match the interaction model.

Traditional SaaS products operate on designed paths. Product teams build the flow: Sign up → Onboard → Activate → Convert. Users follow the designed path or drop off at known points. Each step is predefined. The funnel reflects actual user behavior because behavior is constrained by design.

AI products operate differently. Large language models are probabilistic systems. They do not follow fixed rules or decision trees. Instead, they predict the most likely next token based on patterns learned from training data. Each response is generated through a sequence of probability calculations, not a predetermined script.

This means the same input can produce different outputs. Ask an LLM the same question twice, and you may get two different answers. The model is not retrieving a stored response. It is generating one based on statistical likelihood.
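As a toy illustration (not a real model), this sampling behavior can be sketched in a few lines of Python. The token names and probabilities below are invented; a real LLM samples from a distribution over tens of thousands of tokens at every step.

```python
import random

# Toy illustration: next-token generation is sampling from a probability
# distribution, so the same prompt can yield different continuations.
# These tokens and probabilities are invented for demonstration.
next_token_probs = {"refund": 0.5, "credit": 0.3, "escalate": 0.2}

def sample_next_token(probs, rng):
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded: repeated runs may differ
samples = {sample_next_token(next_token_probs, rng) for _ in range(50)}
print(samples)  # very likely more than one distinct token
```

Run the sampler fifty times on the same "prompt" and you almost certainly see more than one answer, which is the same reason two identical customer questions can produce two different conversations.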

When customers arrive with intent and express it in natural language, their journey emerges from the conversation. No predefined journey or steps exist. Ten customers with the same goal take ten different routes to get there, and even the same customer asking the same question twice may have a different experience.

Phase 1: Intent

Intent is the goal the customer is trying to accomplish. It is expressed at the start of the conversation or inferred from context. It is the foundation for evaluating everything that follows.

What to measure: What intents are customers arriving with? How do they express those intents (vague vs. specific)? Can your product serve those intents?

Intent distribution tells you what customers expect. Some intents you designed for. Others you did not. Without intent data, you are optimizing blind.

Example: A company deploys an AI support agent trained on billing and account questions. Intent data reveals the actual traffic: 30% billing questions, 25% technical bugs, 20% product how-to questions, 15% cancellation requests, and 10% feature feedback. The agent handles billing well. But the 25% reporting bugs need engineering context the agent lacks. The 15% trying to cancel need retention offers and human nuance. More than half the customers arrive with intents the agent was not built to serve. Without intent visibility, the team assumed low satisfaction scores meant the AI was bad. The AI was fine. The intent distribution was the problem.

Another example: An AI presentation tool sees this intent breakdown: 40% "create a pitch deck," 25% "make slides for a team meeting," 15% "design a report or document," and 20% "generate or improve images." The product uses a basic image model. That 20% will see stock-quality visuals when they expected custom illustrations. They churn not because the deck feature failed, but because they arrived with an image-generation intent the product was not built to serve.
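To make intent distribution and serviceability concrete, here is a minimal Python sketch. The conversation records, intent labels, and the `SERVED_INTENTS` set are all invented for illustration; in practice the labels would come from an intent classifier.

```python
from collections import Counter

# Invented example data: conversations already labeled with an intent.
labeled_conversations = [
    {"id": 1, "intent": "billing"},
    {"id": 2, "intent": "technical_bug"},
    {"id": 3, "intent": "billing"},
    {"id": 4, "intent": "cancellation"},
    {"id": 5, "intent": "billing"},
]

# Assumption: the intents this product was actually built to serve.
SERVED_INTENTS = {"billing"}

def intent_distribution(conversations):
    """Share of traffic per intent, e.g. {'billing': 0.6, ...}."""
    counts = Counter(c["intent"] for c in conversations)
    total = len(conversations)
    return {intent: n / total for intent, n in counts.items()}

dist = intent_distribution(labeled_conversations)
serviceable = sum(share for i, share in dist.items() if i in SERVED_INTENTS)
print(dist)         # {'billing': 0.6, 'technical_bug': 0.2, 'cancellation': 0.2}
print(serviceable)  # 0.6 — share of traffic the product was designed to serve
```

The serviceability number is the one that catches teams off guard: in the support-agent example above, it would have shown immediately that less than half the traffic matched what the agent was built for.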

Phase 2: Journey

The journey is the path from intent to outcome. It includes everything that happens between stating the goal and reaching it (or not reaching it). This is where experience quality reveals itself.

What to measure: How many turns does it take to reach an outcome? Where does confusion appear? Where does friction build? What signals indicate the experience is going well or poorly?

Two customers with identical intent can have opposite experiences. One reaches the outcome in 3 turns with positive sentiment. Another takes 25 turns with growing frustration. The journey tells you which experience is which.

Signals to track: Confusion shows up when customers ask clarifying questions or respond with "I don't understand." Friction shows up when customers rephrase requests, retry the same thing multiple ways, or start over from the beginning. Frustration shows up through sentiment shifts and negative language. Positive engagement shows up through enthusiasm, gratitude, and continued exploration.

These signals are invisible to event analytics. They live inside the conversation.
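As a rough illustration, the simplest version of journey-signal tagging is keyword matching against customer messages. A production system would use a classifier or an LLM, and the phrase lists below are invented:

```python
# Minimal keyword heuristic for journey signals. These phrase lists are
# invented for illustration; real systems need far richer detection.
CONFUSION_PHRASES = ("i don't understand", "what do you mean", "confused")
FRICTION_PHRASES = ("try again", "that's not what i", "start over")

def tag_turn(customer_message):
    """Return the signal tags detected in one customer turn."""
    text = customer_message.lower()
    tags = []
    if any(p in text for p in CONFUSION_PHRASES):
        tags.append("confusion")
    if any(p in text for p in FRICTION_PHRASES):
        tags.append("friction")
    return tags

print(tag_turn("I don't understand this chart"))        # ['confusion']
print(tag_turn("That's not what I asked, start over"))  # ['friction']
print(tag_turn("Thanks, that worked!"))                 # []
```

Keyword matching misses paraphrases and sarcasm; it is a starting point for instrumenting the journey, not an endpoint.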

Phase 3: Outcome

Outcome is whether the customer accomplished their goal. It is the only metric that matters in the end.

What to measure: Did the customer complete the task? Did they complete the task you designed the product for? What is the success rate by intent type, journey characteristics, and customer segment?

Engagement is not success. A customer who sends 50 messages might be struggling. Outcome measurement distinguishes between "used the product a lot" and "got value from the product." Success rate is the north star metric for AI products.

Types of outcomes: Task completed (presentation generated, code written, email drafted). Question answered (information delivered). Issue resolved (problem fixed). Goal abandoned (gave up). Escalation (handed off to human). Each outcome type tells you something different about product performance.
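A minimal sketch of outcome bookkeeping, using the outcome taxonomy above. Which labels count as "success" is a product decision; the choice below, along with the example data, is an assumption:

```python
from collections import defaultdict

# Assumption: which outcome labels count as success for this product.
SUCCESS_OUTCOMES = {"task_completed", "question_answered", "issue_resolved"}

conversations = [  # invented example data
    {"intent": "billing", "outcome": "question_answered"},
    {"intent": "billing", "outcome": "goal_abandoned"},
    {"intent": "cancellation", "outcome": "escalation"},
    {"intent": "billing", "outcome": "issue_resolved"},
]

def outcome_rate_by_intent(convs):
    """Success rate segmented by what customers were trying to do."""
    totals, successes = defaultdict(int), defaultdict(int)
    for c in convs:
        totals[c["intent"]] += 1
        successes[c["intent"]] += c["outcome"] in SUCCESS_OUTCOMES
    return {intent: successes[intent] / totals[intent] for intent in totals}

print(outcome_rate_by_intent(conversations))
# billing: 2 of 3 succeeded; cancellation: 0 of 1
```

Segmenting by intent is what separates "the product works" from "the product works for billing questions and fails for cancellations."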

How the Phases Connect

The three phases are sequential but not independent.

Intent clarity affects journey length. Vague prompts lead to longer journeys because the AI needs more turns to understand what the customer wants.

Journey quality affects outcome likelihood. High-friction journeys lead to abandonment because customers give up before reaching resolution.

Outcome success feeds back to intent. Repeat customers arrive with clearer intents because they learned how to use the product.

The framework is a system, not three separate measurements. Changes to one phase ripple through the others.

The Intent, Journey, Outcome framework: three connected phases showing what customers want, how they pursue it, and whether they succeed

How This Differs from Traditional Funnels

The funnel model and the Experience Analytics model solve different problems.

Funnels follow a series of stages, track users at each stage, measure conversion between each stage, and optimize for progression. Funnels assume users follow a predictable path, each stage has clear entry and exit criteria, the goal is to move users from stage to stage, and success is reaching the end of the funnel. Funnels work well for e-commerce checkout flows, SaaS onboarding sequences, lead generation pipelines, and any process with defined, deterministic steps.

Experience Analytics captures intent at conversation start, tracks signals throughout the journey, measures outcome at conversation end, and segments by intent, journey characteristics, and outcome. Experience Analytics assumes users arrive with goals (not at "Stage 1"), the path varies based on the conversation, success is achieving an outcome (not completing a sequence), and understanding journey quality is essential to improving outcomes.

Most products with AI components need both. Funnels for the structured parts. Experience Analytics for the conversational parts.

Side-by-side comparison: Traditional funnel with fixed stages and drop-off vs Experience Analytics with dynamic journey and friction signals

The Metrics That Matter

Each phase has specific metrics.

Intent metrics: Intent distribution — breakdown of what customers are trying to accomplish. Intent clarity — vague vs. specific initial prompts. Intent serviceability — percentage of intents your product can serve. Intent switch events — detection of topic changes within a conversation.
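Intent switch events in particular are straightforward to compute once each turn carries an intent label. A sketch, with invented labels:

```python
# Sketch of intent-switch detection. Assumes each turn has already been
# labeled with an intent; the labels here are invented for illustration.
def intent_switches(turn_intents):
    """Return (turn_index, previous_intent, new_intent) for each switch."""
    return [
        (i, turn_intents[i - 1], turn_intents[i])
        for i in range(1, len(turn_intents))
        if turn_intents[i] != turn_intents[i - 1]
    ]

turns = ["billing", "billing", "cancellation", "cancellation", "billing"]
print(intent_switches(turns))
# [(2, 'billing', 'cancellation'), (4, 'cancellation', 'billing')]
```

A conversation that starts with a billing question and switches to cancellation mid-way is a different experience, and a different risk, than one that stays on a single intent.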

Journey metrics: Turns to outcome — how many exchanges before resolution. Friction signals — confusion, retries, sentiment shifts. Journey efficiency — ratio of productive turns to total turns. Sentiment distribution — overall breakdown of positive, negative, neutral sentiment. Sentiment trajectory — direction of sentiment change across conversation turns.
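Journey efficiency, as defined above, might be computed like this. Treating any turn without a friction or confusion tag as "productive" is an assumption; teams can define productivity however fits their product:

```python
# Journey efficiency: ratio of productive turns to total turns.
# Assumption: a turn is "productive" if it carries no friction/confusion tags.
def journey_efficiency(turn_tags):
    """turn_tags: per-turn signal lists, e.g. [[], ['confusion'], []]."""
    total = len(turn_tags)
    if total == 0:
        return None  # no turns, efficiency undefined
    productive = sum(1 for tags in turn_tags if not tags)
    return productive / total

print(journey_efficiency([[], ["confusion"], [], ["friction"], []]))  # 0.6
```

An efficiency of 0.6 means two of five turns were spent recovering from confusion or friction rather than making progress toward the outcome.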

Outcome metrics: Outcome rate — percentage of conversations reaching defined success states. Outcome rate by intent — success rate segmented by what customers were trying to do. Failure patterns — common reasons for abandonment or escalation.

Outcomes are lagging indicators. They tell you what happened. Intent and journey signals are leading indicators. They predict what will happen. A customer with a vague intent who hits confusion signals at turn 3 has a high probability of abandonment. With visibility into leading indicators, you can intervene before the outcome is determined.
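That leading-indicator rule can be sketched as a simple flag. The turn-3 threshold follows the example above; everything else, including the boolean inputs, is an assumption about how the signals would be represented:

```python
# Sketch of a leading-indicator rule: a vague intent plus an early
# confusion signal predicts abandonment. The threshold is an assumption.
CONFUSION_TURN_THRESHOLD = 3

def at_risk(intent_is_vague, confusion_turns):
    """confusion_turns: turn numbers where confusion signals fired."""
    early_confusion = any(t <= CONFUSION_TURN_THRESHOLD for t in confusion_turns)
    return intent_is_vague and early_confusion

print(at_risk(True, [3, 7]))  # True: vague intent, confusion by turn 3
print(at_risk(False, [3]))    # False: intent was specific
print(at_risk(True, [10]))    # False: confusion appeared late
```

The point of a flag like this is the intervention window: it fires while the conversation is still live, when there is still time to clarify, reroute, or escalate before the customer gives up.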

Leading vs lagging indicators: intent and journey signals predict outcomes, with an intervention window between detection and abandonment

Outcomes, not engagement.

Connect your conversation data and see what customers are trying to do, where they're getting stuck, and which accounts are at risk. The data is already there. Brixo makes it readable.