In the first three articles of our series on Experience-Led Commerce, we explored why customer experience is emerging as a powerful lever for sustainable growth, how to think about customer experience for your brand, and what it takes to operationalize CX investments across your business. Now it’s time to face a tougher question:
If customer experience is so important, why do so many brands still struggle to act on it?
For digital-native startups, weaving customer feedback and iteration into their DNA is almost second nature. But for established, complex commerce organizations, the friction is real. It often boils down to this: being "data-driven" sounds great in theory, but is much harder in practice.
In this article, we’ll look at what actually holds teams back from building data-driven CX programs—and how to evolve your approach without grinding your business to a halt.
The Two Biggest Barriers to a Data-Driven Approach
Adopting a data-driven mindset in CX means overcoming two very common (and very human) challenges.
1. Short-Term Goals Override Long-Term Investments
For most commerce organizations, especially when they’re under pressure from economic headwinds, the priority is hitting the week’s revenue target. CX investments often feel too abstract, too slow to return, or too far removed from the conversion funnel to warrant resourcing.
But the reality is that neglecting CX can sabotage long-term performance. A short-term focus might get you a few wins, but it often comes at the expense of brand equity, retention, and customer lifetime value.
And yet, the solution can't just be telling brands to be more strategic. Brands need to balance the long-term nature of customer experience investments with short-term business constraints, finding quick wins through fast and measured experimentation.
2. Decisions Are Made on Gut Feel, Not Data
All the brands we’ve worked with over the years have claimed to be data-driven. In practice, many still rely on hopes, intuition, and the occasional HiPPO (the Highest-Paid Person’s Opinion).
While it’s easy to point fingers, the reality is that this approach doesn’t stem from ignorance, but rather from the fast-paced nature of the average consumer brand. By the time you're done running your three-month audit, consumer sentiment has shifted, results are nowhere to be seen, and leadership is asking how fast you can launch that new sitewide sale.
As a result, in the best-case scenario, most leaders and operators end up relying on their gut. In the worst-case scenario, they dress up their personal opinions in a thin layer of spurious numbers meant only to reinforce their original assumptions.
Again, simply lecturing brands that they should be more data-driven is not helpful, and is usually met with polite nods and understandable eye rolls.
Building Trust One Step at a Time
For XLC (Experience-Led Commerce) to be adopted, you need to make the data collection process manageable for a busy team and build trust in the data itself.
Luckily, the Hierarchy of Experimentation Evidence provides exactly that: a way to improve how decisions are made, without requiring your organization to grind to a halt. First designed by experimentation expert Ronny Kohavi, this framework helps brands evaluate the strength of the data behind their decisions.
We can visualize it as a pyramid with five levels. From bottom to top, we have:
- Anecdotes and expert opinions. Observations, gut feelings, HiPPOs, and UX/UI audits are all useful sources for generating hypotheses about friction points and improvement opportunities, but they are not reliable evidence.
- Observational studies. These encompass dashboard analytics, funnel analysis, user behavior patterns, and the like. They can help identify issues and surface correlations, but they cannot establish causality.
- Uncontrolled experiments. These are pre-post analyses (i.e., rolling out a change and measuring its impact) and other kinds of experiments that are vulnerable to confounding factors (e.g., seasonality, marketing campaigns, pricing adjustments).
- A/B tests. The gold standard for establishing causality. We have published two entire articles on what makes a good A/B test and how to avoid the most common biases when A/B testing (it’s harder than you think), so we encourage you to read up on the matter! For a taste of the rigor involved, see the sketch below.
- Meta-analyses. Aggregating results across tests, either through systematic reviews or sequential testing methods, is the strongest form of evidence, but it’s also exceedingly rare in digital product development and e-commerce (and, in many cases, unnecessary).
As you have probably guessed, the data at the top of the pyramid is the most reliable, but it’s the hardest to collect; the data at the bottom of the pyramid is the least reliable, but it’s the easiest to collect.
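To make the gap between levels concrete, here is a minimal sketch of the kind of significance check that separates a rigorous A/B test from a hopeful one. It uses Python with statsmodels as one possible tool, and every number in it is hypothetical; a real program would also pre-register the sample size, the test duration, and the decision threshold before launch.

```python
# A minimal A/B significance check for a conversion-rate test.
# All figures are hypothetical; a rigorous test also needs a
# pre-registered sample size, a fixed duration, and guardrail metrics.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 498]     # purchases: control, variant
visitors = [10_000, 10_000]  # sessions exposed to each experience

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"control: {conversions[0] / visitors[0]:.2%}, "
      f"variant: {conversions[1] / visitors[1]:.2%}, "
      f"p-value: {p_value:.4f}")
```

If the p-value comes in under the threshold you agreed on before the test (0.05 is the usual convention), you have causal evidence for the change. If it doesn’t, resist the temptation to keep the test running until it “works”; that habit quietly demotes your A/B tests back down the pyramid.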
How To Use the Pyramid
You can use the pyramid to map your data collection maturity. In doing this, it’s important to distinguish between data you collect and data you trust:
- Understanding what data you collect is easy. In our experience, most brands stop at observational studies or uncontrolled experiments. While A/B testing is common in e-commerce, it’s rarely done with the rigor required to produce useful insights.
- Understanding what data you trust is trickier and more political. As an example, if you collect granular data on the performance of your merchandising strategy, but then you throw it all away when the CEO sends you a Slack message, that data is being collected but not trusted.
Once you have this understanding, your goal should be to graduate to the next level of trust.
But that doesn't mean flipping a switch overnight. The transition from one layer to the next is less about chasing perfect alignment and more about creating structured opportunities to challenge your assumptions with more reliable evidence.
For instance, imagine your analytics suggest that customers are dropping off at the shipping step of your checkout. A pre-post analysis might show a lift after simplifying that step, but it's hard to isolate the cause. So you run an A/B test that confirms the shorter form improves conversion—but only on mobile. That result contradicts a prior assumption that desktop was the problem, and it gives your team stronger footing to explore mobile-first optimizations.
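To make that scenario tangible, here is a hypothetical segment-level readout for that checkout test (again a Python sketch with statsmodels, with made-up numbers): the same two-proportion z-test, run per device, shows a real lift on mobile and noise on desktop.

```python
# Hypothetical per-device readout for the checkout test described above.
# Running the same two-proportion z-test per segment shows the lift is
# concentrated on mobile. Note: segments should be chosen before the
# test runs; slicing after the fact invites false positives.
from statsmodels.stats.proportion import proportions_ztest

segments = {
    # device: (control conversions, control n, variant conversions, variant n)
    "mobile": (310, 9_000, 394, 9_000),
    "desktop": (520, 6_000, 528, 6_000),
}

for device, (c_conv, c_n, v_conv, v_n) in segments.items():
    _, p = proportions_ztest([c_conv, v_conv], [c_n, v_n])
    print(f"{device}: {c_conv / c_n:.2%} -> {v_conv / v_n:.2%} (p = {p:.3f})")
```

One caveat worth repeating: decide which segments you’ll analyze before the test launches. Slicing results after the fact until something looks significant is just gut feel wearing a lab coat.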
The point isn't to confirm what you already believe. It's to upgrade the quality of your decision-making, even when the results surprise you.
By doing this, you’re allowing your organization’s approach to customer experience to mature gradually. That means your team doesn’t have to pause all work for a six-month analytics overhaul. Instead, they can start small: run a focused A/B test or build a hypothesis from behavioral data, and let momentum build incrementally.
When CX leaders adopt this model, they gain the credibility to win time and budget for long-term projects (solving the short-termism problem) and get everyone making decisions based on facts, not opinions (solving the trust problem).
Over time, organizations become more data-driven, trusting rigorous evidence over intuition and making faster, more confident decisions.
Small Steps, Compounding Impact
The brands that win on customer experience aren't necessarily the ones with the biggest budgets or the flashiest tech stacks. They're the ones that build feedback loops, run smart tests, and act on what they learn, week after week and quarter after quarter.
You don’t have to wait for a replatform or an executive offsite to start. Identify your current level of evidence maturity. Run one better experiment. Make one stronger decision. Then do it again.
This is how transformation happens: not with a bang, but with a rhythm.
Need help getting your team in the right rhythm? Let’s talk.