Introducing a Strategic Experimentation Framework
When your testing program grows fast, chaos often follows. Teams celebrate win rates, dashboards fill with numbers, yet business leaders still ask: “Why aren’t we seeing this in the bottom line?”
Here’s how I built and implemented a framework at TomTom that finally connected experimentation to strategy, revenue, and trust.
Challenge
At TomTom, our experimentation program was thriving — hundreds of A/B tests, rising velocity, solid win rates.
But one question from leadership stopped us cold:
“We’re running more tests than ever, so why isn’t this moving our bottom line?”
That question exposed a gap between testing output and business outcomes.
We were optimizing micro-metrics (clicks, flows, button copy) but not tying them back to company OKRs.
And like many organizations, we faced classic pitfalls:
1. Win Rate Obsession: Celebrating small UX wins that didn’t shift revenue.
2. Velocity Illusion: Mistaking more tests for more impact.
3. Surface Metrics: Measuring clicks, not contribution to business growth.
4. Stakeholder Misalignment: Failing to communicate the “why” behind each test.
It was time to evolve experimentation from a set of tactics to a strategic decision-making system.
Results
Strategic alignment: 80% of experiments mapped to OKRs or KPI branches.
Maintained test velocity with stable 33% win rate.
Faster decision-making: leaders could instantly see test-to-business impact.
More strategic experimentation: from spaghetti testing to purposeful tests that impact business results.
Cultural shift: experimentation evolved from proof of concept to proof of impact.
80%
of experiments tied to KPIs
33%
win rate maintained
84%
increase in time spent on website
Process
I designed and implemented TomTom’s Strategic Experimentation Framework, a system that connects every experiment to business outcomes using three foundational pillars.
Strategic Alignment: Connecting Tests to Company Goals
Introduced KPI Trees to visualize how every test linked to top-line metrics like MRR, activation, and retention.
Mapped each product team’s hypotheses to company OKRs, ensuring direct alignment with business priorities.
Used KPI Trees as a storytelling tool - not just analysis - so anyone could see how a test influenced revenue.
As a result, every team could answer the question: “Which business goal does this experiment move?”
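The KPI Tree idea can be sketched in code. The sketch below is illustrative: node names, the experiment identifiers, and the structure are assumptions, not TomTom's actual metric hierarchy. It shows the core mechanic, tracing any experiment up to the top-line metric it influences.

```python
from dataclasses import dataclass, field

# Hypothetical KPI Tree sketch: metric names and experiment IDs are
# illustrative, not TomTom's real hierarchy.
@dataclass
class KPINode:
    name: str
    children: list["KPINode"] = field(default_factory=list)
    experiments: list[str] = field(default_factory=list)

    def path_to(self, experiment: str, trail=()):
        """Return the chain of KPIs linking an experiment to the top-line metric."""
        trail = trail + (self.name,)
        if experiment in self.experiments:
            return trail
        for child in self.children:
            found = child.path_to(experiment, trail)
            if found:
                return found
        return None

# Top-line metric with two example branches.
mrr = KPINode("MRR", children=[
    KPINode("Activation", experiments=["onboarding-copy-test"]),
    KPINode("Retention", experiments=["renewal-email-test"]),
])

print(mrr.path_to("onboarding-copy-test"))  # ('MRR', 'Activation')
```

Rendering these paths visually is what turns the tree into a storytelling tool: the chain from test to top-line metric is the story.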
Lean Experimentation Execution: Doing More with Less
At TomTom, colleagues could only dedicate 10–30% of their time to experimentation. We needed lean, high-leverage processes that helped us move quickly.
Focused on high-impact tests (pricing, onboarding, paywalls) instead of small tweaks.
Built a lightweight experiment database where every test (win or fail) is fed back into collective learning.
Introduced Opportunity Solution Trees to prioritize ideas based on potential ROI.
Standardized experiment setup using templates and AI-supported automation for reporting and analysis.
With this, I maintained a 33% win rate while scaling from 50 to 300 tests per year with no additional headcount.
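A lightweight experiment database plus ROI-based prioritization can be sketched together. Everything below is an assumption for illustration: the schema, the example experiments, and the reach × expected lift ÷ effort scoring formula (an ICE-style heuristic) are not the exact setup used at TomTom.

```python
import sqlite3

# Hypothetical lightweight experiment database; schema, rows, and the
# scoring formula are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE experiments (
        name TEXT, area TEXT, okr TEXT,
        reach INTEGER,        -- users affected per month
        expected_lift REAL,   -- estimated relative revenue lift
        effort_days REAL,     -- implementation cost
        outcome TEXT          -- 'win', 'fail', or 'running'
    )
""")
conn.executemany(
    "INSERT INTO experiments VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("pricing-page-anchor", "pricing", "Grow MRR", 40000, 0.03, 5, "win"),
        ("onboarding-checklist", "onboarding", "Improve activation", 25000, 0.02, 8, "fail"),
        ("paywall-trial-length", "paywall", "Grow MRR", 15000, 0.05, 3, "running"),
    ],
)

# ICE-style priority score: reach * expected lift / effort.
rows = conn.execute("""
    SELECT name, reach * expected_lift / effort_days AS score
    FROM experiments ORDER BY score DESC
""").fetchall()
for name, score in rows:
    print(f"{name}: {score:.0f}")
```

Because wins and failures both live in the same table, the database doubles as the collective-learning log: a query over past outcomes in an area is the first step of planning the next test there.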
Stakeholder Buy-In: Turning Data into Decisions
The hardest part wasn’t running experiments — it was getting leadership to care about them. To solve this:
Created strategic experiment reports that visualized impact across KPI Trees.
Held bi-weekly “Growth Demo” sessions where results were framed around learnings and business goals.
Shared ROI projections, cumulative learnings, and OKR progress instead of isolated test wins.
This boosted executive buy-in. Experimentation became a trusted growth driver, not just a marketing function.
Conclusion
This framework transformed TomTom’s experimentation culture.
By aligning tests with OKRs, visualizing impact through KPI Trees, and communicating value through lean, transparent reporting, we proved that experimentation is not about running tests; it’s about driving decisions.
We stopped throwing spaghetti at the wall. Now, every test has a purpose and every insight moves the business forward.

