The RUST framework helps you build relationships that don't corrode over time.

Move beyond conversion rate with the RUST framework. Learn how to measure Retention, Understanding, Satisfaction, and Trust to build tests that drive sustainable growth.

Ethics
A/B Testing
Conversion Optimisation
Trust & Transparency
User Experience

Published on:
January 31, 2026
Author:
Jon Crowder

The RUST Framework: Four Metrics That Actually Matter in A/B Testing

Your A/B test won. Conversion rate is up 12%. Time to celebrate?

Maybe. Or maybe you've just optimised your way into a problem you won't see for another three months.

Conversion rate tells you that something happened. It doesn't tell you whether that something was good for your business. A user who converts because they were misled, pressured, or confused is not the same as a user who converts because they genuinely wanted what you're offering.

This is why we developed the RUST framework: four metrics that reveal whether your optimisation is building long-term value or quietly corroding customer relationships.

What is RUST?

RUST stands for Retention, Understanding, Satisfaction, and Trust. These four metrics work together to give you a complete picture of whether your test variant is actually good for your business, not just good for your conversion graph.

Think of conversion rate as a leading indicator. RUST metrics are the lagging outcomes that actually matter.

R is for Retention

The question: Do customers come back?

Retention is the ultimate test of whether your optimisation created genuine value. Repeat purchases, subscription renewals, return visits, and ongoing engagement all signal that customers got what they expected and want more of it.

A variant that boosts first-time conversions but tanks repeat purchase rates hasn't improved anything. It's just moved customers through the funnel faster before they realise they've made a mistake.

What to measure:

Repeat purchase rates over 30, 60, and 90 days. Subscription renewal rates. Return visitor frequency. Second and third transaction rates. Churn velocity for new customers versus existing benchmarks.
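As an illustration, the 30-, 60-, and 90-day repeat purchase rates above could be computed from a raw purchase log along these lines. This is a minimal sketch, not a reference implementation: the `(customer_id, purchase_date)` input shape is an assumption, and in practice you'd run it separately per test variant cohort.

```python
from datetime import date, timedelta

def repeat_purchase_rate(purchases, window_days):
    """Share of customers whose first purchase was followed by at
    least one more within `window_days`. `purchases` is a list of
    (customer_id, purchase_date) tuples -- an assumed input shape."""
    by_customer = {}
    for customer_id, purchase_date in purchases:
        by_customer.setdefault(customer_id, []).append(purchase_date)

    repeaters = 0
    for dates in by_customer.values():
        dates.sort()
        cutoff = dates[0] + timedelta(days=window_days)
        if any(d <= cutoff for d in dates[1:]):
            repeaters += 1
    return repeaters / len(by_customer)

# Compare the same cohort across widening windows.
log = [("a", date(2026, 1, 1)), ("a", date(2026, 1, 11)),
       ("b", date(2026, 1, 1)), ("b", date(2026, 2, 20)),
       ("c", date(2026, 1, 1))]
for window in (30, 60, 90):
    print(window, round(repeat_purchase_rate(log, window), 2))
```

Running the same function over each variant's cohort at 30, 60, and 90 days gives you the comparison the conversion graph can't.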

Red flags:

Conversion goes up but repeat purchases go down. You're acquiring customers who don't stick. This is expensive, because you've paid to acquire someone who generates minimal lifetime value.

U is for Understanding

The question: Do customers actually understand what they're getting?

This is about testing whether users grasp your value proposition before they convert. A confused customer might still click the button, but confusion creates problems downstream: refund requests, support tickets, negative reviews, and churn.

Clarity converts better in the long run than persuasion. If users understand exactly what they're signing up for, they're more likely to be satisfied and more likely to return.

What to measure:

Post-purchase survey responses about expectations versus reality. Support ticket themes in the first week after conversion. Refund and return request rates with reason codes. Time spent on key information pages before conversion. Engagement with FAQ or help content pre-purchase.

Red flags:

High conversion rates paired with spikes in "this isn't what I expected" support queries. Users converting quickly without engaging with product details or pricing information. Refund reasons that suggest misunderstanding rather than product issues.

How to test for it:

Run comprehension tests alongside conversion tests. After exposing users to your variant, survey a sample to check whether they can accurately describe what they'd be getting. If understanding is low but conversion is high, you've got a problem brewing.
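One way to sketch that comparison in code. The field names (`variant`, `converted`, `understood`) are illustrative assumptions, not a prescribed schema; `understood` stands in for whatever scoring you apply to the "describe what you'd be getting" survey answers.

```python
def rust_understanding(responses):
    """Per-variant conversion vs comprehension rates.
    `responses` is a list of dicts with assumed keys: 'variant',
    'converted' (bool), and 'understood' (bool: could the
    respondent accurately describe the offer?)."""
    stats = {}
    for r in responses:
        s = stats.setdefault(r["variant"],
                             {"n": 0, "converted": 0, "understood": 0})
        s["n"] += 1
        s["converted"] += r["converted"]
        s["understood"] += r["understood"]
    return {v: {"conversion_rate": s["converted"] / s["n"],
                "comprehension_rate": s["understood"] / s["n"]}
            for v, s in stats.items()}

sample = [
    {"variant": "control",   "converted": True,  "understood": True},
    {"variant": "control",   "converted": False, "understood": True},
    {"variant": "variant_b", "converted": True,  "understood": False},
    {"variant": "variant_b", "converted": True,  "understood": True},
]
results = rust_understanding(sample)
```

A variant with a high `conversion_rate` but a low `comprehension_rate` relative to control is exactly the "problem brewing" pattern described above.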

S is for Satisfaction

The question: Are customers happy with their experience?

Satisfaction measures whether the experience met or exceeded expectations. This is different from understanding: a customer can understand exactly what they're getting and still be dissatisfied with the experience of getting it.

Friction, frustration, and faff all erode satisfaction even when the end result is technically correct. If you want to understand where friction might be hiding, a UX audit can help identify the pain points your analytics won't show you.

What to measure:

Post-experience satisfaction scores (CSAT, NPS). Support ticket volume and sentiment. Return and cancellation rates. Time to resolution for customer issues. Social media sentiment and review scores. Complaint rates and escalations.
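NPS, for instance, is computed from 0-10 "how likely are you to recommend us?" answers. A minimal version for comparing variant cohorts (the sample scores are made up for illustration):

```python
def net_promoter_score(scores):
    """Standard NPS: % promoters (scores 9-10) minus % detractors
    (scores 0-6), giving a value from -100 to +100."""
    scores = list(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Track per-variant NPS alongside conversion, not instead of it.
control_nps = net_promoter_score([10, 9, 8, 7, 6, 10])
variant_nps = net_promoter_score([10, 5, 4, 8, 3, 9])
```

The absolute number matters less than the trend: a "winning" variant whose cohort NPS drifts below control is a satisfaction red flag.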

Red flags:

Conversion improves but satisfaction scores drop. Support volume increases after a "winning" variant ships. Cancellation rates tick up within the first billing cycle.

The nuance:

Sometimes friction is appropriate. A mortgage application should feel thorough. A medical questionnaire should feel careful. Satisfaction isn't about making everything effortless; it's about ensuring the experience feels appropriate for the context and leaves customers feeling good about their decision.

T is for Trust

The question: Does this optimisation build or erode trust?

Trust is the slowest metric to build and the fastest to destroy. It's also the hardest to measure directly, which is why it often gets ignored in favour of more tangible numbers.

But trust compounds. Customers who trust you spend more over time, forgive occasional mistakes, recommend you to others, and give you the benefit of the doubt. Customers who don't trust you are one bad experience away from leaving forever.

What to measure:

Brand sentiment tracking over time. Net Promoter Score trends. Referral and advocacy rates. Customer feedback themes. Social listening for trust-related language. Response to price increases or policy changes (trusted brands get more latitude).

Red flags:

Tactics that feel manipulative, even if they convert well. Fake urgency, hidden fees revealed late, dark patterns, and misleading comparisons all generate conversions while quietly eroding trust. The damage often doesn't show up until months later when retention mysteriously declines or a competitor enters the market and your customers leave without hesitation.

The long game:

Trust-building optimisations often show modest conversion improvements but exceptional long-term performance. Transparency, honesty, and genuine helpfulness don't always win the A/B test, but they win the customer.

How to Use RUST in Practice

Before the test

When developing your hypothesis, ask: "How might this variant affect each RUST metric?" If you can see obvious risks to retention, understanding, satisfaction, or trust, reconsider the approach before you spend time building it.

This isn't about avoiding all risk. It's about going in with eyes open rather than optimising blindly for conversion and hoping everything else works out. For a deeper dive on building tests with ethics baked in from the start, see our guide on how to run ethical A/B tests.

During the test

Track leading indicators for each RUST metric alongside your primary conversion metric. You won't have full retention data for months, but you can monitor early signals like engagement depth, support queries, and satisfaction surveys.

Build a dashboard that shows conversion and RUST indicators together. If conversion is climbing but RUST signals are declining, you need to investigate before declaring a winner.
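That check can be automated as a simple guardrail rule. Everything here is an illustrative assumption, not a prescribed policy: the indicator names, the -2% threshold, and the convention that every indicator is oriented so a drop is always bad should all be tuned to your own baselines.

```python
def review_decision(conversion_lift, rust_deltas, guardrail=-0.02):
    """Flag a variant for investigation if any RUST leading
    indicator has declined past the guardrail threshold, even
    when the primary conversion metric looks like a winner.
    All values are relative changes versus control, and every
    indicator is oriented so that higher is better."""
    breaches = {name: delta for name, delta in rust_deltas.items()
                if delta < guardrail}
    if breaches:
        return "investigate", breaches
    return ("ship", {}) if conversion_lift > 0 else ("hold", {})

# A 12% conversion lift that tanks engagement depth still gets flagged.
decision, breaches = review_decision(
    conversion_lift=0.12,
    rust_deltas={"engagement_depth": -0.06, "csat": 0.01},
)
```

The point of the rule is ordering: RUST guardrails are evaluated before the conversion number gets a say.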

After the test

Tag cohorts from your test variants and monitor their RUST metrics over time. Did the "winning" variant actually produce customers with better retention, understanding, satisfaction, and trust?

Create a quarterly review where you look back at tests shipped three to six months ago. How did those cohorts actually perform? This feedback loop will teach you which types of optimisations genuinely drive long-term value.
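The quarterly review can start from tagged cohorts. A sketch, assuming each customer record carries the variant tag applied when they converted and a 90-day activity flag (both field names are assumptions for illustration):

```python
def cohort_retention(customers):
    """90-day retention per test cohort. `customers` is a list of
    dicts with assumed keys: 'variant' (the test cohort tag applied
    at conversion) and 'active_90d' (bool: still active 90 days
    after their first purchase)."""
    counts = {}
    for c in customers:
        total, active = counts.get(c["variant"], (0, 0))
        counts[c["variant"]] = (total + 1, active + c["active_90d"])
    return {v: active / total for v, (total, active) in counts.items()}

tagged = [
    {"variant": "control", "active_90d": True},
    {"variant": "control", "active_90d": True},
    {"variant": "winner",  "active_90d": True},
    {"variant": "winner",  "active_90d": False},
]
retention = cohort_retention(tagged)
```

If the "winner" cohort retains worse than control three months on, that test decision belongs in the quarterly review, whatever its conversion lift was.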

When results conflict

Sometimes conversion will point one direction and RUST metrics will point another. This is where judgement comes in.

A small conversion decrease paired with meaningful improvements in understanding and satisfaction might be worth it. A large conversion increase paired with declining trust signals probably isn't.

There's no formula for this. But having the data means you can make an informed decision rather than blindly chasing the conversion number.

RUST-Proofing Your Testing Programme

The metaphor writes itself: if you only measure conversion, you're letting your customer relationships rust.

Corrosion happens slowly. You don't notice it day to day. Then one quarter you look at your retention numbers and wonder what went wrong. The answer is usually a series of "winning" tests that optimised for the short term while neglecting the metrics that actually sustain a business. Ethics has a direct impact on retention and engagement, which is worth understanding if you want to avoid this trap.

RUST-proofing your testing programme means building these four metrics into how you evaluate success. Not as afterthoughts, but as core measures that sit alongside conversion rate from the start.

Getting Started

If you're currently only tracking conversion, here's how to start incorporating RUST:

Week one: Audit your last ten shipped tests. For each one, ask: "What do we actually know about how this affected retention, understanding, satisfaction, and trust?" You'll probably find the answer is "not much." That's your baseline.

Week two: Identify leading indicators for each RUST metric that you can track within normal testing timeframes. What early signals predict retention in your business? How can you measure understanding before it shows up as refund requests?

Week three: Add RUST indicators to your testing dashboard. Start tracking them alongside conversion for all new tests. If you need help setting up proper measurement, our web analytics services can help you build the right foundation.

Week four: Set up cohort tracking so you can monitor long-term RUST performance for test variants after they ship.

Ongoing: Create a quarterly review to connect test decisions with actual RUST outcomes. Let this inform your future testing strategy.

The Bottom Line

Conversion rate is easy to measure, which is why everyone measures it. But easy to measure doesn't mean important to measure.

RUST metrics tell you whether your optimisation is building a sustainable business or just generating short-term spikes that will cost you later. They're harder to track, slower to show results, and require more patience than watching a conversion graph tick upward.

But they're the metrics that actually matter.

Your customer relationships will either strengthen over time or corrode. The RUST framework helps you tell the difference before it's too late.

Want to see what good looks like? Check out brands getting ethics right for examples of companies building trust through their optimisation programmes. Or if you need a reminder of what not to do, we've got that covered too.

Ready to RUST-proof your testing programme? Get in touch to learn how we can help, or explore our CRO consulting services to see how we build experimentation programmes that optimise for long-term value.
