Most people I've met working in CRO care about users. They want to build better experiences. But the structures they work within often push in a different direction. Conversion targets that reward short-term lifts. Marketing cultures that treat experimentation like a campaign tactic, something you "run" to "hack" a number. Stakeholders who want to see a graph going up and don't much care how it got there.
The result is programmes full of fake countdown timers, weird popups, in-your-face modals, offers and banners you can't get away from, "Only 2 left!" warnings pulled from thin air, and confirmshaming copy that tries to guilt people into handing over an email address. Not because anyone set out to build that, but because the incentive structures made it the path of least resistance.
And users notice. They install ad blockers, distrust cookie banners, and assume every website is trying to con them. That erosion of trust didn't come from nowhere. It's the cumulative cost of optimisation cultures that measure success in conversion lifts without asking how those lifts were achieved.
CRO programmes don't have to work this way. You can build an experimentation programme that centres on what users actually need, removes genuine friction, and still delivers the business results your stakeholders care about. The two goals aren't in conflict. They never were. But getting there means changing the structures and incentives that push good practitioners toward bad practices.
This is a framework for building CRO programmes that respect the people using your website. Not because it sounds nice in a pitch deck (it does though - my pitch decks are excellent) but because it produces better long-term results than programmes built on manipulation.
The issue isn't that CRO teams are full of bad actors. Most practitioners genuinely want to improve user experiences. The issue is that the organisational structures around CRO push programmes in directions that don't serve users, often without anyone making a conscious decision to do so.
It starts with how targets get set. When success is defined purely as conversion rate uplift, the programme naturally gravitates toward whatever moves that number fastest. And what moves it fastest is often manipulation, not improvement. A well-placed dark pattern will outperform a genuine UX improvement in a two-week test almost every time. The damage only shows up later, in metrics nobody's really tracking.
Then there's the way experimentation gets framed within organisations. Too often, testing is treated as a tactical marketing tool rather than a strategic discipline. Something you "run" to "hack" a number, like a campaign with a start date and an end date. That framing encourages short-termism. It turns experimentation into a trick rather than a method for genuinely understanding and serving users.
The result is programmes chasing dashboard metrics while quietly eroding the trust and goodwill that actually drive sustainable growth. The incentives made it inevitable. You can have talented, well-intentioned people running a programme that still produces bad outcomes if the structures around them are broken.
Starting with user needs sounds obvious. It isn't how most programmes work.
The default approach in many organisations is to start with business goals: increase conversion rate, reduce cart abandonment, grow average order value. Then work backwards to figure out which changes might move those numbers. The user's actual experience becomes an afterthought, something to be engineered around rather than designed for. Not because the team doesn't care about users, but because that's what the brief demands.
Flip that. Start with user research. Talk to the people using your site. Watch session recordings with the question "what is this person trying to accomplish?" rather than "where did they drop off?" Run surveys that ask about user goals, not just satisfaction scores.
When you understand what users are trying to do, you can identify the places where your site makes that harder than it needs to be. That's where the real optimisation opportunities live, not in the gaps between your current conversion rate and your target.
Not all friction is bad, and this is something the "frictionless" school of CRO consistently gets wrong.
A confirmation step before a large purchase protects users from expensive mistakes. A clear breakdown of costs before checkout helps people make informed decisions. An honest description of what a subscription involves respects user autonomy. These are examples of friction that serves users, and removing them might lift your conversion rate while making the experience objectively worse.
Unnecessary friction is different. A checkout flow that demands account creation before purchase. A form that asks for information you don't need. A returns policy buried three clicks deep in your footer. Navigation that makes it hard to compare products. This friction doesn't protect anyone. It just makes your site harder to use.
Your programme should focus on removing the second type and preserving the first. That distinction matters.
Here's a test for whether your optimisation programme actually serves users: look at your hypotheses.
If they read like "By changing the button colour from blue to orange, we expect to increase click-through rate by X%", they're missing the point. That hypothesis says nothing about why a user would benefit from this change. It's pure extraction logic dressed up in the theatre of a testing framework.
A hypothesis that serves users looks different: "By surfacing delivery cost information earlier in the product page, we'll help users make informed purchase decisions sooner, which should reduce cart abandonment caused by unexpected costs at checkout."
The structure is straightforward. What change are you making? How does it help the user? What business outcome should follow? If you can't fill in the middle part, if you can't explain how the change serves users, you're probably not optimising. You're manipulating.
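That filter can be made concrete with a template that refuses to accept a hypothesis without an articulated user benefit. This is a hypothetical sketch, not a tool the article prescribes; the field names are my own:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str            # what we're changing
    user_benefit: str      # how it helps the user -- the middle part
    expected_outcome: str  # what business result should follow

    def is_valid(self) -> bool:
        # If nobody can articulate the user benefit, the test
        # doesn't belong in the queue.
        return bool(self.user_benefit.strip())

h = Hypothesis(
    change="Surface delivery costs on the product page",
    user_benefit="Users can make informed purchase decisions sooner",
    expected_outcome="Reduced cart abandonment at checkout",
)
print(h.is_valid())  # True
```

A button-colour hypothesis with an empty `user_benefit` fails the same check, which is the point: the template forces the conversation the framework asks for.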
This is a simple filter that catches most ethical problems before they make it into your testing queue.
Before you launch a test, ask your team: would you want this experience? If you were the user, would you feel respected by this variant? Or would you feel tricked, pressured, or misled?
If your team wouldn't want the experience for themselves, don't inflict it on your users. The fact that it might convert better is not a justification. Plenty of manipulative tactics convert well in the short term. That doesn't make them good optimisation.
This doesn't mean every test needs to be transformative. Small improvements to layout, copy clarity, and navigation are perfectly valid. The question is whether the change serves the user, not whether it's exciting.
Conversion rate is one metric. It is not the only metric, and optimising for it in isolation produces perverse outcomes.
Track what happens after the conversion. Are return rates stable? Is customer lifetime value improving? Do users come back? What does your NPS look like over time? Are support tickets increasing or decreasing?
A variant that lifts conversion by 5% but increases returns by 8% hasn't optimised anything. It's shifted costs from marketing to operations and created a worse experience for users who now have to deal with returning something they were pressured into buying.
Build your measurement framework around the full picture, not just the top of the funnel. Sustainable growth shows up in retention, lifetime value, and referral rates. Those metrics only improve when users genuinely value the experience you're providing.
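A back-of-envelope calculation shows how quickly a conversion lift can be wiped out once returns enter the picture. These figures are illustrative only (a 5% lift in orders, with the return rate rising from 10% to 18%):

```python
# Illustrative figures only -- not from a real programme.
aov = 60.0                    # average order value
baseline_orders = 1000
variant_orders = int(baseline_orders * 1.05)  # +5% conversion lift

return_rate_baseline = 0.10
return_rate_variant = 0.18    # the variant pressures people into purchases

# Revenue actually kept after returns are refunded
kept_baseline = baseline_orders * (1 - return_rate_baseline) * aov
kept_variant = variant_orders * (1 - return_rate_variant) * aov

print(round(kept_baseline, 2), round(kept_variant, 2))  # 54000.0 51660.0
```

The "winning" variant keeps less revenue than the control, before you even count the operational cost of processing each extra return.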
This is where programmes often fail the ethics test, not through malice but through momentum.
You run a test. One variant wins clearly. It hit statistical significance, the sample size was solid, and the results are convincing. But when you look at the secondary metrics, something doesn't add up. Returns are up. Satisfaction scores dipped. The variant won because it obscured important information, not because it improved the experience.
The pressure in most organisations is to implement that variant anyway. It won. The numbers say so. The stakeholder wants to see it live. Moving on.
An ethical programme asks a harder question: did this win because it helped users, or because it exploited them? And if the answer is the latter, it doesn't get implemented. Full stop.
Sometimes the right business decision is to leave a winning variant on the shelf. That's prioritising sustainable growth over a quick hit that will cost you in trust and retention down the line.
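One way to make that call structural rather than heroic is to encode guardrail metrics into the ship decision itself: a variant only ships if the primary metric wins and no guardrail degrades past an agreed ceiling. A minimal sketch, with made-up metric names and thresholds:

```python
def should_ship(primary_lift: float,
                primary_significant: bool,
                degradations: dict[str, float],
                max_allowed: dict[str, float]) -> bool:
    """Ship only if the primary metric won AND every guardrail
    (returns, satisfaction, support volume) stayed within tolerance.
    Degradations are expressed so that higher always means worse."""
    if not (primary_significant and primary_lift > 0):
        return False
    return all(degradations[name] <= max_allowed[name] for name in max_allowed)

# A clear 5% winner that pushed returns eight points past a
# two-point ceiling doesn't ship, regardless of significance.
print(should_ship(
    primary_lift=0.05,
    primary_significant=True,
    degradations={"return_rate_increase": 0.08, "nps_drop": 1.0},
    max_allowed={"return_rate_increase": 0.02, "nps_drop": 2.0},
))  # False
```

Agreeing those ceilings before the test runs is what takes the pressure off: the decision was made when nobody had a winning dashboard to defend.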
Your users are adults capable of making their own decisions. Respect that. Don't remove information to push them toward a choice. Don't create false urgency to prevent them from thinking. Don't use dark patterns to make the "wrong" choice (from your perspective) harder to select.
Present clear, honest information and let people decide. If your product or service is genuinely good, transparency helps you. If it only converts when you hide the details, that's a product problem, not a CRO problem.
Every time you're transparent about costs, processes, timelines, or limitations, you're building trust. And trust is the most powerful conversion driver that exists. It's more effective than urgency. More durable than scarcity. More valuable than any dark pattern ever devised.
Users who trust you buy more, return less, refer others, and forgive mistakes.
If a change only benefits your business and does nothing for users, it's extraction. Extraction isn't optimisation. It's the opposite: you're making the experience worse to make your numbers better.
Genuine optimisation creates value for both sides. Users get a better experience, and the business benefits from the improved engagement that follows. That's the standard your programme should aim for.
User-respecting CRO programmes aren't a charity project. They produce measurably better long-term results.
Higher customer lifetime value, because users who trust you come back.
Better retention rates, because you're not generating buyer's remorse.
Increased word-of-mouth referrals, because people recommend businesses that treat them well.
Reduced support costs, because transparent experiences generate fewer complaints.
Stronger brand reputation, because users notice when you're honest.
And lower regulatory risk, because you're not building your growth on practices that regulators are increasingly targeting.
Programmes built on short-term extraction tactics might show faster initial growth. They always plateau, and they always cost more to sustain than they're worth. The businesses that win long-term are the ones that figured out how to grow by genuinely serving their users.
If you're looking at your current programme and thinking the structures around it might not support the kind of work outlined above, start with an honest audit. Not of your team's intentions, but of the incentives and processes that shape what gets tested and implemented.
Review your user journeys and identify where manipulative patterns have crept in. Fake urgency, hidden costs, confirmshaming, friction deliberately added to steer users toward a preferred option. Name the practices. Be specific. Often these things accumulate over time without anyone making a deliberate decision to add them.
Look at your current test queue. How many of those tests can articulate a clear user benefit? How many are extraction plays that ended up in the queue because someone needed to hit a target?
Examine your metrics framework. Are you measuring long-term impact, or just immediate conversion? Do you track returns, retention, and satisfaction alongside conversion rate?
And look at how your team is incentivised. When someone proposes a test, is there space to ask whether it serves users? Or does the conversation begin and end with projected conversion lift because that's what gets rewarded?
The answers will tell you where to focus first.
CRO doesn't have to be a discipline defined by manipulation. The tools, the methodology, the statistical frameworks: none of them require you to exploit users. The people working in this field largely know that already. What's needed is the organisational permission and structural support to do the work differently.
Build your programme around user needs. Test changes that genuinely help people. Measure what matters beyond the conversion event. Be willing to make the harder call when ethics and short-term metrics conflict. And push for the structural changes that make this approach sustainable rather than heroic.
The result is better business, built on trust rather than tricks, and a competitive advantage your competitors can't easily steal.
If you're running a CRO programme and want help reshaping it around user respect, whether that's fixing the structures, reframing the metrics, or just having someone in your corner who's been through this, get in touch. Or learn more about how we approach CRO at Another Web is Possible.