
How to Run a CRO Audit: A Step-by-Step Framework

Patrick Scott · April 20, 2026 · 9 min read

The short answer

A CRO audit isn't a vibe check on your site. It's a structured pass that quantifies where the conversion rate leaks, why, and what to fix first. Done well, it produces a prioritized list of issues with estimated impact, supported by quantitative and qualitative evidence.

This post is the 6-stage framework I use on every engagement, with the deliverables and tools for each stage. If you've never run a structured CRO audit before, the order below is the order to follow.

If you're new to CRO entirely, start with the beginner's guide. This post assumes you understand the basics and want the actual audit process.

Audit vs. just looking at the site

Plenty of teams 'audit' their site by walking through it once and listing things that look wrong. That's a critique, not an audit. The difference matters because critiques get ignored. Audits drive backlogs.

A real audit produces three things: quantitative evidence (numbers from analytics, heatmaps, session data), qualitative evidence (real user behavior observed, surveyed, or interviewed), and a prioritized action list with estimated impact for each item. Without all three, the output is too easy to dismiss.

The 6-stage framework

Stage 1: Define what conversion means (and verify the tracking)

Before you can audit conversion, you have to agree on what counts. Most teams say 'conversion rate' and mean different things. Form submission rate? Phone call rate? Purchase rate? All-conversions rate? Pin down the definition before you measure.

  • List every action a user can take that constitutes a business outcome.
  • Decide which is the primary conversion (the one you optimize against) and which are secondary.
  • Confirm the GA4 conversion tracking is firing correctly for each. Run them through DebugView. Don't trust the historical reports until you've seen events fire live.
  • Document the definition. If the team can't agree on what 'conversion' means, the audit will be argued instead of acted on.

If your tracking is broken, the audit will be wrong. Spend the first hour confirming the data before you spend the next ten hours analyzing it. Run the GA4 audit checklist as a prerequisite to this stage.
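
Beyond DebugView, you can sanity-check that each documented conversion is actually recording. Here's a minimal sketch using the GA4 Data API Python client (google-analytics-data); the property ID and event names are placeholders, and newer properties may need the keyEvents metric instead of conversions.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

# Placeholders: your property ID and the conversions documented in Stage 1.
PROPERTY_ID = "properties/123456789"
DEFINED_CONVERSIONS = {"generate_lead", "purchase", "phone_call_click"}

client = BetaAnalyticsDataClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
response = client.run_report(RunReportRequest(
    property=PROPERTY_ID,
    dimensions=[Dimension(name="eventName")],
    metrics=[Metric(name="conversions")],  # "keyEvents" on newer properties
    date_ranges=[DateRange(start_date="7daysAgo", end_date="today")],
))

recorded = {row.dimension_values[0].value: float(row.metric_values[0].value)
            for row in response.rows}
for event in sorted(DEFINED_CONVERSIONS):
    count = recorded.get(event, 0)
    note = "" if count else "  <-- not recording, investigate before Stage 2"
    print(f"{event}: {count:.0f} conversions in the last 7 days{note}")
```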

Stage 2: Pull the quantitative data

Three sources to pull. Each answers a different question.

  • GA4 analytics. Conversion rate by landing page, by traffic source, by device. The funnel exploration on your primary conversion path. Time-on-page and scroll depth for the highest-volume conversion pages.
  • Session recordings (Microsoft Clarity is free, Hotjar is the paid option). Watch 20 sessions on your highest-traffic conversion page. Note every observed friction point.
  • Heatmaps. Click maps and scroll maps for the same high-priority pages. Pay attention to dead zones (areas users never click) and false-positive clicks (areas users click expecting them to be interactive).

Output of this stage: a one-page summary of the quantitative state. Conversion rate by page and source, identified drop-off points in the funnel, key heatmap and session-recording observations. Five to ten bullet points, not a 30-slide deck.
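
If you want to script the rate calculation instead of reading it off GA4 reports, here's a minimal sketch with pandas. It assumes you've exported a GA4 report to CSV with landing_page, source, device, sessions, and conversions columns; those column names are placeholders for whatever your export actually uses.

```python
import pandas as pd

# Placeholder export: one row per landing page / source / device combination.
df = pd.read_csv("ga4_landing_pages.csv")

def rates(dimension: str) -> pd.DataFrame:
    """Conversion rate grouped by one dimension, highest-traffic first."""
    grouped = df.groupby(dimension)[["sessions", "conversions"]].sum()
    grouped["conv_rate_pct"] = (100 * grouped["conversions"] / grouped["sessions"]).round(2)
    return grouped.sort_values("sessions", ascending=False)

for dimension in ["landing_page", "source", "device"]:
    print(f"\n=== Conversion rate by {dimension} ===")
    print(rates(dimension).head(10))
```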

Stage 3: Walk the funnel manually

Before forming hypotheses, walk the funnel yourself. On a real phone, on a real desktop, in incognito. Try to do the thing you want users to do. Note every moment of friction or confusion, every broken validation, broken link, or missing trust signal.

  • Phone walkthrough first. Mobile is where most issues live and the environment most internal testing skips.
  • Try every form, every CTA, every checkout path you have.
  • Time yourself. Note where you got slowed, where you doubted, where you'd have left if you weren't auditing.
  • Have someone outside your team do the same walkthrough. Pay them with coffee. Their friction points will be different from yours.

Output of this stage: a friction log. Specific moments where the experience broke or felt wrong, with the page, the device, and what happened. Don't editorialize yet. Just record.

Stage 4: Identify and prioritize the leaks

Now combine the quantitative output (Stage 2) and the qualitative output (Stage 3). Leaks where both sources agree are the highest-confidence findings. Leaks where only one source surfaces them are still worth investigating.

Prioritize by estimated impact, not by how easy the fix is. The impact is a function of two things: how many users hit this issue, and how much it changes their behavior. A 10-second mobile load time on your highest-traffic landing page beats a confusing CTA on a low-volume secondary page, even if the CTA is easier to fix.

  • List every identified leak.
  • Estimate the affected user volume (often the page's monthly traffic).
  • Estimate the conversion impact if fixed (rough guess, refine over time as you learn your site's response curve).
  • Multiply the two for a rough impact score and sort the list by score (a minimal version of this scoring step is sketched below).
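
A spreadsheet handles this fine; the sketch below is just the same scoring step in Python, with made-up leaks and numbers purely for illustration.

```python
# Each leak: affected monthly users and a rough estimate of the relative
# conversion lift if fixed. All numbers here are illustrative, not real data.
leaks = [
    {"leak": "10-second mobile load on top landing page", "monthly_users": 18000, "est_lift": 0.15},
    {"leak": "confusing CTA on secondary page", "monthly_users": 1200, "est_lift": 0.25},
    {"leak": "contact form asks for company size", "monthly_users": 6500, "est_lift": 0.10},
]

# Impact score = affected volume x estimated lift; sort descending.
for leak in leaks:
    leak["impact_score"] = leak["monthly_users"] * leak["est_lift"]

for leak in sorted(leaks, key=lambda l: l["impact_score"], reverse=True):
    print(f'{leak["impact_score"]:7.0f}  {leak["leak"]}')
```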

Be honest about which leaks you can actually fix. A great prioritized list with three items the team can ship beats a beautiful prioritized list with thirty items that never get touched. Match the audit ambition to the team capacity.

Stage 5: Form hypotheses and test plan

For each prioritized leak, write a hypothesis: 'If we [change X], then [Y will happen], because [Z].' The because-clause is the part most teams skip and the part that makes the difference between a test program that compounds and one that just produces a stack of test results.

  • Make the change specific. 'Shorten the form' is not a hypothesis. 'Cut the company-size and job-title fields from the contact form, leaving name, email, and message' is a hypothesis.
  • Make the prediction specific. 'Conversion rate will improve' is too vague. 'Form submission rate will improve from 1.4% to 2% or higher within two weeks' is testable.
  • Make the reasoning explicit. 'Because users abandon the form when they hit the company-size field (visible in session recordings).' If the test wins, you've validated the reasoning. If it loses, you've ruled out a hypothesis and learned something.
  • Decide on the test method: A/B test if you have the traffic, before/after if you don't.

Output: a test plan. Three to five hypotheses, prioritized, with method and timeline for each.
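
The "if you have the traffic" call can be made concrete with a quick sample-size estimate. Here's a sketch using statsmodels, reusing the illustrative 1.4% to 2% form-rate example from above; swap in your own baseline and target.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline, target = 0.014, 0.020  # current and hoped-for form submission rate

# Visitors needed per variant for a two-sided test at alpha = 0.05, 80% power.
effect = proportion_effectsize(baseline, target)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} visitors per variant")
# If the test pages can't reach that within a few weeks, plan a
# structured before/after measurement instead of an A/B test.
```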

Stage 6: Document and iterate

After each test, document the result and what you learned. Wins go into the playbook. Losses go into the playbook too. The point isn't to ship every change, it's to build a body of validated knowledge about how your specific audience behaves.

  • Result (won, lost, or inconclusive).
  • Magnitude of the change (relative percent, absolute lift).
  • Whether the result held when re-tested or measured over a longer window.
  • What the result implies for related hypotheses.

Then go back to Stage 4 and re-prioritize. CRO is a loop, not a project. The teams that win at it treat the audit as the start of a continuous program, not a one-time deliverable.
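
The playbook doesn't need to be fancy. A row per test in a sheet works; if you prefer code, here's one possible shape for the record (the field names are just one way to lay it out, not a prescribed format).

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    hypothesis: str         # the if/then/because statement from Stage 5
    result: str             # "won", "lost", or "inconclusive"
    relative_change: float  # e.g. 0.18 for an 18% relative lift
    absolute_lift: float    # e.g. 0.006 conversion-rate points
    held_on_retest: bool    # did it hold over a longer window or a re-test?
    implications: str       # what the result says about related hypotheses

playbook: list[TestRecord] = []  # append one entry per completed test
```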

Common mistakes

  1. Skipping Stage 1 and starting on Stage 2. Without an agreed conversion definition, the audit produces numbers nobody trusts.
  2. Pulling the data without the manual walkthrough. Quantitative data shows what's happening, qualitative data shows why. Skip Stage 3 and you'll fix the wrong thing.
  3. Prioritizing by ease of fix instead of impact. Easy wins are good but shouldn't be the program. Always tackle at least one high-impact, harder-to-fix item per sprint.
  4. Treating one A/B test result as proof. Tests have noise. Validate big calls by re-testing or measuring over longer windows.
  5. Building a beautiful audit deck nobody acts on. The output of a CRO audit is a backlog the team will ship, not a presentation. If the team's not shipping, the audit failed regardless of how good the analysis was.

The point of a CRO audit isn't to be right about the site. It's to give the team a prioritized backlog they can act on this quarter, with enough evidence behind each item that they trust shipping the change. If the audit doesn't produce that, the audit didn't work.

When to do this yourself, when to hire

You can run this audit yourself if you have admin access to GA4, you're comfortable in session-recording tools, and you can dedicate two to three days of focused work. Skipping any of those makes the output thin.

Hire it out if you've never run a structured audit before, if your conversion stakes are high (revenue per percentage point of conversion is large), or if you need outside perspective because the team is too close to the site to see the friction. The cost of a structured audit is generally recouped in the first or second test win that comes out of it.

Where this connects to analytics

A CRO audit is built on top of analytics. If your GA4 setup is broken, the audit will be wrong. If your conversion tracking is incomplete, you'll prioritize the wrong leaks. Run those audits first. Then this one.

Once analytics is clean, the CRO audit is high-leverage and repeatable. Most teams should run a full audit annually and a lighter version quarterly, with continuous testing in between.

Getting started

If you want to run your first structured CRO audit this month, here's the order.

  1. Run the GA4 audit checklist first. Don't audit conversion on broken tracking.
  2. Stage 1: Document your conversion definition. Get team alignment.
  3. Stage 2: Pull conversion rate by page, by source, by device. Watch 20 session recordings on your top conversion page.
  4. Stage 3: Walk the funnel on your phone. Have someone outside the team do the same.
  5. Stage 4: Identify the top 5 to 10 leaks. Prioritize by estimated impact.
  6. Stage 5: Write three to five hypotheses. Decide on test method for each.
  7. Stage 6: Run the first test. Document the result. Re-prioritize. Repeat.
  8. If you want a structured audit done, with the deliverables formatted for the rest of your team to act on, reach out. CRO audits are part of every full-stack marketing engagement I run.

A real CRO audit takes two to three days of focused work and produces months of test agenda. The outdoor and DTC teams that treat it as a quarterly habit pull dramatically ahead of the ones that treat conversion as a fixed property of their site. Run the audit. Build the loop. Ship the changes.

Frequently asked questions

How long does a full CRO audit take?

Two to three days of focused work for a small or mid-sized site. Five to seven days for a larger site or one with multiple distinct conversion paths (e-commerce + lead-gen, multiple location funnels, etc.). The time is mostly in Stage 2 (data analysis) and Stage 3 (manual walkthrough). Stages 1 and 4 to 6 are quicker if you have the inputs.

Do I need a paid testing tool?

Not for the audit itself. For the testing program that follows, it depends on traffic. Sites with high conversion volumes get value from VWO, Optimizely, or Convert. Sites without high traffic do better with structured before/after testing managed in a spreadsheet. Don't pay for testing software you can't generate enough conversions to use.

How often should I run a full audit?

Annually for a full audit. Quarterly for a lighter version that focuses on changed pages, new traffic sources, and recent test results. Continuous testing happens in between. Don't run a full audit every quarter, the team gets diminishing returns and audit fatigue.

What if my traffic is too low for A/B testing?

Use structured before/after testing instead. Pick the change. Establish the baseline (4 weeks of data minimum). Make the change. Measure for at least 4 weeks afterward. Compare. The conclusions are weaker than a proper A/B test, but they're still informative and they're the right tool for sites that don't generate enough conversions for statistical significance. The framework above works the same way regardless of test method.
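
For the comparison step, a two-proportion z-test gives a rough read on whether the before/after difference is bigger than noise. Here's a sketch with statsmodels; the counts are placeholders, and the result is confounded by anything else that changed in the window (seasonality, traffic mix), so treat it as directional.

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder numbers: 4 weeks before the change vs. 4 weeks after.
conversions = [140, 195]    # before, after
sessions = [10000, 10400]

stat, p_value = proportions_ztest(count=conversions, nobs=sessions)
rate_before, rate_after = (c / n for c, n in zip(conversions, sessions))
print(f"before: {rate_before:.2%}  after: {rate_after:.2%}  p-value: {p_value:.3f}")
# Directional evidence only: before/after comparisons can't separate
# your change from seasonality or shifts in traffic mix.
```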

Written by Patrick Scott, marketing consultant at Improve It Marketing. I run technical SEO, AEO, paid search, analytics, and CRO for small and mid-sized businesses, with a concentration of outdoor and DTC brands. More on how I work and who I work with on the About page.

Want to talk about this stuff?

No pitch, no pressure. Just a conversation about what's working, what isn't, and where to go from here.