Illustrative case study

They had three growth problems. Fixing one changed everything.

Traffic was inconsistent. Conversion was below average. Onboarding was losing most signups. The founder spent weeks trying to improve all three. Then they looked at the data and realized only one of them was the real bottleneck.

Activation rate: 22% → 41%

Active users: 14 → 26 per week

Day-7 retention: +45%

This is an illustrative case study. It is based on a pattern that shows up regularly in early-stage SaaS products: multiple visible problems competing for attention, and a founder trying to fix them all at once. The numbers and context are modeled from typical scenarios, not drawn from a specific named customer. If you are juggling several growth problems right now and unsure where to start, the framework here applies directly.


Quick summary

A founder was running a small project management tool for freelancers. The product had three simultaneous problems: traffic was inconsistent, pricing page conversion was mediocre, and most signups never activated.

The founder spent weeks splitting effort across all three. Nothing moved meaningfully.

When they sat down and compared the potential impact of fixing each problem independently, one stood out. Improving activation from 22% to 40% would nearly double the number of real, active users per week without needing a single additional visitor or signup. The other two improvements would each add only a handful of signups that would mostly churn anyway due to the same onboarding problem.

The founder stopped working on traffic and conversion. They spent three weeks focused entirely on the onboarding experience. Activation rose to 41%. Active users per week nearly doubled. Day-7 retention improved by 45%.

The lesson was not that traffic and conversion do not matter. It was that fixing them first would have been wasted effort because users were already leaking out of the bottom of the funnel.


The situation: three problems at once

The product had been live for about ten months. After an initial push of launches and community posts, the founder had settled into a routine of gradual improvement. Traffic was growing slowly from organic search and occasional social posts, but it was not consistent. Some weeks brought 1,000 visitors. Others brought 600. The average was around 800.

The signup conversion rate was sitting at about 8%. Not terrible, but the founder had seen competitors claim 12% to 15% and felt the pricing page could do better. The copy was functional but not sharp. The layout was standard. Nothing about it was obviously broken, but nothing about it was especially compelling either.

And then there was the third problem, which the founder had noticed but not fully investigated: most signups were not activating. Of the roughly 64 people who signed up each week, only about 14 completed the first meaningful action inside the product. That is a 22% activation rate.

Three problems. Three directions to run. Not enough hours in the week to run in all of them.


Why trying to fix everything failed

For about six weeks, the founder split their effort roughly equally across all three areas.

They wrote two blog posts to improve organic traffic. They redesigned the pricing comparison table on the landing page. They tweaked the onboarding email sequence. They adjusted the welcome screen copy. They experimented with different community posting strategies.

At the end of six weeks, the numbers were essentially unchanged. Traffic was still averaging around 800. Conversion was still around 8%. Activation was still around 22%.

The effort had been real. The problem was that none of the individual changes had been deep enough to move the number. A blog post brought 40 extra visitors one week. The pricing table tweak might have converted one or two additional signups. The email adjustment did not measurably change activation. Each change was too shallow because the effort behind it was divided three ways.

This is the trap of parallel improvement. When you split limited time across multiple problems, each one gets a surface-level fix that does not reach the root cause. You stay busy. The metrics stay flat. After a few weeks, it starts to feel like nothing you do works.

The problem is not that nothing works. It is that nothing was given enough focus to work.


What the data revealed

The turning point came when the founder stopped trying to fix things and started trying to understand which thing mattered most.

They pulled the numbers for a typical week and laid them out as a simple funnel:

[Figure: weekly funnel from 800 visitors to 64 signups (8% conversion) to 14 activated users (22% activation), highlighting the 78% drop between signup and activation as the main bottleneck]

800 visitors. 64 signups. 14 activated users.

The two biggest drop-off points were immediately visible. The first was visitors to signups: 92% of visitors left without signing up. The second was signups to activation: 78% of signups left without activating.

Both drops looked large, but their implications for the business were very different.

The visitor-to-signup drop was large in percentage terms, but 8% conversion is within a normal range for a bootstrapped SaaS tool with mixed traffic sources. Improving it by 25% (from 8% to 10%) would add about 16 more signups per week. Of those 16 additional signups, at the current 22% activation rate, only about 3 or 4 would actually activate. The rest would sign up and leave, just like the current ones.

The signup-to-activation drop was where the real leverage was. Improving activation from 22% to 40% would take the same 64 weekly signups and turn them into 26 active users instead of 14. That is 12 more active users per week, almost doubling the number, with zero additional traffic or conversion work required.

The math was not subtle. But the founder had not done it until they forced themselves to compare the three options side by side.


Comparing the three possible improvements

The founder wrote down three scenarios and estimated the downstream impact of each one.

[Figure: comparison of the three improvements: increasing traffic 30% adds about 4 active users per week, improving conversion 25% adds about 4, and improving activation from 22% to 40% adds 12, with the onboarding row highlighted]

Scenario 1: Improve traffic by 30%. 800 visitors becomes 1,040. At 8% conversion, that is about 83 signups. At 22% activation, that is about 18 activated users. Net gain: about 4 more active users per week.

Scenario 2: Improve pricing page conversion by 25%. 800 visitors at 10% conversion gives about 80 signups. At 22% activation, that is about 18 activated users. Net gain: about 4 more active users per week.

Scenario 3: Improve activation from 22% to 40%. 800 visitors, 64 signups, 26 activated users. Net gain: 12 more active users per week.
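The same comparison fits in a few lines of code. Here is a minimal Python sketch using the funnel figures above; the scenario labels and rounding are just for illustration:

    # Baseline funnel for a typical week, from the numbers above.
    visitors, conversion, activation = 800, 0.08, 0.22
    baseline = visitors * conversion * activation  # about 14 active users

    # Each scenario changes exactly one stage of the funnel.
    scenarios = {
        "1: traffic +30%":          (visitors * 1.30, conversion, activation),
        "2: conversion 8% to 10%":  (visitors, 0.10, activation),
        "3: activation 22% to 40%": (visitors, conversion, 0.40),
    }

    for name, (v, c, a) in scenarios.items():
        active = v * c * a
        print(f"{name}: ~{active:.0f} active/week, net gain ~{active - baseline:.0f}")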

The first two improvements were both reasonable goals. Both would have required real effort. And both produced roughly the same modest result: about 4 extra active users per week. The reason was the same in both cases. More signups flowing into a broken activation funnel meant more people who signed up and left. The leaky bucket did not care how much water you poured in.

The third option produced three times the impact on the number that actually mattered: users who experienced the product's value.

This is the core insight behind deciding what to fix first: when you have multiple problems, work on the one with the highest leverage, not the one that feels most urgent or most visible. And leverage is almost always highest at the point in the funnel where the largest gap sits between the number going in and the number coming out.


What they chose to fix

The founder committed to three weeks focused exclusively on onboarding. No blog posts. No landing page tweaks. No distribution experiments. Just the experience between signup and activation.

They started by walking through their own onboarding flow from scratch.

The existing flow had five steps: account creation, a welcome screen, a preferences form, a feature walkthrough, and a blank workspace. The activation event was creating the first project with at least one task.

The problems were familiar to anyone who has read about time to first value. The user had to complete four steps of overhead before they could do the one thing that made the product useful. By the time they reached the blank workspace, many had already exhausted their patience.

Change 1: Eliminated the preferences form and feature walkthrough from the initial flow.

Both were moved to an optional settings area. The preferences form had been collecting data the product did not use in the first session. The feature walkthrough was showing users screenshots of things they could discover faster by just doing them. Removing both steps cut the path from account creation to workspace from four steps to one.

Change 2: Pre-populated the first workspace with sample content.

Instead of a blank screen, new users landed in a workspace with three example tasks, a sample project, and a brief note explaining what each element was. The product looked alive and functional on first contact. The user could immediately see what a working workspace looked like before investing time in creating their own.

Change 3: Added one clear prompt on the first screen.

Below the sample project, a single line of text said: "Ready to try it yourself? Create your first project below." Below that was a text field and a button. One action, one outcome, no competing options.

Change 4: Sent a focused follow-up email two hours after signup.

Users who signed up but did not create a project within two hours received a short email: "Your workspace is set up. Here is how to create your first project in 30 seconds." The email linked directly to the project creation screen, bypassing everything else. This caught about 15% of users who had dropped off during the first session but were still interested enough to come back when prompted.

The entire set of changes took about two and a half weeks to build and ship. The remaining half week was spent watching the numbers.


Results after focusing

The founder measured the impact three weeks after the new onboarding went live.

[Figure: three result cards showing activation at 41% (up from 22%), active users roughly doubled from 14 to 26 per week, and a 45% lift in day-7 retention among activated users]

Activation rate went from 22% to 41%. Nearly twice as many signups were completing the first meaningful action. The product was reaching users before their attention moved on.

Active users per week nearly doubled. From 14 to about 26 active users per week, with the same traffic and the same conversion rate. No additional acquisition work was needed.

Day-7 retention improved by about 45%. Users who activated under the new flow came back at a significantly higher rate than users who had activated under the old flow. The likely reason: a faster path to value produced a stronger first impression. Users who understood the product in four minutes returned more reliably than users who had fought through twelve minutes of setup to get there.

The total traffic and signup numbers were unchanged. The founder had not touched the landing page, written any new blog posts, or done any distribution work during the three-week sprint. The only change was what happened after the signup button.


What other founders can learn from this

Not all problems are equal. Three problems can have three very different levels of impact on the metric that matters. The instinct is to work on whatever feels most urgent or most visible. The discipline is to estimate the impact of each fix and work on the one with the highest leverage. That estimate does not need to be precise. It just needs to be done.

Measure from the bottom of the funnel, not the top. The metric that matters for most early-stage products is active users, not visitors or signups. The five metrics that actually matter for small products covers this in detail, but the summary is: work backward from the number you care about most, find the step where the biggest gap exists, and focus there.

Fixing upstream problems is wasted effort when downstream is broken. More traffic flowing into a broken activation funnel produces more abandoned accounts, not more users. More signups flowing into a confusing onboarding experience produces more people who leave on day one. The fix that matters is the one closest to the leak, not the one closest to the top of the funnel.

Parallel improvement is a trap for small teams. A single founder or a small team cannot make three meaningful changes in the same week. Splitting effort means each change is shallow. One deep change to the highest-leverage point beats three surface-level changes spread across the funnel.

The comparison exercise takes 30 minutes. Write down three possible improvements. For each one, estimate the downstream impact on the metric you care about. The one with the highest impact is your priority. You do not need a spreadsheet. You need a back-of-napkin calculation and the willingness to ignore the other two for a few weeks.
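If you would rather type than scribble, the whole exercise is one multiplication per scenario. A throwaway Python sketch with this scenario's numbers; swap in your own funnel:

    def weekly_active(visitors, conversion, activation):
        # Active users per week: top of funnel times each step's rate.
        return visitors * conversion * activation

    current = weekly_active(800, 0.08, 0.22)
    candidates = {
        "more traffic":      weekly_active(1040, 0.08, 0.22),
        "better conversion": weekly_active(800, 0.10, 0.22),
        "better activation": weekly_active(800, 0.08, 0.40),
    }

    # The ranking, not the precision, is what you are after.
    best = max(candidates, key=candidates.get)
    print(f"Highest-leverage fix: {best}, ~{candidates[best] - current:.0f} extra active users/week")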

Focus feels risky but produces faster results. The founder spent three weeks on onboarding and touched nothing else. That felt uncomfortable. It meant watching traffic fluctuate without responding. It meant ignoring a pricing page that could probably be improved. But three weeks of deep focus produced a result that six weeks of scattered effort had not: a measurable, meaningful change in the number that matters.


FAQ

How do I know which problem to fix first when all my metrics are low? Work from the bottom of the funnel upward. Check activation first. If activation is below 25%, that is likely your bottleneck regardless of what traffic or conversion look like. Improving traffic or conversion while activation is broken just pushes more people into a leaky bucket. If activation is healthy (above 35%) but conversion is low, focus on conversion. If both are healthy but traffic is low, focus on distribution. The rule: starting from the bottom of the funnel, fix the first broken stage you find.
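As code, this answer reduces to a short bottom-up check. A sketch only: the activation threshold comes from this FAQ, while the 8% conversion floor is an assumption based on the range mentioned earlier in this case study:

    def next_priority(activation_rate, conversion_rate):
        # Walk the funnel from the bottom up and fix the first broken stage.
        if activation_rate < 0.25:
            return "activation"  # leaky bucket: onboarding comes first
        if conversion_rate < 0.08:  # assumed floor for mixed-traffic SaaS
            return "conversion"
        return "traffic"

    print(next_priority(0.22, 0.08))  # -> activation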

What if I do not have enough data to compare scenarios? You need less data than you think. Even with 200 to 300 weekly visitors, you can estimate the downstream effect of each improvement. If your conversion rate is 8%, you know how many signups you get. If your activation rate is 22%, you know how many active users you get. Run the multiplication for each scenario. The precision does not matter. The ranking usually does.

Should I completely ignore the other problems while I focus on one? You should stop making changes to them, but you should keep monitoring them. If traffic drops sharply or conversion falls off a cliff, respond to that. But do not proactively invest time in improving areas you have decided are not the priority. The point of focus is to go deep enough on one problem to actually move the number. Checking on the others weekly is fine. Splitting your work across them defeats the purpose.

How long should I focus on one area before checking results? At least two to three weeks after shipping the change. Most onboarding improvements need that long to accumulate enough data across new signups. If you check after three days, the sample is too small. Ship the change, wait two full weeks, then compare activation rate before and after. If the number moved, decide whether to continue or shift focus. If it did not, investigate whether the change reached the actual bottleneck.

What is a good activation rate to target? For most web-based SaaS products, 30% to 50% activation within the first 7 days is a healthy range. Below 20% almost always signals a structural onboarding problem. Above 50% means your path to first value is working well and you can likely shift focus to traffic or conversion. What is user activation covers how to define and benchmark this metric in more detail.


See this kind of comparison in your own product

The founder in this scenario spent six weeks working on the wrong things before running a simple comparison that took 30 minutes. The data was available the entire time. Nothing was prompting them to do the math.

Muro is built to surface this kind of prioritization signal automatically. When your funnel has a clear bottleneck, Muro tells you where it is and how it compares to other possible improvements. You do not need to build a spreadsheet or remember to check the right split.

If you are splitting effort across multiple problems right now and are not sure which one deserves your full attention, that is the question Muro helps you answer.

See what Muro finds in your product

Start your 30-day free trial. No credit card required.

$5/month after the trial. Cancel anytime.