
We mapped every point where SaaS users silently quit. Then we built a system that covers all of them.

Most retention advice starts at the wrong place.
Fix your onboarding. Improve your cancellation flow. Send a win-back email. All of it is reactive. All of it assumes the user is already halfway out the door before you do anything.
It took us months to realize that by the time someone clicks cancel, the decision is usually already made. The real moment happened two weeks earlier, when they stopped using that one feature. Or three weeks earlier, when they logged in, hit a wall, and quietly decided it was not worth figuring out.
The cancel click is just the paperwork.
So we stopped optimizing the exit and started mapping the entire journey. Every point where a user goes quiet. Every moment where engagement drops before anyone notices. Every pattern that shows up in users who churn versus users who stay.
Here is everything we found and everything we built around it.

The first silent quit point — early product friction
Most users who churn never complain. They hit something confusing on day three, assume it is their fault or that the product just does not work for them, and mentally check out. They keep the subscription for another few weeks out of inertia and then cancel quietly.
No ticket. No feedback. Just gone.
This is the hardest one to catch because there is no signal. The user is still technically active. They are just not getting value anymore and you have no idea.
The fix we built is Drift detection. Flidget watches how users engage with the product and scores each one automatically. Healthy, Risky, or Drifting. The score is based on real behavior. Last active date, sessions, whether they reached the key features that actually correlate with staying.
A user who never touches your core feature within the first two weeks is statistically much more likely to churn. You can see that in the dashboard before they ever get close to the cancel page.
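To make that concrete, here is a minimal sketch of the kind of behavior-based scoring involved. Every field name and threshold below is an illustrative assumption, not Flidget's actual implementation:
```typescript
// A minimal sketch of behavior-based health scoring. All field names and
// thresholds here are illustrative assumptions, not Flidget's real API.

type HealthScore = "Healthy" | "Risky" | "Drifting";

interface UserActivity {
  signedUpAt: Date;
  lastActiveAt: Date;
  sessionsLast30Days: number;
  reachedCoreFeature: boolean; // hit the feature that correlates with staying?
}

const DAY_MS = 24 * 60 * 60 * 1000;

function scoreUser(user: UserActivity, now: Date = new Date()): HealthScore {
  const daysSinceActive = (now.getTime() - user.lastActiveAt.getTime()) / DAY_MS;
  const daysSinceSignup = (now.getTime() - user.signedUpAt.getTime()) / DAY_MS;

  // Never reached the core feature in the first two weeks:
  // statistically much more likely to churn.
  if (daysSinceSignup > 14 && !user.reachedCoreFeature) return "Drifting";

  // Gone quiet entirely.
  if (daysSinceActive > 14) return "Drifting";

  // Still around, but engagement is thinning out.
  if (daysSinceActive > 5 || user.sessionsLast30Days < 4) return "Risky";

  return "Healthy";
}
```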

The second silent quit point — the slow fade
This one is more common than founders think.
User starts well. Uses the product regularly. Then life happens. A competitor catches their eye. A budget review comes up. Engagement starts dropping. Logins go from daily to weekly to every ten days.
They have not cancelled yet. They are just fading.
By the time they actually hit cancel you have already lost them mentally. The decision is made. The conversation on the cancel page is almost academic at that point.
Drift catches this too. When engagement drops below a threshold the user flips from Healthy to Risky. You see it in the dashboard with a plain English reason. Not raw event logs. Not a confusing graph. Just a line that says last active five days ago, usage dropping. Now you can reach out while there is still something to save.
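For illustration, you can think of that flip as producing a one-line alert. The shape and thresholds below are assumptions made for this sketch, not the product internals:
```typescript
// Illustrative only: the score flip carries one readable sentence instead of
// raw event logs. The DriftAlert shape is an assumption for this sketch.

interface DriftAlert {
  userId: string;
  score: "Risky" | "Drifting";
  reason: string;
}

function explainFade(
  userId: string,
  daysSinceActive: number,
  usageDropping: boolean
): DriftAlert | null {
  if (daysSinceActive <= 3 && !usageDropping) return null; // still Healthy

  const parts: string[] = [];
  if (daysSinceActive > 3) parts.push(`last active ${daysSinceActive} days ago`);
  if (usageDropping) parts.push("usage dropping");

  return {
    userId,
    score: daysSinceActive > 14 ? "Drifting" : "Risky",
    reason: parts.join(", "), // e.g. "last active 5 days ago, usage dropping"
  };
}
```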

The third silent quit point — the cancel click itself
By this stage most founders send a survey. Three days later. When the user has already moved on, already started their new tool, already mentally filed your product under things that did not work out.
The feedback you get back is vague. Polite. Basically useless.
We built Retention Copilot for this moment instead. A small chat that opens the instant someone clicks cancel. On their page. Their domain. Not a redirect. Not a popup from a random URL. A real conversation while the frustration is still fresh and specific and honest.
The timing changes everything about the quality of the answer. Someone mid-cancel will tell you exactly what went wrong. That same person three days later will say something like "it just was not the right fit," which tells you absolutely nothing.
Voice or typing, both work. Some people talk. Some type. The answer lands in the dashboard tagged automatically. Pricing. Competitor. Missing feature. Bug. Bad fit. You can filter and act on it immediately.
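As a toy illustration of that tagging step, here is a deliberately naive keyword version. tagExitReason is a hypothetical helper standing in for whatever the real tagging does; the point is the output shape you filter on:
```typescript
// A toy version of auto-tagging an exit conversation. Stands in for the real
// tagging; it only shows the shape of the output the dashboard filters on.

type ExitTag = "Pricing" | "Competitor" | "Missing feature" | "Bug" | "Bad fit";

function tagExitReason(transcript: string): ExitTag {
  const text = transcript.toLowerCase();
  if (/(price|cost|expensive|budget)/.test(text)) return "Pricing";
  if (/(switched|instead|moved to|competitor)/.test(text)) return "Competitor";
  if (/(missing|wish it had|no way to)/.test(text)) return "Missing feature";
  if (/(bug|broken|error|crash)/.test(text)) return "Bug";
  return "Bad fit"; // default bucket when nothing specific surfaces
}

// "Too expensive once the team grew" -> "Pricing"
console.log(tagExitReason("Too expensive once the team grew"));
```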

The fourth point — knowing who is worth saving
Not every churned user should be chased. Trying to retain everyone is a trap. Bad fit users who stay become support problems, bad reviews, and churn anyway six months later with more damage done.
The win-back queue solves this. It flags the specific conversations worth a personal follow-up. The ones who left for a reason you just fixed. The ones who loved the product but hit one specific wall. The ones where a two-line email could genuinely change the outcome.
Everyone else you let go gracefully and use their exit data to improve the product for the next hundred users.
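Here is a rough sketch of that triage. Every field is a hypothetical stand-in for what the dashboard would already know about each exit; the point is the filter, not the schema:
```typescript
// Sketch of the win-back triage. Every field here is a stand-in assumption;
// the point is the filter, not the schema.

type ExitTag = "Pricing" | "Competitor" | "Missing feature" | "Bug" | "Bad fit";

interface ChurnedUser {
  email: string;
  tag: ExitTag;
  wasHighlyEngaged: boolean; // loved the product before hitting one wall
  reasonSinceFixed: boolean; // they left for something you have since fixed
}

function winBackQueue(exits: ChurnedUser[]): ChurnedUser[] {
  return exits.filter(
    (u) =>
      u.tag !== "Bad fit" && // let bad-fit users go gracefully
      (u.reasonSinceFixed || (u.wasHighlyEngaged && u.tag === "Missing feature"))
  );
}
```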

What the full picture looks like
Drift catches the fade before it becomes a decision. Retention Copilot captures the honest reason at the exact moment of decision. The dashboard connects both so you always know who to focus on and why.
Two modules. One dashboard. Zero extra tools.
The insight that changed how we think about all of this is simple. Churn is not one moment. It is a series of small quiet moments that compound over days and weeks until the cancel click makes it official.
Cover all those moments and your retention looks completely different.

We are at flidget.com. Free to start. Retention Copilot takes two minutes to set up. Drift detection takes one event.
The first week usually surprises you.

Where in the user journey do you see the most silent drop off in your product?

  1.

    This is a great breakdown of how churn actually happens in reality. The idea that the cancel click is just paperwork really stands out. Most teams focus too late in the journey. Mapping silent drop-offs and acting earlier makes a lot more sense.

    1.

      Exactly right. Most teams optimize the moment they can see on a dashboard. The real signal is always earlier and quieter than that. Glad the paperwork framing landed. It is the most honest way to describe what a cancel click actually is.

  2.

    Reactive retention is the cost of lagging indicators. The silent-quit signal shows up 2-3 weeks before the actual cancellation. Session depth collapses. Feature breadth narrows. Response time to in-app prompts doubles. All of it visible if you instrument for it before churn fires. Which leading signal hit first in your data?

    1.

      Session depth collapsing is the one we see hit first most consistently. Feature breadth narrows after but the depth drop is usually the earliest warning. User is still showing up but doing less each time they do.
      The response time to in-app prompts is an interesting one we have not weighted heavily enough yet. That might be worth looking at more closely in the data.
      What instrumentation are you using to catch the depth signal before it compounds?

  3.

    The line about the cancel click being just the paperwork captures something most retention playbooks miss. By the time a user is on the cancel page the company has already lost, and all the cancellation-flow optimisation you can do is effectively arguing with someone who has mentally moved on.

    The part I would pressure-test is the "Drifting" label. In practice I have seen users' session counts bounce around month to month for totally benign reasons (holiday, travel, switching accounts for a security review), and mis-classifying those people risks either over-nurturing them or, worse, triggering a "we noticed you haven't been active" email to someone who never actually left. What is the false-positive rate looking like in early data?

    1.

      Completely fair challenge and honestly the most important technical question anyone has asked about Drift so far.
      You are right that raw inactivity is a terrible signal on its own. Someone on holiday looks identical to someone silently churning if you are only watching session frequency. That is exactly why we do not classify on inactivity alone.
      The Drifting label requires a combination of signals. Inactivity plus never reaching a key feature plus dropping below a usage baseline that we establish from the user's own first few weeks. Someone who was logging in daily and drops to weekly looks different from someone who never built a habit in the first place.
      False positive rate is something we are actively measuring right now. Early data shows the biggest source of false positives is exactly what you described, short breaks from users who were otherwise healthy. The fix we are working on is a minimum engagement baseline before Drift scoring even kicks in. If someone has not yet established a usage pattern we hold the score rather than classifying too early.
      The goal is not to flag everyone who goes quiet. It is to flag the ones where the pattern shift is meaningful relative to their own baseline. Still early but that framing has reduced noise significantly in testing.
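      Roughly, the shape of the check looks like this. A simplified sketch with illustrative thresholds and made-up field names, not the exact implementation:
      ```typescript
      // Simplified sketch: all three signals together, measured against the
      // user's own baseline. Thresholds are illustrative.

      interface Baseline {
        established: boolean;    // enough early usage to compare against?
        sessionsPerWeek: number; // their own normal, not a global average
      }

      function isDrifting(
        baseline: Baseline,
        recentSessionsPerWeek: number,
        daysSinceActive: number,
        reachedKeyFeature: boolean
      ): boolean {
        // Hold the score until the user has a pattern of their own.
        if (!baseline.established) return false;

        const inactive = daysSinceActive > 10;
        const belowOwnBaseline =
          recentSessionsPerWeek < baseline.sessionsPerWeek * 0.4;

        // Never inactivity alone.
        return inactive && belowOwnBaseline && !reachedKeyFeature;
      }
      ```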

  4.

    This is sharp — especially the framing that churn isn’t a moment but a series of quiet signals.

    The “cancel click is just paperwork” line hits hard — most people optimize way too late in the journey.

    Curious — have you seen any pattern in which signals matter most early (first 7–14 days)? Or does it vary a lot by product?

    1.

      The first 7 to 14 days almost always matter more than anything that comes after. The pattern we see consistently across products is that users who never reach the core feature in that window are significantly more likely to churn regardless of what happens next.
      It is not about logins. Someone can log in every day and still be drifting if they are not touching the feature that actually delivers value.
      The two signals that show up most reliably in early churn are never completing a key action that correlates with retention and going quiet after day three without any support interaction or product engagement.
      After those first two weeks the signals shift. Early churn is almost always a friction or onboarding problem. Later churn tends to be value perception or competition.
      Different problems, different fixes, different urgency. The timeline tells you which one you are dealing with.

      1.

        That makes a lot of sense — especially the “day 3 drop-off” point.

        Feels like most products don’t actually know what their true “key action” is early on — they track activity, not progression.

        Have you seen cases where just making that core action more obvious (UI/positioning/messaging) moved retention significantly?
        Or does it usually require deeper product changes?

        1.

          Yes, and more often than founders expect, it is the obvious fix, not the deep one.
          The most common pattern is that the key action exists and works fine but users just do not know they need to do it. Onboarding showed them ten features and buried the one that actually matters. Moving that one action to the front of the flow sometimes moves retention without touching the product at all.
          Deeper changes matter when the action itself is too complex or completing it does not immediately feel valuable. That is a product problem, not a positioning one.
          The diagnostic question is simple. Are users not finding the action, or finding it and feeling nothing after? The first is a messaging fix. The second is a product fix. Completely different solutions.

          1.

            That diagnostic is clean.

            One thing I’ve noticed building on top of that — even when the core action is clear in the product, a lot of drop-off still comes from how it’s framed before users even get there.

            If the mental model isn’t obvious (“what this actually does for me”), users don’t even reach the point where onboarding can help.

            So sometimes it’s not just:
            – surfacing the right action
            but also
            – making the outcome of that action instantly legible from the outside

            Have you seen cases where just tightening that top-level framing (landing page / naming / first impression) changed how many users even got to that key action?

            1.

              Yeah, that’s a sharp observation and it happens more often than people think.
              If users don’t “get it” from the outside, they never even enter the flow where the key action matters.

              We’ve seen cases where just fixing positioning, naming, or the first screen doubled activation, with no product changes at all.
              Clarity before onboarding is often the real bottleneck.

  5.

    Great. Could you please give me a demo of how it works, so that we can integrate this into our product?

    1.

      Appreciate your interest. For a demo, we've added a demo module to our product. Just drop your email and a suitable time, and our team will reach out to you.

      1.

        Thanks, will take a look.
