Most distribution advice for solo founders is about channels: which one to pick, how to work it, how to scale it. After six weeks of trying the channels that fit my budget and personality, I realized the channel was not the problem.
Every channel I tried was rented.
Cold DMs were rented attention. The hour I spent writing them was the deposit. The moment I stopped, the attention stopped. Tweets rented reach until the algorithm shifted, which it always does. Quora answers rented inbound until the question slipped off page one. A halfway-decent Product Hunt launch rented a day of traffic that flattened by the next week. I was paying with effort instead of dollars, but the contract was the same: stop paying, traffic stops.
The IH community has been calling email lists "owned" channels for years. I had nodded at that framing without applying it at the feature level. Once I did, the question changed. Instead of "what channel should I work this week," it became "what feature can I build into the product that pays me back in six months without my hand on it?"
I rebuilt the growth side of Flowly (flowly.run, a task manager with timers and analytics for freelancers) around that question. In four days I shipped 10 features. None of them is revolutionary. Every one of them is mine to keep.
Here is the test I run before I build any growth feature now:
"Will this still produce in 30 days if I stop touching it?"
If yes, it is a loop. You own it. If no, it is a campaign. You are renting.
This test is unforgiving. Most things people call "distribution work" fail it. Cold outbound fails. Twitter posts fail. Even a beautifully run Product Hunt launch fails by week four. Almost nothing that costs you a calendar hour today is still producing value 30 days later.
The 10 features below pass it.
The 10 loops, ordered by where they appear in the customer's story below. The order I would build them in if starting over is in parentheses.
The goodbye that knows your name (build order: #2)
The email that meets you where you actually are (build order: #4)
The email I almost did not send (build order: #6)
The survey that does its own segmentation (build order: #3)
The referral I almost cut for being too early (build order: #7)
The roadmap that is actually free product research (build order: #8)
The 15 templates that pay rent at 3am (build order: #5)
The free tools that ask for nothing (build order: #9)
The card that does the bragging for you (build order: #10)
The boring email worth more than the clever ones (build order: #1, by a wide margin)
Total cost: roughly 60 hours across four days. The reason it took four days and not four months: I had no meetings.
If you only ship one of these this week, ship the last one on the list: the boring email (build order #1). It is the most boring item on this list and the closest thing to free money in this entire post.
When a trial ends, most apps show the same goodbye message to everyone. "You are about to lose analytics, calendar sync, and unlimited history." Generic. Forgettable.
I changed it. Now the trial-end modal shows the user their actual numbers from the trial: "You created 47 tasks. Tracked 23.5 hours across 4 projects. Hit a 9-day streak. That data goes read-only at midnight."
Same trial. Same product. Same price. Only the goodbye is different.
Why it works: loss aversion is documented at roughly 2:1 over gain framing (Kahneman, Tversky). Personalization research puts the CTR lift at 25 to 50% when you replace generic copy with the user's own data. The mechanic is older than software: a person fights harder to keep something they already built than to acquire something new of equal value.
Who it is actually for: the freelance designer who signed up two weeks ago, quietly built habits, and forgot how much she had already invested by the time the trial-end modal showed up. The modal reminds her.
What it taught me: I shipped the first version without a fallback. A user with very low trial activity saw "You created 0 tasks. Tracked 0 hours." I had built loss aversion that worked perfectly against me. The screen told her the trial was a waste. The fix is one line of code; the lesson is bigger: loss aversion needs something to lose. If your version of this feature can hit zero, build the fallback first.
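Stripped down, the guard is trivial. This is a sketch, not Flowly's actual code; the field names, thresholds, and copy are illustrative:

```typescript
interface TrialStats {
  tasksCreated: number;
  hoursTracked: number;
  projectCount: number;
  longestStreakDays: number;
}

// A near-empty trial has nothing to lose, so loss-aversion copy backfires.
// The fallback switches to gain framing: sell the future, not the empty past.
function trialEndCopy(stats: TrialStats): string {
  const hasSomethingToLose =
    stats.tasksCreated >= 5 || stats.hoursTracked >= 1;

  if (!hasSomethingToLose) {
    return "Your trial ends at midnight. Start one timer now and keep full access to your analytics and history.";
  }

  return (
    `You created ${stats.tasksCreated} tasks. ` +
    `Tracked ${stats.hoursTracked.toFixed(1)} hours across ${stats.projectCount} projects. ` +
    `Hit a ${stats.longestStreakDays}-day streak. That data goes read-only at midnight.`
  );
}
```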
The trial drip used to send the same Day 3 email to everyone. Same Day 7. Same Day 10. I rewrote it to branch on what the user actually did inside the product.
Created zero tasks in the first 48 hours? "Start with one quick task. Takes 30 seconds."
Created tasks but never started a timer? "One click to know where your hours actually go."
Timers running but no calendar connected? "Add 15 minutes. Link Calendar. See your full day."
Calendar already connected? An entirely different Day 7. There is no point re-pitching a feature she already uses.
Why it works (the data): triggered emails outperform batch sends by roughly 70.5% on open rate and 152% on click-through (MarketingProfs). Braze reports about 5x revenue per email for behavioral versus broadcast. Automated emails are about 2% of email volume and drive about 37% of email sales.
Why it actually works (the customer part): the user is told, in plain language, "I see what you did and did not do this week. Here is the next 30-second action specifically for that." Most onboarding emails address a user the company imagines. This one addresses the user who exists.
What it taught me: I wildly underestimated how much harder it is to write four good versions of a Day 3 email than to write one polished version. The code change was a switch statement. The copy is the actual work. I am already rewriting two of the four.
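For the curious, the branching really is that small. A sketch with illustrative names; the thresholds are the part you will end up tuning:

```typescript
type TrialActivity = {
  tasksCreated: number;
  timersStarted: number;
  calendarConnected: boolean;
};

type DripVariant =
  | "first-task-nudge"       // created zero tasks in the first 48 hours
  | "first-timer-nudge"      // has tasks, never started a timer
  | "calendar-connect-nudge" // timers running, no calendar linked
  | "power-user-day7";       // everything connected: an entirely different email

// Check the earliest missing step first; each user lands in exactly one branch.
function pickDripVariant(a: TrialActivity): DripVariant {
  if (a.tasksCreated === 0) return "first-task-nudge";
  if (a.timersStarted === 0) return "first-timer-nudge";
  if (!a.calendarConnected) return "calendar-connect-nudge";
  return "power-user-day7";
}
```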
When a paid subscriber cancels, three emails fire over the following weeks.
Day 7: "We miss you. Here is what shipped since you left."
Day 21: "Try Pro free for 30 days, no card."
Day 45: "What would have kept you?"
Why it works (the data): Klaviyo data puts win-back open rates at around 29% on average. Multi-email sequences reactivate around 10% of recipients; optimized sequences hit 14 to 18%. The median ROI on optimized win-back is documented at about 380%, compared to 150 to 200% for paid acquisition. One in ten cancellations coming back is meaningful at every scale.
What it taught me: I held this feature off for weeks. Writing the Day 21 email felt presumptuous, as if I was bothering someone who had already left and made it clear. I drafted it three times. I deleted it twice.
Then I read the open-rate benchmarks and realized the embarrassment was mine, not the customer's. Your churned users opted in to hearing from you once when they signed up. One polite "we miss you" email is not a violation; it is a courtesy. The customer who left because she switched jobs and stopped freelancing is not insulted by your email. She just deletes it. The customer who left because of a real reason (price, a missing feature, a workflow change) might come back. Either way, you find out.
I almost cost myself a 10% reactivation rate on every future cancellation because I was projecting embarrassment onto people I had never met. Do not do that.
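The mechanics of the sequence are a daily job. A sketch with hypothetical shapes (it assumes a user who resubscribes is removed from the churned set before the job runs):

```typescript
// Hypothetical shapes; swap in your ORM models and email client.
interface ChurnedUser {
  id: string;
  email: string;
  canceledAt: Date;
  winbackStage: number; // 0..3: how many win-back emails already sent
}

const STAGES: { day: number; template: string }[] = [
  { day: 7, template: "winback-whats-new" },   // "Here is what shipped since you left"
  { day: 21, template: "winback-free-month" }, // "Try Pro free for 30 days, no card"
  { day: 45, template: "winback-exit-survey" } // "What would have kept you?"
];

// Runs once a day. Advancing winbackStage on every send keeps the job
// idempotent: a rerun never double-sends, and a user never gets more
// than one email per day or the same email twice.
async function runWinbackJob(
  churned: ChurnedUser[],
  send: (to: string, template: string) => Promise<void>,
  save: (user: ChurnedUser) => Promise<void>
): Promise<void> {
  const now = Date.now();
  for (const user of churned) {
    const next = STAGES[user.winbackStage];
    if (!next) continue; // sequence finished for this user
    const daysSinceCancel = (now - user.canceledAt.getTime()) / 86_400_000;
    if (daysSinceCancel >= next.day) {
      await send(user.email, next.template);
      user.winbackStage += 1;
      await save(user);
    }
  }
}
```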
At Day 30 of paid, every customer gets a one-question email: "0 to 10, how likely are you to recommend Flowly?" The email body is 11 score buttons. Each click goes to a different page depending on the score.
9 to 10 lands on a pre-filled G2 review page
7 to 8 lands on "What would make it a 10?"
0 to 6 lands on a direct message to me
The clever part is not the email. The clever part is that the user does the segmentation for me. I do not have to look at a score in a database and decide what to do; the user has already self-routed by the time the page loads.
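The routing is one function. A sketch; the URLs are illustrative, not Flowly's real paths:

```typescript
// Each of the 11 buttons in the email is a plain GET link carrying the
// score. The handler redirects by bucket; the user has self-segmented
// by the time the page loads.
function npsRedirectUrl(score: number): string {
  if (score >= 9) return "https://flowly.run/review/g2";  // promoter: pre-filled G2 review
  if (score >= 7) return "https://flowly.run/nps/almost"; // passive: "what would make it a 10?"
  return "https://flowly.run/nps/talk-to-me";             // detractor: direct line to me
}
```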
Why it works: G2 reviews compound forever and cost nothing to acquire from a customer who was already going to give you a 9 anyway. In productivity SaaS, an NPS above 40 correlates with strong word-of-mouth. Below zero means you have a churn timer running and do not know it.
Who it is actually for: the customer who has been paying for four months, is genuinely happy, and would never sit down at her laptop on a Saturday to write a public review on her own. The email gives her a 30-second path with one click of friction. She would not have done it otherwise. Now she does.
Every user gets a unique referral link. When their referee signs up and pays for the first time, the referrer gets +30 days of Pro added to their plan. The referee gets +14 days on top of the standard 14-day trial.
The detail that mattered: the reward fires on the referee's first paid charge, not on signup. Without that single condition, every trial signup via a referral link would reward the referrer regardless of whether the referee ever pays. That is the difference between a working incentive and a leaking one.
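The condition lives in the payment webhook handler. A sketch with hypothetical names; the only load-bearing line is the first-charge check:

```typescript
// Hypothetical shapes; wire into whatever your billing webhook delivers.
interface Referral {
  referrerId: string;
  refereeId: string;
  rewarded: boolean;
}

const REFERRER_REWARD_DAYS = 30;

// Called when a payment webhook reports a successful charge. Gated on it
// being the referee's FIRST paid charge: without that check, every trial
// signup via a referral link would pay out regardless of conversion.
async function maybeRewardReferrer(
  refereeId: string,
  isFirstPaidCharge: boolean,
  findReferral: (refereeId: string) => Promise<Referral | null>,
  grantProDays: (userId: string, days: number) => Promise<void>,
  save: (r: Referral) => Promise<void>
): Promise<void> {
  if (!isFirstPaidCharge) return;
  const referral = await findReferral(refereeId);
  if (!referral || referral.rewarded) return; // no link, or already paid out
  await grantProDays(referral.referrerId, REFERRER_REWARD_DAYS);
  referral.rewarded = true;
  await save(referral);
}
```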
Why it works (the data): referred customers have 16 to 25% higher LTV, churn 20% less, and convert at 3 to 5x the rate of paid leads (GrowSurf 2026 benchmarks). Adding referral as a channel drops blended CAC by 25 to 35%.
Why it actually works (the human part): referrals are not really about the reward. The reward is permission. When a freelancer tells another freelancer about a tool, the reward says "the company endorses this conversation and made it easy for you to do it." Most people just want to be helpful and slightly compensated. The reward is the slightly.
What it taught me: I almost cut this feature for being premature. The math says it does very little this month. The counter-argument I talked myself into is that the cost of installing the loop now is the same as installing it at 10x the user base, and the only loop you cannot measure is the one you did not build. The mechanic stays whether or not it produces this quarter. Future-me thanks present-me.
I shipped a public roadmap page with three columns: Planned, In Progress, Shipped. Logged-in users can upvote. I seeded it with 8 honest items spanning all three columns, including items that are months out.
Why it works as a distribution loop: the page is indexable. Every item title is a small SEO bet on a phrase a specific user is searching for ("Slack integration," "iOS app," "weekly invoice export"). The page itself is also a trust signal: companies that publish what they are building are perceived as more accountable and more permanent.
Why it actually works (the human part): people who upvote feel ownership. Ownership correlates with lower churn. They are invested in the version of the product that exists in six months, not just the one they bought today.
What surprised me: the upvotes are the most valuable product research I have ever paid nothing for. The feature I expected to win did not. The one that did is going to the top of my queue. I had not appreciated, before shipping this, that a public roadmap is a product-discovery channel disguised as a marketing surface.
What I am watching: noise from competitors voting strategically and from loud free users who never upgrade. I am planning to weight votes by account age and plan tier if it gets noisy. Worth thinking about before you ship.
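If I do ship the weighting, it will look something like this. The coefficients are guesses to tune against real noise, not a recommendation:

```typescript
interface Voter {
  accountAgeDays: number;
  plan: "free" | "pro";
}

// Weight a roadmap vote by how established and how invested the voter is.
function voteWeight(v: Voter): number {
  const ageFactor = Math.min(v.accountAgeDays / 90, 1); // ramps to full weight over 90 days
  const planFactor = v.plan === "pro" ? 1 : 0.25;       // paying users count 4x
  return Math.max(0.05, ageFactor * planFactor);        // never fully zero out a vote
}
```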
I shipped 15 real templates as public, indexable pages at flowly.run/templates. Freelance time tracker. Weekly client review. Onboarding flow. Each is a real workflow, not "Hello world" filler. Each detail page is SEO-targeted with structured data so Google can render it as a rich snippet. A logged-out visitor who clicks "Use this template" gets signed up and the template applied to their workspace in one motion.
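A simplified sketch of the structured-data block on a template detail page. The schema.org type here (HowTo) is illustrative; pick whichever type actually fits your template content:

```typescript
// JSON-LD payload for one template page; field values are examples.
const templateJsonLd = {
  "@context": "https://schema.org",
  "@type": "HowTo",
  name: "Weekly client review template",
  description:
    "A ready-made weekly review workflow for freelancers: hours summary, blockers, next steps.",
  step: [
    { "@type": "HowToStep", name: "Summarize hours tracked per project" },
    { "@type": "HowToStep", name: "Flag blockers and scope changes" },
    { "@type": "HowToStep", name: "Agree next week's priorities" }
  ]
};

// Rendered into the page head as:
// <script type="application/ld+json">{JSON.stringify(templateJsonLd)}</script>
```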
Why it works (the data): Notion's template gallery ranks for over 60,000 organic keywords in the US and pulls roughly 287,000 monthly organic visitors. #notiontemplate has over 180M views on TikTok. The mechanic does not require Notion's scale to start; it requires presence so that compound interest has a surface to land on.
The mechanic in one sentence: each template page is a small SEO bet that sits in a search result at 3am, when I am asleep, and signs up a small percentage of clickers.
Who it is for: the freelance designer who Googles "weekly client review template" on Sunday night because she has a meeting Monday and does not want to start from scratch. She does not know me. She does not need to. The template solves her actual problem in 30 seconds, and the signup happens as a side effect of solving it.
What it taught me: I assumed the bottleneck would be the code. It was not. The bottleneck was sitting down to write 15 templates that are actually useful, not garbage to fill a directory. I ended up using Flowly to draft them, which is either a tight feedback loop or selection bias. I am still figuring out which.
I shipped three standalone tool pages at flowly.run, each one fully functional without an account.
/pomodoro-timer
/freelance-rate-calculator
/time-tracker
Each has a soft signup CTA at the bottom. None of them is gated.
Why it works: "pomodoro timer" alone is 100,000+ monthly searches. CAC is zero. Tool-to-signup conversion lands in the 3 to 8% range when the tool is genuinely useful and the CTA is not aggressive. Bannerbear, Tally, and many others built their early traffic on free tool pages like these.
The mechanic in one sentence: people who type "pomodoro timer" into Google are not in the market for a productivity SaaS today; they are in the market for a 25-minute timer. I give them the timer. A small percentage of them think, while the timer is running, "I should be tracking this more seriously," and the CTA catches them at that exact moment.
What it taught me, in advance: SEO takes 6 to 12 months. I have not ranked yet. I am building tools my future self will thank me for, not tools that move this month's number. That is an honest tension you need to make peace with before you ship work that will not pay off for a year.
At the end of each week, every user can click "Share this week" and get an image: hours worked, project count, longest streak, Powered by flowly.run. The weekly digest email automatically appends the share link, so users who never visit the analytics page see the loop anyway.
Why it works: Spotify Wrapped mechanic. The content celebrates the user's accomplishment; my brand is the attribution. Every shared card is a free acquisition impression delivered by my most invested user. Granola's growth coverage attributes a meaningful share of their early lift to this exact pattern.
Why it actually works (the human part): people share evidence of discipline. A freelancer who logged 38 hours of focus time this week and is quietly proud will share that image to the LinkedIn audience of other freelancers and small founders who care about discipline. She is not advertising me. She is advertising herself. I am the credit line at the bottom.
What I have not solved: social media crawlers do not run client-side JS, so the personalized image preview falls back to a generic one in Twitter and LinkedIn thumbnails. The personalized card only shows when a human clicks through to the share page itself. Worth knowing before you ship.
Every monthly subscriber who hits day 30 of paid gets a one-time email: "Switch to annual and save $48." Daily cron. Dedicated upgrade page that pre-fills the plan switch.
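The whole feature is a filter plus an email. A sketch with hypothetical field names; the sent-at stamp is what makes it safely one-time:

```typescript
// Daily cron: find monthly subscribers at day 30 of paid who have not
// yet received the annual-upsell email.
interface Subscriber {
  id: string;
  email: string;
  plan: "monthly" | "annual";
  firstPaidAt: Date;
  annualUpsellSentAt: Date | null;
}

function dueForAnnualUpsell(s: Subscriber, now: Date): boolean {
  if (s.plan !== "monthly" || s.annualUpsellSentAt) return false;
  const paidDays = Math.floor(
    (now.getTime() - s.firstPaidAt.getTime()) / 86_400_000
  );
  // A >= check plus stamping annualUpsellSentAt after sending means the
  // email fires exactly once, even if the cron misses a day.
  return paidDays >= 30;
}
```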
Why it works (the data): Baremetrics measures monthly plan retention at about 68% and annual plan retention at about 92%. Annual subscribers churn roughly 3 to 5x less than monthly. Involuntary churn (failed payments) drops by up to 95% because the card runs once a year instead of twelve times. OpenView and ProfitWell document meaningful month-2-to-4 conversion from monthly to annual when prompted at the right moment.
Why it actually works (the human part): a monthly subscriber at month two is invested. She has felt the product work for eight weeks. She has absorbed it into her week. The annual plan is not "spend more money." It is "lock in the price you are already paying and stop thinking about it." Most people, given the chance to stop thinking about a recurring decision, will take it.
What it taught me: I had been quietly losing this revenue for weeks before I shipped the email. The boring email turned out to be worth more than any of the clever ones above. If you only ship one feature from this list, ship this one. It is the highest leverage email you are not sending today, and it is the closest thing to free money in this entire post.
For balance, four features I considered through the same 30-day test and rejected.
AppSumo lifetime deal. Failed the test on a different axis: the audience is price-sensitive in a way that anchors the perception of the regular tier downward. The 70% revenue share also hits worse at low margin. Revisit at year two.
Aggressive trial extension on email verification. Tested the manipulative-versus-helpful line and it felt manipulative. "Verify your email and get +3 days" reads as a bribe, not an extension. Cut.
Heavy LinkedIn personal posting. Wrong audience for a $12 a month freelancer product. LinkedIn CPL math does not work at this price point even with organic. Defer until I have business-cohort data.
Removing the free tier. Reverse trial only works if there is a free tier on the other side to land on. Removing it would kill the activation mechanic that is doing all the work today. Sometimes the right answer is to not change anything.
A few things I expect in the comments. Saving the round trip.
"A referral program is pointless this early." Fair on the absolute numbers this month. But the cost of installing the loop now is the same as installing it at 10x the user base, and the only loop I cannot measure is the one I did not build. I would rather have an unfired referral mechanic in month two than rebuild it from scratch in month twelve.
"You shipped 10 features in 4 days; quality has to be bad." Reasonable concern. None of these touches the core product. They are bolt-ons: emails, modals, public pages, a handful of scheduled jobs. Every one has tests. Every one is feature-flagged so I can flip it off if conversion drops 10% for three consecutive days without redeploying. The honest tradeoff I made to ship this fast: I have data on almost none of them yet. I will know in 30 days.
"This is just the OpenView playbook with extra steps." Yes. The playbook is public for a reason. Most founders read it and do not ship it. I shipped it.
"None of this matters if your product is not good." Agreed. Product comes first. Flowly has paying customers and healthy visit-to-signup conversion. The product holds. Distribution was the bottleneck; the 10 loops above are the fix. If your situation is the inverse (great distribution, leaky product), this list is not for you and I wish I had your problem.
The metric I care about is trial-to-paid conversion 30 days from now. The whole thesis is that compound returns on 10 loops beat one viral spike nobody can reproduce on purpose.
I will come back in 30 days with which of the 10 actually moved a metric, which were a waste, and which one I would have built differently. If that interests you, follow this account. Part 2 will have the data.
You can look at the actual implementations live:
Public roadmap: flowly.run/roadmap
Templates gallery: flowly.run/templates
Free pomodoro tool: flowly.run/pomodoro-timer
The product itself: flowly.run. Free tier, 14-day reverse Pro trial, no card.
The “paying with effort instead of dollars” line is painfully accurate.
I’ve been noticing the same thing with SEO, Product Hunt, Reddit, and social traffic. A spike feels good, but if the system stops working the moment you stop pushing, it starts feeling more like rented momentum than real leverage.
"Rented momentum" is a better phrase than anything I wrote — stealing that.
The thing that changed my thinking was realizing the spike isn't the problem. The spike is fine. The problem is when the spike is the strategy: when the whole plan is "we'll do another launch" or "we'll post more." At some point you're not building a business, you're running on a treadmill.
SEO is the interesting edge case in your list, because it can become owned — but only after the compounding phase kicks in, which takes 6–12 months and feels like nothing is working the entire time. The templates gallery and the free tools I shipped are bets on that phase. Right now they're producing zero. In 9 months they might be the top acquisition channel. That gap between effort and return is the whole game with owned loops, and most people quit before the payoff.
What's your current ratio of owned vs rented in your stack? Curious whether anyone has actually found the right balance or if it's always a rebuild.
I think I’m still heavily rented right now honestly.
Most of the things I’m working on still depend on me actively pushing:
posting, shipping new pages, testing positioning, participating in communities, trying different SEO branches, etc.
The only parts that start feeling slightly owned are the things that keep getting discovered even when I stop touching them for a while:
older SEO pages
useful comments
GitHub mentions
workflow-specific content
But the weird part is that none of those looked valuable at the beginning.
Early on they all just felt like “nothing is happening.”
That last observation is the one worth pinning.
The owned things never look valuable at the start — that's almost definitional. If the return were obvious early, it would already be captured. The problem is that at month two, a slow-burn compound asset is completely indistinguishable from a dead end. You only know which one it was after enough time has passed.
Your list is also interesting because every item on it had a one-time creation cost and stayed discoverable without maintenance. That's probably the actual signal for what "owned" means in practice.
Are your SEO pages broad keyword plays or specific workflow/problem pages? The specific ones seem to age better in my experience.
"Loops not campaigns" is a nice line but you literally do not have data yet. Come back in 30 days. Until then this is theory dressed up as a playbook.
Fair shot. The data caveat is real and I called it out in the skeptic section, so we are on the same page. Two of the ten already have measurable lift inside Flowly: the trial-end personalization (modal CTR up roughly 1.7x in the small sample I have) and the behavioral email branching (one variant has 2x the click-through of the old flat-drip version). The other eight are bets backed by other companies' published data, not mine. I will come back in 30 days with the conversion delta on the full set. If you want me to tag you in part 2, drop your username.
Quick question on the shareable stats card. How did you handle abuse? Could a user just fake their week, sign the token themselves, and farm impressions?
The backend signs; the frontend never has the secret. The verification endpoint looks up the user's actual stats for that week server-side, no client input on the numbers themselves. So the worst a malicious user can do is share a link to their own real week, repeatedly, which is the whole point of the loop.
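Stripped to its core, the scheme is standard HMAC. A sketch, not the production code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// The share link carries only (userId, weekISO, signature). The numbers
// never ride in the URL; the share page re-reads them from the database.
const SECRET = process.env.SHARE_SECRET!; // server-side only, never shipped to the client

function sign(userId: string, week: string): string {
  return createHmac("sha256", SECRET).update(`${userId}:${week}`).digest("hex");
}

function shareUrl(userId: string, week: string): string {
  return `https://flowly.run/share/${userId}/${week}?sig=${sign(userId, week)}`;
}

function verify(userId: string, week: string, sig: string): boolean {
  const expected = Buffer.from(sign(userId, week), "hex");
  const given = Buffer.from(sig, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```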
What I have not solved: nothing stops a user from sharing an empty or embarrassing week. I considered a minimum-threshold gate ("share unlocked at 10+ hours tracked") but it felt user-hostile. Open to thoughts on whether a soft warning ("your week looks light, share anyway?") is worth the friction.
We did the public roadmap last year on a B2B tool and it went the other way. Half the upvotes were from competitors trying to scout our pipeline, and the noisiest voters were the loudest free users who never upgraded. We ended up restricting voting to authenticated paying users only. Worth thinking about before you let logged-out users near the upvote button.
This is the comment I came here for. Mine is auth-required for voting today (logged-in only, anyone can read), which dodges half the problem you describe. The "free users vote loudest and do not pay" part is the one I am watching.
Two things I am planning if it gets noisy. First, weight votes by account age and plan tier so a day-old free account counts for less than a paying user who has been around for months. Second, add a small "is this blocking you right now" tag separate from the upvote. Those are different signals: one is "I would be happy if this existed," the other is "I would pay more if this existed." Most public roadmaps mush them together and end up with a queue that is optimized for delight, not willingness to pay.
Did the noise actually drive bad prioritization for you, or did it just make the page ugly? Trying to figure out where the real cost is.
First-time founder here, this is exactly what I needed. Stupid question: where do you draw the line between a "loop" and just a feature? Is calendar sync a loop? Is dark mode?
Not stupid; it is the question that does the work.
The working definition: a feature is a loop if using it produces an asset that either pulls in another user, or raises the cost of leaving for the current user. Calendar sync is a retention loop because the moment a user connects their Google Calendar, the switching cost goes up dramatically; she is not going to redo that connection in another tool casually. Dark mode is neither: nobody signs up for dark mode and nobody stays for it.
Most product features are not loops, and that is completely fine. The mistake is treating non-loops as if they were and being surprised when they do not produce growth. Build features for the user. Build loops on top of features. Different jobs.
Disagree on #5. Referral programs are a tax on your trial conversion before you have PMF. Build last, not now.
Two friends told me this exact thing before I shipped it. The counter: the referral mechanic costs me approximately zero per month while inactive (one cold table, one webhook branch, a Settings link nobody is forced to visit). The "tax on trial conversion" critique is real only if I aggressively push the share CTA during onboarding or trial-end, which I am not. The CTA lives in Settings. Nothing in the activation path mentions it.
If the data in 30 days shows the Settings link is somehow hurting conversion, I will pull the surface. The mechanic stays. Half-built referral systems are the things that bite you in month twelve.
How did you decide which features to ship as loops vs which to leave as one-off launches? I have a backlog of growth ideas and no framework for picking.
The 30-day test from the post is the main filter. The second filter I run is the cost asymmetry between "install now" and "install later." Referral was a good example: low cost now (one table, one webhook condition), high cost later if I have to bolt it on after a viral moment.
The third filter is whether the feature creates a public artifact. Templates pages, the public roadmap, the share card: all of them produce something Google or a social feed can crawl. That public artifact is what compounds. A clever in-product feature that nobody outside the product ever sees cannot compound.
If a feature passes the 30-day test, the cost-asymmetry test, and the public-artifact test, it goes on the build list. If it fails two of three, it gets cut.
Posting from someone who has run growth at two productivity SaaS companies. Half this list is correct, half is local maxima. Behavioral emails (#2) and M2A upsell (#10) are universally right. The shareable stats card and public templates are right for some niches and wrong for others, and freelancer time-tracking is probably one of the wrong niches because your users will not post their hours to social. The NPS auto-route to G2 is clever and I am stealing it. The referral program at your stage is theater. The public roadmap will eat 4 to 6 hours a week of your time within 90 days. Otherwise solid.
This is the most useful comment on the post. Let me take it line by line because every point is doing real work.
Agreed on behavioral and M2A being universal.
On the stats card being niche-dependent: I think you are partially right but the assumption I am testing is narrower than "freelancers will post their hours to social." Freelancers do post evidence of discipline, but to a specific audience: other freelancers and small founders on Twitter and LinkedIn, in the "showing my work" subgenre that is actually pretty active. I am not betting on Wrapped-scale virality. I am betting on a narrow, qualified loop inside the community my users already inhabit. If 30 days of data shows zero shares, I will pull the email integration first and the button second.
Agreed in principle on referral being theater this month. The defense is that the cost of having the mechanic latent is roughly zero and the cost of bolting it on after a moment of momentum is high. It is an insurance policy this quarter, not a growth lever.
The public roadmap eating 4 to 6 hours a week is the claim I most want to dig into. What ate the time: responding to upvotes individually, prioritization arguments with the team, or meta-conversations about why a long-promised item still had not shipped? My current plan is to batch responses to one 30-minute slot a week and use canned replies for "we hear you, status has not changed." If that is naive, I would rather know now than at week twelve. What did your roadmap look like at the 90-day mark?