After experiencing both success and failure as an indie hacker, Arsen Ibragimov built an AI product and used his cofounder's agency as the testing ground. Now, Topliner is bringing in five figures per month.
Here's Arsen on how he did it. 👇
I’ve been building things for a long time — way before I knew what a "startup" was.
In high school, I wrote small programs in Pascal and Delphi. I built utilities that solved real pain for classmates, and I even sold them to university students. Around the same time, I did dropshipping, but that was before it had a cool name. It was just: Find demand, find supply, don't hold inventory, move fast.
That early phase shaped my philosophy:
Software is leveraged only if it removes friction.
Distribution matters as much as the product.
Cashflow teaches you faster than theory.
My first serious business was e-commerce. We grew it to $1M revenue, and I sold my stake in 2014.
Then, I bootstrapped a MarTech SaaS for Instagram. It hit $1M ARR and 10k daily users, but I had to shut it down due to platform risk. That hurt, but it taught me a lesson: Never build your house on someone else’s land.
Then came the wildest chapter: I built a consumer fitness app with Khabib Nurmagomedov. We raised ~$800k and had massive distribution. But we failed. That failure taught me more than any success.
With the Khabib app, we had ultimate fame but weak retention. With my current company, Topliner, I inverted the model: Solve a boring, painful B2B problem where the product is the workflow.
I wake up each morning with one thought: I want to create impact. The bigger the surface area, the better.
Executive search is the perfect place for this. It is a wealthy market, but the tech is outdated. Workflows are fragmented. People burn out doing manual tasks.
The real trigger for getting started, though, was GPT-4. My cofounder and I saw it and realized: "This changes the economics of work."
Our first idea was small: a curated database of top CFOs in the DACH region. But as we started building, we realized that building a database is building a static map of a moving world. We didn't need a map. We needed an engine that mapped the world in real-time.
So, today I’m building Topliner. It is an AI-native Operating System for Executive Search agencies. We built it in close partnership with The Big Search, a leading boutique agency in Europe.
We are currently in the five-figure monthly revenue range and growing steadily.

We started with the most boring part: company research.
Research is the "engine room" of executive search. It usually takes weeks of jumping between LinkedIn, Google, and spreadsheets. Worse, many agencies rely on their old internal databases and networks. But if you do that, you aren’t hiring from the market. You are hiring from your memory.
So we focused our AI on research first. The results were shocking:
We reduced 6 weeks of manual research work down to 6 hours.
Quality went up (fewer blind spots).
Unit economics improved drastically.
Our V1 was simple: You describe a role. Topliner finds relevant companies and enriches them with data (headcount, funding, customers).
At first, we were naive. We thought: "User gives a prompt -> AI does magic -> User gets a candidate." A black box.
We quickly realized nobody trusts a black box in high-stakes hiring. If the AI says, "This is the best CTO for you," the recruiter needs to know why. Maximum transparency. Evidence for every claim. The human can intervene at any step.
AI shouldn't replace the expert; it should give the expert superpowers. So we built for that.
We also had zero tolerance for hallucinations. In a consumer app, a bug is annoying. In executive search, a hallucination is a disaster. If the AI invents a fact about a candidate, we lose trust. If it misses a candidate, we lose the outcome.
So, we spend significant time on guardrails. The human always makes the final decision. Our job is to make the human faster and sharper, not to force blind trust.
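To make "evidence for every claim" concrete, here is a rough sketch of the idea in TypeScript. It is illustrative only, not Topliner's actual code; the types, field names, and the applyGuardrails function are invented for the example. The point is that every fact shown to a recruiter carries its sources, and anything the model produces without a source never reaches the screen.

```typescript
// Illustrative sketch only (not production code): model every claim with its evidence,
// and keep the human review step structural rather than optional.

interface Evidence {
  sourceUrl: string;   // where the fact was found
  excerpt: string;     // the exact text supporting the claim
  retrievedAt: string; // ISO timestamp of when it was fetched
}

interface Claim {
  field: string;        // e.g. "currentTitle", "headcount", "lastFundingRound"
  value: string;
  evidence: Evidence[]; // must be non-empty before the claim is ever displayed
}

interface CandidateProfile {
  name: string;
  claims: Claim[];
  status: "needs_review" | "approved" | "rejected"; // the human makes the final call
}

// Guardrail: drop any claim that arrived without evidence, and route the profile
// to human review instead of letting it auto-approve.
function applyGuardrails(profile: CandidateProfile): CandidateProfile {
  const grounded = profile.claims.filter((claim) => claim.evidence.length > 0);
  return { ...profile, claims: grounded, status: "needs_review" };
}
```

The exact shape doesn't matter; what matters is that transparency and the human decision are enforced by the data model, not left to the prompt.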
We keep the stack pragmatic and shipping-first:
Backend: Node.js and PHP (Laravel)
Frontend: ReactJS
Data: MySQL
Caching/queues: Redis
AI: OpenAI and xAI models + our own orchestration layer for hundreds of AI agents
Infra: Azure + OVH + internal services for enrichment and processing
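To give a feel for how "pragmatic and shipping-first" plays out with that stack, here is a rough sketch of how a Redis-backed enrichment job could be queued and processed from a Node/TypeScript service. This is an illustration, not Topliner's production code: the BullMQ library, the queue name, the job shape, and the enrichCompany helper are all assumptions made for the example.

```typescript
// Illustrative sketch: a Redis-backed queue for company enrichment jobs using BullMQ.
// Queue name, job shape, and enrichCompany() are hypothetical.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

interface EnrichJob {
  companyDomain: string;                             // e.g. "example.com"
  fields: ("headcount" | "funding" | "customers")[]; // what to enrich
}

// Producer: the research flow enqueues one job per company it finds.
const enrichQueue = new Queue<EnrichJob>("company-enrichment", { connection });

export async function requestEnrichment(job: EnrichJob): Promise<void> {
  await enrichQueue.add("enrich", job, {
    attempts: 3,
    backoff: { type: "exponential", delay: 5_000 }, // retry transient failures
  });
}

// Consumer: a worker pulls jobs and calls whatever internal service does the lookup.
new Worker<EnrichJob>(
  "company-enrichment",
  async (job) => enrichCompany(job.data.companyDomain, job.data.fields),
  { connection, concurrency: 10 },
);

// Hypothetical stand-in for an internal enrichment service, so the sketch is self-contained.
async function enrichCompany(domain: string, fields: string[]): Promise<Record<string, unknown>> {
  return { domain, fields, fetchedAt: new Date().toISOString() };
}
```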
We reached five figures per month with zero ad spend.
We have three main drivers for growth:
Distribution through The Big Search. We had a testing ground with real clients immediately. We didn't need to hunt for beta testers.
High-signal content. I don't post "5 tips for hiring." I post forensic breakdowns of how talent markets move. I treat recruiting as engineering, not HR. This attracts the right people: founders, operators, and partners.
The founder network. Founders talk to founders. When people see how fast we execute, they ask, "How?" Intros happen naturally. These conversations start with trust, not cold outreach.
Building this inside a real agency (The Big Search) was the biggest unlock.
Most SaaS founders have to guess what the customer needs. We don't guess. We deploy Topliner inside The Big Search. It has to survive real mandates and real deadlines every day.
Every search mandate was a product test. Every "this feature is annoying" was a bug report. We didn't have to guess what users wanted. They were sitting right next to us.
Also, the partnership dynamic helps. My cofounder Learco handles the business, relationships, and delivery pressure. I handle the system and product. The split is clean, and we move fast.
As far as our business model, we keep it simple with two streams:
Platform revenue: Agencies and partners use Topliner as their infrastructure. We take a revenue share on the search work executed through the platform.
Talent market mapping: We sell high-precision market intelligence to VCs, PE funds, and Enterprises. We show them exactly what the talent pool looks like for a specific role, faster and deeper than any traditional firm can. We’ve already worked with companies like Miro, Dutch Amazon BOL, Flo Health, and funds like Permira, Prosus, N47, and Atomico.
Here's my advice: Start where you have unfair access to reality. Use your own workflow, your own data, or your partner's problems. Don't build for an imaginary customer.
Don’t aim for "a product people like." Aim for "an outcome people pay for."
And ship into real constraints early. Deadlines and paying customers are the best product managers in the world.
Here's what I have planned:
Near-term: Make Topliner the default workflow for every partner inside The Big Search.
Medium-term: Package the OS so other boutique firms can use it. We want to power a "house of brands" — distinct agencies all running on our engine.
Long-term: Executive Search seems boring to many indie hackers. But think about it: The people these firms place run the companies that shape our future. By building the OS for this industry, we aren't just saving recruiters time. We are making sure the right leaders find the right seats faster. If we do this right, we increase the velocity of innovation globally. That’s a mission worth waking up for.
You can follow along on LinkedIn and X. And check out Topliner!
The line about never building your house on someone else's land after the Instagram SaaS shutdown is something every builder needs to read twice. And the black box vs glass box thinking is exactly right. People do not trust what they cannot see, especially in high-stakes decisions. I'm building in public myself, and this whole post is a masterclass in thinking before shipping.
Using an agency as both a testing ground and distribution channel makes a lot of sense. Curious — how did you decide which internal tools were worth turning into products?
Also, did having real client problems make validation easier compared to starting from scratch?
Thanks for sharing so genuinely!
I’m curious: since neither you nor your partner had a background in Executive Search, what was the initial spark that pulled you into this industry?
Also, what was the biggest driver that kept you so deeply committed in the early days? Looking for some inspiration here. Thanks!
Hey 👋
Quick founder dilemma.
If you’re early-stage and building AI or SaaS tools, is it smarter to combine everything into one product to build a stronger brand, or keep separate focused products under the same umbrella?
What would you do and why?
Appreciate honest feedback.
Hey Leo, once you have your MVP, I'd just ask your customers. Several brands keep a "Feature Request" system that tells them what their users want next, so use that to define the limits of your "bloat" lol, and keep all the other cool stuff you can do elsewhere. I remember when I developed an AI music-generating tool to go along with the music licensing platform I'd already established, back when TensorFlow was the new thing (2014-ish). I didn't use TensorFlow, just my own algorithms built on command-line tools like SoX and FFmpeg, plus my knowledge of music theory. Some of my users liked it, but most did not. I might have been a bit early, considering AI music is proliferating now. But anyway, I got rid of it and just doubled down on what my customers really wanted. Ended up 10x-ing revenue since then.
Great read. What stood out most was the idea of building from real workflow pain instead of theory. Using the agency as a testing ground seems like a major advantage, and the shift from fame-driven distribution to a painful B2B workflow problem feels like a very smart lesson.
We recently built an AI agent that qualifies inbound leads and books demo calls automatically. Happy to show a quick demo if helpful.
As someone who has only just started working with AI, what advice would you give me?
The "webhooks breaking at 2am" experience is real. The part most teams skip is making webhook delivery durable — fire-and-forget works fine until it doesn't. The outbox pattern (write the event in the same DB transaction as the state change, dispatch from there) eliminates the failure window almost entirely.
Very interesting, keep sharing!
"Start where you have unfair access to reality"
is the most honest piece of product advice I've
read this year.
The black box vs glass box distinction is also
underrated. In any high-stakes workflow — hiring,
finance, legal — users need to see the reasoning,
not just the output. Trust doesn't come from
accuracy alone; it comes from legibility.
I'm working on a tool that compresses AI context
windows (log files from 600MB to 10MB, prompts
reduced 40-60%) and the same principle applies.
The compression needs to be explainable, not just
fast, otherwise developers don't trust what they're
sending to the model.
Curious: how long did it take inside The Big Search
before you felt the product was ready for external
customers?
great
Respect!! To be honest, I should learn from you.
The black box vs glass box framing is the right way to think about AI adoption in high-stakes professional workflows. Consumer AI tools can afford to be opaque — the cost of a bad output is low and reversible. Executive search is the opposite: the AI recommending a CFO candidate needs to be interrogatable, because the recruiter has to stand behind the recommendation to the client. "The AI said so" is not an answer anyone will accept when a hire costs $200k+ to undo.
The platform risk lesson from the Instagram SaaS is worth more than most people realize. Executive search is actually one of the few workflow categories where there's no platform to be dependent on — the data is sourced from everywhere, the output is human judgment, and the delivery is direct to client. You can't get API'd out of existence.
The agency-as-testing-ground structure is also underrated as a funding model alternative. The Big Search gave you paying users, real feedback loops, and distribution in one structure — that's what most pre-seed rounds are supposed to buy. You got it without dilution.
Curious how you handle the hallucination guardrails technically. Is it a retrieval-only architecture where the AI can only cite facts it actually found, or a post-generation verification layer that checks claims against source data?
The "unfair access to reality" framing is the most honest advice in this entire post.
I'm building an AI-native operating system for SMBs and organizations — automating workflows across Notion, n8n, Make, and custom GPT connectors — and the biggest unlock we had was exactly this: we stopped guessing and started deploying inside our own existing clients. We run a 25-year-old IT services company (PC Doctor) in Ecuador with 500+ real projects. That client base became our lab — and today PC Doctor is nearly fully automated.
Every automation we built got stress-tested on real mandates, with real deadlines and real stakes. We've documented 93% reduction in manual work time and $202K annual value freed up for one organization alone. Not theoretical — live and verified.
The "glass box vs black box" section hit hard too. In B2B, especially for business owners who've spent decades doing things a certain way, trust is the product. The AI has to show its work. Every step has to be auditable. The moment we added full traceability to our automations, adoption inside client organizations accelerated significantly.
The Khabib story is the most valuable lesson here. We had a phase where we were building features because they were exciting, not because they were sticky. Recognizing that pattern — and pivoting to boring, high-friction, high-value workflow problems — was the real turning point.
Distribution without depth is just a vanity metric. Workflow depth without distribution is just a good internal tool. The agency model solves both at once.
Agency as testing ground + distribution channel is a smart model — you validate against real client pain before productizing. The AI tools angle makes even more sense here because the margin protection is real when you can deliver quality at scale.
One thing I've found that helps deliver consistent AI outputs for clients: prompt structure matters more than model choice. I built flompt (https://flompt.dev) for this — a visual prompt builder with 12 semantic blocks that compiles to optimized XML. Big difference in repeatability for client deliverables.
A ⭐ on https://github.com/Nyrok/flompt would mean a lot — solo open-source founder here 🙏
The "black box vs glass box" realization is something I ran into hard while building DocuMind — an AI chatbot that answers questions from your own uploaded documents.
Early on I thought the magic was the point. User uploads a PDF, AI answers questions, everyone's happy. But the first time someone asked "why did it say that?" and I had no clean answer, I understood exactly what you mean. In high-stakes contexts — and honestly even in low-stakes ones — trust is the product. The AI is just the mechanism.
The fix was the same as yours: surface the source, show which document chunk the answer came from, let the human verify. Confidence in the output went up immediately because people stopped feeling like they were talking to a magic 8-ball.
Your point about "unfair access to reality" also resonates. My project started as a practice build with no real users — which means I was building in a vacuum. The feedback loop you describe by embedding Topliner inside The Big Search from day one is something I'm actively trying to replicate now. Finding even one real customer who uses it daily teaches you more than months of assumptions.
One question I'm genuinely curious about: when you moved from internal tool to selling externally, did the pricing conversation change? I imagine what feels "free" inside your own agency feels very different when another firm has to justify it as a line item.
Great story. Using an agency as both validation and distribution seems like a huge advantage. Curious how many customers came from the agency vs outside sources.
I have to say your agency website is one of the best I've seen in many years.
The "building from the inside" angle is something I think about a lot. I'm building a visual bug reporting tool for web agencies (ReviseFlow), and I came from the freelance/agency world myself. That background is both an advantage and a trap — you think you already know the problem, so you skip the discovery phase.
What stood out to me here is how Arsen started with ONE problem (company research) rather than trying to build an all-in-one platform from day one. I made the opposite mistake early on. I tried to solve screenshot capture, console logging, network errors, integrations, and project management all at once. Should have just nailed the "paste one script tag and get visual feedback" part first.
The agency-as-distribution insight is gold too. When you're building for agencies, your first 5 customers can literally become your distribution channel if the tool actually saves them time. Each agency has 10-50 client projects — that's massive built-in expansion if the product sticks.
Curious for anyone building in the agency/freelance tool space: have you found it easier to sell to the agency owner or the individual developer doing the work? I keep going back and forth on who the real buyer is.
"Software is leveraged only if it removes friction" (in other words, build only what is demanded) is exactly right, but the harder question nobody talks about is: which friction?
Not all friction is structural. Some of it is cosmetic. The difference matters because removing cosmetic friction feels productive but doesn't change the economics of the business. Rebuilding a UI to look cleaner, adding a dashboard nobody checks, automating a step that takes two minutes once a week. These feel like improvements. They rarely move revenue.
Structural friction is different. It's the kind that scales with volume. If every new client means your team spends six weeks doing the same manual research loop, that's structural. The cost grows linearly with the business. Removing it changes the unit economics permanently, which is exactly what cutting research from 6 weeks to 6 hours does for Topliner.
The Khabib fitness app failure illustrates this perfectly. Fame solved the distribution friction, but the product didn't solve the retention friction. Distribution without workflow depth means you're spending all your energy getting people through the door, while the actual problem is that there's nothing holding them inside.
Arsen's move from "fame + weak retention" to "boring B2B + workflow depth" is essentially a bet that solving structural friction in a small, wealthy market beats solving cosmetic friction in a large, shallow one. I think that bet is correct far more often than founders want to admit.
Nice project!
I'm a UI/UX and graphic designer. I help startups with landing pages, SaaS dashboards, and product UI. If you ever need design help, I'd love to collaborate.
Impressive results. What marketing channel worked best for your first customers?
This is a smart and practical approach to early traction. Using an agency both as a testing ground and distribution channel creates a built-in feedback loop and real revenue from day one, which many founders overlook. It’s a strong reminder that real clients, real projects, and real cash flow can validate your product faster than theoretical demand alone.
Turning lived experience into a structured offering like this shows how creative go-to-market strategies can beat reliance on traditional channels, especially in the early stages of growth.
Love this strategy - agency as R&D lab is brilliant!
I'm building tools right now and this makes me think I should test with real users first instead of building in the dark. Did you find it hard to transition from bespoke agency solutions to a standardized product?
Building a 'glass box' instead of a black box is such a crucial takeaway here. In B2B, especially for high-stakes stuff like executive search, transparency is a feature, not a bug. It’s also refreshing to see a pivot from the 'flashy' consumer app (Khabib) to a 'boring' but high-value workflow problem. That’s usually where the real money is hidden.
According to James Fleischmann, agencies can be leveraged as both a testing ground and distribution channel to streamline validation, optimize offers, and scale services strategically to consistently reach $10k or more per month.
Mama I’m on TV!!! 😅
Great story - especially the idea of building with “unfair access to reality” and testing inside a real agency.
Curious: what was the hardest part when transitioning from your initial database idea to building a full AI operating system? And how did you know the market was ready to trust AI in such high-stakes workflows like executive search?
Smart approach. An agency gives real customer problems and instant feedback loops, which most products struggle to get early on. Curious what signals told you it was time to productize instead of staying service-heavy?
"Don't build for an imaginary customer" is the most expensive lesson in SaaS. Most founders learn it after burning 6 months of runway. Arsen learned it by sitting inside the agency.
I took a similar path. Spent 6 years inside agencies managing ad spend across Meta, Google, and TikTok for 100+ brands. Watched the same problems repeat across every client: manual reporting eating 20+ hours/week, platforms inflating attribution numbers, creative fatigue going undetected for days while budget burned.
That agency experience gave me something no amount of customer interviews could: I watched the pain happen in real time, across 100+ different brands, for years. By the time I started building, I didn't need to guess what to solve. I had a list.
The "unfair access to reality" framing is spot on. If you're inside the workflow, you're not building features. You're removing friction you've personally felt.
“Building a database is building a static map of a moving world. We didn’t need a map. We needed an engine.” — that line really stood out.
Also the contrast between fame + weak retention vs. boring B2B + workflow depth is powerful. It’s easy to chase distribution; much harder to build something people depend on daily.
I’m early in my journey and currently building a small multiplayer word game. Reading stories like yours reminds me that distribution and retention are two completely different battles.
Curious — when you pivoted toward workflow-native B2B, what signal convinced you that retention would be stronger this time? Was it early usage behavior, willingness to pay, or founder intuition?
Really appreciate you sharing the long-term philosophy behind it
I built Healthy Desk, a break timer that shows you actual stretches instead of just pinging you. Getting it on the Play Store was the easy part. Getting anyone to download it is a completely different game. 5 installs and a lot of silence. Turns out building something people need and getting it in front of them are two totally separate skills. Did anyone here crack early distribution for a mobile app without a budget?
The "hiring from your memory" line stuck with me — it's a sharp way to describe what's actually broken in executive search. Most firms think they're doing market-wide searches but they're really just recycling their own networks with extra steps. Cutting research from 6 weeks to 6 hours is the kind of number that makes a CFO sign off without needing a demo. Curious though — as you expand to other boutique firms, do you worry about the tool getting tuned too tightly to The Big Search's specific workflows, making it harder to generalize without a messy rebuild?
If I may ask, how long did it take you to feel confident about your startup’s positioning, and would you pay for a tool that helped you get there in under an hour?
Interesting story. It’s always inspiring to see founders start from nothing and build something successful step by step.
Honestly, it took me months of iterating before I felt even remotely confident about the positioning. And even now, it still evolves as I talk to more users.
I think I’d pay for a tool that accelerates clarity — but only if it forces uncomfortable thinking instead of giving surface-level suggestions. Positioning usually improves through friction, not templates.
What helped you most when you were refining yours?
Thank you for the brutal honesty, I appreciate it a lot. Let me share the details of the product below; then we can see whether it could solve your problem, and how much you'd be willing to pay for it if such a product were developed.
My idea is to use a series of agentic loops and API services to mine keywords and positioning statements for various startup niches and subniches, and to track how those positions and keywords have performed over the years. I'll build a trend score that continues to evolve.
A founder can paste their URL and get insights about their current positioning and how it correlates with our trends data in areas like churn, LTV, and acquisition cost per user, plus things like VC alignment, competitor comparison, and risks. They also get repositioning data (close variants of their positioning, marketing statement, etc.), so they can see the market outlook for their startup based on changes they could make and, grounded in real industry data, what works and what often does not.
I plan to track conversions and churn. These data will be anonymized; the moat here is that the whole system will continue to evolve, providing founders with anonymized but real industry data about their positioning.
I also plan to implement feedback tracking, so founders can see the direction their users want them to pivot toward and compare it with the consensus around that particular pivot across the whole ecosystem. If users are clamouring for a purple theme, the founder gets to see this, along with the market implications for their niche if such a feature were implemented: data like churn, LTV, etc. for products in their niche with a purple theme.
Founders without a product yet could discuss their project and get suggestions on the most appropriate positioning for that product, based on data.
Finally, I plan a feature where A/B tests can be performed on the fly, with the system manipulating positioning statements, content, etc. on the founder's funnel assets and landing pages based on predefined configuration, data, and AI, repeating the tests on the fly to identify the directions that yielded more conversions and less churn for any specific audience group.
This could help validate an idea, but the main goal is to help validated ideas resonate better with their target audience and to filter noise from the real feedback that correlates with higher revenue.
What do you think?
I am currently building my first startup RightCar, a car discovery platform.
Something I'm realizing early is that building the product is actually easier than figuring out distribution.
Did you focus on audience early or only after traction?
How long did it take you to feel confident about your startup’s positioning, and would you pay for a tool that helped you get there in under an hour?
That makes a lot of sense.
I am still early, but what I’m realizing is that positioning feels less like a one-time decision and more like an ongoing feedback loop with users.
Right now with RightCar, I am learning that clarity doesn't come from thinking longer; it comes from talking to people and seeing how they describe the problem back to you.
A tool that could compress that learning cycle would be extremely valuable, especially for first-time founders.
Are you focusing more on helping founders discover positioning or validate it quickly once they have an idea?
You are very right about the continuous feedback loop.
My idea is to use a series of agentic loops and API services to mine keywords and positioning statements for various startup niches and subniches, and to track how those positions and keywords have performed over the years. I'll build a trend score that continues to evolve.
A founder can paste their URL and get insights about their current positioning and how it correlates with our trends data in areas like churn, LTV, and acquisition cost per user, plus things like VC alignment, competitor comparison, and risks. They also get repositioning data (close variants of their positioning, marketing statement, etc.), so they can see the market outlook for their startup based on changes they could make.
I plan to track conversions and churn. These data will be anonymized; the moat here is that the whole system will continue to evolve, providing founders with anonymized but real industry data about their positioning.
I plan to track feedback from users, so founders can see the direction their users want them to pivot toward and compare it with the consensus around that particular pivot across the whole ecosystem. If users are clamouring for a purple theme, the founder gets to see this, along with the market implications for their niche if such a feature were implemented: data like churn, LTV, etc. for products in their niche with a purple theme.
Founders without a product yet could discuss their project and get suggestions on the most appropriate positioning for that product, based on data.
Finally, I plan a feature where A/B tests can be performed on the fly, with the system manipulating positioning statements, content, etc. on the founder's funnel assets and landing pages based on predefined configuration, data, and AI, repeating the tests on the fly to identify the directions that yielded more conversions and less churn for any specific audience group.
This could help validate an idea, but the main goal is to help validated ideas resonate better with their target audience.
What do you think?
that's interesting
This is a great breakdown of how Topliner leveraged an existing agency to validate their AI-powered executive search OS and achieve impressive growth. I especially appreciate the emphasis on solving a specific, painful problem (company research) and the pragmatic approach to building a "glass box" AI that empowers experts instead of replacing them. The "unfair access to reality" point is spot on! Congrats on the success, Arsen and team!
thank you
That's interesting. I'm also running a tech company where we provide contractors. And we are also building RL UI gyms for agents.
Can you tell me more?
Love it, and I wish you the best of luck for the future.
thank you
So awesome!
Really like the “start with one boring problem” point. Curious — when scaling B2B tools, where do you see processes breaking first: onboarding, sales demos, or internal workflows?
Great success story, and I wish you good luck. The one thing on my mind: how soon, before spending too much time or resources, can you tell whether your idea is unique and actually solves a real problem? I ask because I'm already working on something bigger than my capabilities without knowing whether it will succeed, and that sometimes slows me down.
thank you, and good luck to you too!
That's so great!
Love this
Love the philosophy — especially "never build your house on someone else's land" and "unfair access to reality."
Quick question on the expansion strategy: You mentioned packaging the OS for other boutique firms.
How are you validating that other agencies will actually pay for this vs. just using their existing workflows?
I ask because I learned the hard way that "we have one successful deployment" doesn't always translate to "other companies want to buy this."
Currently exploring a validation method specifically for B2B SaaS expansions — testing demand with video prototypes before building the full multi-tenant infrastructure.
Would love to hear how you're tackling this at Topliner.
Bootstrapping a $20k/month AI portfolio after a VC-backed company failed highlights resilience, strategic pivoting, and smart resource use. Instead of relying on funding, the founder focuses on profitable AI tools, niche SaaS products, and scalable automation solutions. By validating ideas quickly, leveraging lean development, and targeting high-demand markets, he rebuilds sustainable income streams, proving that failure can become a powerful foundation for innovation, independence, and long-term entrepreneurial success.
The agency-as-distribution-channel model is underrated and this is a great example of it working.
I'm seeing a similar dynamic in my space (AI voice agents for service businesses). The biggest unlock wasn't building a better product — it was getting close enough to the customer to understand how they actually buy.
Service business owners don't search for "AI voice agent SaaS." They search for "how to stop missing phone calls" or "answering service for plumbers." Meeting them where they already are — in their language, solving their exact problem — is what converts.
The agency model is powerful because you get paid to do the market research. Every client conversation teaches you something that makes the product better. It's the opposite of building in a vacuum.
How do you handle the transition from "done-for-you agency work" to "here's a self-serve tool"? That handoff always seems like the hardest part.
This is something I wish more founders talked about. I'm building a mobile app as a solo dev and the hardest part isn't the code to be honest. It's knowing if what you're building actually matches how people behave in the real world. Using an existing agency with real clients and deadlines as your feedback loop is so much better than surveys or guessing. Also, the retention lesson from the Khabib fitness app is brutal but important. Distribution without stickiness is just a vanity metric.
Great write-up! 🙌 I love how you treated your agency not just as a revenue source but as a testing ground and distribution channel for new products. It’s a smart approach — building real solutions for real clients gives you instant feedback and cash flow while validating ideas before doubling down on them.
I’ve taken a similar path with ChimpsDev, using client work to test product concepts, iterate fast, and eventually turn those learnings into scalable SaaS offerings. Seeing the journey to $10k/mo laid out so clearly here is super inspiring — definitely something more founders should consider when starting out.
Thanks for sharing!
That's really impressive.
Instead of trying to build something impressive from day one, he started with the most boring, painful part of the workflow and just made that dramatically better. Cutting research from six weeks to six hours is not a cosmetic improvement. That changes how an agency operates.
Building inside a real agency also feels like the quiet advantage. There is no pretending about product market fit when real clients are waiting. Either it works under pressure or it does not. That kind of environment forces clarity.
The bigger lesson for me is building where you already have access. Workflow, data, distribution. Starting from that position makes everything faster and more grounded.
This is something we think about a lot with home care software too. The closer you are to the real problem, the less you have to guess. And guessing is expensive.
I really enjoyed reading this and seeing the hustle and bustle of starting a startup and ensuring it scales. Your product looks amazing.
This resonates a lot, Arsen. Especially the "black box vs glass box" realization.
I’m currently building an OS for e-commerce operators (Finnito), and I ran into the exact same wall. We initially thought "cool AI images" was the product, but we quickly realized that in high-stakes retail, the "why" matters more than the "magic."
If the AI suggests a specific material or pricing shift, the operator needs to see the logic/evidence behind it or they won't touch it. Trust is harder to build than the tech itself.
The "unfair access to reality" is a great takeaway. I’ve been using a "Material DNA" approach to solve return-rate issues for specific brands, and building those solutions inside the mess of a real warehouse is 100x more effective than guessing from a home office.
Curious—when you were reducing that 6 weeks of research down to 6 hours, did you find the users were skeptical of the speed, or did the "evidence" layer solve the "too good to be true" problem immediately?
I've been trying to do this as well: take on full end-to-end clients while building out the tool. Best for validation and the quickest feedback possible.
Cool to see it worked out so well for others
Really enjoyed this breakdown — the idea of embedding early validation inside real workflows is such a smart way to avoid building in a vacuum. 👏
One common friction I see with early SaaS founders is that their landing page or pricing page doesn’t clearly communicate who exactly their tool helps and how much value it unlocks — which often means fewer demo requests and slower revenue traction.
I do async funnel clarity audits for early‑stage SaaS founders — I look at landing copy, conversion blockers, and positioning misalignment and show where revenue is leaking for $150 (delivered in clear notes & screen feedback, no calls).
If someone wants a second set of eyes, I can share a few quick insights.
Love seeing more founders share these real‑world playbooks — thanks for posting this!
Hitting $10k+ per month by using an agency as both a testing ground and distribution channel is a smart growth strategy. An agency gives you real client data to test offers, pricing, and marketing funnels quickly. Once proven, you can turn those services into scalable products or systems. It also provides consistent cash flow while building authority, referrals, and case studies that help attract higher-paying clients and expand distribution efficiently.
The agency as testing ground part hit me hard. We've been running an app development agency for 7 years, and honestly the best products we've seen are the ones built by founders who lived the problem themselves.
The Khabib story is the most honest thing in this post. So many people think big distribution solves everything. But if the product doesn't retain users, then no amount of fame will save it. We've seen the same thing with clients who come to us after spending lakhs on marketing for an app that just wasn't sticky.
One thing I'm curious about: when you moved from internal tool to selling to other agencies, did the pricing conversation change a lot? Because what an agency owner can justify internally for their own workflow is very different from what they're willing to pay as an external subscription.
Running an agency myself, I know this gap very well 😄
Fantastic, practical insights on using an agency as both testing ground and distribution. The real-world validation approach and focus on solving genuine workflow problems make this an inspiring and highly valuable read for founders.
Really enjoyed reading this — it’s one of those posts that quietly explains why some indie projects turn into businesses and others stay side-projects.
What stood out to me is the sequence: access → workflow → outcome → revenue.
You didn’t start with “AI”, you started with a place where work was already happening. Using a real agency as the lab basically solved three startup problems at once: distribution, feedback, and willingness to pay. Most builders try to validate after building; you validated while operating.
The “glass box over black box” point is especially important right now. A lot of AI tools chase impressive demos, but in high-stakes domains (hiring, finance, legal) trust beats novelty. Giving experts evidence and control doesn’t reduce the value of AI — it creates it. AI becomes a decision amplifier, not a decision maker.
Also loved the idea of “don’t build a product people like — build an outcome people pay for.” That’s probably the clearest explanation of product-market fit I’ve seen in a while.
Great reminder that boring industries aren’t actually boring — they’re just inefficient. And inefficiency + real budgets + real deadlines is where durable companies come from.
this is a gem
This is a great example of building inside real constraints instead of theorizing about user needs. What stands out to me is how much of your edge comes from embedding the product directly into a live workflow. Most SaaS founders try to design from the outside looking in. Also love the “glass box” approach, especially in high-stakes industries where trust is everything. AI that augments expertise rather than replaces it feels far more sustainable. Curious, do you think your distribution advantage through The Big Search was the biggest unlock, or was it the workflow proximity itself?
This is such a solid playbook for SaaS founders. Using an agency as both testing ground and distribution solves the two hardest problems at once: real user validation and go-to-market. No guessing what people need — you build inside real workflows.
Curious - did you notice churn or revenue concentration patterns as you scaled? I've been thinking a lot about how early-stage SaaS founders track revenue risk beyond just MRR... considering there's not that much data to go off of...
the service-first approach is something more AI builders should copy. doing the exact same thing with AI video production right now - started with done-for-you video creation at $500-1k per video, which gave us direct access to what clients actually need vs what we assumed they needed.
the glass box vs black box point is critical for any AI product selling to professionals. we hit the same wall - content creators don't want a magic button that produces a video. they want to see the script, approve the images, adjust the pacing. the moment you give them control at each step, trust and retention go way up.
the 6 weeks to 6 hours compression on research is the kind of 10x improvement that justifies a tool. anything less than 5x and people just keep doing it manually because switching costs are real.
With the audience and credibility you've built over the years, email marketing could be one of the strongest growth channels for you especially for nurturing founders, promoting your SaaS tools, and monetizing your newsletters more effectively.
Man, this resonates so much. The part about having 'unfair access to reality' by building inside an actual agency is pure gold.
I'm building a tool for web agencies right now (trying to kill the endless 'WhatsApp screenshot' feedback loop clients do lol), and getting that raw, unfiltered daily usage data from day 1 is literally the dream.
Having The Big Search as a testing ground is basically a cheat code. But I'm curious – did u guys ever run into the trap where Topliner became too customized for their specific workflows? Like, how do you make sure the system stays flexible enough for when you open it up to other boutique firms like you mentioned in your medium-term plan?
Also really love the pragmatism with the tech stack. Shipping > overengineering every time. Congrats on hitting 5 figures!
When we talk about automation, we usually focus on speed. But in MedTech, speed is secondary to compliance. If your tests don't provide a clear traceability matrix, you're in trouble.
I found a very useful guide on this regarding healthcare software testing. It explains exactly how to link manual and automated tests in testomatio for a solid audit trail, and how to maintain a 'source of truth' that satisfies HIPAA or ISO auditors. Definitely worth checking out their blog if you're struggling with audit-ready reporting.
Really interesting approach using the agency as both validation and distribution.
Great launch! Curious how you built the AI part?
I have been building and re-building my software and I think I am spending just as much time building software to test my software. It is actually a bit of reverse engineering as I'm finding parts via testing that aren't working and then I am able to drill in and build those pieces out more in depth.
"Start where you have unfair access to reality" - best advice in this entire post.
I did exactly this. Ran technical support and built payment integrations for a major EU payment provider for years. Every edge case for SaaS subscriptions, EU VAT, webhooks breaking at 2am.
After hundreds of integrations I knew exactly where the pain was. EU SaaS founders stitching together five tools for billing, tax, and compliance. The only solutions were US-built and didn't understand the EU payment landscape. So I started building the solution myself.
The consultancy as testing ground pattern is real. Still running one alongside the product. You're not guessing. You're building from lived experience.
doing the same thing myself, thank you so much for sharing
@arsen - Congrats on a successful product that addresses a real-world problem.
Great story. I love that you validated through real agency clients before scaling — that’s a serious advantage most founders don’t have.
If I may add one growth thought: your distribution engine is strong, but your landing page could compound it further.
I’d sharpen it around:
• Clear problem framing
• Exactly who it’s for (ICP clarity)
• Outcome-first value messaging
• Why this wins vs alternatives
• Risk removal / assurance
You’ve already done the hard part (validation). A few positioning tweaks could accelerate conversion significantly.
Would love to collaborate on a small growth case study if you’re open.
I built an ATS resume optimizer — roast my landing page. Check out Resume4UBuddy.
The Khabib story is probably the most valuable part of this whole piece. Having celebrity-level distribution but still failing because retention was weak — that's a lesson most people only learn after burning through their runway. It's easy to assume distribution solves everything.
What caught my attention is the "hiring from your memory" framing for why traditional exec search is broken. I've seen the same pattern in dev hiring — companies keep going back to their existing network instead of actually mapping the market. The 6 weeks to 6 hours compression on research is wild if the quality actually holds up at scale.
Curious about one thing though: with the revenue share model on search work executed through the platform, how do you handle pricing conversations when agencies are used to keeping 100% of their fees? That seems like the hardest sell, not the product itself.
The "black box vs glass box" section hit hard. Building an AI product for B2B, I ran into exactly this. Our RAG-based chatbot would give a confident answer and the business owner would ask "but why did it say that?" — and we had no good answer. Trust evaporated instantly.
The fix for us was the same conclusion you reached: surface the source, show the reasoning chain, let the human override at every step. Once we added citation-level transparency to responses, objections dropped significantly.
Your point about "unfair access to reality" is underrated advice. Most solo builders (myself included) spend months guessing at pain points. You embedded the product inside a real agency with real deadlines — that's essentially paying with equity for a guaranteed feedback loop. Clever structure.
Question: How did you handle the transition when Topliner moved from "internal tool for The Big Search" to "product we sell to other agencies"? I'm curious whether the features that worked internally needed significant re-packaging for external buyers, or the workflow translated cleanly.
Using your own domain as testing ground is massively underrated. Did something similar - built bookkeeping automation tools because I was doing bookkeeping myself and got sick of the repetitive parts. The first users were already in the room.
The agency model gives you something most indie hackers don't have: a feedback loop measured in days, not months. Ship something Monday, use it on real client work Tuesday, know by Wednesday if it's useful or garbage. Worth more than any number of user interviews.
This is such an underrated advantage.
Building inside a real workflow removes 80% of the guesswork most SaaS founders struggle with.
The agency as both testing ground and distribution is incredibly powerful... especially early when speed matters more than scale.
Did you always plan to spin this out as SaaS, or did that emerge after seeing internal traction?
This is such an underrated strategy.
Using an agency as both a testing ground and a distribution channel solves the two hardest problems at once: validation and customer acquisition.
Most indie founders build in isolation and only think about distribution later. Here, distribution is baked into the workflow from day one. That dramatically reduces risk and speeds up feedback loops.
What I find particularly interesting is how this model forces you to build something that actually works in real-world conditions — not just something that looks good in a demo.
I’m curious though:
At what point does the product become independent from the agency? Do you see this as a long-term symbiotic model, or just a launchpad to reach standalone scale?
Really solid execution. Thanks for sharing this.
The 'unfair access to reality' insight is brilliant. Most SaaS founders spend months guessing what users need, but you had a live testing environment from day one. That Khabib app story really drives home the point - fame without retention vs boring B2B problems with sticky workflows.
What strikes me most is the glass box approach. In consumer apps, users might accept some black box magic, but in executive search where one bad recommendation can tank a relationship, transparency becomes part of the product itself.
I'm curious about the partnership dynamics. How do you handle feature requests that help The Big Search but might not scale to other agencies? Do you ever have to push back on custom solutions that would hurt the broader product vision?
the khabib story is the most honest thing in here. had the exact same pattern with my habit tracker - big launch week, then half the users gone by week 3. the agency model basically solves this by default, your clients can't really churn on a whim. did the glass box approach come from clients explicitly asking or did you figure that out watching them use it?
Smart move building inside the agency first; we did the same at Figue.io (a tech agency, first building SaaS for customers, but on top of that, building SaaS for ourselves). The "unfair access to reality" you mention is huge: you're not guessing what users need, you're living it. We did something similar with reactin.io, which started as an internal tool for LinkedIn outreach; then we realized other agencies had the same pain. Building from inside means your V1 is already battle-tested. The "glass box vs black box" point hits hard too. In B2B, nobody trusts magic. They want to see the work.
The agency-as-distribution part is underrated. Most indie hackers treat the agency and the product as separate businesses. Using clients as a live feedback loop means your roadmap writes itself. Curious how he decides when a custom client solution is worth productizing vs. staying bespoke.
The agency-as-testing-ground approach is underrated. Having real users stress-test your product before you even launch externally means you can iterate on actual pain points instead of guessing. The "glass box" philosophy resonates too — when clients can see why AI made a decision, trust goes up and support tickets go down. Smart move building where the workflow already exists rather than trying to change behavior.
the support ticket point is huge. AI explainability isn't just a "nice to have" in high-stakes domains — it's literally the product. if a recruiter can't quickly sanity-check why a candidate was surfaced, they won't trust the system. and once trust breaks in enterprise, it's really hard to rebuild.
I think this approach makes validation much faster because you’re solving real client problems instead of guessing what the market wants. It also reduces risk when launching new tools.
Curious — did you face any challenges balancing agency client work with building the product side? Seems like that could get overwhelming at scale.
Love this framing of “unfair access to reality” — building inside a real search firm and letting actual mandates be your product spec feels like such an underrated edge. The black‑box vs glass‑box lesson also really resonates; in high‑stakes domains like exec search, making the human sharper instead of “replacing” them sounds like exactly the right use of AI.
This is really smart positioning - using the agency as both a testing environment and distribution channel removes so much of the cold start problem. I'm doing something similar with Valen Sentinel - built it initially for influencer marketing agencies who need FTC compliance checking, thinking they'd be the natural first users. Turns out direct brands feel the compliance pain more acutely than agencies do. Did you find your initial assumption about who the customer was changed significantly once you started getting real usage data?