28 Comments

10,000 Runs in 14 Days — What the Data Looks Like When a Niche API Finds Its Users

My first post here was called "From 0 to $80 in 9 Days." That was D+9.

Today is D+17. The counter just crossed 11,800.


The update in numbers

| Metric | D+9 (first post) | D+17 (today) |
|--------|-----------------|--------------|
| Total runs | 6,658 | 11,800+ |
| Estimated revenue | ~$80-90 | ~$140-160 |
| External users | 75 | 90+ |
| Actors earning | 12/13 | 13/13 |

The numbers move slower now. In the first week it felt like a rocket. Now it's more like a conveyor belt — steady, predictable, not exciting.

That's actually the good news.


What I've learned since the first post

1. One actor carries 72% of the load.

naver-news-scraper alone accounts for 8,483 of the 11,800 runs. The next closest is naver-place-search at 1,133. If news goes down, my revenue goes down.

I've been writing about this on Dev.to as a concentration risk. It's real. But concentration also means I know exactly what users want — Korean news monitoring is clearly the killer use case.

2. The day/night pattern is a gift.

Traffic drops to near-zero overnight (Seoul time), then surges at 9am KST when Korean businesses open. This pattern has held for over 2 weeks straight. It tells me these are real pipelines, not one-off experiments. Someone automated this.
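This kind of pattern is easy to check from raw run timestamps. Here's a minimal sketch of that analysis — the timestamps are synthetic, and the 80% threshold and helper names are my own illustration, not anything Apify provides:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

KST = timezone(timedelta(hours=9))  # Asia/Seoul (no DST)

def hourly_profile(run_starts_utc):
    """Bucket run start times by hour-of-day in KST."""
    return Counter(t.astimezone(KST).hour for t in run_starts_utc)

def looks_like_business_hours(profile, start=9, end=19, threshold=0.8):
    """Crude heuristic: the bulk of runs land inside Korean office hours."""
    total = sum(profile.values())
    daytime = sum(n for h, n in profile.items() if start <= h < end)
    return total > 0 and daytime / total >= threshold

# Synthetic data: a cron job firing daily at 09:00 KST (= 00:00 UTC)
runs = [datetime(2026, 3, day, 0, 0, tzinfo=timezone.utc) for day in range(1, 15)]
print(looks_like_business_hours(hourly_profile(runs)))  # True
```

The same histogram also separates cron-like behavior (one sharp spike at the same hour) from exploratory use (runs scattered across the day).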

3. Marketing matters more than more scrapers.

I spent the first 3 weeks building. Since then I've been writing about it — Dev.to series (30 posts), this community, Reddit attempts that mostly got filtered. The writing is generating more signal than any new feature would.

AmandaBrown asked in my first IH post: "Was there a signal from scraper 1 before you built the rest?" The answer is: barely. I built in the dark and got lucky that the demand existed. Now I see the signal. I'd build differently if I started over.


What's next

  • Getting the other 12 actors to grow (right now everything is naver-news)
  • Reddit re-attempt when karma is higher (currently ~1, 30-day lockout zone)
  • RapidAPI listing (Cloudflare Workers are ready, just need the provider setup)

The goal isn't to 10x the runs. It's to diversify so no single actor is 72% of everything.


If you're building niche data tools: the demand is quieter than the hype categories, but it's real and sticky. My users don't unsubscribe. They automate and forget I exist. That's the best kind of product.

Full breakdown: https://dev.to/sessionzero_ai/10000-runs-in-13-days-not-a-spike-a-baseline-4849

on March 30, 2026
  1.

    Concentrated. 72% of total runs come from naver-news-scraper, which means roughly 65 of those 90+ users are running news monitoring pipelines. The rest are split — naver-place has the next cluster, which looks like market research or competitive analysis based on the usage pattern (more sporadic, less cron-like).

    The concentration happened quickly — naver-news was the top-run actor by D+3. Part of that is the platform flywheel: Apify surfaces most-run actors in browse and search, so early traction compounds into more traction.

    What's interesting at D+17 is that the other actors found their own small user bases through the same mechanism, just slower. The question now is whether any of the smaller ones has a "naver-news moment" — or whether the distribution stabilizes around one dominant use case with a long tail.

  2.

    "Rewriting a pipeline" is the clearest framing I've seen for this. That's exactly the switching cost — and it shifts how I think about the concentration risk.

    The retention risk is low. You're right. But the risk that remains is on my side: if naver changes its structure or scraping becomes harder, those production pipelines break. So the concentration risk isn't "will they leave" — it's "can I keep delivering."

    That reframe actually changes where to invest. Better error handling, field stability, graceful failure over new actor expansion. The users who've built pipelines on top of this don't need more features — they need the thing to keep working.

  3.

    The "double down" instinct is already happening in practice — the news scraper gets the most attention on maintenance, field updates, rate limit handling. The other 12 exist but they're not where the energy goes.

    The harder part is "find out what they're building." Apify is API-first with no user accounts or contact layer. I can't survey them. What I can read: consistent daily runs tied to Seoul business hours = production pipelines, not experiments. The use case is almost certainly media monitoring or competitive intelligence.

    Pricing experiment is next. The inference is: these users are running at scale, not testing features. That points toward a usage tier rather than a feature gate. What do you charge for on your API when one use case dominates?

  4.

    Mostly inferred — Apify doesn't give me direct contact with users, so I'm reading usage patterns.

    The naver-news-scraper runs on a consistent daily schedule tied to Korean business hours. That's cron job behavior, not exploration. Most likely media monitoring or news aggregation pipelines — companies tracking specific topics, brands, or competitors in Korean media.

    The naver-place scraper has more sporadic usage, which looks more like market research or one-off competitive analysis.

    The clearest signal: users who run at 9AM KST every day for two weeks straight are not hobbyists testing something. They've integrated it into production workflows. What they're building on top of it, I can only infer — but the pattern is consistent enough that "Korean market intelligence pipeline" is the most accurate description.

  5.

    Reaching 10,000 runs in just 14 days is a strong validation signal, especially for a niche product. Most APIs never make it beyond the "built it, now what?" stage.

  6.

    10k runs in 14 days is a strong signal, especially for a niche tool. Most APIs never get past the "built it, now what" phase.

    Question — are the 10k runs coming from a handful of power users, or spread across many? That ratio matters a lot for pricing. If it's concentrated, you might be able to convert those heavy users to a paid tier faster than you think.

    I'm building something adjacent — an SEO scanning API that audits websites programmatically. Similar niche-tool energy. The hardest part isn't the tech, it's finding the distribution channel that actually converts. Where did your first users come from?

    1.

      First users came from IH post #1, then Apify's own organic search/browse. The distribution is more concentrated than I expected — the top user accounts for roughly 40% of all runs alone. So the 10k is more "one power user + growing tail" than evenly spread.

      On the pricing point: you're right that concentration makes premium tier conversion easier to target. The challenge is that Apify's pricing is per-actor-run, so heavy users are already self-selecting into a tier that works for them. Explicit premium tiers are something I need to think about more.

  7.

    10k runs in 14 days is solid traction. The niche API approach is interesting because it flips the usual startup advice — instead of building for everyone and hoping someone cares, you built for a specific workflow and let the users find you.

    I'm doing something similar with an SEO scanning API. I built it for my own cold outreach workflow (scanning agency websites before pitching them) and I'm now considering opening it up as a paid service. Your data on usage patterns is really useful — did you notice any particular time-of-day or day-of-week patterns in the runs?

    1.

      Yes, very clear day/night pattern. Traffic drops to near-zero from ~midnight KST, then surges around 9am KST when Korean businesses open. This has held consistently for 2+ weeks. Weekday vs weekend is also noticeable — weekends are quieter.

      That pattern is actually the clearest signal I have that these are production pipelines, not experiments. Someone set up a cron job and forgot about it. That's the stickiest kind of customer.

  8.

    The "they automate and forget I exist" line really resonates. That's the dream for any API or tool-based product — you become infrastructure rather than a product people actively think about. Churn drops to near zero when your users have built workflows on top of you.

    The concentration risk insight is super honest and something most founders wouldn't share publicly at this stage. Having 72% of usage from one actor is both validating (someone loves it enough to build a pipeline) and terrifying. Your instinct to diversify before scaling makes total sense — we're seeing similar dynamics building an AI-powered SaaS where early power users can mask whether you actually have broad product-market fit or just one very enthusiastic customer.

    The point about marketing > more features is something I wish more technical founders internalized. Your Dev.to series generating more signal than new scrapers is basically the "distribution beats product" principle in action. How are you thinking about the next 10x in users — more content, or expanding beyond Korean news into other niches?

    1.

      More content, not more niches. Writing is generating more signal right now than any new actor I could build — I'm getting clearer on what the actual use cases are. Going wide before I've saturated Korean news monitoring feels premature.

      The day/night pattern tells me there's still real headroom in the current use case. I'd rather go deeper on naver-news-scraper (better filtering, more granular sources, maybe alerting) than chase 12 other markets at once.

  9.

    The concentration risk is real but also a signal — 72% from one use case means you found actual product-market fit in Korean news monitoring, not just scattered usage. I would lean into that before diversifying. What does the naver-news-scraper user even do with it?

    1.

      Exactly how I'm reading it. The naver-news-scraper users are mostly running monitoring pipelines — Korean companies tracking competitor mentions, sector news coverage, specific keyword alerts. The daily schedule pattern confirms it's integrated into actual workflows, not ad hoc.

      72% concentration at this stage feels less like risk and more like "here's your PMF, go deeper." I'm planning to do exactly that before touching the other 12 actors.

      1.

        "72% concentration as PMF signal" - that framing shift is right. Concentration at this stage means the market told you something. Going deeper before spreading makes sense - you get compounding returns from the niche (referrals, feature requests that reinforce each other) vs starting over with each new vertical. Curious what deeper looks like for you - more actors in the Korean news space, or different data points in the same workflow?

        1.

          Different data points in the same workflow, not more actors. The users who are running naver-news daily are already sold on Korean news monitoring — what they'd pay more for is more control: outlet filtering, keyword-based alerts, longer date ranges. That's more value for the same audience rather than a new distribution problem.

          More actors in adjacent Korean news space (Daum, government press releases) makes sense eventually, but only once I understand what the current power users actually need. Asking them comes before building.

  10.

    The 'niche is real and sticky' point resonates a lot. I'm building TransitLens, a browser-based GTFS transit data explorer. The audience is small (transit developers, planners, agencies) but they have a very specific problem with no great browser-based solution. The activation insight is interesting too - I've found that getting users to their first 'aha moment' (seeing their data on a map in 10 seconds) matters far more than feature depth.

    1.

      The "aha moment in 10 seconds" framing is exactly right — and it scales differently than feature depth does. A feature serves users who are already committed; the aha moment converts users who are still deciding. For niche tools with small audiences, that conversion window is everything.

      TransitLens sounds like it has the same structure: a very specific problem, a specific audience who feels the pain acutely, and no good existing solution. That's actually easier to defend than a broad tool with a vague audience. Good luck with it.

  11.

    That’s strong early usage.

    Curious how much of this is repeat usage vs initial experimentation — that usually changes how you think about next steps.

    1.

      Heavily repeat. The day/night pattern tied to Seoul business hours is the clearest signal — those are scheduled cron jobs, not people experimenting. I can't see per-account run history directly, but the cadence tells the story: someone set up a pipeline and it's running on autopilot.

      That actually changes how I think about next steps significantly. Experimentation churn is a very different problem than pipeline stickiness. The data says stickiness — which means the next question is depth, not breadth.

      1.

        That actually makes a lot of sense — the cron pattern is a pretty clear giveaway.

        And yeah, once it’s on autopilot, the question shifts from usage to how important it is in their workflow.

        Do you have any sense yet of how critical these pipelines are for them?

  12.

    The 72% concentration on one actor is scary but also your clearest signal. I run a developer API too and the pattern is always the same: one use case takes off and the others sit there waiting. Instead of spreading effort across 13 actors, I would double down on the news scraper and find out what those 90 users are actually building on top of it. That tells you what to charge more for.

  13.

    The day/night pattern correlated to Seoul business hours is probably the most underrated insight here. That is not vanity traffic - those are cron jobs running on production infrastructure. When your users set it and forget it, your churn rate approaches zero because removing you means rewriting a pipeline.

    The 72% concentration risk is real but I would reframe how you think about it. Right now naver-news-scraper is not just your top actor, it is your product-market fit signal. Before diversifying across 13 actors, it might be worth going deeper on the news use case: are there adjacent Korean data sources (Daum, government press releases, corporate filings) that the same users would also want? Expanding within the use case you have already validated is usually higher-ROI than trying to find PMF for 12 other scrapers simultaneously.

    Also worth noting - at 90 external users generating 11,800 runs, your average user is running about 130 requests in 17 days. That is roughly 7-8 runs per user per day. That kind of frequency suggests monitoring or alerting use cases, not one-off research. Have you talked to any of them about what exactly they are building on top of your data? Knowing that could unlock a premium tier.
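That per-user arithmetic checks out. A quick sketch using the post's own numbers — rough averages only, since per-account run history isn't visible:

```python
# Back-of-envelope averages from the post's numbers (11,800 runs,
# 90 external users, 17 days). No per-account data is available,
# so this says nothing about how skewed the distribution is.
total_runs = 11_800
users = 90
days = 17

runs_per_user = total_runs / users            # ~131 runs per user overall
runs_per_user_per_day = runs_per_user / days  # ~7.7 runs per user per day
print(round(runs_per_user), round(runs_per_user_per_day, 1))
```

A mean near 7-8 runs per day is consistent with scheduled monitoring rather than one-off research, though a few heavy users could dominate the average.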

  14.

    Congratulations on the growth! Here's to hoping it keeps growing.

    If you can share, do you know what kind of businesses are using the API? Or what they are using the API data for?

  15.

    "My users don't unsubscribe. They automate and forget I exist" - that's the dream for data tools. Sticky by default because you're wired into someone's workflow, not competing for their attention every day.

    The 72% concentration risk is worth watching but don't rush to fix it by building more scrapers. You said it yourself - writing is generating more signal than new features. The play is probably getting the naver-news-scraper in front of more people rather than spreading thin across 13 actors that each get a trickle.

    The day/night pattern proving these are real automated pipelines and not one-off tests is the most valuable insight in this whole post. That's your proof of product-market fit right there. When someone wires your tool into their daily workflow without you asking them to, you've built something that matters.

    Curious what the Dev.to writing is doing for you in terms of actual conversion vs just awareness.

    1.

      Honest answer: mostly awareness, not direct conversion. I can't point to a specific Dev.to article and say "that brought X users." The Apify store itself does most of the actual converting — people find the actor, try it, keep using it.

      What writing does: it creates cross-channel surface area. This IH post came from a Dev.to article getting traction. Someone reads a Dev.to post, searches Apify later, finds the actor. The attribution chain is long and invisible.

      The one concrete signal I have is that my total users went from 22 to 100 over the same period I've been writing consistently. Correlation, not causation — but I haven't done anything else differently.

      1.

        Yeah that invisible attribution thing is frustrating but it's real. 22 to 100 users with writing as the only variable is hard to ignore even without clean data. I'd keep going

  16.

    D+9 to D+17 and the curve is still holding — that's the best kind of update to read. The 90+ external users on a niche API in 17 days is the part worth paying attention to. Most tools take months to find their first cluster of real users outside the builder's own network.

    Curious what the D+17 distribution looks like — are those 90 users concentrated in one use case or spread across a few different ones? That split tends to determine whether the next phase is doubling down on one segment or staying general.

  17.

    "My users don't unsubscribe. They automate and forget I exist" — that's the dream metric right there. Recurring revenue where churn is basically zero because you're embedded in someone's pipeline.

    I built a niche API too — an SEO site scanner. It checks title tags, meta descriptions, alt text, heading hierarchy, page speed, and schema markup, and scores out of 100. Runs on a $0/mo Linux server.

    The concentration risk point hits home. I'm in the opposite situation — no single user dominates because I have no users yet. But the lesson is the same: when one thing works, double down on understanding WHY it works before trying to diversify.

    Your day/night pattern analysis is smart. The fact that traffic correlates with Korean business hours means these aren't hobbyists — they're production systems. That's the stickiest kind of customer.

    Curious about the RapidAPI listing — are you planning a free tier to get people testing, then paid tiers for volume? That's the approach I'm considering for my SEO API.
