24 Comments

Built a curated API marketplace to reduce time wasted on unreliable APIs (feedback welcome)

While working on side projects recently, I kept running into the same issue: finding APIs that actually work takes way more time than it should.

The pattern was usually:

- API looks promising
- Docs look great
- Integration starts… and responses don’t match docs / reliability is inconsistent / you end up switching again

So I built Apives.com — a curated API discovery + marketplace platform.

What I’m trying to do differently:

- Curation over volume: fewer APIs, but more reliable/useful ones
- Manual checks: submissions go through a basic manual review before listing
- Use‑case driven discovery: easier to find APIs based on what you want to build (not just generic categories)
- Practical info: clear docs, quick integration snippets, and transparent pricing (where available)

It’s still early and evolving, and I’d love feedback from builders here:

How do you currently find and evaluate APIs?

What’s your biggest frustration with existing API directories/marketplaces?

If you check Apives, what’s the first thing you’d change to make it genuinely useful?

Thanks!

on April 16, 2026
  1. 2

    This is a real problem, and I think the curation angle is stronger than the homepage currently makes it feel.

    A few quick notes after checking Apives:

    1. The hero is polished, but still a bit vague. “Discover APIs. Deploy Potential.” doesn’t immediately tell me why I should use this instead of RapidAPI, Google, or docs-hopping. I’d make the promise much more explicit around curated, manually reviewed, reliable APIs.
    2. Above the fold feels overloaded right now. Hero copy, promo, code snippet, and live runner all compete before the core value lands. I’d simplify the first screen to value prop, trust proof, and one primary CTA.
    3. The sample code can make it look like Apives is its own API product rather than a marketplace. That creates positioning confusion fast.
    4. Your strongest differentiator seems to be manual review / reliability, but I don’t see enough proof of that early. Badges like reviewed, tested endpoint, pricing verified, or last checked would do a lot of work.
    5. The cards are detailed, but I’d make them more scannable with clearer top-level metadata like auth method, pricing, docs quality, and best use case.

    Overall, I think the opportunity is less “add more stuff” and more “make the curation promise impossible to miss.” If helpful, I can do a super short US$1 homepage teardown with screenshots and prioritized fixes: https://roastmysite.io/go.php?src=external_manual_ih_apives_homepageclarity_apr18_usd_presell_hv

    1. 1

      That’s fair, really appreciate this 👀
      We’re shifting more towards “curated + reliable APIs” as the core message. The homepage got a bit crowded; we’ll simplify it and add stronger trust signals (badges, last checked, etc.). Thanks for the teardown 🙏

  2. 2

    This is a real problem.

    Finding a working API sometimes takes longer than actually building the feature 😅

    Interesting angle focusing on reliability — are you seeing more demand from solo devs or teams?

    1. 1

      Yeah, it’s a real pain 😅
      Right now we’re seeing more pull from solo devs, but teams are starting to care more once reliability becomes a blocker.

  3. 2

    Curation > volume is the right call here — most API directories fail because everything looks good until you actually try integrating. Biggest frustration for me is reliability + outdated docs. If you could surface real-world signals (uptime, last verified, maybe user feedback), that would make this way more valuable.

    1. 1

      Exactly — volume is everywhere, trust isn’t.
      We’re leaning more into real-world signals instead of just listings.

  4. 2

    the discovery part isn't the bottleneck most builders hit. it's that an api that worked 6 months ago is now flaky and no directory has that signal. founders aren't looking for "best payment api" - they already know. they need to know which one isn't rate-limiting them to death this week. curious if you're thinking about surfacing ongoing reliability data or sticking to curation at submission time

    1. 1

      Spot on, static curation isn’t enough.
      We’re working on adding live signals via the API runner + periodic checks so devs know what actually works now.

  5. 2

    Hey — this is actually a real pain point. API discovery is still weirdly fragmented, and most directories optimize for listing volume instead of “can I trust this in production”.

    The curation + use-case angle is the right direction — especially if you can make reliability signals visible, not just docs/SEO pages.

    Curious, how are you planning to measure “reliability” in practice during manual review — usage history, uptime signals, or community feedback over time?

    Also, I’m running a small experiment with early-stage builders around infrastructure + workflow tools like this. $19 entry, winner gets a Tokyo trip (flights + hotel). Round 01 is live (100 cap).

    1. 1

      100% agree 🔥
      Discovery is easy, trust is the hard part.
      Goal is: if it’s on Apives, you can rely on it.

      1. 1

        Thanks apives_ecosystem! 100% agree — discovery is easy, trust is the hard part.

        Quick overview of Tokyo Lore: It’s a paid ideas competition where people submit Tokyo-connected business or creative ideas. For $19 you get a custom AI-generated artifact of your idea + a full SPEAR business analysis, plus entry into the round where the winner gets a real trip to Tokyo (flights + hotel booked by us).

        Prize pool has started building — odds are excellent right now while it’s still very early.

        Reliability measurement (usage history, uptime, community feedback) is exactly the kind of closed-loop logic we’re testing. Would you be interested in submitting an idea? Happy to send you the direct $19 link.

  6. 2

    Curation-over-volume is the right instinct — every API directory I've bounced off of hit the same failure mode where 80% of the listings are dead links or mismatched docs. The "docs say X, response returns Y" gap is brutal when you're shipping a side project on a weekend.

    On reliability: do you run any automated contract tests (hitting sample endpoints, validating schema/status) or is it purely manual review at submission time? My worry with manual-only is that an API passes review, then 6 months later the provider silently changes a field and you have no signal until a user complains. A lightweight nightly cron + schema diff could scale with listings. Also curious how you handle auth/paid-only APIs that can't be probed anonymously — self-reported uptime, or user reviews?
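    For what it's worth, the nightly schema-diff idea is pretty cheap to build. A minimal sketch in Python (the field names and baseline format here are made up for illustration, not anything Apives actually runs):

```python
def infer_schema(value):
    """Reduce a JSON value to a type skeleton: dicts map field -> schema,
    lists collapse to the schema of their first element."""
    if isinstance(value, dict):
        return {k: infer_schema(v) for k, v in value.items()}
    if isinstance(value, list):
        return [infer_schema(value[0])] if value else []
    return type(value).__name__

def schema_diff(baseline, current, path="$"):
    """Compare the baseline schema captured at review time against the
    schema inferred from tonight's live response; return drift messages."""
    problems = []
    if isinstance(baseline, dict) and isinstance(current, dict):
        for field in baseline:
            child = f"{path}.{field}"
            if field not in current:
                problems.append(f"missing field: {child}")
            else:
                problems += schema_diff(baseline[field], current[field], child)
        for field in current:
            if field not in baseline:
                problems.append(f"new field: {path}.{field}")
    elif baseline != current:
        problems.append(f"type changed at {path}: {baseline} -> {current}")
    return problems

# Baseline vs. a response where the provider silently renamed "user_id"
# and changed "score" from int to string.
baseline = infer_schema({"user_id": 1, "score": 10, "meta": {"ok": True}})
tonight  = infer_schema({"uid": 1, "score": "10", "meta": {"ok": True}})
print(schema_diff(baseline, tonight))
```

    Run that from a cron against each listing's sample endpoint and you catch the "docs say X, response returns Y" drift before a user does.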

    1. 1

      Great question, currently a mix of manual review + live endpoint testing (runner).
      Gradually adding uptime + usage signals too.
      And your experiment sounds awesome 🚀

  7. 2

    This is a real problem — most API directories optimize for volume, not reliability.

    Curious — how are you actually validating “reliability” before listing?

    1. 1

      Right now it’s manual + endpoint testing.
      But moving towards continuous validation instead of one-time approval.

      1. 1

        That makes sense — continuous validation is probably the only way this works long-term.

        One thing I’d flag though — in something like API discovery, users are making trust decisions really fast.

        Even before testing reliability, they’re filtering based on:
        – how clear the platform feels
        – whether it sounds “authoritative” vs experimental

        Sometimes that affects adoption more than the actual validation layer early on.

        Apives works functionally, but it doesn’t immediately communicate what the product does or why it’s more reliable than others.

        Have you thought about tightening the positioning/brand around that before scaling traffic?

  8. 2

    That’s a truly great idea and a fantastic website. Thanks for your work. I think the UX is excellent—I explored a few APIs to understand how it works, and everything was smooth. I quickly got where I needed to be and found all the information I was looking for. Wishing you lots of success with your site and plenty of sponsors!

    1. 1

      Really appreciate this 🙏
      Glad it helped — trying to make API discovery actually useful, not just searchable.

  9. 1

    Looks interesting...

  10. 1

    API reliability is such an underrated problem for developers. The reliability scoring is a smart differentiator — are you tracking uptime over time or more of a point-in-time check during listing?

  11. 1

    My problem is that I'm still unsure about the reliability of certain APIs; how do I check that? Sometimes it works, and sometimes it doesn't give a response.

  12. 1

    API discovery is actually an uptime data problem dressed up as search. Most directories rank by popularity or recency. The signal that matters is error rate over 90 days, auth stability, and breaking-change frequency. Any of those public would flip the ranking instantly. Are you surfacing live reliability metrics or just curating manually for now?
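    To make the "flip the ranking" point concrete, here's a toy composite score (the weights and the sample stats are entirely hypothetical) where a quiet, stable API outranks a popular but flaky one:

```python
def reliability_score(error_rate_90d, auth_failures_90d, breaking_changes_90d):
    """Toy composite: penalize sustained errors most, then auth incidents,
    then breaking changes. Returns a 0..100 score."""
    score = 100.0
    score -= error_rate_90d * 100 * 2.0   # 1% sustained error rate costs 2 points
    score -= auth_failures_90d * 5.0      # each auth incident costs 5 points
    score -= breaking_changes_90d * 10.0  # each breaking change costs 10 points
    return max(score, 0.0)

apis = [
    {"name": "popular-api", "stats": (0.08, 3, 2)},   # popular but flaky
    {"name": "boring-api",  "stats": (0.005, 0, 0)},  # quiet and stable
]
ranked = sorted(apis, key=lambda a: reliability_score(*a["stats"]), reverse=True)
print([a["name"] for a in ranked])  # → ['boring-api', 'popular-api']
```

    Popularity-based ranking would put these in the opposite order, which is exactly the problem.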

  13. 1

    This is a solid idea. I currently run an email deliverability API and I’m also a heavy consumer of APIs for my other apps.

    My biggest frustration: 'Zombie Documentation.' Nothing kills momentum faster than an API that looks great but has a mismatch between the docs and the actual JSON response.

    If Apives could show a 'Verified Response' snippet or a 'Last Healthy Request' timestamp, it would be an instant bookmark for me. Curation over volume is definitely the right move.
