
Show IH: CheckAPI – API monitoring that catches silent failures, not just HTTP 200

Hey IH 👋

I'm Sewon, a solo developer from Seoul. I just launched CheckAPI — an API monitoring
tool with a specific focus on silent failures.

The problem I kept running into

Standard monitors check HTTP status codes. But what about this scenario:

  • Your API returns 200 OK ✅
  • But the response body is empty, malformed, or contains {"error": "db_connection_failed"} ❌

Your monitor says everything is fine. Your users are experiencing a broken product.
This is a silent failure — and it's more common than you'd think.
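To make the gap concrete, here's a minimal sketch (not CheckAPI's actual code) contrasting a status-only check with a body-aware one:

```python
import json

def status_only_check(status: int, body: str) -> bool:
    """What a basic uptime monitor does: 200 means healthy."""
    return status == 200

def body_aware_check(status: int, body: str) -> bool:
    """Also inspect the payload: empty, malformed, or error-carrying
    bodies are failures even when the status code is 200."""
    if status != 200:
        return False
    if not body.strip():
        return False  # empty body: a silent failure
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False  # malformed JSON: another silent failure
    return "error" not in payload

resp = (200, '{"error": "db_connection_failed"}')
print(status_only_check(*resp))  # True: the monitor says everything is fine
print(body_aware_check(*resp))   # False: the silent failure is caught
```

Same response, opposite verdicts; that's the whole pitch in ten lines.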

What CheckAPI does differently

Response body validation with three modes:

  • Keyword — must be present ("status":"ok") or absent ("error")
  • Regex — pattern matching for structured responses
  • JSON Path — coming soon

Plus the usual stuff: 5 alert channels (Email, Slack, Telegram, Discord, Webhook),
public status pages, SSL expiry alerts, response time tracking.

Why I built this (beyond the technical reason)

UptimeRobot restricted commercial use on free plans in December 2024. A lot of indie
hackers got caught off guard. CheckAPI's free plan (10 monitors) has zero commercial
restrictions — forever.

Stack

Open source (MIT): github.com/JEONSEWON/CheckAPI

Pricing

Free → $5 → $15 → $49/mo

Live at: checkapi.io

Would love feedback from this community — especially around what monitoring features
you actually use vs. what sounds good on paper.

posted to Show IH on April 2, 2026
  1. 1

    Catching semantic failures is the part most API monitors skip, and it's usually where the painful bugs hide. The hard part is letting people express "200 but wrong payload, empty result, stale data" without turning setup into a test framework. Curious how CheckAPI handles assertions and false positives; that's where these tools usually win or lose.

    1. 1

      Great question — you nailed the exact pain point most tools miss.

      Right now, CheckAPI handles semantic failures through Keyword + Regex validation on the response body.
      You define what “good” or “bad” looks like directly in the monitor settings (e.g. "status":"ok", userId exists, count > 0, or any regex pattern).

      It’s deliberately simple so it doesn’t turn into a full test framework. You’re not writing assertions in code — just telling us what string or pattern should (or should not) be present.

      On false positives: because it’s keyword/regex based and you control the exact pattern, false positives are actually quite low once set up correctly. We also give you a “test alert” button so you can verify before going live.

      JSON Path + more structured assertions are already on the near-term roadmap (we know many people want to check nested fields, array lengths, etc. without regex).

      Would love to hear your take — what kind of semantic checks have bitten you the most in the past? (empty arrays, wrong status inside 200, stale timestamps, etc.)

      Happy to show you exactly how it works if you want.

  2. 1

    This is a really interesting angle — those “everything looks fine but it’s not” failures are the worst to catch.

    Curious if most people come to this after something breaks, or if you’re seeing teams actually monitor this proactively?

    1. 1

      Thanks for the thoughtful comment!

      Right now, the vast majority of people (including our first 7 users) come to CheckAPI after they’ve already been burned by a silent failure.

      They usually say something like:
      “I thought everything was fine because I was getting 200 OK… until customers started complaining.”

      That pain is exactly why I built this.

      The goal is to help teams shift from reactive firefighting to proactive monitoring — catching the hidden errors before they reach users.

      We’re still very early, but that’s the direction we’re heading.

      Would love to hear your take — have you ever had a silent failure sneak past your current monitoring?

      1. 1

        Yeah, I’ve seen this a couple of times — especially where everything looks green on dashboards but something subtle breaks in the response.

        The tricky part is teams don’t even think to monitor this until it hurts.

        Feels like the real challenge is not just detection, but getting people to care about it before something breaks.

  3. 1

    Congrats on the launch, Sewon.

    Silent failures are a nightmare for developers, so catching what's actually inside the response body is a great focus.

    I am also building something that solves a "silent" frustration for parents called WordyKid.

    It is a tool that lets parents snap a photo of any physical worksheet or book and instantly turns it into a language game.

    It is all about making sure the learning actually happens instead of just "completing" the task.

    Good luck with the growth!

    🙏

    1. 1

      Thank you! Really appreciate it.

      "Completing the task" vs "actually learning" — that's exactly the same problem
      I'm solving on the API side. The surface looks fine, but nothing real is happening underneath.

      WordyKid sounds genuinely useful. Snapping a worksheet and turning it into a game
      is the kind of thing parents will love. Good luck with it! 🚀
