Hey IH đź‘‹
I'm Sewon, a solo developer from Seoul. I just launched CheckAPI — an API monitoring
tool with a specific focus on silent failures.
Standard monitors check HTTP status codes. But what about this scenario:
Your monitor says everything is fine. Your users are experiencing a broken product.
This is a silent failure — and it's more common than you'd think.
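To make the idea concrete, here's a tiny illustration (my own sketch, not CheckAPI's code) of how a status-only monitor reports "up" while the body says the product is broken:

```python
# Hypothetical example: a 200 response whose body signals a failure.
import json

response = {
    "status_code": 200,
    "body": json.dumps({"status": "error", "results": []}),
}

def status_only_check(resp):
    """What a basic uptime monitor sees: just the HTTP status code."""
    return resp["status_code"] == 200

def body_aware_check(resp):
    """A semantic check: 200 alone isn't enough, the payload must be healthy."""
    payload = json.loads(resp["body"])
    return resp["status_code"] == 200 and payload.get("status") == "ok"

print(status_only_check(response))  # True  -> monitor says everything is fine
print(body_aware_check(response))   # False -> users see a broken product
```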
The core feature: response body validation, with three modes.
Plus the usual stuff: 5 alert channels (Email, Slack, Telegram, Discord, Webhook),
public status pages, SSL expiry alerts, response time tracking.
UptimeRobot restricted commercial use on free plans in December 2024. A lot of indie
hackers got caught off guard. CheckAPI's free plan (10 monitors) has zero commercial
restrictions — forever.
## Stack
Open source (MIT): github.com/JEONSEWON/CheckAPI
## Pricing
Free → $5 → $15 → $49/mo
Live at: checkapi.io
Would love feedback from this community — especially around what monitoring features
you actually use vs. what sounds good on paper.
Catching semantic failures is the part most API monitors skip, and it's usually where the painful bugs hide. The hard part is letting people express "200 but wrong payload, empty result, stale data" without turning setup into a test framework. Curious how CheckAPI handles assertions and false positives; that's where these tools usually win or lose.
Great question — you nailed the exact pain point most tools miss.
Right now, CheckAPI handles semantic failures through Keyword + Regex validation on the response body.
You define what “good” or “bad” looks like directly in the monitor settings (e.g. `"status":"ok"`, `userId` exists, `count > 0`, or any regex pattern). It’s deliberately simple so it doesn’t turn into a full test framework. You’re not writing assertions in code — just telling us what string or pattern should (or should not) be present.
On false positives: because it’s keyword/regex based and you control the exact pattern, false positives are actually quite low once set up correctly. We also give you a “test alert” button so you can verify before going live.
JSON Path + more structured assertions are already on the near-term roadmap (we know many people want to check nested fields, array lengths, etc. without regex).
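For a sense of what those structured assertions would let you say without regex, here's a hedged sketch using plain dict traversal (the dotted-path syntax and helper name are my assumptions, not the planned API):

```python
# Illustration of JSON Path-style checks on nested fields and array lengths.
import json

def get_path(data, path):
    """Walk a dotted path like 'data.items' through nested dicts."""
    for key in path.split("."):
        data = data[key]
    return data

body = json.loads('{"data": {"items": [1, 2, 3], "meta": {"stale": false}}}')

assert get_path(body, "data.meta.stale") is False  # nested-field check
assert len(get_path(body, "data.items")) >= 1      # array-length check
print("assertions passed")
```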
Would love to hear your take — what kind of semantic checks have bitten you the most in the past? (empty arrays, wrong status inside 200, stale timestamps, etc.)
Happy to show you exactly how it works if you want.
This is a really interesting angle — those “everything looks fine but it’s not” failures are the worst to catch.
Curious if most people come to this after something breaks, or if you’re seeing teams actually monitor this proactively?
Thanks for the thoughtful comment!
Right now, the vast majority of people (including our first 7 users) come to CheckAPI after they’ve already been burned by a silent failure.
They usually say something like:
“I thought everything was fine because I was getting 200 OK… until customers started complaining.”
That pain is exactly why I built this.
The goal is to help teams shift from reactive firefighting to proactive monitoring — catching the hidden errors before they reach users.
We’re still very early, but that’s the direction we’re heading.
Would love to hear your take — have you ever had a silent failure sneak past your current monitoring?
Yeah, I’ve seen this a couple of times — especially where everything looks green on dashboards but something subtle breaks in the response.
The tricky part is teams don’t even think to monitor this until it hurts.
Feels like the real challenge is not just detection, but getting people to care about it before something breaks.
Congrats on the launch, Sewon.
Silent failures are a nightmare for developers, so catching what's actually inside the response body is a great focus.
I am also building something called WordyKid that solves a "silent" frustration for parents.
It is a tool that lets parents snap a photo of any physical worksheet or book and instantly turns it into a language game.
It is all about making sure the learning actually happens instead of just "completing" the task.
Good luck with the growth!
🙏
Thank you! Really appreciate it.
"Completing the task" vs "actually learning" — that's exactly the same problem
I'm solving on the API side. The surface looks fine, but nothing real is happening underneath.
WordyKid sounds genuinely useful. Snapping a worksheet and turning it into a game
is the kind of thing parents will love. Good luck with it! 🚀