Built the wrong thing at the wrong time - but discovered something worse (or better?)
Hey IndieHackers! 👋
TL;DR: Got my lunch eaten by Claude Code, GitHub, and Linear. But while pivoting, I discovered AI is telling developers to install packages that don't exist - and hackers are already exploiting it.
Some months ago, I had a "brilliant" idea. Like a lot of you, I'd been playing with LLMs (both commercial and open/local ones - the frontier models feel like a whole different class of tool) and getting genuinely useful results for simple dev tasks.
You know the type - version bumps that need a method signature change. Too complex for Dependabot, but tedious enough that no one wants to drop their work context to touch them. They just sit there, contributing to an ever-growing black hole backlog. "We'll get to it when we can justify the sprint work" or some similar statement from leadership that effectively means "we'll never do it". That sort of thing.
So I built an "autofix" tool. Connect to your ticket tracker, tag an issue with "autofix", watch it generate a solid PR.
I got to a functional MVP. It worked! But I kept telling myself: "Let's get this to where I can demo it end-to-end. And add proper accounts! I want a nice, smooth video before I do my outreach."
Then Claude Code, GitHub, and Linear all shipped first-party support while I was still perfecting my demo script. 😅
Classic indie hacker mistake - building in stealth mode too long. The market had spoken: "you're too slow."
But here's the thing - I've been a command line junkie since the late 90s. Did sysadmin work from the early aughts, spent years debugging mixed-signal analog ICs (talk about multi-day feedback loops!), then dove into DevOps, IT security, and infrastructure before shifting to web development and consulting on tech, infra, architecture, and team processes.
Years of consulting meant getting thrown into new languages and codebases and organizations constantly. All that exposure taught me one thing: security failures have patterns. And most of them are known patterns that just haven't been mitigated. Same reasoning as the backlog pileup, but compounded by a certain level of Just Not Wanting To Know. Why scan for vulnerabilities if you unconsciously know you'll never get a chance to spend the time fixing what it finds?
What if instead of waiting for engineers to tag issues, I proactively found and fixed security vulnerabilities?
Started testing on deliberately vulnerable projects (NodeGoat, DVNA, etc.). Results were solid. We can't (yet?) autofix the hairiest cross-repo vulnerabilities, but we can automate away a sizable portion of security backlogs.
The false positive rate was absurdly high at first. Fixing it meant adding AST-based analysis to every. single. pattern. (I'll dive into this technical journey in a future post.)
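To give a flavor of why AST analysis cuts false positives so dramatically, here's a minimal sketch (the code snippet and the `eval` pattern are illustrative, not RSOLV's actual rules): a naive regex match flags the word "eval" anywhere, including comments and helper names, while walking the AST only flags real calls to the built-in.

```python
import ast
import re

SOURCE = '''
# eval is dangerous, so we avoid it here
def parse(data):
    return safe_eval_replacement(data)

def risky(expr):
    return eval(expr)
'''

# Naive regex scan: matches the comment and the helper name too.
regex_hits = len(re.findall(r"eval", SOURCE))

# AST-based scan: only counts actual calls to the built-in eval().
class EvalCallFinder(ast.NodeVisitor):
    def __init__(self):
        self.hits = 0

    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            self.hits += 1
        self.generic_visit(node)

finder = EvalCallFinder()
finder.visit(ast.parse(SOURCE))

print(regex_hits)   # 3 - comment, safe_eval_replacement, and the real call
print(finder.hits)  # 1 - just the genuine eval() call
```

Three "findings" collapse to one real one, and that ratio gets much worse on real codebases.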
While building this out with Claude Code, I noticed something odd. AI kept suggesting packages that didn't exist:
- express-security-validator
- react-auth-guard
- django-advanced-security
I'd catch these pretty quickly when the dependency manager tried to install them and failed, so no biggie, right? LLMs hallucinate; it's how they work. That's why we have tooling and tests.
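One cheap guardrail is to vet AI-suggested package names against what your lockfile already pins before ever running an install. Here's a hedged sketch (the lockfile snippet and suggestion list are made up for illustration; real npm v3 lockfiles have more fields):

```python
import json

# Hypothetical package-lock.json (v3-style) contents, trimmed for illustration.
LOCKFILE = json.loads('''
{
  "packages": {
    "node_modules/express": {"version": "4.19.2"},
    "node_modules/react": {"version": "18.3.1"}
  }
}
''')

# Packages an AI assistant suggested installing.
AI_SUGGESTED = ["express", "express-security-validator", "react-auth-guard"]

def known_packages(lockfile):
    # npm v3 lockfile keys look like "node_modules/<name>"; "" is the root entry.
    return {key.split("node_modules/")[-1]
            for key in lockfile["packages"] if key}

known = known_packages(LOCKFILE)
suspicious = [pkg for pkg in AI_SUGGESTED if pkg not in known]
print(suspicious)  # ['express-security-validator', 'react-auth-guard']
```

Anything flagged as suspicious deserves a manual registry lookup before it touches your project, because a failed install is the *good* outcome; the bad one is when the name actually resolves.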
Then I dug deeper...
[This Thursday: How I discovered hackers are already poisoning these AI-hallucinated packages - with proof of 30,000+ downloads]
RSOLV detects and automatically fixes both traditional vulnerabilities AND these new AI-specific threats. Because it's not just about fixing the backlog - it's about getting ahead of it entirely.
Currently:
Questions for you:
Drop a comment - I'm sharing this whole journey as I build.