I tried using ChatGPT/Copilot for our team's PR reviews for a month.
I thought it would save me time. Instead, it wasted it.
The problem isn't that it's "dumb". The problem is that it has no context.
It would flag a function for being "too complex" when that complexity was necessary for our specific business logic.
It would suggest "optimizations" that broke our internal API contracts because it knew nothing about the 50 other microservices depending on them.
I spent more time marking comments as "Resolved / Won't Fix" than I did actually reviewing code.
It was generating noise, not signal.
I realized I didn't need a "chatty" assistant. I needed a filter.
I needed something that understood the difference between a style preference and a security vulnerability.
So I built CodeProt to fix this for myself.
It's a strict, context-aware analyzer that ignores the fluff and only flags architectural risks and security issues.
It's not trying to be a "senior engineer simulator". It's just a damn good filter.
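The core idea is simple enough to sketch. This is a hypothetical illustration of the filtering behavior, not CodeProt's actual implementation; the category names and `filter_findings` helper are my own made-up examples:

```python
# Illustrative sketch only: keep high-signal findings, drop the noise.
# Category names below are assumptions, not CodeProt's real taxonomy.

SIGNAL = {"security", "architecture"}  # findings worth a human's time

def filter_findings(findings):
    """Return only findings that flag architectural or security risk."""
    return [f for f in findings if f["category"] in SIGNAL]

findings = [
    {"category": "style", "message": "Prefer f-strings"},
    {"category": "security", "message": "SQL query built via string concat"},
    {"category": "architecture", "message": "Change breaks internal API contract"},
]

kept = filter_findings(findings)
# kept now holds only the security and architecture findings;
# the style nit is dropped before a human ever sees it.
```

The real work, of course, is in classifying findings with enough repo context to get the category right; the filter itself is the easy part.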
If you're tired of false positives, check it out.