I spent the day debugging a React app with an AI assistant. The AI correctly diagnosed a complex multi-file bug involving investment_type propagation across a database, a screening filter, and a fallback score — genuinely impressive reasoning across thousands of lines of code.
Then it introduced a broken JSX ternary without a fragment wrapper. A bug that any junior developer would catch in 30 seconds.
What followed was two hours of the AI confidently blaming everything except itself — wrong file, missing deploy, cache issues, console commands — before finally looking at its own code and finding a missing <>…</>.
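For anyone who hasn't hit this before, the failure looks roughly like the sketch below. The component and prop names are made up for illustration, not taken from the actual app; the point is simply that a ternary branch returning two adjacent JSX elements needs a fragment wrapper, or the file won't compile.

```tsx
import React from "react";

type Props = { isScreened: boolean; score: number };

// Broken version (fails to parse): the first ternary branch returns two
// adjacent elements with no single parent.
//
//   {isScreened ? (
//     <span>{score}</span>
//     <span>screened</span>
//   ) : (
//     <span>fallback</span>
//   )}

// Fixed version: wrap the adjacent elements in a fragment (<>…</>).
export function ScoreCell({ isScreened, score }: Props) {
  return (
    <div>
      {isScreened ? (
        <>
          <span>{score}</span>
          <span>screened</span>
        </>
      ) : (
        <span>fallback</span>
      )}
    </div>
  );
}
```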
So yes, AI can absolutely be both smart and stupid at the same time. And the answer to “how do you know which one you’re dealing with” is: you don’t, until it’s too late.
The dangerous part isn’t when AI doesn’t know something. It’s when it doesn’t know what it doesn’t know — and fills the gap with confident explanations that send you in the wrong direction.
The pattern I noticed: AI performs best on tasks that require connecting many pieces of information simultaneously. It performs worst on simple, mechanical tasks that require nothing more than careful attention. Ironically, those are exactly the tasks where we stop checking its work.
Trust the AI on the hard stuff. Double-check the simple stuff. And when it starts blaming you — that’s usually when you should start looking at its code.
That’s been my experience as well.
The pattern recognition and cross-referencing of context can be genuinely impressive; then the same model falls over on something painfully basic, with complete confidence.
The dangerous bit is exactly that confidence. A wrong answer delivered hesitantly is manageable. A wrong answer delivered with certainty sends you down rabbit holes.
Feels less like intelligence and more like uneven competence depending on the task shape.
Well done. "Uneven competence depending on the task shape" is probably the most accurate description of AI I've read.