
The uncomfortable problem I’m seeing in technical hiring

I’m building a SaaS around GitHub evaluation.

But the real problem I’m observing isn’t scoring.

It’s this:

Most recruiters don’t actually know what to look for in a GitHub profile.

They look at:
• Stars
• Commit count
• Contribution graph

Developers optimize for visibility.
Not structural quality.

That creates noise on both sides.

I started building a “technical maturity index” — but now I’m questioning something deeper:

Is the problem evaluation?

Or is it signal education?

Curious:

If you hire engineers,
what’s the one thing you struggle with when reviewing GitHub?

Not asking for validation.
Trying to understand the friction properly.
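To make the "technical maturity index" idea concrete, here is a minimal sketch of what scoring structural signals instead of visibility signals could look like. Every signal name, weight, and threshold below is a hypothetical assumption for illustration, not the actual methodology behind the product.

```python
# Hypothetical sketch of a "technical maturity index" scorer.
# All signal names and weights are illustrative assumptions,
# not the author's actual methodology.

def maturity_score(repo: dict) -> float:
    """Score one repository on structural signals rather than visibility.

    `repo` is a plain dict of metadata a crawler might collect:
      has_tests, has_ci, has_readme (bools),
      avg_commit_message_len (chars),
      open_issue_response_days (float, optional).
    """
    score = 0.0
    # Structural hygiene signals, weighted by assumed importance.
    score += 3.0 if repo.get("has_tests") else 0.0
    score += 2.0 if repo.get("has_ci") else 0.0
    score += 1.0 if repo.get("has_readme") else 0.0
    # Reward descriptive commit messages, capped so length can't dominate.
    score += min(repo.get("avg_commit_message_len", 0) / 50.0, 2.0)
    # Reward responsiveness to issues; bonus decays to zero over weeks.
    days = repo.get("open_issue_response_days")
    if days is not None:
        score += max(2.0 - days / 7.0, 0.0)
    # Deliberately ignored: stars, forks, contribution-graph streaks.
    return round(score, 2)
```

The point of the sketch is the omission at the bottom: the visibility metrics developers optimize for contribute nothing, so gaming them doesn't move the score.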

on March 2, 2026
  1.

    Technical hiring is broken because the tools changed faster than the interview process. Companies still test for memorized algorithms while devs are shipping with Claude, Codex, and Cursor daily.

    The new skill isn't "can you invert a binary tree" — it's "can you orchestrate multiple AI tools to ship faster." That's why I built TokenBar (https://www.tokenbar.site/) — helping devs manage the AI tools they actually use.

  2.

    The hiring problem is real. AI is changing what "technical skill" means. The devs who will thrive are the ones who can effectively use 4-5 AI tools and know which one to reach for in each situation — that's the new meta-skill.

    I'd argue that the ability to manage multiple AI tools efficiently (knowing rate limits, switching between providers based on task type, not wasting time on the wrong tool) is becoming as important as knowing a language or framework.

    The hiring process hasn't caught up yet. We're still testing for skills that AI handles and ignoring the orchestration skills that actually differentiate productive devs.

    1.

      That’s a strong point.

      The “meta-skill” layer is evolving fast — and most hiring systems are still evaluating static technical signals instead of adaptive capability.

      What’s interesting is that GitHub history might start reflecting this shift indirectly.

      Not just what you code — but how you structure problems, iterate, integrate tools, and evolve over time.

      I wonder if the next hiring gap isn’t about AI usage itself — but about detecting orchestration patterns in real work.

      Curious — do you think this meta-skill can be measured objectively, or is it still too contextual?
