4 Comments

Show IH: alignor – a markdown file that scores your AI outputs against your Vision

AI makes you faster. It doesn't make you more directed.

I kept shipping things, checking tasks off — then looked up and realized
none of it moved what actually mattered.

So I made 'alignor (markdown file)' — paste it into Claude, ChatGPT, or Cursor as a
system prompt. Define your mission and visions at the top. Every task
your LLM generates gets a WHY, a vision tag, and a self-score at the end:

VERDICT: PARTIAL — nothing moved V2 this week. Intentional?

That question is the whole product.

One file. No install. No SaaS. Free + open source.

posted to Show IH on March 25, 2026
  1.

    Congrats on building Alignor 👏
    That’s a very useful tool for anyone working with AI!
    I help founders test their apps and give structured feedback on usability, bugs, and improvement opportunities.
    If you want, I can run a quick test and share a clear report before more users start using it. Happy to help!

  2.

    Interesting angle, especially if the scoring stays close to plain English instead of turning into another eval DSL people have to learn. One thing that helped in similar tooling was separating "did it follow the vision" from "is the output actually useful", because those drift in different ways. Would be curious how you handle subjective products where the vision itself keeps moving week to week.

    1.

      Great questions; these are the two hardest edges of the problem.

      On separating "did it follow the vision" from "is it useful":
      alignor intentionally only scores the first one. Usefulness is
      context-dependent and hard to define upfront. Vision alignment
      is something you can define, so that's the scoring target.

      On moving visions: that's actually a feature, not a bug.
      Visions are meant to change as strategy evolves. When your
      vision shifts week to week, that's a signal worth surfacing:
      either your strategy is genuinely pivoting, or you're
      rationalizing drift. alignor makes that visible.

  3.

    <What I built>

    alignor.md — a single markdown file you paste into Claude Projects, ChatGPT Custom Instructions, or Cursor rules as a system prompt.

    You fill in two things:

    • Mission: why you exist (timeless, never changes)
    • Visions: concrete future states you're aiming for (the actual scoring targets)
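    To make the two fields concrete, here is a rough sketch of what the top of the file might look like. The mission wording and the specific visions below are made up for illustration (only the V1/V2-style IDs are implied by the post); the real file's structure may differ:

    ```markdown
    ## Mission
    <!-- Timeless; never changes -->
    Help solo founders ship work that compounds, not just work that ships.

    ## Visions
    <!-- Concrete future states; these are the scoring targets -->
    - V1: 1,000 weekly active users who return without prompting
    - V2: Revenue that doesn't depend on my daily involvement
    ```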

    After that, every task list your LLM generates gets:

    • A [WHY] — forces the LLM to articulate the vision connection
    • A [VID] — which vision this serves, or NONE
    • An [OUT] — a concrete expected outcome, not vague words
    • A self-score that flags anything that drifted
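    A sketch of how a scored task list might come out (the task, vision ID, and wording here are invented for illustration; only the tag names and the VERDICT line format come from the post):

    ```markdown
    - [ ] Write onboarding email sequence
      [WHY] New users churn in week one; a reason to return moves retention
      [VID] V1
      [OUT] 3-email sequence drafted and scheduled

    - [ ] Refactor the billing module
      [WHY] Unclear — no vision connection articulated
      [VID] NONE
      [OUT] Cleaner code, no user-visible change

    VERDICT: PARTIAL — nothing moved V2 this week. Intentional?
    ```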

    That self-score is the thing that changed how I work.
