
Show IH: alignor – a markdown file that scores your AI outputs against your Vision

AI makes you faster. It doesn't make you more directed.

I kept shipping things, checking tasks off — then looked up and realized
none of it moved what actually mattered.

So I made alignor, a markdown file: paste it into Claude, ChatGPT, or Cursor as a
system prompt. Define your mission and visions at the top. Every task
your LLM generates gets a WHY, a vision tag, and a self-score at the end:

VERDICT: PARTIAL — nothing moved V2 this week. Intentional?

That question is the whole product.

One file. No install. No SaaS. Free + open source.

posted to Show IH on March 25, 2026
  1.

    Interesting angle, especially if the scoring stays close to plain English instead of turning into another eval DSL people have to learn. One thing that helped in similar tooling was separating "did it follow the vision" from "is the output actually useful", because those drift in different ways. Would be curious how you handle subjective products where the vision itself keeps moving week to week.

    1.

      Great questions; these are the two hardest edges of the problem.

      On separating "did it follow the vision" from "is it useful":
      alignor intentionally only scores the first one. Usefulness is
      context-dependent and hard to pin down in advance. Vision alignment
      is something you can define up front, so that's the scoring target.

      On moving visions: that's actually a feature, not a bug.
      Visions are meant to change as strategy evolves. When your
      vision shifts week to week, that's a signal worth surfacing: either your strategy is genuinely pivoting, or you're
      rationalizing drift. alignor makes that visible.

  2.

    What I built:

    alignor.md — a single markdown file you paste into Claude Projects, ChatGPT Custom Instructions, or Cursor rules as a system prompt.

    You fill in two things:

    • Mission: why you exist (timeless, never changes)
    • Visions: concrete future states you're aiming for (the actual scoring targets)
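    For example, the top of the file might look like this (the Mission and Visions field names come from the post; the example contents and vision IDs below are invented, and the real file's layout may differ):

```markdown
# Mission
Help solo founders ship work that compounds, not just work that's done.

# Visions
- V1: 1,000 weekly active users by year end
- V2: every weekly plan traces back to a vision with zero manual prompting
```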

    After that, every task list your LLM generates gets:

    • A [WHY] — forces the LLM to articulate the vision connection
    • A [VID] — which vision this serves, or NONE
    • A [OUT] — a concrete expected outcome, not vague words
    • A self-score that flags anything that drifted
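    To make the drift check concrete, here is a minimal sketch of how the bracket-tag format above could be scanned for tasks whose [VID] is NONE. The tag names ([WHY], [VID], [OUT]) are from the post; the parsing logic is purely illustrative, not alignor's actual implementation (alignor has the LLM do this, no code required):

```python
import re

# Match a bracket tag and its value, stopping at the next '[' so one
# greedy match can't swallow the tags that follow it on the same line.
TAG_RE = re.compile(r"\[(WHY|VID|OUT)\]\s*([^\[]+)")

def score_tasks(tasks):
    """Return the tasks whose [VID] is NONE or missing, i.e. tasks
    that serve no stated vision and should be flagged as drift."""
    drifted = []
    for task in tasks:
        tags = dict(TAG_RE.findall(task))
        if tags.get("VID", "NONE").strip() == "NONE":
            drifted.append(task)
    return drifted
```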

    That self-score is the thing that changed how I work.
