AI makes you faster. It doesn't make you more directed.
I kept shipping things, checking tasks off — then looked up and realized
none of it moved what actually mattered.
So I made alignor, a single markdown file you paste into Claude, ChatGPT, or Cursor as a
system prompt. Define your mission and visions at the top. Every task
your LLM generates gets a WHY, a vision tag, and a self-score at the end:
VERDICT: PARTIAL — nothing moved V2 this week. Intentional?
That question is the whole product.
One file. No install. No SaaS. Free + open source.
Interesting angle, especially if the scoring stays close to plain English instead of turning into another eval DSL people have to learn. One thing that helped in similar tooling was separating "did it follow the vision" from "is the output actually useful", because those drift in different ways. Would be curious how you handle subjective products where the vision itself keeps moving week to week.
Great questions. These are the two hardest edges of the problem.
On separating "did it follow the vision" from "is it useful":
alignor intentionally only scores the first one. Usefulness is
context-dependent and hard to define upfront. Vision alignment
is something you can define, so that's the scoring target.
On moving visions: that's actually a feature, not a bug.
Visions are meant to change as strategy evolves. When your
vision shifts week to week, that's a signal worth surfacing: either your strategy is genuinely pivoting, or you're
rationalizing drift. alignor makes that visible.
What I built
alignor.md — a single markdown file you paste into Claude Projects, ChatGPT Custom Instructions, or Cursor rules as a system prompt.
You fill in two things:
- your mission, at the top
- your visions (V1, V2, ...) that every task gets tagged against
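A rough sketch of what the top of the file might look like. The section labels and the example mission/visions here are mine, not the actual file contents; only the mission-plus-numbered-visions structure comes from the post:

```markdown
## Mission
Ship work that compounds, not just work that gets checked off.

## Visions
V1: Every task traces back to a stated vision.
V2: Drift is visible within a week, not a quarter.
```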
After that, every task list your LLM generates gets (example below):
- a WHY explaining what the task is for
- a vision tag (e.g. V2)
- a self-score verdict at the end
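Roughly what an annotated task list looks like. The task text is invented for illustration, but the WHY / vision-tag / VERDICT shape matches what the post describes (the VERDICT line is the one quoted above):

```markdown
- [ ] Write the onboarding doc
  WHY: new users bounce before their first verdict (supports V1)
- [ ] Refactor the billing module
  WHY: unblocks the pricing experiment (supports V1)

VERDICT: PARTIAL — nothing moved V2 this week. Intentional?
```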
That self-score is the thing that changed how I work: it forces the question of whether a week that moved nothing was intentional.