
The context window is not your working memory

While building TokenBar, I noticed a weird behavior change in my own AI workflow.

As context windows got bigger, I got sloppier.

I stopped restarting chats.
I pasted in more raw tool output.
I carried old requirements and dead-end reasoning much longer than I should have.

Nothing looked broken, so it felt efficient.

It was not.

Three things got worse:

  1. Review got slower
    The model had more baggage, so I had more baggage to reread too.

  2. Prompt quality dropped
    I relied on the model remembering old context instead of stating the current task cleanly.

  3. Spend got harder to predict
    One long chat quietly turned into the default workflow.

What actually helped was not another after-the-fact dashboard.

It was seeing token usage live while I worked.

Once I could see the token trail in real time, I started using it as a workflow signal:

  • restart instead of dragging stale context forward
  • summarize logs instead of pasting everything back in
  • stay on a smaller model until the task really needs more context
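The "workflow signal" idea above can be sketched as a crude budget check. This is not TokenBar's actual implementation; the ~4-characters-per-token rule of thumb and the 50%/80% thresholds are assumptions picked for illustration only:

```python
def estimate_tokens(text: str) -> int:
    """Roughly estimate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def context_signal(messages: list[str], budget: int = 8000) -> str:
    """Return a workflow hint based on how much of the token budget is used."""
    used = sum(estimate_tokens(m) for m in messages)
    ratio = used / budget
    if ratio > 0.8:
        return "restart"     # context is mostly stale baggage; start fresh
    if ratio > 0.5:
        return "summarize"   # condense logs instead of pasting everything back
    return "ok"              # current chat (and a smaller model) is fine

print(context_signal(["hello world"]))  # → ok
```

A live meter makes the same decision continuously instead of as an after-the-fact guess; the point is that the signal arrives while you still have the chance to restart or summarize.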

Big context windows are useful.
But they are easy to mistake for permission to be messy.

That lesson is a big reason I built TokenBar for macOS.
https://tokenbar.site/

on May 10, 2026