
Your context window is a budget, not a trophy

When I started using models with huge context windows, I assumed bigger meant easier.

What actually happened was I stopped noticing waste.

I would keep old specs, tool output, stack traces, and half-dead prompts in the session, because the model could still handle it.

The result was not just higher spend. It was slower decisions.

The pattern I watch now is simple: if the context window keeps growing but the task is not getting clearer, the session is probably getting worse.

A few habits that changed my workflow:

  • start a fresh session when the task changes
  • summarize the useful bits instead of dragging the full transcript forward
  • keep logs and large outputs out of the main chat unless I need them
  • stay on smaller models until the hard step actually arrives
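The first two habits can be sketched as a simple budget check. Everything below is an illustrative assumption, not how any particular tool counts: the 4-characters-per-token heuristic, the 8k budget, and the helper names are all made up for the sketch. Real tokenizers vary by model.

```python
# A rough sketch of treating the context window as a budget.
# Assumption: ~4 characters per token for English prose (crude heuristic).

TOKEN_BUDGET = 8_000  # hypothetical per-session budget


def estimate_tokens(text: str) -> int:
    """Crude approximation of token count; real tokenizers differ."""
    return max(1, len(text) // 4)


def should_start_fresh(transcript: list[str], budget: int = TOKEN_BUDGET) -> bool:
    """Flag a session whose accumulated context has outgrown the budget."""
    total = sum(estimate_tokens(turn) for turn in transcript)
    return total > budget


def carry_forward(transcript: list[str], keep_last: int = 3) -> list[str]:
    """Keep only the most recent turns; a stand-in for a real summary step."""
    return transcript[-keep_last:]
```

The point is not the exact numbers: once `should_start_fresh` trips, summarize the useful bits with something like `carry_forward` and open a new session instead of dragging the whole transcript along.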

That is why I built TokenBar for macOS.

It keeps live token usage visible in the menu bar while I work.

For me, token counting is less about billing and more about knowing when my workflow is drifting.

https://tokenbar.site/

on May 10, 2026