In my previous post, I mentioned that I started thinking about building my own product. It wasn’t a sudden decision — it just accumulated over time.
I kept running into the same problem: too much information in Telegram.
Chats, channels, discussions — everything grows faster than you can read it.
And the thing is, you almost never need everything. You just need a specific answer to a specific question.
— Is this topic being discussed?
— What do people think about it?
— Was there anything important already?
— Is this chat even worth my time?
At some point I caught myself thinking: why am I doing this manually?
If I already use AI for writing, analysis, and coding — why not let it read chats for me?
That’s how the idea came about: to build an AI copilot for Telegram that answers questions based on chat content instead of me scrolling through it.
Conceptually, it was simple:
I ask a question →
AI reads the chat →
it gives me a clear answer.
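In code terms, that loop is roughly the following. This is a minimal sketch, not the actual implementation; the message source and the model call are just stand-ins I named for illustration.

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


def fetch_recent_messages(chat_id: str, limit: int = 500) -> list[Message]:
    # Stand-in: pull the last `limit` messages from a chat via the Telegram API.
    raise NotImplementedError


def ask_llm(prompt: str) -> str:
    # Stand-in: a single completion call to whatever model you use.
    raise NotImplementedError


def answer_question(chat_id: str, question: str) -> str:
    # 1. Read the chat.
    messages = fetch_recent_messages(chat_id)
    transcript = "\n".join(f"{m.sender}: {m.text}" for m in messages)

    # 2. Frame the task: answer only from the transcript, admit when the answer isn't there.
    prompt = (
        "Answer the question using only the chat transcript below. "
        "If the transcript does not contain the answer, say so.\n\n"
        f"Question: {question}\n\n"
        f"Transcript:\n{transcript}"
    )

    # 3. Get a clear answer back.
    return ask_llm(prompt)
```

Everything interesting lives in the middle step: what goes into the prompt and how the task is framed.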
I started with the basics — building the features I personally needed.
And now I’m curious to ask not myself, but you:
Voted for summarizing chat. My reasoning: the pain you described ("too much info, I only need a specific answer") has two different solutions — search-for-an-answer and summary-of-what-happened. Summary is the one I'd reach for first because it removes the need to know what I'm looking for. Search needs a question; summary just needs attention.
The "why am I doing this manually" framing is the real pain. Rooting for the MVP ship.
When I started building this, I naturally focused on just two core things.
The first one is chat summaries.
Not just raw messages, but a meaningful recap over a period of time — so I don’t have to read the entire stream to understand what actually happened.
The second one is answering specific questions.
A lot of the time I go into chats with a clear intent — to find a piece of information, or to understand whether something has already been discussed. And that’s surprisingly hard to do efficiently.
Those two use cases felt the most important to me, so I decided to focus on them first instead of trying to solve everything at once.
Along the way, a few other ideas started to emerge — but I’m keeping the scope tight for now.
Will share more soon on how it’s actually working in practice.
Summaries + Q&A as your two core bets — that's tight scoping. The "keeping scope tight" part is the hard part, not the picking. Most people pick 2 and then silently add 3 and 4 by week 3.
Looking forward to seeing how it actually performs — the summaries in particular. That's the feature where most AI tools write a good-sounding paragraph that doesn't actually tell you what happened.
That’s a fair concern, and I don’t think this is something that can be “solved” upfront.
For me, the level of reliability is something that will come out of real usage — through testing, feedback, and actually seeing how it behaves across different types of chats.
I do think prompt design plays a big role here.
From what I’ve seen so far, the quality of the output can vary quite a bit depending on how the task is framed, so that’s definitely something I’ll keep iterating on.
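To make that concrete, here are two framings of the same summary request. Both prompts are illustrative only, not the ones I actually ship; they just show how much room there is in "how the task is framed."

```python
# Two framings of the same summarization task.

BARE_PROMPT = "Summarize this chat:\n\n{transcript}"

STRUCTURED_PROMPT = (
    "You are summarizing a Telegram chat for someone who has not read it.\n"
    "From the transcript below, report only:\n"
    "- decisions that were made (who decided, what was decided)\n"
    "- questions that were raised but not answered\n"
    "- anything a reader would need to act on this week\n"
    "If a section has nothing to report, write 'none'. Do not add commentary.\n\n"
    "Transcript:\n{transcript}"
)

# Usage: prompt = STRUCTURED_PROMPT.format(transcript=transcript)
```

The structured version is also easier to check: "did it capture the decisions and open questions" is a far more answerable question than "is this a good summary."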
At the same time, I don’t expect this to be perfectly accurate — and I don’t think it needs to be.
Any AI output over free-form text is inherently imperfect.
But if it solves 80–95% of the problem and significantly reduces the need to manually read through everything, that’s already valuable.
That’s also been my experience so far in some real use cases — even with imperfect outputs, it still saves a lot of time.
80-95% is the right bar — chasing 100% is how text-AI tools die in a wet paper bag of edge cases instead of shipping. The harder version of my worry isn't accuracy, it's the "good summary, wrong gist" failure mode where the summary is 95% factually accurate but somehow misses what actually mattered. That one only shows up in real usage too.
Looking forward to seeing how it lands. Following.