If you’re building with AI — especially agents — you need to expect misalignment between what you build and what users actually do.
Agents are flexible by nature. Unlike traditional software, which forces users into specific patterns, AI agents invite open-ended behavior.
People assume the agent can do more than it can. And they’ll happily “misuse” it if it helps them solve their problems.
That’s what happened to us at Jotform — and we used it to our advantage by making a strategic pivot. Here’s how you can do the same.
When we first built Jotform AI Agents, the idea was simple: Let users talk to a form instead of typing.
But within two weeks of going live, we noticed a major gap between what we built and how people were using it.
They weren’t using it to complete forms. They were using it to answer questions.
90% of prompts were customer support queries. We’d built a form-filler, but what users needed was a support agent.
Here’s how we picked up on it — step-by-step — and what I’d recommend any indie founder do if you're building in this space.
We weren’t guessing. We had visibility into thousands of live interactions through our platform.
And what we found was this: less than 10% of the prompts that our users sent were about completing forms.
They were asking the agent support-style questions.
What you can do:
If you’ve launched an AI agent, take a sample of conversation logs and run a basic LLM prompt over them to classify user intent.
You don’t need a full pipeline — just prompt an LLM with something like:
What's the user trying to do? Choose one:
- Ask for information
- Complete a transaction
- Navigate something
- Fill in data
- Other
Even tagging 100–200 rows manually or in a notebook will show you where the real usage is coming from.
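For example, a minimal tagging pass might look like the sketch below. The `classify_intent` keyword stub is a placeholder for the real LLM call (swap in your provider's client with the prompt above); the function names and labels are illustrative assumptions, not Jotform's code.

```python
from collections import Counter

def classify_intent(message: str) -> str:
    """Keyword stand-in for an LLM classifier (illustrative placeholder:
    in practice, send the intent prompt plus the message to an LLM)."""
    msg = message.lower()
    if any(w in msg for w in ("how do i", "what is", "can you", "refund", "pricing")):
        return "ask_for_information"
    if any(w in msg for w in ("buy", "order", "pay")):
        return "complete_a_transaction"
    if any(w in msg for w in ("where is", "find the page")):
        return "navigate"
    if any(w in msg for w in ("my name is", "my email is")):
        return "fill_in_data"
    return "other"

def tag_sample(messages: list[str]) -> Counter:
    """Tag a sample of conversation logs and tally the intents."""
    return Counter(classify_intent(m) for m in messages)

sample = [
    "How do I reset my password?",
    "What is your refund policy?",
    "My name is Ada and my email is ada@example.com",
    "Can you explain your pricing?",
]
print(tag_sample(sample).most_common())
```

Even a crude tally like this makes the skew obvious: if "ask_for_information" dominates, the logs are telling you what the product actually is.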
Don’t assume your usage matches what you intended. Log the actual conversation intents. It’s where the real product insights are.
One person using your AI agent in a new way? Interesting. Lots of people doing the same thing? That’s a signal.
We saw it right away. Users came from different worlds, but the use was the same: support.
What you can do: Check for this every week. You don’t have to do it by hand, and you don’t need dashboards; you just need a simple way to spot when a pattern is forming.
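As one sketch of what that weekly check could be, assuming your logs can be reduced to (date, intent) pairs (everything here, including the 50% drift threshold, is an illustrative assumption, not a prescription):

```python
from collections import Counter, defaultdict
from datetime import date

def weekly_alerts(tagged, intended="fill_in_data", threshold=0.5):
    """Flag ISO weeks where usage drifts away from the intended intent.

    `tagged` is a list of (date, intent_label) pairs, e.g. the output of
    an intent classifier run over conversation logs.
    """
    by_week = defaultdict(Counter)
    for day, intent in tagged:
        # Group by (ISO year, ISO week) so weeks don't split at month ends.
        by_week[day.isocalendar()[:2]][intent] += 1
    alerts = []
    for week, counts in sorted(by_week.items()):
        total = sum(counts.values())
        off_intent = total - counts[intended]
        if off_intent / total >= threshold:
            top = counts.most_common(1)[0][0]
            alerts.append((week, top, round(off_intent / total, 2)))
    return alerts

tagged = [
    (date(2025, 3, 3), "fill_in_data"),
    (date(2025, 3, 4), "ask_for_information"),
    (date(2025, 3, 5), "ask_for_information"),
    (date(2025, 3, 6), "ask_for_information"),
]
print(weekly_alerts(tagged))
```

A cron job printing a line like that once a week is enough; the point is noticing the drift, not building infrastructure around it.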
We didn’t make a big announcement.
We didn’t shut down the form-completion features.
We just started building out the real use case.
What you can do:
If you spot an emergent use case with real volume and user retention, do the same: skip the announcement and quietly build for it, without shutting down what you already shipped.
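One rough way to quantify "real volume and user retention", assuming you can reduce your logs to (user, day, intent) events (the event shape, names, and thresholds below are all hypothetical):

```python
from collections import defaultdict
from datetime import date

def intent_retention(events, intent, min_users=3):
    """Share of users with this intent who come back on a later day.

    Returns None when there aren't enough distinct users to judge volume.
    """
    days_by_user = defaultdict(set)
    for user, day, label in events:
        if label == intent:
            days_by_user[user].add(day)
    users = len(days_by_user)
    if users < min_users:
        return None  # not enough volume to call it a signal
    returning = sum(1 for d in days_by_user.values() if len(d) >= 2)
    return returning / users

events = [
    ("u1", date(2025, 3, 3), "ask_for_information"),
    ("u1", date(2025, 3, 7), "ask_for_information"),
    ("u2", date(2025, 3, 4), "ask_for_information"),
    ("u3", date(2025, 3, 5), "ask_for_information"),
    ("u3", date(2025, 3, 9), "ask_for_information"),
]
print(intent_retention(events, "ask_for_information"))
```

If the number that comes back is high and the user count keeps growing, the emergent use case has earned real investment.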
Once we accepted that AI agents were being used for customer service, we realized: Some users don’t need full customization. They just need a chatbot on their site that answers FAQs.
That’s why we quietly launched Noupe, a stripped-down version of Jotform AI Agents focused entirely on one thing: Paste your website URL → Get an AI chatbot trained on your content → Add it to your site with one line of code.
We built it because the pivot signal told us there’s a segment of users who don’t want to build an agent.
They just want an agent.
What you can do: Not all users want full power. If you’re seeing consistent demand for a simple version of your AI product, ship that simple version as its own stripped-down offering.
If you’re building in this space, don’t wait for churn to tell you something is off. The signal is in the conversations.