
I Think MCP Will Punish Thin API Wrappers

I read Anthropic's recent post on production agents and MCP, and one line felt like it described the product bet I have been making with FORMLOVA:

Group tools around intent, not endpoints.

That sentence is easy to read as implementation advice. It is that, but I think it is also a founder-level warning.

If you ship an MCP server that only mirrors your API, you may get early demos. You may even get a directory listing. But you are not really giving agents your product. You are giving them your database handles and asking the model to rediscover the product logic every time.

I think that pattern will age badly.

The easy MCP server is probably not the durable one

The easiest MCP server is an API wrapper.

If your product has forms and responses, you expose:

list_forms
get_form
list_responses
get_response
update_response
export_responses
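Concretely, the thin-wrapper surface looks something like this. A minimal sketch in Python of two MCP-style tool descriptors (the `name`/`description`/`inputSchema` shape follows the MCP tools spec; the field details are illustrative, not FORMLOVA's actual server):

```python
# A thin-wrapper MCP surface: every tool is a 1:1 mirror of an API endpoint.
# The descriptions add nothing the endpoint name did not already say,
# which is exactly the problem described below.
THIN_WRAPPER_TOOLS = [
    {
        "name": "list_responses",
        "description": "Get form responses.",  # the agent must infer everything else
        "inputSchema": {
            "type": "object",
            "properties": {"form_id": {"type": "string"}},
            "required": ["form_id"],
        },
    },
    {
        "name": "update_response",
        "description": "Update a form response.",  # no hint about human-corrected labels
        "inputSchema": {
            "type": "object",
            "properties": {
                "response_id": {"type": "string"},
                "fields": {"type": "object"},
            },
            "required": ["response_id", "fields"],
        },
    },
]
```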

That feels productive. The agent can touch the system. The demo works. A user can ask for the latest responses and get something back.

But the more I build, the more I think that is only the first mile.

The actual product value is usually not in the endpoint. It is in the interpretation that sits above the endpoint.

For FORMLOVA, a response is not just a row. It can be:

  • a real inquiry
  • a sales pitch
  • an uncertain case
  • an input to analytics
  • a workflow trigger
  • a message that should notify the team
  • a label a human corrected and the AI should not overwrite

If the MCP layer exposes only "rows," the agent has to infer the rest.

That is risky, and it is also wasteful. The product already knows many of those rules. The server should carry them.

The small feature that changed how I think about this

Recently I added sales-email classification to FORMLOVA.

When a form response arrives, FORMLOVA can classify it as:

legitimate
sales
suspicious

The obvious feature is a badge in the UI.

The more important feature is what the label lets the product do next.

sales can be excluded from analytics. suspicious can go to a human review queue. legitimate can continue through normal notifications and workflows. A human can correct a label, and that correction should stick.
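A minimal sketch of that routing logic, using the label values above (the function and field names are hypothetical, not FORMLOVA's actual code):

```python
def route_response(label: str, human_corrected: bool) -> dict:
    """Decide what a classification implies, given the product rules.

    Rules from the post: sales is excluded from analytics, suspicious
    goes to a human review queue, legitimate flows through normal
    notifications and workflows, and a human correction always wins.
    """
    return {
        "include_in_analytics": label != "sales",
        "needs_human_review": label == "suspicious",
        "notify_team": label == "legitimate",
        # A corrected label is protected state: automation must not overwrite it.
        "locked": human_corrected,
    }
```

The point of carrying this server-side is that the agent never has to reconstruct these rules from a prompt.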

This is where "intent, not endpoints" stops being abstract.

The user does not ask:

Fetch all responses, inspect spam_label, drop rows where spam_label equals sales,
keep suspicious rows, keep null rows, then calculate this month's conversion rate.

They ask:

Show me this month's real inquiries without sales pitches.

That is an intent. The product should know how to execute it.
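Executing that intent server-side might look like this. A sketch with hypothetical names; the filtering semantics follow the mock query above (drop sales, keep suspicious and unlabeled rows):

```python
from datetime import date

def real_inquiries_this_month(responses: list[dict], today: date) -> list[dict]:
    """Intent-level query: "this month's real inquiries, no sales pitches".

    The rules live in the product, not in the agent's prompt:
    sales rows are dropped; suspicious and unlabeled rows are kept.
    """
    return [
        r for r in responses
        if r["submitted"].year == today.year
        and r["submitted"].month == today.month
        and r.get("spam_label") != "sales"
    ]
```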

Why this is a product decision, not only a technical one

There is a subtle design choice here.

If AI can detect sales pitches, why not block them before submission?

Because a false positive is expensive.

Bot protection belongs before submission. Honeypots, Turnstile, signed tokens, rate limits: those are there to stop mechanical abuse.

Human-written sales pitches are different. They are real submitted content. If a model silently blocks a real customer because it looked like a sales pitch, the founder may never know the lead existed.

So FORMLOVA classifies after arrival.

The label is visible. The user can override it. The override becomes state. The agent can use that state later.

That is not just a classifier design. It is product semantics. And once MCP clients start operating on those labels, the semantics become part of the integration contract.
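One way to make the override stick is to treat the lock as data the automatic path must check. A sketch, with hypothetical field names:

```python
def apply_auto_label(response: dict, new_label: str) -> dict:
    """Apply an automatic classification unless a human already decided.

    human_locked is set when a user overrides a label in the UI; after
    that, automatic runs must leave the label alone.
    """
    if response.get("human_locked"):
        return response  # the human correction is authoritative
    return {**response, "spam_label": new_label}

def human_override(response: dict, label: str) -> dict:
    """A human correction writes the label and locks it against future auto runs."""
    return {**response, "spam_label": label, "human_locked": True}
```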

The thing I am trying not to do

I am trying not to build "ChatGPT for forms."

That phrase is tempting. It is short. It sounds like a category. But it leads to the wrong product.

If the product is only a chat surface, everything becomes a conversation. That is not always the best interface.

Chat is good for intent:

Exclude sales pitches from this month's analysis.
Show only suspicious responses.
Mark this one as legitimate.
Notify the team only for real inquiries.

But inspection often belongs in UI. A response list should be scanned. A chart should be seen. A review queue should have controls. A risky workflow should be confirmed clearly.

The split I keep using is:

Chat: intent
MCP: meaning, constraints, execution
UI: inspection and correction

That framing has helped me avoid building everything as either a dashboard feature or a chat feature. Some things belong in protocol.

Why Anthropic's post felt like confirmation

The Anthropic post did not make me decide to use MCP. FORMLOVA was already built in that direction.

What it did was clarify why this bet matters.

Production agents are going to run in the cloud. The systems they need to reach are also in the cloud. A remote MCP server becomes a portable surface for your product across clients.

That means the design quality of the MCP server matters more than I think many founders realize.

If the server is thin, the agent sees your product as primitives.

If the server carries meaning, the agent sees your product as work.

That difference compounds.

Every compatible client that adopts MCP can use the same remote server. If the server exposes good product semantics, your product gets better distribution without you rebuilding the integration for each client. If the server exposes only endpoint mirrors, your product arrives everywhere as low-level plumbing.

I do not want FORMLOVA to arrive as plumbing.

I want it to arrive as a form-operations layer.

What this changes in my roadmap

This changes what I prioritize.

It is tempting to add more tools because more tools make the product look larger. But tool count is not the goal.

The better goal is: fewer places where the model has to guess product rules.

So I am looking for patterns like:

  • Can this user request become a stable parameter instead of a prompt convention?
  • Can this label become operational state instead of UI decoration?
  • Can this workflow require confirmation at the server boundary instead of relying on prompt wording?
  • Can this result be returned as a visual surface instead of a wall of text?
  • Can this manual correction be protected from future automatic runs?
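To make one of those concrete: "confirmation at the server boundary" can be a required parameter the server enforces, rather than prompt wording. A sketch with hypothetical names:

```python
class ConfirmationRequired(Exception):
    """Raised when a destructive tool call arrives without explicit confirmation."""

def delete_form(form_id: str, confirm: bool = False) -> str:
    """Destructive operation: the server, not the prompt, enforces the check.

    An agent that calls this without confirm=True gets a structured error
    it can surface to the user, instead of silently deleting the form.
    """
    if not confirm:
        raise ConfirmationRequired(
            f"Deleting form {form_id} is irreversible; call again with confirm=True."
        )
    return f"deleted {form_id}"
```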

These questions are less flashy than "add another integration."

They are also the questions that make the integration usable in production.

The Indie Hacker version of the bet

For an indie product, MCP is not just an enterprise integration story.

It can become distribution.

If agents become a real user interface for work, then every useful MCP server is a new way for users to reach your product. But the products that benefit most will not be the ones that rush out the thinnest wrapper first.

They will be the ones whose MCP layer expresses what the product is actually good at.

For FORMLOVA, that means:

  • forms are easy to create
  • responses are where the work starts
  • labels become operational state
  • workflows can run from that state
  • humans can correct AI decisions
  • chat and UI each do what they are best at

That is the shape of the bet.

The question I am sitting with

If you are building an MCP server for your product, I think the first question should not be:

Which endpoints should I expose?

It should be:

What does my product understand that an agent should not have to rediscover?

That answer is probably your moat.

Not the endpoint. Not the schema. The meaning your product has accumulated from being close to the work.

That is what I want FORMLOVA's MCP layer to expose.

And after reading Anthropic's post, I feel more confident that this is the right direction.


Related:

FORMLOVA is free to start if you want to try MCP-based form operations directly:

on April 24, 2026
  1. 1

    Agreed. The value was never in the wrapper — it was in the workflow and the context around it. MCP just makes that obvious faster. If your whole moat is "we call this API so you don't have to," you never really had a moat.

    1. 1

      Right, and the uncomfortable part is that this was already true before MCP — it just wasn’t enforced. You could ship a thin wrapper, slap a UI on top, and the friction of “users don’t want to read API docs” was enough to keep you in business. That friction was doing a lot of load-bearing work that founders mistook for product value.
      MCP removes that friction almost entirely. The agent reads docs faster than any user, doesn’t get tired of authentication flows, and has no loyalty to your UI. So the wrapper layer that used to be defensible by inconvenience suddenly isn’t. Whatever was real underneath — workflow, opinionated defaults, accumulated state, trust — is what’s left standing. Whatever wasn’t, evaporates.
      The honest version of your line is: MCP isn’t killing thin wrappers, it’s just turning off the life support they didn’t realize they were on.

  2. 1

    Interesting take. The MCP thing reminds me of when REST APIs became standard, everyone suddenly built wrappers, and the ones that stuck were the ones that actually added value beyond just making REST calls easier.

    I think the same thing will happen here. A thin wrapper that just connects an AI to a tool isn't enough. The moat has to come from somewhere else, UX, domain expertise, workflow integration, or the data you already have.

    What do you see as the differentiator for FORMLOVA specifically? I'd be happy to dig into the product if you're open to it.

    1. 1

      The REST analogy is the right one and I think it predicts the shape of the next two years pretty cleanly. The first wave of MCP servers will look a lot like the first wave of REST wrappers — lots of them, mostly thin, and the survivors will be the ones that did something the underlying API couldn’t do on its own.
      For FORMLOVA, the differentiator isn’t “we let agents create forms.” That’s the thin-wrapper version, and if that were the whole pitch I’d already be in trouble. The real surface is what happens around the form, not the form itself. A few things compound:
      The product is MCP-native, not MCP-bolted-on. There’s no dashboard you have to leave the chat to use. Every operation — creating a form, publishing, configuring auto-replies, integrating with Sheets, analyzing responses, building reports — happens through tools the agent calls directly. That sounds cosmetic but it changes the shape of the surface. The tools are designed around what an operator actually wants to do, not around what the database tables look like. So the agent doesn’t have to assemble five primitives to accomplish one obvious task.
      The semantics of the response layer are part of the product. Form submissions get classified, deduplicated, filtered, and locked when a human corrects them. Those aren’t features you turn on; they’re the default state the agent sees. So an agent reasoning about “did this lead reply” or “show me legitimate signups this week” doesn’t have to rebuild that logic in a prompt every time. The product already knows.
      The integrations are workflow-shaped, not endpoint-shaped. set_workflow, auto-reply configuration, Sheets sync, duplicate prevention — these are intent-level operations the agent can compose without understanding the underlying plumbing. That’s the layer competitors will struggle to copy quickly, because it requires opinions about how form-driven workflows actually run, not just API coverage.
      So the moat, to use your word, isn’t the MCP server. It’s the accumulated product judgment that the MCP server makes legible to the agent. A thin wrapper around Typeform’s API would expose 30 endpoints and call it a day. FORMLOVA exposes around 115 tools, but each one is a verb an operator would actually say out loud.
      Happy to have you dig in. The site is formlova.com, and the MCP server is live if you want to connect it to a client and see how the surface feels from the agent’s side rather than from a screenshot. That’s usually where the difference becomes obvious.

  3. 1

    The question at the end is the one worth keeping: "what does your product understand that an agent shouldn't have to rediscover?"
    That's also the design question behind what I'm building — a spec layer where feature nodes carry their own connections to Figma frames, API endpoints, and code. The spec already knows the blast radius of a change. The agent shouldn't have to re-derive it from primitives every time.
    The thin wrapper problem you're describing is exactly what happens when that meaning gets stripped out at the integration boundary.

    1. 1

      The “blast radius” framing is the part I want to steal. That’s the cleanest articulation I’ve seen of why a primitive-only API surface fails the agent, even when the data is technically all there.
      A primitive surface forces the agent to reconstruct relationships every call. Which Figma frame relates to this feature? Which endpoints does it touch? Which code paths break if it changes? The information exists somewhere in the system, but it’s scattered across tables and the agent has to traverse it from scratch each time. That traversal is where errors enter — not because the model is bad at reasoning, but because the inputs are too low-level for the reasoning to be reliable. Five steps of derivation is five places to be subtly wrong.
      A spec layer that already encodes those connections collapses the derivation. The agent doesn’t ask “what does this feature touch” — it reads the answer. And critically, the answer carries the system’s authoritative view, not the agent’s reconstructed guess. That’s the difference between an integration that gets sharper as the product matures and one that stays brittle no matter how good the model gets.
      The connection to the thin wrapper problem is exactly what you said, and I’d put it this way: a thin wrapper assumes the primitives are the meaning. A spec-aware product knows the meaning lives in the relationships between primitives, and the integration’s job is to expose those relationships, not just the primitives. Stripping that out at the boundary is how you turn a sophisticated product into a generic CRUD surface from the agent’s point of view, regardless of how rich the underlying system actually is.
      What you’re building sounds like the same insight applied to software development itself, which is interesting because development is the domain where the cost of an agent re-deriving blast radius is highest. A wrong autoreply on a form is annoying. A wrong assumption about what a code change touches is a production incident. So the meaning-preservation pressure is even higher there. Curious what you’re calling it and where you are with it.

  4. 1

    ‘group tools around intent, not endpoints’ - same thing PMs say about user stories vs. API specs. the API reflects what you can do. intent is what they're actually trying to do.

    1. 1

      Exactly. API design optimizes for system completeness; intent design optimizes for the user’s actual job. With MCP, the LLM is the user — and it doesn’t want 12 atomic calls to accomplish one obvious task. It wants the task.
      This is why I think a lot of “MCP servers” shipped today are going to age badly. They’re just OpenAPI specs in a new wrapper. The winners will be the ones that rebuild the surface around verbs the model would actually reach for.

      1. 1

        yeah, "verbs the model would reach for" is the exact right frame. ran into this building our sprint planner - had all the right primitives exposed but the agent kept chaining 5 calls for what should've been one. rebuilt around jobs not objects. same cognitive load shift, just lands on the model now instead of the dev.

        1. 1

          The “cognitive load shift” line is the cleanest version of this I’ve heard. That’s exactly what’s happening. Pre-MCP, the developer absorbed the gap between primitives and intent — they read your docs, chained the calls, handled the edge cases, and shipped the integration. The cost was real, just hidden inside engineering teams.
          MCP doesn’t remove that cost. It moves it onto the model. And the model’s tolerance for it is much lower, because every extra step is a place where reasoning can go wrong, tokens get burned, and latency stacks up. So the same surface that felt “complete enough” to a developer feels frustrating to an agent, not because the agent is weaker but because it has no patience for boilerplate the developer was silently doing for free.
          The shift from objects to jobs is the right move because jobs are the unit at which the cost actually exists. Five primitive calls aren’t five units of work to a user, they’re one job that got fragmented at the API boundary. Rebuilding around jobs puts the seam back where the user always perceived it. Sounds like you found that line the same way most of us do — by watching the agent flail at something that should have been one step.

  5. 1

    This framing — "intent, not endpoints" — really resonated. I hit a smaller version building a capture tool: shipped a clean REST API early, thought integrations would just plug in, but every client ended up reinventing the same "is this a real note or just a scratch entry" logic on top of the raw rows. Eventually moved that classification server-side and exposed an intent like recent_intentional_captures instead of list_entries with filters. Suddenly the integrations looked useful instead of busy. The part I'm wrestling with is discovery. When the server exposes meaning rather than primitives, tool names and descriptions carry a lot of weight — the agent can't browse your product the way it can browse a REST spec. How are you thinking about tool-name design and descriptions so the agent picks the right semantic tool without you writing 5000-token system prompts? And do you keep any low-level escape-hatch tools for the 5% of cases the high-level intents miss?

    1. 1

      Great example — recent_intentional_captures vs list_entries+filters is exactly the shift. The classification belongs server-side because the meaning lives there.
      On naming and descriptions, my working rules:
      Tool names should read like things the user would say out loud, not like functions. publish_form, not forms.update(status="published"). The verb carries the intent; the name shouldn't require the agent to infer side effects.
      Descriptions should answer “when would I reach for this?” before “what does it do?” The agent’s selection problem is closer to retrieval than to reading docs. I write the first sentence as if it were a search snippet — what situation triggers this tool. The parameter docs come after.
      Keep the surface small. Every additional tool dilutes selection accuracy. I’d rather have 20 sharp intent tools than 80 thorough ones. If two tools overlap in description space, the agent will pick wrong roughly half the time.
      On the escape hatch — yes, but I gate it. I expose a small set of lower-level tools (raw query, raw update) but mark them clearly as “use only when no intent tool fits.” In practice the model respects that prefix surprisingly well. The bigger risk isn’t the agent misusing the escape hatch; it’s the agent defaulting to it because the high-level tools are vague. If you find the model reaching for the low-level tools often, that’s a signal your intent layer has a gap, not that the escape hatch is wrong.

  6. 1

    Mostly agree. If a product is just "same API, nicer docs," MCP makes it easier for agents to route around it, but wrappers that own auth, reliability, billing, or an opinionated workflow still have room. Tradeoff is that bolting on MCP too early can turn into infra work nobody pays for, so the unique value has to be clear first.

    1. 1

      Agreed on the distinction. The wrappers that survive aren’t the ones with prettier docs — they’re the ones where the API isn’t actually the product. Auth, reliability, billing, opinionated workflow: those are real surfaces that don’t disappear when an agent shows up. If anything, agents make them more valuable, because the agent doesn’t want to handle retries, rate limits, or per-tenant billing logic either.
      The “too early” point is fair and I’d push it further: MCP is distribution, not validation. Adding an MCP server to a product nobody pays for just gives you a more efficient way to deliver something the market already rejected. The sequence has to be value first, then surface. I think the trap a lot of teams will fall into is treating “we have an MCP server” as a wedge, when it’s really just a channel.
      So roughly: if your moat is the API, MCP commoditizes you. If your moat is everything around the API, MCP amplifies you. The hard part is being honest about which one you actually are.

  7. 1

    This is a strong framing.

    The point about agents having to rediscover product logic every time is probably the most important part. A thin wrapper may be enough for a demo, but not enough for durable product value.

    I especially like the distinction between exposing rows and exposing meaning. That feels like the real gap between “API access” and an actual MCP product surface.

    The part about human corrections becoming protected state also stood out. That seems like one of the clearest places where product semantics become part of the integration itself.

    Curious where you think the boundary should be long term:
    what belongs in MCP semantics, and what should still stay in the UI layer?

    1. 1

      Good question and I think it’s the right one to be asking now, because the answer shapes what you build for the next few years.
      My working line: MCP semantics own the verbs, the UI owns the nouns the human still needs to see and feel.
      A verb is a decision the agent can make on the user’s behalf with enough context — publish, duplicate, route, classify, reconcile. These belong in MCP because the value comes from the agent doing them without round-tripping through a human. If the agent has to ask the user “are you sure?” for every step, you’ve just rebuilt the dashboard with extra latency.
      A noun is something the human needs to perceive directly — a form preview, a chart, a table of incoming responses, the visual layout of a document. The agent can describe these, but description loses fidelity fast. A user looking at their published form wants to see it, not be told about it. UI is still the right surface for anything where the human’s eye is doing work the agent can’t do for them.
      The interesting middle layer is review and correction. When a human edits something the agent produced, that edit carries information the agent should treat as authoritative going forward. That’s what I meant by protected state — it’s not really UI and not really MCP, it’s the contract between them. I think this is the layer most teams will underbuild. The agent generates, the human corrects, and if those corrections don’t flow back into the semantic layer, you end up with an agent that keeps making the same mistake confidently.
      So long-term: MCP for action, UI for perception, and a deliberate handoff layer for judgment. Teams that treat MCP as a replacement for UI will overbuild on the agent side and frustrate users. Teams that treat MCP as just an API channel will underbuild on semantics and lose to ones that didn’t.

  8. 1

    "Fewer places where the model has to guess product rules" is the sentence I want taped above my desk. I've been building a small indie iOS app solo and recently prototyped a thin share-extension wrapper — it worked in demos, but every user request forced me to re-explain the intent in prompts. The moment I pushed a single operation (save-to-inbox-as-mail) instead of primitive save + format + send, everything felt lighter, for me and the model. Your sales/suspicious/legitimate example maps well to that: once a label becomes operational state, the agent stops being a philosopher. One question: when a user overrides a label, do you surface a visible "protected from future auto-runs" indicator in the UI, or keep it hidden? I'm torn on the honesty vs clutter tradeoff.

    1. 1

      That collapse from save+format+send into save-to-inbox-as-mail is the whole game. Once the operation matches the user’s mental verb, the prompts get short and the model stops needing to be told the rules. You internalized the lesson faster than most.
      On the override visibility question — I land on visible, but quietly. My reasoning:
      If the override is invisible, two bad things happen. The user forgets they protected it and gets confused later when the agent “ignores” what looks like a normal item. And the agent’s behavior becomes legible only to whoever wrote the rule, which breaks trust the moment another teammate or a future-self looks at it. Hidden state is fine for infra. It’s poison for anything the user is making decisions against.
      But you’re right that a loud “PROTECTED FROM AUTO-RUNS” badge is clutter. The compromise I use: a small, low-contrast indicator (a subtle dot, or a muted icon) on the item itself, plus a single sentence in any place where the agent reports what it did or skipped. Something like “skipped 3 items you’ve manually corrected.” That way the indicator is ambient when the user is browsing, and explicit when the agent’s behavior would otherwise look arbitrary.
      The principle behind it: the user doesn’t need to see the protection at all times. They need to see it the moment the agent’s behavior would otherwise be a mystery. Show it where confusion would happen, hide it where it wouldn’t. That usually resolves the honesty-vs-clutter tradeoff in both directions at once.

  9. 1

    The "what does my product understand that an agent shouldn't have to rediscover" framing is exactly right, and there's a related trap I've seen sink otherwise well-designed MCP servers: the tool schema description is where that understanding lives or dies.

    The description field on each tool is effectively a compressed prompt that goes into every model's planning context. "list_responses: get form responses" and "list_responses: returns verified form submissions with classification state, human_locked corrections, and spam exclusion already applied based on active filter config" are technically the same endpoint, but the second one collapses about three inference steps the model would otherwise burn tokens trying to figure out.

    Your human-correction-should-stick point is the harder engineering problem. The catch is surfacing it at the schema boundary, not just storing it internally. If the tool returns a correction_locked flag when a human has overridden a label, the model can reason about it before attempting a write. If that signal only lives in your DB, you're relying on prompt wording to stop the agent from clobbering a human decision — and prompt wording doesn't survive multi-step chains reliably.

    The moat you're describing is real. It compounds fastest when the tool schema communicates the product's knowledge rather than just enforcing it internally.

    1. 1

      You named the part most teams miss. The schema isn’t documentation that lives next to the product — it is the product, from the agent’s point of view. Every word in that description is in the planning context whether you want it to be or not, so the choice isn’t “do I write a description,” it’s “do I write the one that does work or the one that just labels the endpoint.”
      The list_responses example is exactly the right shape. The second version doesn’t just describe the call, it tells the agent which questions are already answered. The model doesn’t need to plan a “first check if filtering is applied” step, because the description has already promised that it is. Three inference steps collapse into zero. Multiply that across a 115-tool surface and the difference between a server that feels sharp and one that feels like the agent is groping around becomes almost entirely a description-quality problem.
      Your second point is the one I’d put on the wall. Storing correction_locked in the database is the easy half. Returning it as a typed field on the response, with a description that tells the agent what the flag obligates it to do, is the half that actually changes behavior. Prompt wording in a system prompt is hope. A schema field is a contract. And contracts survive chain depth in a way that hopes don’t — by step four or five of a multi-step plan, system-prompt instructions have been diluted by intermediate tool outputs, but a flag that comes back on every relevant response keeps reasserting itself.
      The way I think about it now: the schema is where the product’s worldview becomes machine-readable. Anything important that only lives in the database, or only lives in the system prompt, is a leak. The first leaks because the agent can’t see it; the second leaks because the agent forgets it. The schema is the only surface where product knowledge stays both visible and durable across a long chain of reasoning. Teams that internalize that early will end up with MCP servers that feel like they understand the domain. Teams that don’t will end up rewriting their system prompts every time a new failure mode shows up — which is exactly the work the schema was supposed to absorb.

  10. 1

    You’re basically saying the MCP layer becomes the actual product surface, not just access to it.
    The interesting part is once you start encoding meaning there, the naming layer starts to matter way more too. If agents are calling “intent-level” tools, the product name itself becomes part of how that meaning is discovered and remembered across clients.
    Curious how you’re thinking about that long term — does FORMLOVA stay as the abstraction, or do you see naming evolving alongside the MCP semantics as distribution grows?

    1. 1

      Yes, that’s exactly the tension I’m thinking about.

      I don’t see FORMLOVA as only a “form tool” long term. The form is the entry point, but the bigger idea is to become a workflow and orchestration hub around form-based intent.

      A form submission is often the first structured signal from a customer, lead, partner, or spammer. If the MCP layer can understand that signal, classify it, route it, and trigger the right next step, then the product surface becomes much larger than the form itself.

      So I’m not too attached to “form” as a narrow category. I see it more as the starting surface for intent capture. FORMLOVA can stay as the abstraction, while the MCP semantics evolve around workflows, routing, classification, and decision-making.

      1. 1

        That makes sense.

        If FORMLOVA is just the entry point, then the bigger risk is the name staying tied to the old category while the product moves beyond it.

        Because “form” feels narrow.

        But what you’re describing is bigger than forms:
        intent capture
        classification
        routing
        decision flow
        workflow state

        That sounds more like an operations layer than a form product.

        So the naming question becomes less “does FORMLOVA work today?” and more “will it still carry the product once the category expands?”

        That’s where I’d be careful.

        If the product becomes the layer that turns inbound intent into action, the name has to carry that wider surface too.

        1. 1

          Fair point, and I take the warning seriously. But I’d push back on the conclusion, not the diagnosis.
          You’re right that “form” as a word feels narrower than what the product is becoming. Where I land differently is on what to do about it. Renaming to fit a wider category usually weakens the product, not strengthens it. Slack stayed Slack when it became infrastructure. Notion stayed Notion when it became a database. Stripe is still “payments” in the name even though half the company is now identity, billing, and tax. The pattern that works is keeping the narrow, memorable entry-point name and letting the product itself stretch the meaning of the word, rather than trying to pre-name the future surface.
          There’s also a structural reason I’m less worried than I would have been five years ago. In an MCP-native world, the agent doesn’t discover the product through the name. It discovers it through tool descriptions and behavior. The name matters for the human-facing entry — landing page, word of mouth, search — and for that, “form” is actually the right anchor, because forms genuinely are the entry surface of almost every web interaction. Contact, signup, application, booking, survey, checkout. The category looks narrow only if you take “form” to mean the Google Forms shape. If you take it to mean “the structured moment where intent enters a system,” it’s one of the broadest categories on the web.
          The way I think about FORMLOVA internally is closer to a concierge than a form builder — the form is the door, but what happens after the door is the actual product. So the work isn’t to rename. The work is to make the positioning catch up to the product, so that “FORMLOVA” reads as “the intent layer that starts with a form” rather than “another form tool.” That’s a content and surface problem, not a naming problem.
          Where I do agree with you: if I ever stopped letting forms be the entry point — if the product moved fully upstream into something that no longer touched form submission at all — then yes, the name would start dragging. I don’t see that happening, but it’s the right thing to watch for. Appreciate the push.


            Exactly — that’s the line I’m optimizing around.

            The name only becomes a liability if the entry point disappears.

            As long as the product keeps owning the structured moment where intent enters the system, FORMLOVA stays anchored to something real and repeatable.

            That’s the distinction I care about:

            - “form” as a UI element is narrow
            - “form” as the point where intent becomes structured enough to route, decide, and act is not

            That’s why I’m less focused on renaming and more focused on making that interpretation obvious fast.

            If people still read it as “form builder,” positioning failed.
            If they read it as “where inbound intent becomes action,” the name is doing its job.


              That’s the cleanest way I’ve heard the test stated. “If they read it as form builder, positioning failed. If they read it as where inbound intent becomes action, the name is doing its job.” That’s the kind of line I’d actually use as an internal benchmark — every piece of copy, every demo, every tool description gets evaluated against that single question.
              It also reframes the work in a useful way. Renaming would have been a one-time decision that locks something in. Positioning is a continuous practice — every surface gets to either reinforce the wider reading or quietly collapse back to the narrow one. Which means the failure mode isn’t dramatic, it’s just drift. A tagline that leans on “forms made easy,” a homepage that opens with form templates, a tool description that talks about fields instead of intent — any one of those, repeated across surfaces, undoes the work without anyone noticing. The discipline is keeping every touchpoint honest about which reading the product actually wants.
              Appreciate the exchange. This was the rare comment thread that sharpened my own framing instead of just decorating it.
