4 Comments

I don't want to be a content creator for LLMs

I think LLMs are getting out of hand. Yesterday I read about Tailwind CSS having trouble running their business because LLMs provide all the answers now.

The day before, I put content I wanted to share freely on my website behind a login, because I didn't want LLMs to read it once and spit it out forever, never bringing humans back to my site.

In my opinion, this is bad. I don't want to be a content creator for a machine. I've always avoided "social media" because of that, and now LLMs are turning the whole web into one big data source.

I want a way to share content with humans, without an LLM in between that gives nothing back.

on February 2, 2026
  1.

    That is such a valid and timely concern. It feels like the "social contract" of the internet—where you provide value and get traffic/community in return—is being replaced by a one-way extraction model.
    The Tailwind CSS example is a perfect case of "successful" documentation inadvertently making the source redundant, which is a scary prospect for any creator. It’s a tough spot to be in: wanting to be generous with humans while being protective against scrapers.
    Have you looked into things like the GPTBot blocking protocols or the Spawning.ai "No AI" tools? They aren't perfect, but it's a growing movement for creators who want to reclaim their work.
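
    For anyone curious about the GPTBot approach: OpenAI's crawler identifies itself with the `GPTBot` user-agent string, and you can opt out site-wide with a couple of lines in your `robots.txt`. A minimal sketch (this assumes the crawler honors the file, which well-known bots generally do, though nothing enforces it):

    ```
    # robots.txt — opt out of OpenAI's training crawler
    User-agent: GPTBot
    Disallow: /

    # everyone else (including search engines) stays unaffected
    User-agent: *
    Allow: /
    ```

    Other AI crawlers use their own user-agent strings, so each one you want to block needs its own `User-agent` section.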

  2.

    I get the frustration, but I think there's a middle ground here. LLMs are a tool; they can scrape and summarize content, sure, but they don't replace the actual creation and curation part.

    If someone's putting content behind a login to avoid being training data, that's totally fair.

    But the issue isn't really the LLMs themselves, it's how they're being used and whether creators get compensated or credited. I'd rather see better attribution systems than just locking everything down.

  3.

    The Tailwind example is interesting because it highlights a specific business model vulnerability: documentation-as-moat doesn't work anymore when LLMs can synthesize and serve that knowledge instantly.

    But I think there's a nuance worth exploring. LLMs didn't kill the value of content — they killed the value of static, reference-style content. What still works:

    • Opinions and perspectives — LLMs can summarize facts, but they can't replace a human's take on why something matters
    • Real-time context — "Here's what I learned shipping X this week" has a shelf life that training data can't capture
    • Community and interaction — The comments here are more valuable than the post alone because they're a live conversation

    The irony is that putting content behind a login might protect it from crawlers, but it also removes it from the human-to-human discovery loop that makes content spread in the first place.

    Maybe the answer isn't gating content, but shifting what kind of content we create. Less "how to do X" (LLMs own that now), more "here's my experience doing X and what surprised me."

  4.

    I think the real shift is that content alone no longer has scarcity; context, intent, and interaction do.

    Static explanations will get absorbed by LLMs. What doesn’t: live thinking, opinionated workflows, tools, and conversations that evolve.

    Sharing “with humans” might mean designing content that expects participation rather than consumption; otherwise machines will always win at mirroring it.
