I think LLMs are getting out of hand. Yesterday I read about Tailwind CSS having trouble sustaining their business because LLMs provide all the answers now.
The day before, I put content I wanted to share freely on my website behind a login, because I didn't want LLMs to read it once and spit it out forever, never bringing humans back to my site.
In my opinion, this is bad. I don't want to be a content creator for a machine. I've always avoided "social media" for that reason, and now LLMs are turning the whole web into one big data source.
I want a way to share content with humans, without an LLM in between that gives nothing back.
That is such a valid and timely concern. It feels like the "social contract" of the internet—where you provide value and get traffic/community in return—is being replaced by a one-way extraction model.
The Tailwind CSS example is a perfect case of "successful" documentation inadvertently making the source redundant, which is a scary prospect for any creator. It’s a tough spot to be in: wanting to be generous with humans while being protective against scrapers.
Have you looked into things like the GPTBot blocking protocols or the Spawning.ai "No AI" tools? They aren't perfect, but it's a growing movement for creators who want to reclaim their work.
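For what it's worth, the GPTBot approach is just the standard Robots Exclusion Protocol: OpenAI's crawler identifies itself as `GPTBot` and honors `robots.txt`. A minimal opt-out looks like the sketch below (other AI crawlers use their own tokens, e.g. Google's `Google-Extended`; check each vendor's docs, and note this is only a request that well-behaved crawlers respect, not an enforcement mechanism):

```
# robots.txt at the site root
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```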
I get the frustration, but I think there's a middle ground here. LLMs are a tool: they can scrape and summarize content, sure, but they don't replace the actual creation and curation.
If someone's putting content behind a login to avoid being training data, that's totally fair.
But the issue isn't really the LLMs themselves, it's how they're being used and whether creators get compensated or credited. I'd rather see better attribution systems than just locking everything down.
The Tailwind example is interesting because it highlights a specific business model vulnerability: documentation-as-moat doesn't work anymore when LLMs can synthesize and serve that knowledge instantly.
But I think there's a nuance worth exploring. LLMs didn't kill the value of content; they killed the value of static, reference-style content. What still works is what a model can't mirror: lived experience, perspective, and ongoing conversation.
The irony is that putting content behind a login might protect it from crawlers, but it also removes it from the human-to-human discovery loop that makes content spread in the first place.
Maybe the answer isn't gating content, but shifting what kind of content we create. Less "how to do X" (LLMs own that now), more "here's my experience doing X and what surprised me."
I think the real shift is that content alone no longer has scarcity; context, intent, and interaction do.
Static explanations will get absorbed by LLMs. What doesn’t: live thinking, opinionated workflows, tools, and conversations that evolve.
Sharing "with humans" might mean designing content that expects participation rather than consumption; otherwise machines will always win at mirroring it.