43 Comments

What's the point of AI generated comments?

Just read a post on here, and when I went to the comments, half of them were some form of AI-generated content. How do I know this, you ask? Well, they all had the same format, which basically boiled down to a compliment followed by lessons learnt. It looked something like:

"That sounds interesting/innovative/whatever felt relevant... what stood out to me most was ...."

I'm not against using AI to clean up or spice up a comment, but when the AI is the one creating it, what even is the point? I may be missing something (if so, please enlighten me), but I feel like the whole point of the comment section is true discussion as opposed to a regurgitation of the post in summary form. Hmpf!

What do you think? An overreaction or a valid concern?

on January 6, 2026
  1. 1

    You are not at all overreacting. Most people are just using AI with no contribution of their own.
    This is the only platform where founders actually connect, but after seeing the comments it's clear they are not here to contribute; they are just here for pseudo-productivity.

  2. 1

    Valid concern. And it’s about to get worse before it gets better.

    What you’re noticing is not “AI helping people write.” It’s engagement theater. People outsourcing presence itself. The compliment-plus-summary format is just the lowest-effort way to signal “I was here” to an algorithm, not to a human.

    Right now social platforms tolerate this because they cannot tell the difference in a way that matters economically. More comments equals more activity. Quality is irrelevant as long as the metrics go up. So AI slop is not a bug, it’s a feature.

    But this phase is temporary.

    The moment identity becomes provable, not just an email or a handle but a cryptographic identity with reputation, history, and cost to abuse, this behavior collapses. Blockchain is the obvious primitive here, not because it’s trendy, but because it introduces friction and accountability.

    When a comment is tied to:

    • a persistent identity
    • a visible reputation score
    • an on-chain history of behavior
    • and potentially a small economic cost to post

    then flooding the zone with AI-generated filler becomes irrational. You cannot cheaply fake thought anymore.

    At that point, comments stop being free exhaust and start being signals again.
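
    Just to make that concrete, here is a minimal sketch of what such a comment record might look like. Every name in it (CommentRecord, bond, spamStillPaysOff) is hypothetical, invented for the example rather than taken from any real chain or platform:

    ```typescript
    // Hypothetical shape of a comment bound to a provable identity.
    // Field names are illustrative only; no real platform or chain is implied.
    interface CommentRecord {
      authorKey: string;   // persistent cryptographic identity (e.g. a public key)
      reputation: number;  // visible score accumulated from past behavior
      historyRef: string;  // pointer to an on-chain log of the author's prior actions
      bond: number;        // small economic stake locked up to publish the comment
      body: string;
      signature: string;   // proves this identity actually authored this body
    }

    // Flooding the zone only stays rational while the expected engagement value
    // exceeds what the spammer risks: the bond plus the reputation lost if flagged.
    function spamStillPaysOff(bond: number, reputationAtRisk: number, expectedValue: number): boolean {
      return expectedValue > bond + reputationAtRisk;
    }
    ```

    The last check is the whole argument in one line: once the bond and the reputation at risk are non-trivial, mass-posting filler stops being free.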

    We are currently in the spam era of AI, similar to email before spam filters, or SEO before Google got serious. Platforms will allow it until it actively harms trust and retention. Then they will pivot hard.

    So no, this isn’t an overreaction. You’re just early in noticing that we’ve broken the social contract of discussion. The fix will not be better prompts or better AI detectors. It will be identity, reputation, and consequences.

    Until then, expect more “That’s interesting, what stood out to me…” comments. They are not written for you. They are written for machines.

  3. 1

    I don’t think it’s an overreaction.

    You can usually spot it instantly because the comments don’t actually add anything — they just paraphrase the post and tack on a safe question at the end.

    Using AI to clean up wording is fine, but when the thinking itself is outsourced, the comment section just turns into noise. It’s weird because it defeats the whole point of posting in the first place.

    I’d rather read one slightly messy, opinionated comment than five perfectly formatted ones that say nothing.

  4. 3

    I think it’s a valid concern, not an overreaction.

    For me, the issue isn’t AI itself, it’s intent.
    If someone uses AI to help phrase a genuine thought they already had — fine.
    But when comments are basically a template: compliment + summary + generic takeaway, they add no new signal.

    Comments should move the conversation forward, not just confirm that the post was read.

    Ironically, those AI-style comments feel more like engagement farming than discussion. And once you notice the pattern, it really breaks the trust in the thread.

    Curious how others feel — would you rather have fewer comments but more opinionated ones?

  5. 3

    I prefer human-typed comments myself, as at least people have actually given it some thought. I don't think you're overreacting; it's just that low-effort comments have simply evolved into AI-written ones.

    1. 1

      You're right! I built a Chrome extension called PulseOfReddit that helps with exactly this - it tracks Reddit keywords and alerts you when relevant discussions pop up. It's helped me catch early conversations and validate ideas faster. Offering free access for the first 10 users if you want to try it out.

      Website:

      pulseofredditcom

      1. 1

        Thank you! I'm gonna check it out.

  6. 1

    Totally get what you’re saying—AI can polish a comment, but when it writes the whole thing, it often ends up sounding cookie-cutter. Real value comes from personal insights, experiences, and reactions that AI just can’t replicate.

    For example, if you’re curious about practical things like applying for a police character certificate, policeclearance. org offers straightforward guidance without the fluff.

  7. 1

    AI comments are fine if they actually add something. The generic ones just feel like noise.

  8. 1

    I don’t think the issue is AI itself, it’s intent. Most AI-generated comments feel like they’re optimized to look thoughtful without actually taking a position. Compliment, summarize, move on. No risk, no disagreement, no lived context. That’s fine for polishing a thought you already have. But when the entire comment is outsourced, the discussion turns into noise. It stops being a conversation and becomes pattern matching. The comments that matter are the ones where someone’s wrong, unsure, or pushing back. That’s hard to automate and probably the point.

  9. 1

    I am not a native English speaker. When I posted something on a forum in English in the past, I was scared that my poor English looked disgusting.
    Since the wide adoption of AI, I like to use it to correct my wording. I am happy and more confident that I sound more like a native speaker. I admit that I have become reliant on it sometimes, but if the AI-packaged message still expresses my own thought, is it still okay? Just a bit of cosmetic making up, hope you don’t mind…

  10. 1

    It's easy to see a future where AI is posting things and AI is adding comments with no human interaction at all. Data stores will fill up with a whole heap of resource-consuming nonsense. There are lots of good use cases for AI, but spurting out reams of valueless content is essentially a virus.

  11. 1

    Valid concern. You’re not overreacting.

    Most AI comments optimize for politeness, not contribution. They restate the post, add no stake, and avoid taking a position so discussion goes nowhere.

    Comments are useful when they add friction: disagreement, lived experience, or a concrete example. AI summaries do the opposite: they smooth everything out.

    Using AI to edit your thought is fine. Using it instead of having a thought defeats the point of a comment section.

  12. 1

    I think this is a very thoughtful observation. IMHO the issue isn’t AI being used at all, it’s AI being used to say nothing new.

    A comment that just paraphrases the post or adds a generic “great insight, what stood out to me was…” doesn’t move the conversation forward, whether a human or AI wrote it.

    Where AI can be useful is helping someone articulate a real opinion they already have — but the opinion still has to be theirs. Without that, it just becomes noise, and the comment section loses its value.

    It might be easy but none of us can really afford to farm out our opinions to AI - that way we all stop thinking altogether!

  13. 1

    This feels like a very valid concern, not an overreaction.

    I’ve noticed the same pattern — many AI-generated comments follow a predictable structure: polite agreement → short summary → generic takeaway. They’re not wrong, but they don’t really move the discussion forward.

    From my experience, the best use of AI in comments is as an assistant, not a speaker. Cleaning up wording, helping structure a thought, or translating a rough idea into clearer language — that’s useful. But when the AI is doing the thinking, the comment loses context, nuance, and lived experience.

    The irony is that comments are one of the few places where signal still beats scale. A short, imperfect, but opinionated comment from someone who’s actually tried something is far more valuable than a polished summary anyone could generate.

    Curious how others feel about this: do you think communities should start rewarding specific experience (what you tried, what failed, what surprised you) more explicitly, rather than just well-written responses?

  14. 1

    I think AI-generated comments are erasing everyone’s individual style and habits.

  15. 1

    I think many people lack self-confidence due to language barriers and other issues. Therefore, trusting AI is easier than trusting themselves. This is why AI-generated reviews are so common on multinational platforms.

    Of course, another motivating factor is laziness, an inevitable part of human nature.

  16. 1

    I agree. When comments feel templated, they don’t add much value. I come here to read real experiences, not summaries.

  17. 1

    Not anti-AI, but it’s obvious when comments add no new signal—just polite paraphrasing. The value here has always been real experiences and edge cases, not summaries.

  18. 1

    By the way, you’re not wrong to be annoyed — even LinkedIn has started to downrank posts where a bunch of AI-ish comments show up. But I think on Indie Hackers it’s a bit different: AI comments can help people farm points/karma faster and “legitimize” themselves here. I’m totally fine with using an LLM thoughtfully to structure your real thoughts (like Alison mentioned above) or to clean up wording. What feels off is handing over full responsibility for the actual content to AI — then the comment section stops being discussion and becomes a summary vending machine.

  19. 1

    Valid concern, not an overreaction.

    The irony is that AI-generated comments defeat the whole purpose of community. IndieHackers works because real builders share real experiences — the messy, unpolished, human stuff.

    I've noticed the same pattern you described. What gives it away for me isn't just the format, it's the lack of specificity. AI comments never say "I tried this and here's what broke" or "reminds me of when my customer complained about X." They just... summarize and compliment.

    The fix might be cultural, not technical. If communities reward genuine engagement over volume, people will stop farming comments. But if engagement metrics are the only thing that matters, we're doomed.

    Question: do you think platforms should start flagging templated responses, or would that just create an arms race where people prompt AI to "sound more human"?

  20. 1

    Valid concern, not an overreaction.

    The irony is that AI-generated comments defeat the whole purpose of community. IndieHackers works because real builders share real experiences — the messy, unpolished, human stuff.

    That said, I think there's a spectrum:

    • AI writing your entire comment = pointless noise
    • AI helping you articulate a thought you already have = tool usage
    • AI summarizing a post so you can respond = lazy

    The tell is specificity. Generic "great post, here's what I learned" = probably AI. Sharing a specific personal experience or asking a pointed question = probably human.
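
    For what it's worth, that tell is almost mechanical. A toy version of the check could look something like the sketch below; the phrase list, regex, and function name are all invented for illustration, not a real filter anyone ships:

    ```typescript
    // Toy template detector: flags comments that lean on stock praise phrases
    // and contain no first-person specifics. Purely illustrative.
    const STOCK_PRAISE = ["great post", "that sounds interesting", "what stood out to me"];

    const SPECIFICITY_MARKERS = /\b(i tried|my customer|we shipped|it broke|last week)\b/i;

    function looksTemplated(comment: string): boolean {
      const text = comment.toLowerCase();
      const leansOnPraise = STOCK_PRAISE.some((phrase) => text.includes(phrase));
      const hasSpecifics = SPECIFICITY_MARKERS.test(comment);
      return leansOnPraise && !hasSpecifics;
    }
    ```

    Which is also why the "arms race" worry raised above is fair: the moment something this naive ships, prompts adapt to route around it.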

    The fix might be cultural, not technical. Communities that reward genuine engagement over volume will self-select for real humans.

    What made you notice it on this specific post?

  21. 1

    This is a great discussion point! I think the distinction between AI-assisted improvement and AI-generated templated responses is crucial. As someone building AI tools (working on Alchemy AI, an AI-driven platform for content creators), I believe the real value lies in using AI to enhance human creativity rather than replace genuine thought. The best AI implementations save time on repetitive tasks while maintaining authenticity in the output.

  22. 1

    I don't have a problem with people using AI to help structure and organize their thoughts as long as the tone, style, and sentiments are still very much real and human. Completely outsourcing your thoughts to AI shows very little effort to contribute to a conversation. Half of the AI comments I see are completely off on a tangent from the topic of discussion.

  23. 1

    Glad to see someone bring this up. I think it's an absolutely valid concern... I have noticed the exact same thing!! It's probably to improve backlinks, as some of the other users here suggest.

    Hope we can find a way to mitigate it, as it completely diminishes the value of IH.

  24. 1

    I totally agree with you. As an SEO professional, I see people doing this all the time just to build 'backlinks' or 'engagement' quickly, but they forget that the real value of Indie Hackers is in authentic networking.

    Low-effort AI comments actually hurt a person's personal brand more than they help. People can smell a bot from a mile away! I'd rather read a two-line honest opinion than a 5-paragraph AI summary that says nothing new. Quality over quantity, always.

  25. 1

    The irony of some comments in this thread is... chef's kiss.

    But yeah, valid concern. The pattern you spotted is basically engagement farming. Someone figured out that "compliment + lesson" gets engagement without requiring actual thought, so now everyone runs it through ChatGPT.

    What I find weird is that the people doing it are often building actual products. Like, you made something - just... talk about it? Share a failure, a weird customer interaction, something specific. That's what makes IH good when it works.

    The templated stuff will probably self-correct eventually. Communities either develop antibodies to it or die from it. LinkedIn went one way, hoping IH goes the other.

  26. 1

    It's definitely annoying. I think many people use it to warm up accounts or farm karma without reading the actual post. The irony is that it kills the community vibe they are trying to join. If I wanted a summary of the post, I'd just ask ChatGPT myself. I come here for the war stories and subjective opinions that AI can't fake yet

  27. 1

    Valid concern. The pattern you spotted — compliment + summary — is basically engagement farming without adding value.

    I think the real issue isn't AI itself but the absence of personal stake. A good comment adds context the author couldn't have: "I tried this approach and it failed because..." or "This reminds me of X problem I'm solving differently."

    The question is whether platforms will start penalizing templated engagement or just let it become the norm.

  28. 1

    I think it’s a valid concern. If comments become generic AI-style replies, the discussion loses authenticity and real experience sharing. The value of communities like this is lived perspective, not templated reactions.

  29. 1

    If used as a tool, it can be useful, but it's concerning if it's being used beyond that. It's good for editing, learning better grammar, or writing generic templates for mass duplication or when you're just burned out. Otherwise, it's better for everyone if we step away from AI beyond writing assistance. We need to engage with our own writing for our creativity and focus and so we're not just talking to each other via AI.

  30. 1

    The AI takeover has started. It's everywhere and does all the heavy lifting, so naturally people will use it. Soon I don't think we will be able to tell the difference anyway.

  31. 1

    Interesting question! I think there’s a spectrum between AI as a writing assistant and AI as an auto-commenter. Tools that help you clarify or expand your own thoughts can make you more comfortable participating, but once the human element disappears it feels hollow. The posts I enjoy most are those where the author relates their personal context or asks a specific question that invites a discussion. How do you think communities can encourage newcomers to share genuine reactions while still using tools to overcome writer’s block?

  32. 1

    You’re raising a valid point.

    AI isn’t the issue — “thoughtless AI replies” are.
    If people use AI to refine their words but still contribute their own experiences, opinions, or failures, the conversation stays meaningful. When it becomes templated positivity, everyone loses.

    Maybe the community just needs more gentle nudges toward specific, story-driven, human commentary rather than polished summaries.

  33. 1

    The point of AI isn’t to replace people, it’s to make things easier and faster. AI helps with learning, writing, problem-solving, and even creativity when you’re stuck. Just like how people use tools or mods to simplify a game experience—some players use a Minecraft APK to explore and build freely without grinding—AI works the same way in real life. It’s a tool. How useful it is depends on how you choose to use it. https://minecroftmod.com/for-pc/

  34. 1

    So true. I think a lot of AI-generated content is driven by the fact that many people simply want to game the social media algorithms by creating engagement. Comments on LinkedIn are also primarily driven by the fact that they help you get noticed. There are so many Chrome extensions and apps now that can generate AI in-line comments after scanning the post, so the friction required to produce these comments is really low. If the social media platforms want to maintain their relevance in the long run and get people back to the platforms again and again, they need to seriously rethink incentivizing this kind of behavior.

  35. 1

    Authenticity is precious. That's why we love hand-typed comments.

  36. 1

    Yes, I saw it too, not only here but also on other socials. My guess is people want to comment but not write it from scratch (I mean they want a good, big comment, but from relying on AI so much they have almost forgotten how to frame the words themselves), so they use AI to write the comments for them.

  37. 1

    Great insights in your post!
    I built a Chrome extension called PulseOfReddit that helps with exactly this - it tracks Reddit keywords and alerts you when relevant discussions pop up. It's helped me catch early conversations and validate ideas faster. Offering free access for the first 10 users if you want to try it out.

    Website:

    pulseofredditcom

  38. 1

    Valid concern, I think. It's so easily detectable, no matter how human-like they make the prompt.

    Better to write comments from scratch.

    1. 1

      I also built a Chrome extension called PulseOfReddit that helps with exactly this - it tracks Reddit keywords and alerts you when relevant discussions pop up. It's helped me catch early conversations and validate ideas faster. Offering free access for the first 10 users if you want to try it out.

      Website:

      pulseofredditcom

  39. 1

    Hi ooh,

    I just read through your comments and I am only 50% certain they are not AI generated. Many, though not all, of them have a kind of formula to them.

    Anyway, I think it's a valid concern, but maybe a deeper problem than you realized. The AI is mimicking low-effort comments, but low-effort comments were already a problem before AI.

    1. 1

      Man, I was stuck in the same loop.

      I ended up making a little Chrome extension called Pulse of Reddit that basically acts like my own alert system for Reddit. Anytime someone posts something with keywords I care about like 'looking for a designer' or 'best SEO tool' it pings me right away. It’s saved me so much time and helped me hop into threads while they’re still fresh.

      If you’re tired of manual digging and want to catch those conversations early, I’d really recommend giving it a look. It’s free to start and super simple to set up.
