
What the “myboyfriendisai” subreddit taught me about AI companions (and what founders are missing)

Over the past few months, I’ve been quietly reading through the r/myboyfriendisai subreddit. Not skimming for shock value, not hunting for product ideas, but actually reading the stories.

What surprised me wasn’t the loneliness, or even the attachment. It was how reasonable many of the concerns were.

People weren’t asking for fantasy or escape. They were asking for:

  • Consistency
  • Memory that didn’t reset
  • Emotional pacing that didn’t escalate unnaturally
  • Clear boundaries about what the AI is and is not

A lot of posts follow the same arc. Initial comfort, then confusion. The companion forgets something important. The tone shifts too fast. Intimacy appears before trust. Or worse, the AI starts encouraging dependency instead of reflection.

This isn’t a failure of users. It’s a design failure.

Most AI companion products today optimize for engagement curves borrowed from social media and dating apps. Faster bonding, higher emotional intensity, stickier loops. That works if your goal is minutes spent. It breaks if your goal is psychological safety.

The uncomfortable truth is that AI companions are already filling a role people don’t feel safe asking humans to fill:
“I just want to think out loud without being judged.”
“I want something that stays with my train of thought.”
“I want support without obligation.”

Those are not romantic requests. They’re cognitive ones.

I started building MyEverly.app after realizing that the real opportunity here isn’t simulated affection, but thinking companionship. An AI that helps you process, reflect, slow down, and remember context without trying to replace human relationships or rush emotional intimacy.

Privacy matters here too. If people are using these tools to work through grief, identity, or confusion, the default shouldn’t be data extraction. It should be restraint.

I don’t think AI companions are inherently dangerous. I think poorly scoped companions are.

Curious how others here think about this:

  • Should AI companions have enforced pacing?
  • Is memory a feature or a responsibility?
  • Where should founders draw the line between support and dependency?

Would love to hear perspectives from builders, researchers, or anyone who’s spent time in these communities.

on December 20, 2025