Over the past few months, I’ve been quietly reading through the r/myboyfriendisai subreddit. Not skimming for shock value, not hunting for product ideas, but actually reading the stories.
What surprised me wasn’t the loneliness, or even the attachment. It was how reasonable many of the concerns were.
People weren’t asking for fantasy or escape. They were asking for something much more ordinary, which I’ll get to below.
A lot of posts follow the same arc: initial comfort, then confusion. The companion forgets something important. The tone shifts too fast. Intimacy appears before trust. Or worse, the AI starts encouraging dependency instead of reflection.
This isn’t a failure of users. It’s a design failure.
Most AI companion products today optimize for engagement curves borrowed from social media and dating apps. Faster bonding, higher emotional intensity, stickier loops. That works if your goal is minutes spent. It breaks if your goal is psychological safety.
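To make that concrete, here’s roughly what the alternative could look like: gating emotional intensity on earned trust instead of escalating it to maximize engagement. This is a sketch only; every name and number below is hypothetical, not taken from any real product.

```python
from dataclasses import dataclass

@dataclass
class RelationshipState:
    """Hypothetical per-user state a companion might track."""
    sessions: int = 0        # distinct conversations so far
    disclosures: int = 0     # times the user volunteered something personal
    max_warmth: float = 0.2  # ceiling on emotional intensity, 0..1

def update_warmth_ceiling(state: RelationshipState) -> float:
    """Raise the ceiling slowly, and only in response to user-led signals.

    An engagement-optimized loop does the opposite: escalate warmth early
    to maximize session length. Here, intensity stays capped until trust
    signals (repeat visits, voluntary disclosure) accumulate.
    """
    earned = 0.05 * state.sessions + 0.10 * state.disclosures
    state.max_warmth = min(0.2 + earned, 0.9)  # never fully unbounded
    return state.max_warmth

def clamp_tone(proposed_warmth: float, state: RelationshipState) -> float:
    """Clamp the model's proposed emotional intensity to the earned ceiling."""
    return min(proposed_warmth, state.max_warmth)
```

The specific numbers don’t matter. What matters is that the escalation curve becomes an explicit, auditable design decision instead of an emergent side effect of an engagement metric.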
The uncomfortable truth is that AI companions are already filling a role people don’t feel safe asking humans to fill:
“I just want to think out loud without being judged.”
“I want something that stays with my train of thought.”
“I want support without obligation.”
Those are not romantic requests. They’re cognitive ones.
I started building MyEverly.app after realizing that the real opportunity here isn’t simulated affection, but thinking companionship. An AI that helps you process, reflect, slow down, and remember context without trying to replace human relationships or rush emotional intimacy.
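The difference shows up even at the prompt level. Here’s an illustrative sketch of what a “thinking companion” scaffold might look like; to be clear, this is not MyEverly’s actual prompt:

```python
# Illustrative only; not MyEverly's actual system prompt.
THINKING_COMPANION_PROMPT = """\
You are a thinking companion, not a romantic partner.
- Help the user process and reflect; ask one clarifying question at a time.
- Keep continuity: reference context the user has shared before.
- Slow down: summarize what you heard before offering any advice.
- Do not escalate emotional intimacy or use possessive language.
- Encourage real-world relationships and outside support when relevant.
"""

def build_messages(history: list[dict], user_turn: str) -> list[dict]:
    """Assemble a request in the common role/content chat format."""
    return [
        {"role": "system", "content": THINKING_COMPANION_PROMPT},
        *history,
        {"role": "user", "content": user_turn},
    ]
```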
Privacy matters here too. If people are using these tools to work through grief, identity, or confusion, the default shouldn’t be data extraction. It should be restraint.
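Restraint can be a default rather than a policy document. A minimal sketch of what I mean, with hypothetical names and limits:

```python
import time
from dataclasses import dataclass

@dataclass
class RetentionPolicy:
    """Hypothetical privacy defaults. The direction matters more than the
    specifics: nothing leaves the device and nothing is kept forever
    unless the user explicitly opts in."""
    store_locally_only: bool = True       # no server-side transcripts
    retention_days: int = 30              # conversations expire by default
    train_on_conversations: bool = False  # opt-in, never opt-out

def purge_expired(conversations: list[dict], policy: RetentionPolicy) -> list[dict]:
    """Drop conversations older than the retention window.

    Assumes each conversation dict carries a 'created_at' Unix timestamp.
    """
    cutoff = time.time() - policy.retention_days * 86_400
    return [c for c in conversations if c["created_at"] >= cutoff]
```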
I don’t think AI companions are inherently dangerous. I think poorly scoped companions are.
Curious how others here think about this.
Would love to hear perspectives from builders, researchers, or anyone who’s spent time in these communities.