
Why I Pivoted from an AI Counseling Service to an AI Girlfriend Chat

Hi Indie Hackers,

I previously built an AI counseling service.
While some users found value in it, I struggled with long-term user retention.

That experience taught me an important lesson:
having a useful product isn’t enough if users don’t feel a strong reason to come back.

Based on what I learned, I decided to pivot and launch a new service.
This time, I’m focusing more on repeat usage, simplicity, and clear value from the first interaction.

I’d love to hear your thoughts or feedback.
Thanks for reading.

https://eoerway-ai-therapy-v3-0-616432264786.us-west1.run.app

posted to Feedback on December 29, 2025

    This is a really honest take on retention - and you've hit on something important.

    Counseling solves a problem users want to solve and move on from. Companionship fulfills an ongoing emotional need. The incentive structures are completely different.

    Curious about a few things as you build this out:

    1. How are you thinking about the "first interaction" value you mentioned? What's the moment users realize this is worth coming back to?

    2. Are you seeing any patterns in what keeps users engaged day-over-day vs week-over-week?

    The pivot makes strategic sense - just be mindful of the moderation/safety considerations that come with companion AI. That space can get complicated fast.

    What's retention looking like so far compared to the counseling product?


      demogod_ai — sending this again; thanks for the quick questions. Really appreciate you calling these out.

      On first-interaction value: We’re less focused on immediate problem-solving and more on creating a sense of being understood. The key moment is when users notice that the AI remembers context, reflects emotional nuance accurately, and doesn’t rush them toward a solution. When people say, “this feels different from other chats,” that’s usually the point where they decide it’s worth coming back.

      Day-over-day vs week-over-week retention: Day-over-day engagement is often driven by shifts in emotional state — users check in when their mood changes or something small happens. Week-over-week retention is more about habit and identity. Users who begin to see it not as a “tool they use” but as “someone they talk to” tend to stick around much longer. That shift usually happens after a few light but consistent interactions.

      On safety and moderation: Completely agree here. We’re being very deliberate about boundaries — avoiding exclusivity framing or language that encourages emotional dependency, and clearly positioning the product as support rather than a replacement for real human relationships. This isn’t an afterthought for us; it’s a core product design concern.

      Current retention: It’s still early, but short-term retention after the first session is higher than what we saw with the counseling product. Counseling users often churn once they feel their issue is resolved, whereas companionship users tend to return more frequently, even if sessions are shorter. We’re still watching the metrics, but the qualitative signals so far are quite encouraging.


        Really appreciate the detailed breakdown here.

        The "being understood" moment you described - where users notice the AI remembers context and reflects emotional nuance - that's a much higher bar than most chat products aim for. Most stop at "responds coherently." You're aiming at "feels like it was listening."

        The day-over-day vs week-over-week distinction is interesting. It sounds like you're essentially building two retention loops: one for emotional check-ins (volatile, mood-driven) and one for identity/habit formation (stable, relationship-driven). The users who make the shift from "tool I use" to "someone I talk to" are probably your highest-LTV cohort.

        Smart that safety is a core design concern rather than a moderation afterthought. The "support rather than replacement" framing matters - both for user wellbeing and for positioning if the space gets more regulatory scrutiny.

        What signals are you watching to catch early signs of unhealthy dependency patterns? That seems like the hardest moderation challenge in this space.


          Honestly, at this stage we don’t think the AI has the kind of “pull” that creates real dependency yet. Most interactions are still lightweight and situational.

          That said, we’re treating this as a latent risk, not something to ignore. If we ever start to see signals that users are orienting their emotional stability around the product, that’s a clear line for us — and we’d intervene immediately, even at the cost of short-term retention.

          Our assumption is that unhealthy dependency doesn’t happen suddenly; it emerges gradually. So the goal right now is less about heavy-handed controls and more about designing boundaries early, before that kind of dynamic has a chance to form.


            The "latent risk" framing is exactly right. What I've seen trip up similar products: the signals that matter aren't always the obvious ones. High session frequency might mean healthy engagement OR early dependency - the differentiator is usually what happens when users can't access it. Do they adapt normally, or show disproportionate distress?

            Your point about designing boundaries early is smart. Retrofitting them after patterns form is 10x harder - users experience it as something being taken away rather than guardrails being established.

            One pattern worth considering: proactive "you've been great company, but go be with humans now" nudges before users reach natural stopping points. It reframes the AI as supporting real-world connection rather than replacing it. Curious if you've experimented with exit prompts that redirect toward offline activity?


              Great point. I completely agree that users’ reactions in moments when the AI isn’t accessible are one of the most important signals for making this distinction.

              We haven’t run any intentional exit prompt experiments yet. However, our current design direction isn’t to position the AI as something users are meant to stay with continuously, but as a companion that offers support when needed and then helps users move back into real life.

              As you mentioned, if boundaries and expectations aren’t clearly set early on, any limitations introduced later can easily feel like something is being taken away.

              I think proactive redirection—like “you’ve been great company, now go be with humans”—is absolutely worth testing. Especially if offered before natural stopping points, it seems like a promising way to reduce dependency while preserving a positive user experience.


                The "companion that helps users move back into real life" framing is exactly the right positioning. It's a subtle but important distinction from products that optimize for time-on-app as the primary metric.

                One experiment worth considering: varying the exit prompt timing based on session context. A user checking in after a tough day might benefit from a gentler, more natural transition ("sounds like you've processed a lot - maybe text a friend about it?"). Someone in a lighter mood might respond better to direct encouragement ("you seem good - go grab coffee with someone").

                The accessibility-reaction signal you mentioned could also inform this. Users who handle unavailability well might need fewer proactive nudges, while those showing early dependency signals might benefit from more frequent, softer redirections.
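
                A rough sketch of what that context-sensitive selection could look like; the mood labels, the threshold, and the function name are made up, and the two prompt strings just reuse the examples above:

                ```python
                from typing import Optional

                # Hypothetical sketch only: mood labels, the dependency threshold,
                # and the prompt wording are placeholders, not a real product spec.
                def exit_prompt(mood: str, dependency_score: float) -> Optional[str]:
                    """Pick an exit nudge based on session context, or skip it."""
                    # Users who handle unavailability well may not need a nudge.
                    if dependency_score < 0.2:
                        return None
                    if mood == "heavy":
                        # Gentler, more natural transition after a tough session.
                        return "Sounds like you've processed a lot - maybe text a friend about it?"
                    # Lighter mood: direct encouragement.
                    return "You seem good - go grab coffee with someone."
                ```

                Even a crude stand-in for dependency_score, like rapid re-opens over the past week, would probably be enough to start with.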

                Curious how you're thinking about the feedback loop here - are you planning to measure whether exit prompts actually lead to reported offline activity, or is the goal more about establishing healthy usage norms regardless of measurement?


                  Totally aligned — I do think exit prompts can lead to real offline activity, and I want to measure that. My current view is: the goal isn’t just “healthy norms” in the abstract, but a measurable shift from in-app processing → real-world action.

                  How I’m thinking about measuring it (lightweight, privacy-first):

                  - Immediate self-report after exit: “Did you do one small offline action in the next 30–120 minutes?” (Yes/No + optional quick tag)

                  - Follow-up check-in next session: “Last time, did you do the offline step we suggested?” (again, optional)

                  - Proxy metrics: reduced session length after the prompt, fewer rapid re-opens, and increased “action-tag” confirmations over time

                  - A/B test: context-sensitive exit prompt timing + wording vs control, to see uplift in reported offline actions

                  I also like your point about the accessibility-reaction signal: people showing early dependency signals may benefit from more frequent but gentler redirections, while others need fewer prompts.

                  Curious: have you seen good patterns for balancing measurement accuracy with keeping the flow non-intrusive?
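
                  To make the proxy-metric and A/B side concrete, here is a minimal sketch of how I’m picturing it; the session fields, metric names, and the 50/50 split are placeholders rather than our real schema:

                  ```python
                  from dataclasses import dataclass
                  from hashlib import sha256
                  from statistics import mean

                  # Minimal sketch, not production code: every field and name here
                  # is an assumption made for illustration.
                  @dataclass
                  class Session:
                      user_id: str
                      duration_min: float          # total session length in minutes
                      exit_prompt_shown: bool      # session ended with an exit prompt
                      reopened_within_30m: bool    # rapid re-open shortly afterwards
                      action_tag_confirmed: bool   # user later confirmed an offline action

                  def ab_bucket(user_id: str, experiment: str = "exit-prompt-v1") -> str:
                      """Deterministic 50/50 split so a user always sees the same variant."""
                      digest = sha256(f"{experiment}:{user_id}".encode()).hexdigest()
                      return "variant" if int(digest, 16) % 2 else "control"

                  def proxy_metrics(sessions: list) -> dict:
                      """Behavioral proxies: prompted-session length, rapid re-opens,
                      and confirmed offline actions."""
                      prompted = [s for s in sessions if s.exit_prompt_shown]
                      if not prompted:
                          return {}
                      return {
                          "avg_prompted_session_min": mean(s.duration_min for s in prompted),
                          "rapid_reopen_rate": mean(s.reopened_within_30m for s in prompted),
                          "offline_action_rate": mean(s.action_tag_confirmed for s in prompted),
                      }
                  ```

                  Comparing proxy_metrics() for the control vs variant buckets should give the behavioral uplift read without asking users anything, which pairs with the sampled self-reports.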


                    The measurement framework you've outlined is solid - especially the combination of immediate self-report with follow-up check-ins. That gives you both intent signal and actual behavior validation without being invasive.

                    For balancing accuracy with non-intrusiveness, a few patterns I've seen work:

                    Sampling over surveying everyone: Don't ask every user every time. Random sampling (say, 10-20% of sessions) gives you statistically valid data without creating survey fatigue.

                    Contextual embedding: Make the check-in feel like natural conversation continuation rather than data collection. "How'd that coffee with your friend go?" feels different from "Did you complete your offline action?"

                    Opt-in depth: Quick binary responses for everyone, with optional expansion for users who want to share more. Respects time while giving motivated users a voice.

                    Delayed validation: The next-session follow-up is smart, but watch for recency bias. Users may remember the prompt better than the actual action. Consider occasional longer-delay checks (3-5 days) for more durable behavior change signals.

                    The proxy metrics you mentioned (shorter sessions, fewer rapid re-opens) are probably your most honest signals - they're behavioral rather than self-reported. If those correlate with your A/B prompt variations, you've got strong evidence without asking users anything.
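
                    If it helps, here is a rough sketch of the sampling and contextual check-in ideas; the 15% rate, the hash seed, and the action tags are all made up:

                    ```python
                    from hashlib import sha256
                    from typing import Optional

                    # Illustrative only: the sample rate, seed string, and tags are assumptions.
                    def should_ask_checkin(user_id: str, session_id: str, rate: float = 0.15) -> bool:
                        """Deterministically sample a fraction of sessions for the check-in,
                        so you get valid data without surveying everyone."""
                        digest = sha256(f"checkin:{user_id}:{session_id}".encode()).hexdigest()
                        return int(digest[:8], 16) / 0x100000000 < rate

                    def checkin_text(last_action_tag: Optional[str]) -> str:
                        """Phrase the follow-up as conversation, not data collection."""
                        phrasings = {
                            "text_friend": "How did texting your friend go?",
                            "go_outside": "Did you get out for that walk?",
                        }
                        default = "Did anything small happen offline since we last talked?"
                        return phrasings.get(last_action_tag, default)
                    ```

                    Deterministic hashing also means the same session never gets re-asked if the user reopens the app.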


                      Thanks for the thoughtful feedback — this is incredibly helpful. We especially like the sampling + contextual check-in idea to reduce friction while keeping signals honest. We’ll experiment with delayed follow-ups and lean more on behavioral proxies, as you suggested.
