
I don’t want to build another AI scribe. I want to build an AI that challenges clinical reasoning.

I’m building AiDuxCare from a very personal place.
I’m a clinician (and a very junior programmer), and after years of practice I keep coming back to the same concern:
Clinical work is not only about documenting what happened.
It is about making better decisions under uncertainty.
Most clinical AI tools today are trying to reduce friction, and that matters. Documentation is painful. Admin work is exhausting. Clinicians need tools that save time.
But I don’t want to build just another AI scribe.
I’m exploring something different inside AiDuxCare.

We’re calling it Socrates.
The idea is simple: Socrates does not tell the clinician what to do.
It helps the clinician question the case better.
Not with generic advice.

With context.

This patient.
This clinician.
This clinical history.
This professional setting.
This moment of uncertainty.

Examples:

  • “This concern has appeared in several encounters. Do you want to explore it before continuing?”
  • “Symptoms seem to be improving, but function has not changed. Is the current plan still aligned with the patient’s goal?”
  • “This hypothesis has been maintained for several visits. What evidence supports it, and what evidence challenges it?”
  • “There is no recent objective reassessment documented. Is that intentional?”
  • “This decision seems reasonable, but what alternative would you consider if the current approach fails?”
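To make this concrete, here is a minimal sketch of how prompts like these could be derived from longitudinal context. It is written in TypeScript (the stack we use), and every type, field, and rule in it is hypothetical, invented for illustration rather than taken from the actual AiDuxCare codebase:

```typescript
// Hypothetical sketch only: these types and rules are illustrative,
// not the real AiDuxCare data model.

interface EncounterContext {
  visitCount: number;
  recurringConcerns: string[]; // themes the patient keeps raising across visits
  symptomTrend: "improving" | "stable" | "worsening";
  functionTrend: "improving" | "stable" | "worsening";
  visitsSinceObjectiveReassessment: number;
}

interface SocraticPrompt {
  question: string;
  rationale: string; // why this question surfaced, kept for traceability
}

// Each rule maps a longitudinal pattern to a question, never to an answer.
function socraticPrompts(ctx: EncounterContext): SocraticPrompt[] {
  const prompts: SocraticPrompt[] = [];

  if (ctx.recurringConcerns.length > 0) {
    prompts.push({
      question: `“${ctx.recurringConcerns[0]}” has appeared in several encounters. Do you want to explore it before continuing?`,
      rationale: "Recurring patient concern across visits.",
    });
  }

  if (ctx.symptomTrend === "improving" && ctx.functionTrend === "stable") {
    prompts.push({
      question:
        "Symptoms seem to be improving, but function has not changed. Is the current plan still aligned with the patient’s goal?",
      rationale: "Symptom and function trajectories have diverged.",
    });
  }

  if (ctx.visitsSinceObjectiveReassessment >= 3) {
    prompts.push({
      question: "There is no recent objective reassessment documented. Is that intentional?",
      rationale: "No objective reassessment in the last three visits.",
    });
  }

  return prompts;
}
```

The rationale field matters as much as the question: if Socrates asks something, the clinician should always be able to see why.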

The goal is not autonomous clinical decision-making.
The goal is better clinical reasoning.

AI should not replace the clinician’s judgment.

But maybe it can help clinicians see more, question better, reduce blind spots, and support decisions with evidence that is relevant to that specific patient and that specific professional.

This is still early.

It is personal.

It may fail.

But I believe there is a real pain here: clinicians do not need more noise, more dashboards, or more unused features. They need tools that reduce cognitive load while protecting human judgment.

I’m looking for someone who would like to help build this with me.

Ideally someone interested in one or more of these areas:

  • clinical AI
  • healthcare UX
  • reasoning systems
  • TypeScript / React / Firebase
  • AI safety and traceability
  • clinical documentation workflows
  • healthtech product strategy

I’m not looking to hand out cofounder equity casually.

But I am open to serious collaborators, contributors, advisors, or builders who understand that this is a long-term project with the potential to scale if we solve a real clinical pain.
If this resonates with you, I’d love to talk.


    This is a much stronger direction than another AI scribe because you’re moving from documentation support into clinical reasoning support. That is a very different trust layer. Scribes reduce admin load, but Socrates is closer to helping clinicians notice blind spots, revisit assumptions, and make the uncertainty itself more visible.

    The positioning I’d be careful with is making sure it does not sound like “AI second opinion” or “AI diagnosis.” Your strongest frame is probably clinical reasoning companion: not replacing judgment, not giving answers, but improving how the clinician questions the case.

    One naming thought: AiDuxCare signals the healthcare category, but it feels a bit complex and product-like for something that may need deep trust from clinicians. If this grows into a serious reasoning-support platform, Lyriso.com would feel softer, more clinical, and easier to carry as a healthcare brand.


      Thanks; that’s exactly the line we’re trying to protect.

      AiDux is not meant to be an AI second opinion or diagnosis tool.
      The goal is simpler: help clinicians reason better before they decide.
      Generic tools like GPT or Claude can reason well, but they don’t usually know the patient’s longitudinal history, the clinician’s style, previous decisions, what was tried, what failed, or what the patient keeps repeating over time.
      That context is where AiDux can be different. The Socratic layer should not say: “Here is the answer.”
      It should ask: “Given this patient, this clinician, and this history — is there something worth reconsidering?”
      So the positioning is: clinical reasoning companion, not clinical decision-maker.


        That framing is much stronger.

        “Clinical reasoning companion, not clinical decision-maker” is the line I’d build the brand around.

        That is also why I’d be careful with AiDuxCare before the name gets too fixed.

        The product you’re describing is not a normal healthcare AI tool. It is a longitudinal reasoning layer that helps clinicians notice what may be missed across patient history, prior decisions, repeated signals, failed attempts, and uncertainty.

        That needs a name that feels calm, trusted, and clinical before the demo even starts.

        AiDuxCare explains the category, but it may also make clinicians expect another AI healthcare tool. That is risky because your real promise is not “AI care.” It is better clinical reasoning with the right context.

        This is why Lyriso.com came to mind. It feels softer, more trusted, and easier to carry as a clinical companion brand. It does not overclaim diagnosis or AI authority, which matters a lot in this space.

        If you are serious about making this the reasoning companion layer, I’d pressure-test the brand now rather than after clinicians start associating the product with AiDuxCare.

        Happy to discuss privately if useful. This is exactly the kind of healthcare naming decision where waiting too long can make the safer brand harder to secure.


          Thanks, I appreciate the thoughtful perspective.
          Brand is definitely something I’ll need to pressure-test, especially if the product keeps moving toward clinical reasoning support rather than documentation alone.
          For now, though, I want the product truth to lead the naming decision. AiDuxCare already has history, pilot usage, and a clear clinical origin, so I don’t want to rush a renaming decision.
          The key question for me right now is whether clinicians actually feel value from a contextual reasoning companion: something that helps them notice patterns, revisit assumptions, and reason better without feeling judged or replaced.
          If that proves true, then the brand architecture will become a much more informed decision.


            That makes sense, especially if AiDuxCare already has pilot usage and clinical history behind it.

            The only thing I’d challenge is the idea that product truth and naming can be fully separated.

            In healthcare, the name shapes the first trust frame before clinicians ever experience the product. If the name makes them expect “AI care tool,” they may judge it through that lens, even if the real product is a contextual reasoning companion.

            That can affect the feedback you get from pilots too.

            So I agree the product truth should lead. But I’d still pressure-test the brand in parallel, because the question is not just “does the product work?”

            It is also:

            Do clinicians feel safe enough to trust the framing before they understand the product?

            That is where I think AiDuxCare may need careful testing against a softer clinical companion direction like Lyriso.

            Not saying rename now. But I would not wait until the clinical reasoning layer is proven and the current name is much harder to unwind.

            Happy to discuss privately if useful. This is exactly the kind of brand architecture decision that is easier to pressure-test before more pilot language gets built around it.


    I share your perspective that AI should serve as a partner in reasoning rather than just a tool for documentation. This approach to supporting human judgment while managing uncertainty is something I value just as much as you do.
    What is the most common blind spot you hope Socrates will help clinicians identify?


      Thanks for engaging with the idea, I really appreciate it.

      I don’t think we have enough data yet to claim there is one main blind spot.
      AiDux has been iterating through real pilot usage. We started by reducing administrative friction (documentation, structure, follow-up), but the feedback made us realize that “faster notes” is not enough, and that space is becoming very crowded.
      The deeper question became: can AI support clinical reasoning without replacing it?
      One thing we’re already careful about is avoiding a paternalistic tone. I don’t want AiDux to feel like: “I noticed something you missed.”
      That would create resistance.
      I’d rather design it to surface deviations, patterns, uncertainties, or alerts in a collaborative way:
      “This pattern has appeared a few times — is it worth reviewing?”
      or “There may be a deviation from the expected trajectory here. Do you want to look at it together?”
      The goal is to spark clinical curiosity, not criticize the clinician.
      Curious, what kind of work do you do? Are you coming at this more from a clinical, product, AI, or healthcare operations perspective?


        I share your view that sparking curiosity is much better than pointing out mistakes. We focus on the same goal at Bunzee: making sure our users feel supported, not judged. What is the best feedback you have received about this collaborative approach?


          The clearest feedback from our pilot users was very practical:
          Too many buttons or features make clinicians feel they are not using the tool correctly. That feedback matters a lot for the Socratic layer.
          If we build it as another constant panel, alert system, or set of buttons, we would repeat the same mistake: more friction, more cognitive load, and more pressure on the clinician.
          So we’re trying to design it differently.
          The Socratic layer should not challenge every decision or constantly interrupt the workflow. It should appear only when it is useful, or when the clinician asks for it — especially in complex cases, moments of doubt, or when something in the patient’s trajectory does not fit.
          The goal is not: “Here are more features to manage.”
          It is: “Would it help to pause and look at this pattern together?”
          Support first. Curiosity without judgment.
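
          As a rough sketch of that rule (hypothetical names, not what we have built), the layer renders nothing unless the clinician asks or a strong longitudinal signal fires:

          ```typescript
          // Hypothetical sketch: quiet by default, available on request.
          type Signal = { kind: "recurrence" | "trajectory-deviation"; strength: number };

          function shouldSurface(signals: Signal[], clinicianAsked: boolean): boolean {
            if (clinicianAsked) return true; // always available when invited
            // Interrupt unprompted only for strong signals, never for routine decisions.
            return signals.some((s) => s.strength >= 0.8);
          }
          ```

          The threshold is the real design choice: lower it and we become another alert system, keep it high and we stay a companion.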


            A “Socratic layer”? Honestly, that’s a new term for me, but at the end of the day, it essentially boils down to UX, right? I totally get that approach. Whenever we get real user feedback on Bunzee, we try to push out improvements right away. For the bigger things we can’t fix overnight, we map them out as long-term goals.
