
Built something different from a typical AI app: NEES Core Engine, an AI governance runtime. Looking for early developers to test it.

Hey everyone,
I’m building NEES Core Engine, a system designed to sit between an application and the AI model.

Instead of sending user prompts directly to an LLM and hoping for consistent behavior, NEES acts as a governance runtime that helps control how AI behaves inside real products.

What it does:

NEES is built to govern:

  • intent interpretation
  • response behavior / mode
  • memory scope
  • policy enforcement
  • audit + traceability

So the flow becomes:

App → NEES Core Engine → Model Provider → Governed Response

instead of:

App → Model Provider → Raw Response
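
To make that flow concrete, here is a minimal sketch of what a governance runtime sitting in that position could do. Everything below is a hypothetical illustration, not the actual NEES API: the types, function names, and checks are invented stand-ins, and each check is a deliberately toy version of the corresponding bullet above.

```typescript
// Hypothetical sketch of the App → governance runtime → provider flow.
// None of these names come from NEES; they are invented for illustration.

type Policy = {
  allowedTopics: string[];          // crude stand-in for intent interpretation
  maxMemoryTurns: number;           // memory scope limit
  mode: "concise" | "detailed";     // response behavior / mode
};

type AuditEntry = {
  timestamp: string;
  prompt: string;
  decision: "allowed" | "blocked";
  reason?: string;
};

// The provider call is injected, so the sketch stays provider-agnostic.
type ModelCall = (systemPrompt: string, messages: string[]) => Promise<string>;

async function governedCall(
  prompt: string,
  history: string[],
  policy: Policy,
  audit: AuditEntry[],
  callModel: ModelCall,
): Promise<string> {
  // 1. Intent interpretation (toy version: a keyword allow-list).
  const onTopic = policy.allowedTopics.some((topic) =>
    prompt.toLowerCase().includes(topic),
  );
  if (!onTopic) {
    audit.push({
      timestamp: new Date().toISOString(),
      prompt,
      decision: "blocked",
      reason: "intent outside allowed topics",
    });
    return "Sorry, that request is outside what this assistant handles.";
  }

  // 2. Memory scope: only the last N turns ever reach the model.
  const scopedHistory = history.slice(-policy.maxMemoryTurns);

  // 3. Response mode, enforced in one place rather than per-feature prompts.
  const systemPrompt =
    policy.mode === "concise"
      ? "Answer in at most two sentences."
      : "Answer thoroughly, with examples.";

  const response = await callModel(systemPrompt, [...scopedHistory, prompt]);

  // 4. Audit + traceability: every governed call leaves a record.
  audit.push({ timestamp: new Date().toISOString(), prompt, decision: "allowed" });

  return response;
}
```

The point of the sketch is the position in the flow: the app never talks to the provider directly, so every call passes the same checks and leaves the same audit trail.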

Why I built it:
A lot of AI products today are powerful, but they’re still difficult to control consistently.
Developers can get intelligence from a model, but product teams still need to solve things like:

  • unpredictable output behavior
  • inconsistent assistant tone or logic
  • memory handling
  • policy and permission control
  • traceability and governance

That’s the gap I’m trying to address.

Current status

NEES Core Engine is already deployed and working as a live runtime.
It has been tested as a governance layer for AI execution, and I’ve designed the initial NEES SDK integration path for developers.
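
To give a feel for what that integration path might look like, here is the hypothetical governedCall sketch from above wired to a real provider. The governance-side names are still invented; only the OpenAI client usage is real (it assumes the official openai npm package and an OPENAI_API_KEY in the environment).

```typescript
// Adapter matching the ModelCall type from the earlier sketch.
// Assumes: npm install openai, OPENAI_API_KEY set in the environment.
import OpenAI from "openai";

const openai = new OpenAI();

const callOpenAI = async (systemPrompt: string, messages: string[]) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: systemPrompt },
      // Toy simplification: prior turns are all replayed as user messages.
      ...messages.map((m) => ({ role: "user" as const, content: m })),
    ],
  });
  return completion.choices[0].message.content ?? "";
};
```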

Right now, I’m opening a limited Developer Access program to validate it with real builders.

Who I’m looking for

I’d love to connect with:

  • indie hackers building AI products
  • SaaS founders using LLMs
  • developers building copilots, assistants, educational tools, CRM AI, support AI, or internal tools
  • teams who want more control than direct LLM calls offer

What selected developers would get

  • early developer access
  • API credentials
  • integration guidance
  • direct founder feedback loop
  • a chance to shape the product direction early

What NEES is not
This is not another chatbot wrapper.
It’s an attempt to build a governed runtime for product-grade AI systems.

A simple way to describe it:
Your model gives intelligence. NEES gives control.

If this sounds interesting

Comment here or message me if you’d like to test it, collaborate, or give feedback.
I’m especially interested in hearing from developers already working with OpenAI / Anthropic / Ollama-based products who want stronger control over behavior, memory, and execution flow.

Happy to share more details with serious early testers.


    Most people building “governance layers” get stuck not because of tech — but because the category itself isn’t legible.

    Right now this reads like:
    “control layer for LLMs”

    But buyers think in outcomes, not architecture.

    If someone can’t instantly map:
    → when they need this
    → why this over just prompts / wrappers

    they won’t even evaluate it.

    Seen similar products shift just by tightening:
    – category framing
    – naming (governance vs control vs runtime)
    – first-line explanation

    and suddenly adoption changes without touching the product.

    Curious — right now, what exact moment makes someone say
    “I need NEES” instead of just improving their prompt stack?


      Thanks — and yes, that’s exactly the distinction I’m trying to make.

      NEES isn’t meant as “better prompting” or just a wrapper around prompts.

      The goal is to move control out of the raw prompt layer and into a runtime layer that can govern intent interpretation, persona/mode behavior, memory scope, policy checks, and traceability.

      I’ve also built a live proof-of-concept on top of it here: https://naina.nees.cloud/

      That app is part of how I’m testing the idea in practice — not just as a concept, but as a working governed AI experience.


        That helps — but I think this is exactly where the gap still is.

        Right now it still reads like:
        “more structured way to control LLM behavior”

        The buyer moment is usually much sharper than that.

        Something like:
        → “we can’t trust outputs in production anymore”
        → “we need consistency across sessions/users”
        → “we need auditability / control before this breaks something”

        If that moment isn’t obvious, people default back to:
        “we’ll just improve prompts”

        So the shift might not be explaining the system better, but anchoring it to:
        the failure case where prompts stop being enough

        Curious — what’s the first real-world scenario where prompts clearly break, and NEES becomes the only viable option?


          That’s a fair push, and I think this is the real question.

          I agree many apps already implement pieces of this themselves — profile logic, memory, prompt conditioning, tool restrictions, etc.

          What I’m trying to separate is app logic from AI governance logic.

          Right now, in most products, those controls are usually scattered across prompts, backend rules, memory handlers, and feature-specific code. That works, but it becomes harder to keep behavior consistent, traceable, and reusable as the product grows.

          The idea behind NEES is not “apps can’t do this already.”
          It’s that these controls can be moved into a more central runtime layer so they’re governed more systematically across:

          • intent handling
          • persona/mode behavior
          • memory scope
          • policy enforcement
          • tool boundaries
          • audit / traceability

          So the value is less “a single app can’t replicate this” and more:
          once AI behavior becomes important enough, teams may not want those controls spread everywhere in ad hoc ways (see the sketch below).

          That’s the direction I’m exploring.
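
          To make “centralized rather than scattered” concrete, here is a rough sketch of what pulling those controls into one declarative config could look like. The schema is entirely invented for illustration; it is not the actual NEES config format.

          ```typescript
          // Invented illustration of one central governance config per assistant,
          // instead of the same rules re-implemented across prompts and handlers.
          type GovernanceConfig = {
            intents: { allowed: string[]; fallback: string };   // intent handling
            persona: { mode: "support" | "sales" | "internal"; tone: string };
            memory: { scope: "session" | "user" | "none"; maxTurns: number };
            policies: string[];                                 // policy enforcement
            tools: { allowed: string[] };                       // tool boundaries
            audit: { logPrompts: boolean; retainDays: number }; // traceability
          };

          const supportAssistant: GovernanceConfig = {
            intents: { allowed: ["billing", "shipping", "returns"], fallback: "escalate" },
            persona: { mode: "support", tone: "calm and concise" },
            memory: { scope: "session", maxTurns: 20 },
            policies: ["no-legal-advice", "no-pii-in-responses"],
            tools: { allowed: ["order_lookup", "refund_status"] },
            audit: { logPrompts: true, retainDays: 90 },
          };
          ```

          A config like this can be versioned and reviewed like any other artifact, which is the consistency and traceability argument in miniature.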


            Yeah — I think the break point is simpler than it sounds.

            Prompts work… until:
            → outputs start varying across users/sessions
            → one bad response actually matters (legal, finance, ops)
            → or teams need to explain why something was generated

            That’s usually when “just improve prompts” stops scaling.

            At that point, it’s less about better prompting and more:
            → “we need something we can trust + control consistently”

            If NEES is that layer, I’d anchor everything to that moment — not the system itself.

            Also — small thing: “NEES Core Engine” still sounds internal/technical. If this is about trust/control in production, the name should probably reflect that outcome, not the engine.


              That’s a very strong framing — and I think you’re right.

              The real transition point is probably not “this is a governance system,” but the moment when prompt-based control stops being reliable enough for production.

              Especially when:

              • outputs vary too much across sessions/users
              • one bad response has real business cost
              • teams need consistency, control, and explainability

              That’s the moment I’m trying to anchor NEES to.

              And I also think your naming point is fair — “NEES Core Engine” is probably closer to an internal architecture name than an outcome-facing product name.

              This is genuinely useful feedback. Thank you.


                Exactly — then the name should carry that moment instantly.
                “NEES Core Engine” feels internal, not something teams reach for when things break.
                If the trigger is:
                → “we can’t trust outputs in production”
                the name should reflect:
                → trust / control / consistency
                Otherwise it still feels like another AI infra tool.
                If you want, I can share a couple of directions that map to that moment 👍
