
We automated our business vetting with OpenClaw

I’ve always wanted to build something with OpenClaw.

But it had to be genuinely useful for our business, not just another toy app for managing tasks.

Today, our Kelviq vetting system runs entirely on OpenClaw, handling initial screening and speeding up customer onboarding.

The Problem:

After launch, we started getting a surge of requests for business verification. Business verification is mandatory for us, as we operate an MoR (Merchant of Record) platform. So every time a profile was submitted, we had to dive deep into the business to decide on approval. It was consuming a massive amount of time, causing constant context switching, and leaving us drained (especially when the outcome had to be a rejection).

So, we decided to automate it.

The Solution:

  1. A profile is submitted, triggering a message to Discord that tags our Kelviq bot.
  2. Our OpenClaw server picks up the request.
  3. The AI agent analyzes the website, compares it against our policies, and makes a decision.

The bot instantly outputs an approval or rejection, a detailed reasoning statement, the appropriate category, and a confidence score.

The Flow:

User input (URL) → Kelviq Backend → Discord → OpenClaw Agent → Web Scraping → Policy Check → VERDICT.
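The verdict step of the flow above can be sketched in Python. This is a minimal, hypothetical stand-in: in the real system the policy check is an LLM call (routed through OpenRouter), and `vet_business`, `PROHIBITED`, and the confidence values here are illustrative names and numbers, not the actual implementation.

```python
# Hypothetical sketch of the vetting verdict step. The real policy check
# is an LLM call; a simple keyword check stands in for it here so the
# example stays self-contained.

PROHIBITED = {"gambling", "weapons"}  # assumed policy list, for illustration

def vet_business(site_text: str) -> dict:
    """Return a verdict with decision, reasoning, and confidence score."""
    text = site_text.lower()
    if not text.strip():
        # Thin or blocked sites can't be judged confidently,
        # so they fall back to a human reviewer.
        return {"decision": "manual_review",
                "reasoning": "not enough site content to decide",
                "confidence": 0.0}
    hits = sorted(w for w in PROHIBITED if w in text)
    if hits:
        return {"decision": "reject",
                "reasoning": f"prohibited terms found: {', '.join(hits)}",
                "confidence": 0.9}
    return {"decision": "approve",
            "reasoning": "no policy violations detected",
            "confidence": 0.8}

verdict = vet_business("We sell handmade ceramics and home decor.")
print(verdict["decision"])  # approve
```

The useful design point is the three-way output: approve, reject, or explicitly punt to manual review when the input is too thin to support a confident call.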

The Stack:

– OpenClaw image deployed on DigitalOcean
– OpenRouter subscription
– Discord for messaging

Here is the video + story behind building this over a weekend.

Hope this is helpful! And happy to share more details if anyone’s building something similar.

Posted to Artificial Intelligence on April 2, 2026
  1. 2

    This is a really interesting use case. Automating something like verification makes a lot of sense, especially when it's repetitive and time-consuming.

    I like how you structured the flow — Discord as a trigger + AI agent doing the analysis is a smart setup.

    I'm currently experimenting with web scraping and automation too.

    Curious — how reliable are the AI decisions so far? Do you still need manual review in some cases?

    1. 1

It's very reliable so far. We manually checked the rejection scenarios, and the reason the agent suggested was exactly the right reason for rejection each time.

  2. 2

    This is interesting — especially the automation layer.
    I’ve been seeing similar patterns where the real bottleneck isn’t data, it’s coordinating people (vendors, contractors, etc).
    Curious if you’ve tried extending this beyond internal workflows into actually triggering external actions? thanks!

    1. 1

      No, we want a human to take final action.
And as we are running on OpenClaw, we are cautious about exposing our API tokens.

  3. 2

    Nice work – clean, practical, and actually useful. Love that you built it over a weekend and solved a real pain (vetting burnout is real). The Discord → OpenClaw → verdict loop is elegantly simple. Might borrow this pattern for our own onboarding flow. Thanks for sharing!

  4. 2

    Love the weekend build energy here! The stack choice is clever — using Discord as the messaging layer means you're not reinventing notification infrastructure from scratch. Quick question on the OpenRouter integration: are you routing to a specific model for the policy checks, or letting it pick dynamically based on cost/performance? I faced a similar tradeoff with my indie app (picking the right backend model for low-latency mobile use), and dynamic routing ended up cutting costs without hurting quality.

    The confidence score output is a really nice touch — it makes the AI decision auditable, which matters a lot when rejecting a business application. How are you handling edge cases where the confidence score is low? Does it fall back to a manual review queue?

    1. 1

I'm currently using the Gemini Flash model. It gave the best results for this use case, especially since our policy checks are fairly straightforward.

      And for now, we still do manual reviews for rejections.

  5. 2

    Automating the vetting step is smart — it's the part that scales worst manually. The real value isn't just speed, it's consistency: the same criteria applied every time without human variance or fatigue. What's the false positive rate looking like? That's usually where manual vetting still adds value at the edges.

    1. 1

      False positives have been relatively low so far, but we are intentionally conservative with rejections. If something feels even a bit uncertain, the agent just flags it for manual review instead of forcing a decision.

      So humans still handle the edge cases, the system just reduces how often we get there.

      1. 1

        Context-switching is the real productivity killer. The fact that it flags uncertain cases rather than forcing a decision is exactly right. You want the automation to be confident when it's confident, and honest when it's not.

  6. 2

    This is a great example of knowing where AI adds the most value. Business vetting is exactly the kind of repetitive, rule-based task that eats founder time without growing the business. Smart to keep humans in the loop for rejections too. I've been building similar AI-powered automation tools for founders and the pattern is always the same: find the process that causes the most context-switching, automate the 80% that's predictable, and flag the edge cases for manual review. How long did it take from idea to having the agent running in production?

    1. 1

      Thanks @dallo
I took it on as a weekend project, and everything was up and running in a day.

  7. 2

    this is a solid use case - giving the agent enough policy context that it can make a real yes/no call, not just flag for review. most automations stop at "here's the data, human decides" but you pushed through to the actual decision. how are you handling edge cases where the agent is confident but wrong? curious if you've had any surprising rejections that passed your manual check

    1. 1

      @ItsKondrat Thank you!

      We do encounter edge cases, when a website is blocked by a firewall or in a different language. In those situations, the agent flags them for manual review and provides a clear explanation.

      Our goal isn't to remove humans entirely, but to reduce wasted time. We still keep a human in the loop for rejections, but we're becoming increasingly confident in the system.

      1. 1

        the "flag for manual review with explanation" part is actually the most important bit - agents that fail silently are way harder to trust long-term. curious what % of cases end up flagged? if it is under 10% that is already solid for vetting at this kind of scale

        1. 1

It's quite low, around 5%.

          1. 1

            5% is actually surprisingly low - means the automation is catching most of the straightforward cases. curious though, of that 5% that gets flagged, how often does the human reviewer end up disagreeing with the agent's initial assessment? like is it flagging borderline cases correctly or sometimes just confused

            1. 1

              It's quite rare, and usually only happens when there isn't enough information on the website to make a confident decision.

              1. 1

                yeah that tracks - thin website data is basically the worst input for any confidence model. do you surface those low-confidence cases to users separately, like a 'needs manual review' queue, or just flag them inline?

  8. 2

What really caught my attention is how you tackled a genuine bottleneck instead of just creating AI for the sake of it.

    You took a smart approach by applying it directly to something that was already draining time and energy.

    1. 1

      Thank you, I'm glad you liked it! @MORPHOICES

  9. 2

Geo, this is a really smart and practical use of OpenClaw. Love seeing it used for actual business operations instead of just another demo.
Automating business vetting is such a painful, high-context task that most people still do manually. The fact that your agent can analyze the website, check against your policies, give a verdict + confidence score + reasoning in one go is genuinely impressive.
A few questions for you:

How accurate is the agent right now on rejections? Are you still reviewing every rejection manually or trusting it for most cases?
What was the biggest challenge when building the policy check part?
How much time are you saving per week now compared to before?

This is exactly the kind of real-world application I like seeing. Would love to hear more details if you're open to sharing.

    1. 1

      @vuleolabs Glad you liked it!

      1. So far, we're seeing ~95% accuracy in how the agent assigns scores. For now, we still keep a human in the loop for rejections, but we're getting increasingly confident with it.

      2. The main challenge is handling websites in different languages, and cases where a firewall blocks access. In those situations, the agent asks for manual verification.

      3. We are saving a lot of time because it's not just about the time taken to verify things, but also about reducing context switching for all three founders.

      1. 1

        Thanks for the detailed reply, Geo! 👍
        95% accuracy is actually really solid. Smart move keeping a human in the loop for rejections for now.
        The language + firewall issue makes total sense — those edge cases are always tricky for agents.
        Biggest win seems to be reducing context switching. That alone is worth a lot when there are three founders.
        Appreciate you sharing the real numbers!

  10. 2

    This is awesome - nice work!

  11. 2

Congrats on the launch! Automating incident comms while debugging is a real pain point. The flow from Discord to OpenClaw for instant verdicts is a very practical use of AI agents. Solid execution on the business vetting side. Looking forward to seeing how it scales.

    1. 1

      Thank you for the support.

  12. 1

    We automated business vetting with OpenClaw, turning a manual, time-consuming process into instant AI-driven approvals and rejections.

  13. 1

    “What’s something founders unknowingly do that damages their credibility with investors or partners?”
    Perceptaadvisory.com

