54 Comments

11 Weeks Ago I Had 0 Users. Now VIDI Has Reviewed $10M+ in Contracts - and I’m Opening a Small SAFE Round

11 weeks ago VIDI was basically an experiment.

No team.
No funding.
No traditional legal background.

Just a simple idea:

What if founders could better understand financial and legal risk in a contract before signing it?

Since then:

• 90+ contracts analyzed
• $10M+ in total contract value reviewed
• agreements ranging from ~$40K to $6.7M+
• usage across multiple countries
• early repeat usage starting to emerge

One thing that surprised me early:

The strongest usage patterns appear when contracts become materially important to a business decision.

That observation changed how I think about the company.

At first I thought VIDI was simply “AI contract analysis.”

Now I think there may be a much larger opportunity around contract risk visibility before signing.

Still extremely early.

But over the last few days I’ve:

• started speaking with angels/operators
• opened a small SAFE round
• spent a lot of time learning how investors think about workflow businesses vs AI features

One thing I’m learning quickly:

Investors care much less about “AI analyzing documents” and much more about:

• repeat behavior
• trust
• workflow positioning
• whether the product becomes part of real decision-making

Still building every day.
Still figuring things out in public.

Curious if other founders experienced a similar shift where user behavior changed the company thesis.

VIDI: https://vidicontract.tech

If anyone involved in early-stage investing, AI infrastructure, or workflow software finds this interesting, I'm happy to connect on LinkedIn as well - especially as I start conversations around the SAFE round.

LinkedIn: https://www.linkedin.com/in/meirambek-mukhametkaliuly-2b72272a4/

on May 9, 2026
  1. 1

    The $10M in reviewed contracts is the surface metric.

    What usually drives contract problems - and what the contract analytics probably won't surface directly - is the communication that happened before the document was ever generated. The misalignment usually starts with scope conversations that felt resolved but weren't, deliverable definitions both sides thought were agreed on but actually weren't, or invoice timing that surprised the client.

    Contract review catches the downstream artifact. The upstream problem is the conversation design.

    Congrats on the SAFE round. Curious what your users' most common reported problem was before signing: bad clauses they caught, or expectations already misaligned that the contract couldn't fix?

    1. 1

      Some of that is probably a bit too close to internal product and user insights for me to go into publicly right now.

  2. 1

    The thesis shift section is the most useful part of this post and almost nobody talks about it honestly. I am 46 days into building AgileTask and lived a version of the same thing in fast forward.

    Started thinking I was building "AI sprint planning". Shipped three full redesigns (Cadence, Canvas, Ember, all scrapped) before realizing the actual product was not a planner at all. It was a refusal engine. The thing users responded to was the hard cap of 3 goals, not the AI generating tasks. The constraint was the feature. The AI was the supporting cast.

    Your line "investors care much less about AI analyzing documents and much more about whether the product becomes part of real decision-making" hits the same nerve from the buyer side. The product gets real when it changes what the user decides to do, not when it does something clever for them.

    Curious — when the thesis shifted for you, did you change the landing page and pricing too, or did you keep both static until usage patterns were undeniable? I am wrestling with the same timing question right now.

    1. 1

      Appreciate this a lot 🙌 and honestly I think a lot of these shifts end up being gradual rather than one big obvious moment. Still learning and adjusting things in real time.

  3. 1

    The strongest signal here is not the $10M reviewed, it is repeat use around the moment of signing. For consumer apps I’ve seen the same thing: people do not care that it is AI, they care whether the output is trustworthy enough to use at the decision point. I’d track second high-stakes use harder than raw contracts reviewed.

    1. 1

      Appreciate the perspective 🙌 still learning a lot in real time.

  4. 1

    The shift you described -- from 'AI analyzing documents' to 'contract risk visibility before a business decision' -- is actually the more durable positioning. 'AI analyzing documents' is a feature description. 'Risk visibility before signing' is a job-to-be-done. Products built around jobs-to-be-done survive AI commoditization because the job stays constant even as the implementation changes.

    The investor pattern you're noticing (workflow businesses vs. AI features) maps to this: investors in AI features are betting on your model being better. Investors in workflow businesses are betting on your workflow being stickier than whatever model you're using. As frontier models continue commoditizing, the workflow bet ages better.

    For solo founders specifically, the 'contracts becoming materially important' trigger you mentioned is interesting -- for a one-person operation, any contract above ~$50K is high-stakes enough to warrant the review overhead that VIDI removes. The solo founder doesn't have in-house counsel and can't justify $500/hr outside counsel on every engagement. That's a structural wedge that doesn't exist at larger companies who have legal teams.

    What's the typical profile of your 90 users -- solo founders, small teams, or a mix? Curious whether the repeat usage is concentrated in a particular segment.

    1. 1

      Appreciate the perspective 🙌 still very early, so I’m intentionally trying not to lock myself too hard into public assumptions around segments or usage patterns yet.

  5. 2

    Anyone here who raised through a SAFE at the pre-seed stage - what ended up mattering most in investor conversations?

    Still learning a lot in real time around positioning, traction, and how early investors evaluate workflow businesses vs AI features.

  6. 1

    Nice — that usually means your link flow got simpler or your tracking improved.

  7. 1

    11 weeks zero to $10M reviewed is a real number, and the part nobody asks about is what the failure mode looks like when the model gets a clause wrong. Contract review has the property that catching the bug is harder than writing it, especially when the user is not a lawyer. How are you handling the cases where the model summary reads confident and is materially wrong about a liability cap or a renewal clause? That is the question your enterprise pilots will live or die by. The MRR curve is the easy part.

    1. 1

      Appreciate the perspective 🙌 but that gets pretty deep into areas I’d rather keep internal for now. Definitely something I think about a lot already.

  8. 1

    The useful part of this story is that the traction came from a very specific painful workflow, not from a broad AI pitch. Reviewing contracts has an obvious cost of delay and mistakes, so the value is easier to feel.

    For early founders, that is probably the lesson: pick a problem where the user can immediately name what bad outcomes cost them. “This saves time” is weak. “This prevents a missed clause, delayed deal, or expensive mistake” is much stronger.

    Curious what the first few conversations sounded like before the product had proof. Were people already describing the pain in urgent language, or did the urgency only show up after they saw the demo?

    1. 1

      Appreciate the perspective 🙌 but I probably can’t say too much about the early conversations specifically. A lot of the learning there was pretty contextual and evolved over time.

  9. 1

    Reviewing $10M worth of contracts in just 11 weeks is an insanely impressive number for a solo project. I’m curious, in the very early stage when you had 0 users, where did you find your first customers and how did you build enough trust for them to hand over such sensitive contracts to a brand new tool? Was it mainly through your personal network or did you have a specific cold outreach strategy?

    1. 1

      Honestly I think a lot of it depends on the founder, the market, timing, positioning, and how the product is communicated early on. Different products build trust in very different ways.

      For me it was more of a gradual process of conversations, iteration, and learning in real time rather than one specific strategy.

  10. 1

    $10M reviewed in 11 weeks on a vertical AI legal product is the kind of traction that justifies a small SAFE if you are intentional about why you are raising. Two questions worth being clear on before the round:

    1. What is the next 6 months of work that you cannot do without the capital? If the answer is "marketing", the capital efficiency question flips: would the same dollars be better spent on a content team or a sales hire than on burn runway?

    2. What does success look like 18 months from raise? "Big" is not enough. The cap table you take today is the optionality you have tomorrow.

    The vertical AI for legal review space is going to get crowded fast. Speed to defensibility matters more than speed to scale.

    1. 1

      Appreciate it 🙌 definitely thinking carefully about those questions already. Still early, but trying to stay intentional about how the startup develops from here.

  11. 1

    Yes, literally this week for me. The pattern was the same shape. I was framing my product as a multi-style translation helper, but two peer-founder threads on IH surfaced the actual workflow it owns: translation that happens inside the conversation, instead of leaving for ChatGPT or Google. Helper to workflow-owner is the same category jump as your AI analysis to risk visibility before signing.

    What's interesting in your version is that the trigger was usage data. In mine, the trigger was direct conversation with two peer founders who could see the gap between my pitch and the actual job. Usage data probably tells you the moment matters. Peer feedback can tell you why your current language doesn't claim that moment. Both signals work, but they catch different stages of the same shift.

    The investor framing you're getting also tracks. Repeat behavior in a workflow tool is the moat, the AI is the substrate. Workflow ownership is harder to displace because the user stops evaluating the tool and starts evaluating the absence of it.

  12. 1

    “The strongest usage patterns appear when contracts become materially important to a business decision” is probably the most important line in the post.

    That’s the kind of insight you usually only discover after launch. The product starts as “AI analysis,” but user behavior reveals the real value is risk confidence before commitment.

  13. 1

    The thesis shift you described - feature ("AI analysis") -> moment ("risk visibility before signing") - is the cleanest articulation of this pattern I've seen in a while.

    Watching for the same shift on my side. Building a cloud-bill anomaly detector (pre-launch) and the question I keep wrestling with is whether the value is "tells you when costs spike" or "protects you from the morning you'd otherwise wake up to a $4K surprise bill." Your post crystallizes that those aren't synonyms - the first is a feature, the second is anxiety reduction at a high-stakes moment, and the buyer probably only pays for the second.

    One adjacency worth thinking about: the same emotional shape ("about to commit, want to know what I'm missing") shows up at renewal and auto-renew, not just at first signing. Different ICP - ops or finance, not founders - but the same wedge, and probably your second product not your first. The moat conversation with investors gets easier if you can sketch the line from "first-signing risk" to "every recurring commitment risk" without losing focus.

    The repeat-usage-per-company metric Greg flagged is the right north star - would be curious to see that broken down by contract-value tier in a future post if you end up sharing.

    1. 1

      Appreciate it 🙌 and interesting framing on the cloud-bill side as well. Definitely agree that the emotional layer behind the workflow matters a lot more than I initially expected.

  14. 1

    This is a strong shift honestly.

    “AI contract analysis” sounds like a feature, but “risk visibility before signing” feels much closer to the real pain. Especially for founders, the stressful part is not reading the contract, it’s the fear of missing something important before making a big decision.

    Also interesting that usage increased when the contracts became more important. That feels like a good signal that people don’t just want summaries, they want confidence at a high-stakes moment.

    I’m building something in a totally different space, but I’m noticing a similar pattern — the product idea usually starts as a feature, then users slowly show you what the real problem is.

    Congrats on the progress. 11 weeks is crazy fast.

    1. 1

      Appreciate it 🙌 and yeah, that shift in how users actually relate to the product has probably been the most interesting part so far. Good luck with what you’re building too.

  15. 1

    The thesis shift you described is one of the most valuable things that can happen early — and the fact that you noticed it from usage data (not just intuition) is a good sign.

    One thing worth building now before you scale: a structured way to track that contract data internally. 90 contracts is still manageable in a spreadsheet, but once you're at 500-1000+ reviews, you'll want a proper data layer to surface patterns — which contract types generate the most repeat usage, which risk categories drive the most "aha" moments, which deal sizes convert to retention. That data will also be gold for investor conversations.

    Investors asking about repeat behavior and workflow stickiness are really asking: do you have data to prove this? The founders who can answer with actual query results win those rooms.
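    As a rough sketch of the kind of query I mean (the schema and numbers here are hypothetical, just to show the shape of the metric):

    ```python
    from collections import Counter
    from datetime import date

    # Hypothetical review log: (company, contract_type, review_date)
    reviews = [
        ("acme", "MSA", date(2026, 3, 1)),
        ("acme", "NDA", date(2026, 3, 20)),
        ("beta", "MSA", date(2026, 3, 5)),
        ("gamma", "SOW", date(2026, 3, 7)),
        ("gamma", "SOW", date(2026, 4, 2)),
    ]

    # Count reviews per company; companies with 2+ reviews are "repeat" users
    per_company = Counter(company for company, _, _ in reviews)
    repeat = {c for c, n in per_company.items() if n >= 2}
    repeat_rate = len(repeat) / len(per_company)

    print(repeat_rate)  # fraction of companies that came back for a second review
    ```

    Same idea works grouped by contract type or deal-size tier once the log lives in a real table.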

    Solid traction for 11 weeks. Keep building in public — it's working.

    If you ever get to the point of querying your contract database for these insights, I have a free pack of SQL diagnostic scripts that helps identify data quality issues early: https://growthwithshehroz.gumroad.com/l/psmqnx

    1. 1

      Appreciate it 🙌 and thanks, but I’m all set for now.

      1. 1

        Totally fair — $10M reviewed in 11 weeks is remarkable traction on its own. Rooting for a strong round. If investor conversations ever start touching on the data layer behind those repeat-usage metrics, my SQL query optimization guide covers exactly that kind of reporting work → https://growthwithshehroz.gumroad.com/l/gwiow

  16. 1

    Really interesting progression, especially in only 11 weeks.

    I think your observation about contracts becoming important “decision moments” is very insightful. It feels much bigger than simple document analysis — more like helping founders reduce uncertainty before making high-stakes commitments.

    Also agree with your point about investors focusing more on repeat workflow behavior than the AI itself. A lot of products seem to be learning that lesson right now.

    Curious: what type of users are showing the strongest repeat usage so far?

    1. 1

      Appreciate it 🙌 still very early, so I’m mostly just learning, observing patterns, and continuing to iterate as usage evolves.

  17. 1

    The shift from "AI tool" to "risk visibility before a business decision" is a real positioning upgrade — that's a much stickier wedge.
    The investor feedback you're getting tracks: repeat behavior in workflow tools is the moat, not the AI layer. Once a founder trusts a tool before signing a $500K agreement, that's not a feature — that's a habit.
    Curious how you're thinking about the trust side as you scale. Human review layer? Audit trails? That might be the next thesis evolution. 🧠

    1. 1

      Appreciate the insight 🙌 still very early, so mostly just observing patterns and continuing to learn from usage over time.

  18. 1

    This is really impressive, congratulations Meirambek! Can I make a suggestion? As a solo founder providing a contract analysis tool, I would invest a little time to flesh out your Terms of Service and Privacy Policy; I took a quick look at them and they appear to be very bare-bones.

    It's actually pretty straightforward to have AI help you think through what you need to be buttoned up, based on your product's features, what data you're handling, where your users are located, etc. Or even better, hire a lawyer to take a pass through them.

    Best of luck!

    1. 1

      Appreciate it 🙌 definitely fair feedback and something I’m aware of as things continue developing.

  19. 1

    The shift you are describing (from 'AI contract analysis' to 'contract risk visibility before signing') is the right framing, and most founders never make that move because they get attached to the technical positioning of the first version. You went from feature framing to workflow framing, that is the move investors actually pay for.

    A few things worth thinking about as you open the SAFE, from someone writing checks into early-stage workflow businesses at Henson Venture:

    The metric that moves investor conversations is repeat usage per company, not 90 contracts analyzed. If you can say '14 of our first 30 customers came back within 30 days for a second contract,' that is your slide. Total contract value reviewed is impressive but it is a vanity number to a B2B investor evaluating sticky workflow.

    Decide whether you are selling to the GC/legal function or to the founder/operator who signs the contract. They are different products. The first scales but is a long sale. The second has impulse adoption but tops out around Series A. Your repeat usage data will tell you which one is real for VIDI, do not pick yet.

    On the SAFE: be specific about what the round funds. 'Hire 1 design partner AE and run 12 paid pilots' beats 'hire team and grow' every time at this stage. Investors at pre-seed are buying the next 6 months, not the vision.

    Happy to look at the deck if useful, contracts is an interesting category right now.

    1. 1

      Appreciate that Greg.

      I submitted through your application form on Saturday and uploaded the original deck there. Afterwards I realized I forgot to include one additional presentation, so I sent that separately by email as well.

      Honestly most of what I write publicly is probably just the tip of the iceberg - a lot of the more detailed thinking around the workflow patterns, positioning shifts and repeat operational behavior is already inside the decks.

      Really appreciate you taking the time to go through everything.

  20. 1

    What’s interesting is that VIDI may not need to win “all contract review” — just the small set of contracts where signing risk can materially change the business. That’s a much stronger wedge than generic AI analysis. Have you noticed a repeatable trigger yet: contract value, counterparty type, or clause complexity?

    1. 1

      Appreciate the insight 🙌 still very early, so mostly just observing patterns and continuing to learn from usage over time.

  21. 1

    When you reframe from "AI reads your documents" to "contract risk visibility before you sign," you go from being a feature to owning a moment in someone's decision process. Much harder to displace.

    Same with usage suddenly spiking when contracts become material to the user. That jump in usage when a contract is a big decision is also your retention number. The product isn't sticky for the little decisions, but it is the infrastructure for your customers' biggest decisions - which is actually a better business to be in.

    Been enjoying the shift through this thesis with Specc. Started as a ticket writing tool, realised the actual value is owning the feedback->outcome loop. User behaviour always tells you what the product actually is.

    1. 1

      Appreciate that 🙌 and completely agree that user behavior tends to clarify the real value layer over time. Interesting shift with Specc as well.

  22. 1

    Hi Meirambek! Awesome work for just 11 weeks.

    When launching your MVP, where did you first start talking about your idea? How were you able to get enough feedback to actually go through with what you were building, and get enough visibility? I just built a landing page speaking about my idea before committing to it, but I'm having a hard time finding the best place to seek feedback on it.

    1. 1

      Honestly I think it depends a lot on the startup, the market, and the founder’s communication style. Different products get traction in very different places.

      For me it was mostly about testing positioning, talking to a lot of people consistently, and paying attention to which conversations and signals kept repeating over time. Different channels work for different markets.

      Happy to chat more - I’ve probably had 2,000+ startup/founder conversations over time and learned something useful from almost all of them.

  23. 1

    This is a fantastic real-world validation story — from 0 to $10M+ in reviewed contracts in 11 weeks is seriously impressive.

    Quick question: How are you handling the trust gap? Founders putting a $6.7M contract through an AI tool is a big leap of faith — did the early users come from personal networks, or did you do something specific to de-risk that first review?

    Love that user behavior is reshaping your thesis. That's the sign of a founder who's actually listening.

    Wishing you well on the SAFE round.

    1. 1

      Appreciate it - means a lot 🙌
      Still very early, but definitely learning a lot in real time as the product evolves.

  24. 1

    Really impressive trajectory in such a short time, especially the speed at which you’re moving from early usage signals to a structured raise.

    What stood out to me is how quickly things shift once real contract value / revenue context enters the picture. At that stage, it feels less like “early traction” and more like you’re starting to validate whether this becomes infrastructure vs just a useful tool.

    I’ve noticed a similar pattern in other early-stage products: once usage starts connecting to real financial risk or decision-making, the feedback loop changes completely, users stop evaluating features and start evaluating trust.

    Curious how you’re thinking about timing the SAFE relative to that signal maturity, are you optimizing for speed to capital, or waiting for a more stable usage pattern to emerge first?

    1. 1

      Appreciate that 🙌 Still very early, so right now I’m mostly focused on learning, improving the product, and understanding where the strongest long-term fit actually is.

  25. 1

    Strong signal on usage spiking for high-stakes contracts—that’s where this shifts from analysis to decision-making.

    Also agree: investors care more about workflow + repeat use than “AI features.”

    Curious how you’re moving into the pre-signing stage.

    1. 1

      Appreciate it 🙌 still figuring that out in real time honestly. A lot of the learning so far has come from watching where trust and repeat behavior naturally start forming.

  26. 1

    Huge progress for just 11 weeks - turning a simple experiment into real usage and real contract volume that fast is impressive.
    Love how the thesis evolved from “AI analysis” into workflow + decision infrastructure. That shift usually comes from actually listening to user behavior, not hype. Congrats 👏

    1. 1

      Appreciate it 🙌
      That shift honestly surprised me too. The more usage came in, the more the behavioral side started standing out over the AI layer itself. Still very early, but definitely changing how I think about the product long term.

      1. 1

        yeah, that’s usually a strong signal - when user behavior starts shaping the product more than the original AI idea

  27. 1

    This comment was deleted a day ago.

    1. 1

      Appreciate it 🙌 interesting shift on your side too, and definitely agree that specific user feedback can completely change how you see the product.
