
I can't write a single line of code. I built a multi-AI research platform anyway. Here's what 6 weeks looked like.

I'm not a developer. I don't know Python. I can't read JavaScript. I've never opened a terminal before March 2026.

Six weeks ago, I had $10K and an idea: what if you could make multiple AI models work together on one research question — not just ask ChatGPT and hope for the best?

Today I have a live SaaS platform with 10 users, zero paying customers, and a product that genuinely works.

I built the entire thing by talking to Claude.


The idea that wouldn't leave me alone.

I'm from Christchurch, New Zealand. I trade options for a living — ETH iron condors on Deribit, if that means anything to you.

Trading taught me one thing: no single source of information is reliable. You cross-reference. You verify. You look for what one analyst missed that another caught.

So why do we accept a single AI model's answer as "research"?

I wanted something that worked like a team of analysts:

  • One gathers context and asks clarifying questions
  • One searches the live web for real-time data
  • One writes the deep analysis
  • One does adversarial quality checks
  • One synthesizes everything into a final report

Five stages. Five different AI models. One report.
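To make the staged handoff concrete, here's a minimal sketch of how a pipeline like this could be wired. Everything in it is illustrative: the `call_model` helper and the model-to-stage mapping are made up for the example, not the actual SANICE code (the post never says which model runs which stage).

```python
# Illustrative 5-stage, multi-model research pipeline.
# `call_model` is a hypothetical stand-in for each provider's API client.

from dataclasses import dataclass, field

@dataclass
class ResearchRun:
    question: str
    notes: dict = field(default_factory=dict)

def call_model(model: str, role: str, payload: str) -> str:
    # Placeholder: a real system would call the provider's API here.
    return f"[{model}:{role}] {payload[:60]}"

# Assumed mapping, purely for the sketch.
STAGES = [
    ("context",    "gpt-4o"),   # gather context, ask clarifying questions
    ("web_search", "grok"),     # pull live web data
    ("analysis",   "claude"),   # write the deep analysis
    ("adversary",  "gemini"),   # adversarial quality check
    ("synthesis",  "claude"),   # synthesize the final report
]

def run_pipeline(question: str) -> ResearchRun:
    run = ResearchRun(question)
    payload = question
    for role, model in STAGES:
        # Each stage's output becomes the next stage's input.
        payload = call_model(model, role, payload)
        run.notes[role] = payload
    return run

report = run_pipeline("What drove 2023 inflation?")
print(list(report.notes))  # ['context', 'web_search', 'analysis', 'adversary', 'synthesis']
```

The key design property is that each stage only sees the previous stage's output, which is what makes the adversarial-check stage meaningful: it critiques work it didn't write.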


What "building with AI" actually looks like when you can't code.

It's not magic. It's exhausting.

I'd describe what I wanted. Claude would write the code. I'd deploy it. It would break. I'd paste the error back. We'd fix it. Repeat 200 times a day.

I learned what a "migration" is by accidentally breaking my database at 2am.

I learned what "RLS policies" are because my API returned empty arrays for three hours and I couldn't figure out why.

I learned what "Cloudflare proxy" means because my server-side renders kept failing and nobody could explain why until we bypassed it.

I don't understand most of the code in my repo. But I understand every architectural decision, every data flow, every trade-off. I'm the product manager. The AI is the engineer.


What I actually built.

SANICE AI has three products:

Glass — the research pipeline. You ask a question. Five AI models, among them GPT-4o, Gemini, Grok, and Claude, collaborate through a 5-stage pipeline. You get a 3,000+ word research report with charts, citations, and follow-up chat. Under 5 minutes.

Pulse — automated monitoring. Set up alerts on topics from your research. Get daily email digests when something changes.

Collective — multi-model chat. Talk to different AI models in one interface.

Here's an example report it auto-generated: https://sanice.ai/research/macro/research-the-key-inflation-concerns-and-drivers-observed-throughout-2023


What the numbers look like.

  • 10 registered users (all friends and family)
  • 0 paying customers
  • 0 organic users
  • ~$18/month in AI costs
  • $10K budget, ~$3K spent so far
  • Stack: FastAPI, Next.js, Supabase, Railway, Vercel, Cloudflare

I'm not going to pretend these are good numbers. They're not. But the product works, and I needed to stop building and start talking to strangers. This post is part of that.


What AI changed for me.

It let me play a game I had no ticket to.

I couldn't have built this two years ago. Not because the idea didn't exist, but because the barrier to entry was "learn to code for 2 years first." AI removed that barrier entirely.

It changed the risk calculation.

$10K and 6 weeks is survivable. $10K and 2 years is not. That's the difference between "I'll try it" and "I'll think about it forever."

It made me a different kind of founder.

I don't debug code. I debug decisions. "Should we use Redis or Supabase for rate limiting?" "Should Stage 4 use Gemini or Grok for quality checks?" Those are the questions I spend my time on. The AI handles the implementation.
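For readers wondering what the Redis-vs-Supabase question even means: rate limiting mostly comes down to where a per-user counter lives. Here is a hedged, in-memory sketch of one common pattern (fixed-window counting); it is deliberately neither Redis nor Supabase, and the class name and parameters are invented for illustration.

```python
# Illustrative fixed-window rate limiter. In a real deployment the
# counter dict would live in Redis or a database row, not in memory.

import time
from collections import defaultdict

class FixedWindowLimiter:
    def __init__(self, limit: int, window_s: int = 60):
        self.limit = limit          # max requests per window
        self.window_s = window_s    # window length in seconds
        self.counts = defaultdict(int)

    def allow(self, user_id: str, now=None) -> bool:
        now = time.time() if now is None else now
        window = int(now // self.window_s)  # bucket time into windows
        key = (user_id, window)
        if self.counts[key] >= self.limit:
            return False
        self.counts[key] += 1
        return True

limiter = FixedWindowLimiter(limit=3)
print([limiter.allow("u1", now=0.0) for _ in range(4)])  # [True, True, True, False]
```

The "Redis or Supabase" decision is exactly the product-level question: Redis keeps this counter fast and ephemeral; a Postgres table (Supabase) keeps it durable and auditable but slower.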


What AI didn't change.

Nobody knows I exist.

This is the part they don't tell you. You can build the best product in the world in 6 weeks, and if nobody knows about it, it doesn't matter.

I spent 6 weeks building. I should have spent 3 building and 3 talking to people.

Reddit blocks my posts (spam filter). Twitter has 0 followers. LinkedIn gets 12 views. Google is slowly indexing my research pages. But "slowly" doesn't pay the bills.

Judgment is still 100% human.

What to build. What to cut. How to price. Who to build for. AI has zero useful opinions on any of these. It will happily build the wrong thing perfectly.


What I'm doing now.

Yesterday I built a content engine that auto-publishes 2 research reports daily — one trending, one evergreen. The idea is that if I can't find users, maybe Google can find them for me through SEO.

It costs $18/month in AI credits. If it works, it's the cheapest marketing channel possible. If it doesn't, I'll know in 30 days.
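As a sanity check, the $18/month figure lines up with the roughly $0.30-per-report cost quoted later in the thread, assuming two auto-published reports a day:

```python
# Rough cost check for the auto-publishing engine, using figures from the post.
reports_per_day = 2
cost_per_report = 0.30   # lower end of the $0.30-0.35 range quoted in the comments
days = 30

monthly_cost = reports_per_day * days * cost_per_report
print(f"${monthly_cost:.2f}/month")  # $18.00/month
```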


My honest questions for this community:

I'm at the "I built it and nobody came" stage. If you've been here:

  1. What actually worked to get your first 10 strangers?
  2. Would you use a multi-AI research tool? For what?
  3. Am I solving a real problem, or did I build something cool that nobody needs?

I'd rather hear "this isn't useful" now than discover it in 6 months.


sanice.ai — free tier, no card required

posted to Building in Public on April 12, 2026
  1. 1

    Coming from a trading background to building a 5-stage AI pipeline in 6 weeks is an incredible feat of 'prompt engineering' as a PM. To your question about the first 10 strangers: since your tool is great for deep research, have you tried posting a few of your auto-generated reports on niche subreddits (like r/investing or r/macro) as 'free value' posts? That usually draws in the first few non-friends.
    The 'adversarial quality check' stage is a very real solution to the hallucination problem. You should definitely submit SANICE to this competition. This might sound interesting 👇
    You have an idea
    $19 entry
    🏆 Tokyo trip + hotel
    💰 $500
    Round just opened 👉 tokyolore.com

    1. 1

      Thanks for the niche-subreddit suggestion, that's actually the cleanest play I've heard yet. Going to try it this week. On the Tokyo competition — appreciate the nudge but I'm going to pass and stay focused on shipping. Not the right stage for me to spend $19 on anything that isn't servers or users.

  2. 1

    "nobody knows I exist" this is the real wall. not the building part.

    working on something specifically for this problem. people doing real work without the right network, showing up for each other voluntarily. search unsponsored io if curious.

    1. 1

      exactly. It's the part nobody warns you about. Will check out unsponsored.io. If you ever want to try what I built, sanice.ai is free. No card needed. Would be curious what you think of it from a fellow "showing up for each other" angle.
      Regarding unsponsored.io, who are the people behind it, and how do they support serious people?

  3. 1

    Respect for shipping and being honest about the numbers. The "stop building, start talking to strangers" line really hit me — I had the same wall with my own indie app (a lightweight memo tool). I spent weeks polishing features nobody asked for before actually showing it to 10 non-friends, and the feedback loop changed everything: half the features I was proud of got cut, and a tiny detail I almost skipped became the thing users mention most. One question — for the 10 users you have now, how often are they actually running reports? Weekly active use vs. "signed up once" is the signal I'd watch before touching pricing or adding more models to the pipeline. What does retention look like after week 1?

    1. 1

      This comment genuinely stopped me. You're right, weekly active use is the real signal, and I haven't been tracking it properly. Honest answer: most of my 10 users signed up once and haven't come back. That's the real data, and I've been avoiding looking at it.
      The bit about a small detail becoming the thing users mention most — what was it for your memo tool? Curious what it taught you.
      If you want to try SANICE on anything (it's free), I'd value an outside opinion on what feels unnecessary in the output. That's the feedback I keep missing.

  4. 1

    The cross-referencing idea is solid. Single-model answers always have blind spots, and anyone who's done real research knows you never trust one source. That trading background clearly shaped the architecture in a good way.

    What I'd push back on a little: the fact that Claude wrote all the code doesn't mean you don't need to understand it eventually. I write code daily and still use AI for maybe 60-70% of the output, but the debugging, the architecture decisions, the "why is this slow" moments - that's where understanding matters. You'll hit a wall at some point where Claude gives you five different solutions and you won't know which one is right without some technical intuition.

    Not saying that to discourage you. 10 users in 6 weeks with zero coding background is genuinely impressive. Just that the next phase (scaling, paying customers, reliability) tends to require deeper understanding of what's under the hood.

    Curious about the cost per report with 5 different AI models running. That seems like it could get expensive fast.

    1. 1

      The pushback is fair and I needed to hear it. You're right that vibe-coding hits a wall. I'm already seeing it in small ways, where Claude gives me three different solutions and I can't judge which is actually right.
      On cost per report: the 5-model pipeline costs me about $0.30-0.35 per report depending on complexity, but your point about scaling is real. Caching identical questions is something I haven't built yet — that's on my list.
      If you want to pressure-test it yourself, sanice.ai is free — would value your eye on where the architecture is going to break first.

  5. 1

    I love your story a lot, I'm also practically in the same boat as you as someone starting out with an app and some rubbish stats at the moment... what actually got me started into building my app was one Instagram reel that just had me hooked on with how easy the reel creator made it sound, I am about a week into building my budgeting app called Trakly and I have genuinely learnt that not only is there levels to this... but I should really stop watching reels that get my hopes up super high like the previous one haha.

    I haven't given up though and would like to continue and as much as I would love to give any advice, all I can really say is to keep pushing and do not give up on projects like this one you are currently working on. Even if you found out later what you're building isn't useful to anyone at least you tried, and you can still try again and look into something else that could really work out well for you. So that's all I really have to say on my end, best of luck on your journey to working your way up with what you've built, I truly hope you make it far in life :)

    1. 1

      Appreciate this more than you know. I've watched those same reels and had the same "wait, is it really that easy?" crash afterwards. It's not. But keep going — a week in is the hardest part.
      When Trakly is live, send me the link. I'll try it and give you honest feedback. And if you ever want to run a strategic question through SANICE (sanice.ai, free), it might help you think through pricing or positioning — that's what I built it for, because I had the same "what do I even charge?" freeze.

  6. 1

    This is incredibly inspiring! As someone who's also building in the AI space, I love the multi-model approach you've taken. The idea of having different AI models play different roles - context gathering, web search, analysis, quality checks, and synthesis - is brilliant. It mirrors how actual research teams work.

    The 'build by talking to Claude' journey resonates deeply. There's something powerful about being able to focus on the product vision while AI handles the implementation details. Sure, you hit those 2am database migration issues, but you also ship faster than ever before.

    Your trading background giving you the insight about cross-referencing multiple sources is a great example of how domain expertise translates into better product design. Would love to hear more about how you're planning to convert those 10 users to paying customers - that's often the trickiest part for AI tools!

    1. 1

      Thanks. The domain-expertise-translating-into-product-design observation is something I hadn't put into words, but it's exactly how Glass was born. Cross-referencing wasn't a feature idea, it was just how I already worked.
      Honest answer on converting 10 users to paying: I haven't figured it out yet. I think my homepage is the bottleneck, not the product. I'm pausing homepage rewrites this week and focusing on actually talking to the users who signed up.
      If you want to try it on something real, sanice.ai is free.

  7. 1

    Man, reading this felt like reading my own diary from the last couple months. I'm in almost the exact same spot - built an AI-powered SEO tool, talked to Claude for hundreds of hours, learned what database migrations and RLS policies are the hard way, and now I'm staring at the "nobody knows I exist" problem.

    Your line about spending 6 weeks building when you should've spent 3 building and 3 talking to people - that's the lesson I keep learning and re-learning. It's so much easier to add one more feature than to cold DM someone and ask if they'd actually pay for what you built.

    For your question about what works to get first strangers - I'm literally figuring this out in real time, but the thing that's shown the most promise so far is finding people who are already complaining about the specific problem in niche communities. Not "people who might be interested someday" but people who posted this week about the exact pain you solve. The conversion rate on those conversations is wildly different from generic outreach.

    Also your auto-publishing content engine idea for SEO is smart. We did something similar and it's the one thing that feels like it compounds over time even when you're not actively pushing it. 30 days is the right timeframe to test it.

    Good luck from a fellow "I can't believe I'm actually doing this" founder.

    1. 1

      This whole reply felt like reading a response I wish I'd written. "It's so much easier to add one more feature than to cold DM someone." I've been doing exactly that for six weeks.
      The "find people already complaining about the specific problem this week" angle is the best tactical advice I've gotten in this thread. Going to try it in r/algotrading and r/options this weekend.
      Would genuinely love to hear more about what you're building — if you ever want to swap notes properly, DM me. And if you want to try SANICE on a strategic question (sanice.ai, free), I'd value your eye on it as a fellow "I can't believe I'm doing this" founder.

  8. 1

    i feel you, i'm at the exact same spot as you now, how do i get people in the app and make them stay?
    I wish you success!!

    1. 1

      Wish I had an answer for you, but I'm in the exact same spot. The only thing I'm trying this week: going into niche communities where people are already asking the kind of question my tool answers, and replying with actual output instead of a pitch. Will let you know if it works.
      If you want to try SANICE on anything (sanice.ai, free), I'd genuinely like your feedback as someone in the same trench.

  9. 1

    Firstly, you are one of the only people who proved something I am also learning as someone who is building a SaaS (in the "taste your own food before you sell it" stage) but sucks at coding.

    Truth is: You don’t need a massive budget or a CS degree. Like every business: You just need 1 idea that genuinely solves a problem and makes someone’s life easier/ better. And if you could build what you did without letting the old ‘only coders can build AI’ myth hold you back, I believe you can find the clients too.

    As a copywriter my biggest practical tip is: Identify EXACTLY whose pain you can solve and sell your AI as a solution to a problem that directly impacts their life in a good way rather than as an AI. Because people only care about what you can do for them not what you can do in general.

    Other than that I would recommend finding the streams of distribution (social media, outreach, building an email list, collecting testimonials, asking friends and family to recommend it to anyone that struggles with the problem you solve).

    Track metrics on your input, output and ultimately what sources actually converted to users. Do it every month, repeat what works, let go of what doesn’t and stay consistent.

    You got this!

    1. 1

      "Sell as a solution to a problem that impacts their life, not as an AI." Saved. That's the exact mistake I've been making. My homepage currently says what SANICE is instead of what it does for someone.
      The copywriter framing (input / output / conversion sources monthly) is something I haven't set up properly. Going to build that before I rewrite anything else.
      If you ever want to try it (sanice.ai, free), I'd love honest copywriter feedback on the output: is it readable, is it useful, what would you cut?

  10. 1

    I'm building a UK cleaning marketplace and currently at the "I built it and nobody came" stage.

    1. What worked to get your first 10 strangers?
      I'm using AI to create short-form reels across social media.
      Things like:
      Finding trusted local cleaners
      Cleaning tips
      Local service discovery
      "Book a cleaner near you" style content
      Then posting across multiple platforms daily.
      The goal is simple:
      Use volume and consistency as a free distribution strategy instead of paid ads.
    2. Would I use a multi-AI research tool?
      Yes — I'm already heavily using AI for:
      Marketing ideas
      SEO content
      Growth strategy
      Messaging
      Product direction
      I'm solo building, so AI is basically my growth team.
    3. Am I solving a real problem?
      I believe yes.
      I run a cleaning business and customers constantly ask:
      Do you know a reliable cleaner?
      Can you recommend someone local?
      There's still friction around trust, availability, and transparency.
      I'm trying to simplify that.
      Right now I'm testing:
      AI content
      Organic growth
      Free distribution
      Trying to get the first 10, then iterate from there.
    1. 1

      Volume and consistency as distribution: that's underrated because it's boring. Respect for running it. The UK cleaning marketplace angle is strong because trust is genuinely broken in that category; I remember hating the "who do I call?" problem when I moved.
      If you want to try SANICE on a strategic question (sanice.ai, free) — something like "go-to-market for a trust-based local marketplace" — would be curious what you think of the output. And when your site is live, send me the link. Happy to try a booking and give you honest feedback.

  11. 1

    The statement 'I don't debug code, I debug decisions' appeals to me. Do infrastructure and setup ever start to slow you down as things get more complicated, or is your existing setup managing it well?

    1. 1

      Honest answer: infrastructure starts slowing me down the moment something touches three systems at once. Supabase RLS + FastAPI auth + Next.js client state is where I get lost. I can debug the code Claude writes for one of the three in isolation, but when a bug lives in the handoff between them, I'm stuck for hours.
      Current setup is holding. But I'm nervous about what happens when I need real concurrency or caching. That's where I'll need to actually learn what's under the hood.
      If you want to kick the tires (sanice.ai, free), would love an engineer's eye on where things feel wrong.

  12. 1

    This is awesome you achieved so much just through prompting. I would still advocate for learning the basics of coding though. At some point, your product will become large enough that Claude will start making mistakes and going in circles. That's where you can put on your developer hat and help it out. A 20 min fix from an AI-assisted developer might take someone three days if they try to solve it through prompting alone.

    1. 1

      You're right and I know it. I've already hit a few "Claude is going in circles" moments. Right now my workaround is asking Claude, Gemini, and ChatGPT the same question and comparing — which is literally the problem Glass was built to solve. Recursive, in a painful way.
      I'm going to start learning the actual basics (Python syntax, not full courses) after this sprint. You've earned a link: sanice.ai is free if you want to see what a non-coder with AI shipped in 6 weeks. Honest feedback welcome.

    2. 1

      Second that, took me 3 months and an update on GPT to solve my issue. 😆

      1. 1

        Three months on one bug is the exact thing that'll push me into learning Python properly. I've been putting it off.

  13. 1

    Really appreciate the transparency here — especially the honest "10 users, zero paying customers" part. Most people skip that. One thing I've noticed with no-code/AI-built products: the hardest part isn't building v1, it's maintaining and iterating when users start requesting features that push the boundaries of what the AI tools can generate. How are you handling that? Do you have a plan for when Claude-generated code needs debugging or refactoring at a deeper level?

    The "I built it and nobody came" stage is real, and props for being honest about it. Two things that worked for others in a similar spot: (1) find 5 subreddits where people already ask the type of question your tool answers, then answer their question manually using your tool. If people ask how you did it, share naturally. (2) Your auto-published research reports are a smart SEO play, but add a "powered by 5 AI models" byline with a CTA at the bottom. The reports themselves become the distribution. You are past the hardest part, which is actually shipping. The next phase is just reps on distribution.

    1. 1

      Both parts of your reply landed. Honest answer on maintenance: I don't have a proper plan yet. When something breaks, I paste the error into Claude and we iterate. It works until it doesn't, and the "doesn't" moments are getting longer.
      The 5-subreddit manual-answer tactic is the single clearest piece of GTM advice I've been given. Doing it this weekend. The "powered by 5 AI models" byline on auto-published reports is also going on the list.
      If you want to try SANICE yourself (sanice.ai, free), would value an outside eye on whether the output is actually useful or just "cool."

  14. 1

    Just keep going: talk to people, make videos, send emails, do whatever you have to do to tell people about your work. Be proud of it and let people know. You already did the easy part. And by the way, Java made me cry for an hour at 2am on a Sunday. Now start the hard part, and believe me when I say you will get through this too.

    1. 1

      Java at 2am on a Sunday — I feel that in my bones. Thanks for the push. The "hard part is next" framing is honest and I'd rather hear it from someone who's been there. Going to keep going.

  15. 1

    the interesting bit isn't the no-code part. it's the $10k and an idea → 10 real users in 6 weeks. most people with the same setup never get a single user

    1. 1

      This is the comment I'm going to come back to when I'm discouraged. You nailed the actual novelty: not "I built it without code," but "I got 10 real users in 6 weeks." I was burying that stat under the no-code angle.
      Changing how I frame the next post because of this.

  16. 1

    Hey Sanice

    That's awesome, and I can totally relate to this.
    I have ~20 years of experience in design, project management, business strategy... But what I'm not is a developer.

    I started with tools like Builder, Figma Make, Replit, Lovable... But they felt like a Wix approach.

    Then I met Claude Code. And all the skills I used to put to work with real people, I now utilize with Claude.

    Now I don't have to wait a week to get a response. Designs are implemented instantly. And with the use of custom /skills and my latest build LayerView, I can rely on Claude to take on an agentic role without expecting 3 refactors.

    Keep shipping, keep learning, I wish you all the best!

    1. 1

      20 years in design/PM/strategy, now leveraged through Claude: that's the real unlock. Skills don't disappear when you can't type syntax; they become leverage. Going to check out LayerView.
      If you want to try what I built (sanice.ai, free), I'd love a designer/PM eye on the flow. Honest take welcome.

  17. 1

    Fellow finance person building with AI here. I trade-ish too (built an earnings analysis tool) so the cross-referencing instinct resonates deeply. No single model gives you the full picture, same as no single analyst covers every angle of an earnings report.

    Your self-awareness about the 6 weeks building vs 3 building and 3 talking split is the most important line in this post. I made the exact same mistake. Built a full scoring engine, polished the UI, added features nobody asked for. Meanwhile zero strangers had ever seen it.

    To answer your question about what worked for first strangers: for me it was SEO through the product itself. Every earnings report I score becomes a page that Google indexes. People searching for specific tickers find the scored page and some of them sign up. It is slow but it compounds and it sounds like your auto-published research reports strategy is the same idea. $18/month for an automated content engine is an incredible ROI if even 5% of those pages rank.

    The honest answer to "am I solving a real problem" is that multi-model research is a real workflow that power users already do manually. The question is whether the people who need it know to search for it. That is the distribution problem, not a product problem.

    Keep posting updates here. The transparency is what makes people want to help.

    1. 1

      This is the comment I should've read first. The earnings-pages-as-SEO strategy is exactly the playbook I'm trying to stumble my way into. And your closing point is the one I keep avoiding: "The question is whether the people who need it know to search for it. That is the distribution problem, not a product problem."
      I've been fixing the product. I should be fixing distribution.
      Would genuinely love to compare notes: fellow finance-adjacent builder, similar strategy, and it would be interesting to see where our SEO approaches diverge. If you want to try SANICE (sanice.ai, free), I'd value a head-to-head take on output quality vs. your tool.

  18. 1

    6 weeks and a live product is solid. but zero paying customers - that's the next wall. not the code, not the build. how are you thinking about pricing?

    1. 1

      Honest answer: pricing is set ($29 to $499/month) but I haven't actually had someone say "yes, here's a card" yet. The wall isn't the price point, it's that I haven't been talking to the right people. Working on that this week instead of more code. Appreciate the direct question.

  19. 1

    This is honestly one of the most real build-in-public posts I’ve seen.

    You don’t have a product problem — you have a distribution + conversion problem.

    Right now, if someone lands on your site, it’s not immediately clear who it’s for or why they should care. That’s fixable.

    I help early-stage SaaS founders turn “cool tech” into something that actually converts (landing pages, funnels, positioning — especially for AI tools like this).

    If you’re open to it, I’d love to help you refine:
    – your landing page messaging
    – user flow from first visit → signup
    – and positioning for a specific niche (traders/founders/etc.)

    No hard pitch — think you’re very close and it’d be a shame if this didn’t get traction.

    Happy to take a look

    1. 1

      Appreciate the honest offer. Straight answer, I'm not going to take you up on it right now, because I think the issue is one step upstream of landing-page copy. I haven't done enough real customer conversations yet to know what language to put on the page. If I rewrite the page now, it's still just me guessing.
      Ask me again in 4 weeks when I've had 10 real customer calls. If I've still got the same confusion, you're the first person I'll reach out to.
      In the meantime, if you want to try the tool (sanice.ai, free), would love your eye on the output itself; curious what a conversion-focused eye notices.

  20. 1

    This is exactly the shift I’ve been noticing — not just that non-devs can build now, but that the cost of trying has collapsed.

    A year ago, this is a 6–12 month commitment. Now it’s a few weeks to get something real in front of users.

    The part I’m still figuring out is what happens after that — building is suddenly the easy part, but getting people to actually use it consistently is a completely different problem.

    Have you found anything that’s worked so far for getting those first users beyond just launching?

    1. 1

      The "cost of trying has collapsed" framing is exactly right and I think it's why this wave of solo-builder stories feels different from 2020. The risk calculus has changed.
      On getting consistent use after first signup: honest answer is I haven't cracked it. Most of my users are one-and-done. That's the real bottleneck, not acquisition. How are you thinking about it on your end?
      If you want to try SANICE (sanice.ai, free), I'd be curious to see if it survives a second session.

  21. 1

    Really enjoyed this — the honesty stands out.

    “I don’t debug code, I debug decisions” is a powerful shift. You clearly built something real, now it’s just a distribution problem.

    You’re at the right point — stop building, start talking to users. That’s where things click.

    1. 1

      "You don't have a product problem, you have a distribution problem" is the sentence I needed to read. Saving this. Cheers.
      If you ever want to try SANICE (sanice.ai, free), would be curious how it lands for a fellow indie.

  22. 1

    This is really impressive, especially getting something shipped without a coding background.

    One thing I’m curious about — how are you thinking about distribution?

    I’m building something in a completely different space (pet memorials), but running into a similar challenge where building the product is one thing, but getting people to actually discover it is a whole different problem.

    Right now I’m leaning heavily into SEO, but even there it feels like there’s this weird “waiting phase” where Google just sits on your pages before doing anything.

    Did you focus more on launching fast + iterating, or did you have a clear distribution plan from the start?

    1. 1

      Honest answer: I had no distribution plan. I launched and hoped. Which is why I'm where I am. SEO is my long bet but it's a 3-6 month compound, not a rescue. The thing I'm doing now is going into niche communities and answering real questions using the tool, which is slow but gives me direct feedback.
      Pet memorials is a beautiful and hard niche: trust takes forever to build but stays sticky once it does. SEO probably is right for you. When you launch, send the link and I'll try a memorial and give honest feedback.
      And if SANICE is useful for any strategic decisions (sanice.ai, free), happy to have you try it.

  23. 1

    The adversarial multi-model approach is the real differentiator here, not the no-code story. watsonfoglift’s suggestion to lead with model disagreements is exactly right. “Where do the models disagree and why” is content nobody else is producing. That’s your marketing and your moat in one move.

    1. 1

      This comment and @watsonfoglift's independently pointed at the same thing: "show where the models disagree." Two separate strangers in the same thread landed on the exact same insight. That's not noise, that's signal.
      I was burying the disagreement inside the synthesis to make the final output look clean. That's exactly backwards. Working on a post and a product change to expose it instead.
      You've earned a free account if you want it (sanice.ai, no card needed). If you want to run a real question through it and tell me where the 5 models actually disagreed on your question, that's the most valuable feedback I could get right now.

  24. 1

    This is a fascinating case study in the product-manager-who-codes-through-AI model. What stands out is your point about understanding every architectural decision even without writing the code. That is actually a stronger position than many junior devs who copy-paste without grasping the trade-offs.

    The 5-stage pipeline approach mirrors how senior engineering teams structure complex systems. You have essentially designed a microservices architecture through pure product thinking.

    One concern I would flag early: with $18/month AI costs at 10 users, your per-user economics might get brutal at scale. Have you modeled what happens at 1,000 users running concurrent research pipelines? Caching frequently-asked topics and batching similar queries could dramatically cut costs before you hit that wall.

    1. 1

      "Microservices architecture through pure product thinking" is the most flattering framing anyone has given me for what's honestly just "I had no choice but to think in boxes because I couldn't code the monolith."
      On the economics at 1,000 users — you're right, they'd get ugly fast at current structure. Caching identical top-of-funnel questions and batching Stage 2 web searches is the obvious lever. Not built yet — that's on the list.
      If you want to stress-test it yourself (sanice.ai, free), would love a senior engineering eye on where the first crack will be.
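      The caching lever discussed above could be sketched roughly like this. This is a hypothetical illustration, not code from the actual SANICE repo: the `run_pipeline` callable, cache names, and TTL are all assumptions; a production version would use a shared store like Redis rather than an in-process dict.

      ```python
      import hashlib
      import time

      CACHE_TTL_SECONDS = 24 * 3600  # serve a cached report for up to 24 hours
      _report_cache = {}  # question hash -> (timestamp, report)

      def _cache_key(question: str) -> str:
          # Normalize whitespace and case so "Should I sell ETH?" and
          # "should i sell   ETH?" hit the same cache entry.
          normalized = " ".join(question.lower().split())
          return hashlib.sha256(normalized.encode()).hexdigest()

      def get_report(question: str, run_pipeline) -> str:
          """Return a fresh cached report if one exists; otherwise run the
          (expensive, multi-model) pipeline and cache the result."""
          key = _cache_key(question)
          hit = _report_cache.get(key)
          if hit and time.time() - hit[0] < CACHE_TTL_SECONDS:
              return hit[1]  # cache hit: zero model cost
          report = run_pipeline(question)
          _report_cache[key] = (time.time(), report)
          return report
      ```

      Even a crude dedupe like this means identical top-of-funnel questions only pay for the five-model pipeline once per day, which is where the per-user economics start to bend at scale.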

  25. 1

    This is an incredible story of persistence, SANICE_AI. Building a multi-model 5-stage pipeline without writing a line of code is a huge testament to how AI has lowered the floor for non-technical founders.
    The 'adversarial quality check' stage is a brilliant addition—most people don't realize that cross-referencing models is the only way to kill hallucinations in deep research. This kind of 'Product Manager as Builder' approach would be a perfect entry for the current competition. Entry is $19 and the winner gets a trip to Tokyo.
    Prize pool just opened at $0. Your odds are the best right now. Definitely worth a look while you're navigating the 'nobody knows I exist' stage!
    To answer your question: For the first 10 strangers, manual outreach in niche subreddits (like the ETH trading ones you know well) usually beats SEO in the early days.

  26. 1

    You didn’t build a product, you proved execution is no longer the bottleneck.
    Now the real game starts: figuring out if this solves a problem people actually care enough to pay for.

    1. 1

      "You proved execution is no longer the bottleneck. Now the real game starts: figuring out if this solves a problem people actually care enough to pay for."
      That's the sentence. Saving it. Thanks.

  27. 1

    The cross-referencing insight is the sharpest thing in this post. Single-model answers are the new "I Googled it" — trusted too quickly and wrong more often than people realize. The multi-model adversarial approach is how serious research actually works, and your trading background explains why you see that so clearly.

    On the auto-publish SEO strategy: I'd be cautious with 2 reports/day. The math is appealing ($18/mo for daily content), but there's growing evidence that auto-generated content without editorial depth gets filtered by both Google and AI search engines. A Zyppy analysis found that content updated within 30 days gets roughly 3x more AI citations — but only when it has genuine authority signals like original data and cited sources. Volume without editorial review risks producing the kind of thin content that search engines are specifically learning to deprioritize.

    What if you flipped it: instead of 2 daily auto-generated reports, publish 2 per week that showcase the multi-model disagreement? "Gemini said X, Claude said Y, here's why the difference matters." That IS your differentiator. Nobody else is showing where the models diverge, and that's where the original insight lives. That kind of content earns links and gets cited because it's genuinely novel.

    For the first 10 strangers: the fastest path I've seen is finding communities where people already have the pain you're solving and contributing without pitching. Options/crypto Twitter, fintech forums, trading subreddits. You're from that world — you know the language. One thoughtful thread analyzing a real trade using your multi-model approach would be worth more than 60 auto-published reports for building trust.

    1. 1

      Really appreciate you taking the time to write this — genuinely one of the most useful replies I've gotten.
      You're spot on about the auto-publish risk. I've been so heads-down building the pipeline that I hadn't stepped back to ask whether volume was actually serving the brand or just filling a content calendar. The Zyppy data point about 30-day freshness is interesting — I'll dig into that.
      The "show where the models diverge" idea honestly stopped me in my tracks. That IS the differentiator and I've been burying it inside the pipeline instead of making it the headline. Something like "Claude recommended holding, Gemini said sell, Grok flagged a regulatory risk nobody mentioned — here's why the disagreement matters more than any single answer." That's content nobody else can produce because nobody else is running adversarial multi-model pipelines. I'm going to test this format this week.
      On the community angle — yeah, I come from the options/crypto side (still run live bots on Deribit). A thread breaking down a real trade through the multi-model lens would probably land way better than any amount of auto-generated SEO content. The trust math is completely different when you're showing your own skin in the game.
      Going to scale the auto-publish down to 2-3/week and make each one actually showcase the model disagreements. Quality over quantity. Thanks for the push in the right direction.
