A month ago I decided to stop building one product at a time and start running a whole portfolio of MVPs in parallel. The idea was simple: instead of betting everything on a single product, launch many small bets, use AI agents for the repetitive work, and let data tell me which ones deserve more attention.
Here is what actually happened.
The setup
I run Inithouse (https://inithouse.com), a one-person venture studio. Every product is an MVP built with AI-assisted tools, mostly React SPAs. The portfolio spans different niches: from a prediction platform (https://watchingagents.com) where you deploy agents to track questions about the future, to a personalized song generator (https://magicalsong.com), to card games, pet art, and developer tools.
The AI agents handle content distribution, SEO monitoring, data collection, and some operational tasks. I make the strategic calls. They execute.
Top 3 wins
Distribution beats product quality at this stage. The products that got the most traction were not the most polished ones. They were the ones where I nailed the distribution channel early. Google Ads as a validation tool (not an acquisition channel) turned out to be surprisingly useful for reading demand signals fast.
Content compounds. I started publishing across multiple platforms consistently. Not the same article everywhere, but unique angles per platform. After a month, some posts started driving organic traffic back to the products. Not huge numbers, but the curve is pointing up.
AI agents are genuinely good at the boring stuff. Monitoring search console data, checking indexation status, distributing content to multiple platforms, tracking what competitors are doing. These are tasks I would have skipped entirely as a solo founder. Now they just happen.
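To make that concrete, here is a minimal sketch of the kind of scheduled check an agent can own, assuming the Google Search Console API via google-api-python-client and already-authorized credentials. The site URL and function name are illustrative, not my actual setup.

```python
# A minimal sketch of a scheduled "boring" task an agent can own:
# pull last week's Search Console queries for one product so a drop
# in clicks or impressions can be flagged downstream.
# Assumes google-api-python-client and an authorized credentials object;
# SITE_URL and weekly_query_report are illustrative names.
from datetime import date, timedelta
from googleapiclient.discovery import build

SITE_URL = "https://example.com"  # hypothetical Search Console property

def weekly_query_report(creds, row_limit=100):
    service = build("searchconsole", "v1", credentials=creds)
    end = date.today()
    start = end - timedelta(days=7)
    body = {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["query"],
        "rowLimit": row_limit,
    }
    resp = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()
    # Each row carries keys=[query], clicks, impressions, ctr, position.
    return [(r["keys"][0], r["clicks"], r["impressions"]) for r in resp.get("rows", [])]
```

None of this is hard; the value is that it runs every week without me remembering to look.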
Top 3 failures
I launched too many products without a clear acquisition channel for each one. Some products are sitting there with nice landing pages and basically zero traffic. Having a product live is not the same as having a product in the market.
I underestimated how much context AI agents need. You cannot just say "go do SEO" and expect good results. Every product has different keywords, different competitors, different user intent. Building that context layer took way more time than I expected.
Some niches are brutally competitive and I did not do enough research upfront. I walked into a few spaces where established players dominate every keyword. The lesson: validate the distribution channel before building the product, not after.
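For what it is worth, the "context layer" from the second failure above ended up being nothing fancy. Here is a rough sketch of the shape it could take; the field names and example values are illustrative, not my actual keyword research.

```python
# Rough sketch of a per-product context file the agents read before acting.
# The structure and example values are illustrative, not real product data.
from dataclasses import dataclass, field

@dataclass
class ProductContext:
    name: str
    url: str
    target_keywords: list[str] = field(default_factory=list)
    competitors: list[str] = field(default_factory=list)
    user_intent: str = ""           # what the visitor is actually trying to do
    tone: str = "plain, practical"  # voice to use when distributing content

# One entry per product; without this, "go do SEO" means nothing to an agent.
example_product = ProductContext(
    name="magicalsong",
    url="https://magicalsong.com",
    target_keywords=["personalized song gift", "custom song for a birthday"],
    competitors=["<established players in the niche>"],
    user_intent="buy a one-off personalized song as a gift",
)
```

Writing one of these per product is exactly the work I thought I could skip.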
What I learned about working with AI agents
The biggest insight is that governance matters more than capability. The agents can do a lot, but without clear rules about what they should and should not do autonomously, things go sideways. I spent a good chunk of the month building guardrails: approval workflows, review steps, and clear boundaries between autonomous execution and human decision-making.
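To make "guardrails" concrete, here is a minimal sketch of the kind of approval gate I mean, assuming a simple policy table. The action names and the policy split are illustrative, not a full workflow engine.

```python
# Minimal sketch of an approval gate between agent output and execution.
# Policy table and action names are illustrative; the point is that every
# agent action is classified before it runs, and anything not explicitly
# marked autonomous waits for a human decision.
from enum import Enum, auto

class Policy(Enum):
    AUTONOMOUS = auto()    # agent may execute immediately
    NEEDS_REVIEW = auto()  # queued for human approval

POLICIES = {
    "fetch_search_console_data": Policy.AUTONOMOUS,
    "check_indexation_status": Policy.AUTONOMOUS,
    "publish_content_draft": Policy.NEEDS_REVIEW,
    "change_ad_budget": Policy.NEEDS_REVIEW,
}

def route(action: str, payload: dict, review_queue: list) -> str:
    # Default to human review for anything unknown: fail closed, not open.
    policy = POLICIES.get(action, Policy.NEEDS_REVIEW)
    if policy is Policy.AUTONOMOUS:
        return "execute"
    review_queue.append((action, payload))
    return "queued"
```

The useful part is the default: anything the table does not know about goes to the review queue instead of running on its own.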
The second insight is that agents are force multipliers for a specific type of work: structured, repeatable, data-driven. They are terrible at judgment calls, creative strategy, and anything that requires understanding why a customer would care. Those parts are still 100% human.
What is next
I am doubling down on the 3-4 products showing early traction signals. The rest stay live but get minimal attention until something changes. I am also investing more in content as a distribution channel because it is the one thing that compounds and does not require a bigger ad budget every month.
The goal is not to have the most products. It is to find the ones worth going deep on.
If you are running a similar multi-product approach or using AI agents for operations, I would love to hear what is working for you. What surprised me most this month is how much of the work is not building; it is deciding what deserves your attention.