(The following is a short report on my experience using AI as a co-founder of sorts.)
Since GPT-4 came out, I've experimented with pretty much every new piece of AI tech out there. I've worked in software for a long time, most of it in AI, so this wave of innovation was particularly interesting to me. From text to vision, voice, and image generation, I checked it all out, usually through side projects that I base on a real need and develop top to bottom (to beta stage, or at the very least a full MVP). I feel working on a real product that demands completion is ideal if one wants to truly assess the limitations of such services.
Soon enough, I also integrated AI into my code development, not just as a third-party service the final app would leverage. As many of you will know, any single opinion on AI coding performance would be irrelevant here, since we're talking about a span of several years in a technology that gets better every month.
But last year I decided to do something different. I decided I would launch one of these side projects, and I would do it by going from idea to final release using AI as much as it made sense. Meaning, not just as a code assistant, but for every role a new company requires to develop AND market a product. Here we can indeed ponder the good, the bad, and the ugly, considering we're only talking about a limited (and recent) span of time.
The natural question is whether leveraging AI helped or was a waste of time. Not to spoil the ending, but: Pff, of course it helped.
It's funny how this answer wouldn't surprise anyone today, when only last year it was all but obvious. These models could already do pretty impressive things at the beginning (in this context, by "beginning" I mean when GPT-3.5 came out; I'm not even going back to BERT and the Transformer paper, let alone the decades of ML research before then), but my instinct is that at the time we were all also a bit...easily impressed, so to speak. This new thing was magical, and we all imagined the future without being too concerned with how to build it. We started with models that were slow and way too costly to integrate into a consumer app (unless one had millions in funding to burn). Productivity therefore meant AI-assisted coding, mostly (there were a few exceptions), and the truth is that the output was really bad; it was impossible to get anything out of it that didn't require so many fixes that it was faster to just skip the AI and do it yourself. It was good that the world now had this tool, but more for non-coders who needed to run an experiment than for actual software engineers.
Not today. Things have changed quite a bit. Sure, models still hallucinate, but so much less that one can actually work with their output (for code and a lot more). I'll caveat this with the notion that it helps to know how these models are best used (prompted), what their limits are, and so on, and I'd add that I would avoid anything less than a top-tier model (with high reasoning enabled). But still, up until a few months ago this wasn't even a possibility. Using AI back then wasn't really being productive; now my opinion is that it is, and that it would be absurd to go back to the way things were.
I feel the end of 2025 was an inflection point for AI; one of the reasons I decided to write this was to mark a milestone in the conversation around AI's value. I also strongly believe 2026 will radically change how startups and indie hackers/solo founders operate, which is why I wanted to share my direct experience.
A product isn't just the core piece of software that makes it different from anything else. It's all the other software around it (user management, scalability, security), it's the frontend, it's the time spent designing a UX that won't kill traction at the start, it's testing and testing and polishing and testing. It's born out of solving a need, but it survives in the wild only if users feel at home. A product is also (mostly?) marketing: people need to find it, so it's SEO and alliance strategies and, often, cold-emailing. It's a lot of content, and beautiful imagery, and modern presentations. It's a ton of research and communication and probably a bit of psychology, too. And let's not forget coordination and decision-making, and an architecture that keeps costs realistic and makes a pricing model possible.
Ultimately any product is a lot of multidisciplinary work. It takes a village. I'm not suggesting one take it all on at once and release only when the solution is perfect and complete (like we used to do when software was sold in boxes on the shelves of a computer shop), but all of the above will be required at some point; and before then, one has to have at least analyzed all of it, otherwise you're dead in the water. How can you price something if you have no idea what it'll cost to build and run? Do you really want to get out there and start chasing early users' feedback while you still have a backlog of tasks you already consider necessary?
So, what was the outcome of my experience working with AI as if it were an army of colleagues?
My hope is that what I've reported here is helpful to both old and new folks in this business. While I do feel things are about to get really weird, I also think the learning curve is not that steep, as long as one is comfortable with changing the way some things have been done so far.