Speed is the killer feature
I'm guilty of overlooking speed in the beginning of Indie Hackers' development. I care more about it now, of course. But in some ways it's "too late" — if I'd made better decisions earlier on, the site would be much faster today.
I do have some plans in mind to make the homepage and post pages renderable without our single-page app framework. It already works for anonymous visitors (try visiting the site without being logged in). It will be a bit of a slog to get it working for signed-in users. But I think I'll take a swing at it this year.
Post author here (and IndieHackers lurker :P) - almost every product I've ever worked on has deferred speed. It's one of those challenges that is always easy to put off unless you are ridiculously obsessed with it from the beginning. Even then, it definitely slows down development so for new ideas... I'm not sure if it's worth it or not?
You got Indie Hackers here so people are willing to wait a bit :P The site is pretty fast already compared to many of the products I was thinking of.
Thanks for sharing this, it made my day to see it here :)
Thanks for the great article. I like that you're not dogmatic about it. I know some people who get obsessed with speed and never go anywhere.
I'm speculating here but I believe that as long as something keeps me motivated and moving forward I'm better off. But it's good to find a balance and be conscious enough not to end up drowning in technical debt.
I had an experience on the security end with Electron that's cost me months. Still, I don't know if I would have launched as early as I did if I had been worrying about that.
Also - seeing this post made me realize that my meta description tags aren't pulling in a summary of the article 🤦‍♂️ There's always something else to fix :)
But I'm curious: do you regret the tradeoff? Or is it just that now is a good time to revisit it? It seems you focused on adding more value to the product rather than on technical excellence, and that might be the whole reason you did as well as you did. Right?
I definitely regret it. I only coded things the way I did initially because I thought it would be fun to learn a particular framework, which it was. But if I'm being honest, it would've been fun to code IH no matter what I used. 😆
OK, I'm happy now that I'm not the only one who says: oh, I want to learn this framework, let me hack my next project with it. It's a big mistake, but a fun one. Perhaps it's the whole reason to stay motivated while hacking. I have mixed feelings, as you can see.
Man, performance is one of my favorite things to work on....
I agree, it's pretty fun. I also like refactoring work.
What are some of the mistakes you made that resulted in poor performance/speed?
A lot of it was just poorly optimized queries that didn't anticipate scale, but that was fine, because those are easy to fix later.
I think the decision to make IH a single-page Ember app was a mistake. Of course I didn't know the site was going to get this big, and it was perfectly fine at the time when the site was smaller. Hindsight is 20/20.
I also should've bitten the bullet and switched from the old Firebase to Firestore when it came out. It probably would've been a week-long project at the time, if that, but I didn't think I could justify it.
it's an extremely hard balance to strike, I'm probably optimizing too much^^
What tech stack would you choose if you had to redo it?
I spend a lot of time in Southeast Asia, where there can be 150-200ms of latency between my browser and servers hosted in Europe and the Americas.
Lag is most noticeable in webapps that are programmed to require many different requests to achieve a single goal.
For example, I recently looked at an e-commerce site that sent different Ajax requests to pull the product listings, and then the product counts, and then the filter and sorting updates etc...
The lag wouldn't be noticeable with sub 50ms latency, but at 200ms latency it's janky.
Also think about people using satellite internet connections that can exceed 500ms latency.
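The arithmetic behind that jank is easy to sketch. Here's a minimal back-of-envelope model, under the simplifying assumption that each dependent request costs one full round trip (RTT) and server processing time is negligible:

```typescript
// Rough model of page-load time dominated by network round trips.
// Assumes each request costs one full RTT and requests depend on
// each other (listings, then counts, then filters...), so they
// can't overlap. Server time is ignored -- a simplification.

function sequentialMs(requests: number, rttMs: number): number {
  // Chained requests pay the round trip once per request.
  return requests * rttMs;
}

function batchedMs(rttMs: number): number {
  // One combined request pays the round trip exactly once.
  return rttMs;
}

console.log(sequentialMs(4, 50));  // 200 -> barely noticeable
console.log(sequentialMs(4, 200)); // 800 -> feels janky
console.log(sequentialMs(4, 500)); // 2000 -> painful on satellite
console.log(batchedMs(500));       // 500 -> one round trip, even on satellite
```

The same four requests that cost 200ms total on a low-latency link balloon to two full seconds at satellite latency, which is why the problem only shows up once you leave the data center's neighborhood.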
I think making separate requests for small parts of the user interface is bad practice. The first response that loads the page should carry all the information the user needs, and subsequent requests should fetch everything a large component needs, so there's no need for dozens of small requests to update one part of the page. Even when updating a database, one big transaction is more efficient than a series of smaller ones. The trouble with components built separately by many developers is that they may not be able to share requests, which results in many requests hitting the same service to update the same page. The latency issue itself is usually addressed with a CDN.
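One way to sketch that idea: collect the resource keys every component on a page needs and issue a single combined request instead of one request per component. The `/api/batch` endpoint and the payload shape below are assumptions for illustration, not any particular framework's API:

```typescript
// Hypothetical request-batching sketch: merge the data needs of all
// components on a page into one deduplicated request, so the page
// pays for one round trip instead of one per component.

type BatchRequest = { resources: string[] };

function buildBatch(componentNeeds: string[][]): BatchRequest {
  // Deduplicate the keys requested across every component.
  const unique = new Set<string>();
  for (const needs of componentNeeds) {
    for (const key of needs) unique.add(key);
  }
  // Sort for a stable, cache-friendly request body.
  return { resources: [...unique].sort() };
}

// Three components that would otherwise fire three separate requests:
const batch = buildBatch([
  ["products"],
  ["products", "productCounts"],
  ["filters", "sortOptions"],
]);

// One request instead of three (sketch only, not executed here):
// fetch("/api/batch", { method: "POST", body: JSON.stringify(batch) });
console.log(batch.resources);
// ["filters", "productCounts", "products", "sortOptions"]
```

The interesting design question is where `buildBatch` lives: a page-level data loader that knows about all its components can do this naturally, whereas fully independent components cannot, which is exactly the sharing problem described above.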
100% agree. I just made massive speed improvements on Nodewood and it's like night and day to develop with.
First, I figured out how to speed up Docker on OSX (works for all Docker on OSX, not just for Nodewood), and then I enabled Hot Module Reloading for Nodewood's UI. Now I'm back to being legitimately delighted to be working on Nodewood features again.
Compiler speed matters. =)
I'm curious how 5G will impact this. I think it will be way easier to spot slow products.
He's kind of totally wrong about the phones, and thus about speed being THE killer feature. First of all, Symbian phones, which were the market-leading smartphones when the iPhone was released, were pretty fast. So were feature phones (i.e. dumb phones).
What the iPhone was a LOT better at than everyone else was UX, of which speed is one component, of course. It's funny how much people never get it, even though it happened in front of us; it happened to us. At the time I was working at Nokia Research, and I remember my girlfriend telling me how her boss got this wonderful phone that you could take photos with, view them on, etc. The funny thing is that I had owned such a phone since 2001. I had been working with smartphones for six years by then; she knew it, and she listened when I told her or others what I was doing (and then heard others respond, "yeah, but phones are for making phone calls"). She saw me browsing the net on my phones (a 9210 Communicator and then a 9500), sending emails from the beach, etc.
Still, it somehow didn't register, because it looked like something she'd never use. And then the iPhone, which did a lot less, made her and basically everyone else understand what a smartphone is. (Even though Symbian smartphones were pretty common by then, most people didn't use them as smartphones.)
So no, it's not simply the speed; it's the UX. And even when we talk about speed, it's not really the speed but the perception of speed, about which a lot has been written: delay (lag) matters a lot even if average speed is OK.
I agree with Jonathan Ive, who once said that "ease and simplicity of use are achieved by obsessing with details that are often overlooked."
In one case, that's the core feature of Designtack, when people compare it to Canva.
The focus of a frontend or user interface should be on delivering the best user experience, not on data processing and data management. The frontend should focus on managing communication with external services and updating the user interface. A lightweight frontend is fast and easy to improve. A distributed backend composed of microservices is also fast and easy to improve. The exception would be a web application that edits files in real time, like a photo editor.
But at the same time, all that extra effort put into great performance and a distributed backend is wasted if your project ends up having no users. Performance tuning and backend scalability can come at a later stage, after you find a good enough business model and get some traction.
I wrote a post on my personal blog about choosing microservices as the architecture paradigm as a startup, where I detailed more about my stance on this: https://vladcalin.ro/blog/2021-01-04-microservices-for-a-startup
TL;DR, I'm against it because microservices require splitting the data domain, and while you're iterating on a new idea, you have no idea what the data domain will look like. Once you start building microservices, pivoting becomes impossible.
I also believe it's better to design products that go looking for users instead of products that wait for users like a princess longing for her Prince Charming. Products must be flexible enough to evolve and rely on data to guide that evolution. This website could be used in a thousand ways to cater to various demographics. In an alternate reality, the background is probably pink and the buttons dark blue.
Microservices are like Lego: they are mini applications or services. There's no point in building them for the sake of it; they must serve a purpose. If a project requires image processing functionality, it's better to build it once as a service so it's available to many projects and can even be sold as a SaaS without changing the code.
Better yet, download a suitable GitHub project and turn it into a microservice; this can save a lot of time because you don't need to understand every part of the code. The idea is to build something that can easily be moved around and connected to different projects.
Loosely coupled parts are easier to change and to remove. For instance, last year I decided to abandon a service I no longer believed in, and I didn't have to change much because it was loosely connected to the other parts.
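The "Lego" idea can be sketched with a small interface boundary: hide a capability behind an interface so the implementation can be swapped for a remote service, or removed, without touching its callers. All names here (`ThumbnailService` and friends) are hypothetical:

```typescript
// Hypothetical sketch: a capability behind a narrow interface, so the
// same calling code works whether the work happens locally or in a
// separate microservice.

interface ThumbnailService {
  thumbnailUrl(imageId: string, size: number): string;
}

// Today: served by the monolith itself.
class LocalThumbnails implements ThumbnailService {
  thumbnailUrl(imageId: string, size: number): string {
    return `/thumbnails/${imageId}?size=${size}`;
  }
}

// Tomorrow: the same interface backed by a separate service,
// with zero changes to the code that renders pages.
class RemoteThumbnails implements ThumbnailService {
  constructor(private baseUrl: string) {}
  thumbnailUrl(imageId: string, size: number): string {
    return `${this.baseUrl}/thumbnails/${imageId}?size=${size}`;
  }
}

// Callers depend only on the interface, never on a concrete class.
function renderProductCard(svc: ThumbnailService, imageId: string): string {
  return `<img src="${svc.thumbnailUrl(imageId, 128)}">`;
}

console.log(renderProductCard(new LocalThumbnails(), "abc"));
// <img src="/thumbnails/abc?size=128">
```

Abandoning the service later means deleting one class and its wiring; the renderers never knew which implementation they were talking to.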
I read your article a few months ago. I agree that components sharing critical data and functionalities should be kept together. As for frequent changes to data models, the flexibility of NoSQL can help.