My first project, which I did 2 years ago, failed. I spent too much time (7 months) building out a complex build pipeline with multiple microservices: a main service, edge services, caching layers, etc. I learned a lot, but it was a big waste of time. It was probably over $150/mo for something that didn't make any money.
Currently for Crave Cookie, I pay about $60/mo on DigitalOcean, which processed $69k in the last 30 days.
My build is very simple: I build my Docker containers locally and push them to the registry, deploy all the static assets to S3 (a DigitalOcean Space), then ssh into prod and pull and run the container. One single instance on one single machine. The whole deployment is done quickly in a single shell script.
I use a SQLite database that is backed up to S3 every 3 hours. No external database yet.
That's what's up! So many people, especially from academia, overcomplicate things extremely early in the process. Most folks don't need microservices (ever) and will build out an epic amount of infrastructure that still lacks data-center redundancy for high availability, making it all for naught.
"In a single shell script" -- I can't praise this enough. Shell scripts are arguably the most powerful tool in your belt, yet so many people throw the kitchen sink at a simple problem.
Congrats, and here's to your continued wins!
Nice!
I've posted on IH before cautioning against thinking about microservices so early on. One of the main reasons we build out small services at my day job is to allow us to quickly change and deploy things without having to retest a huge amount of functionality.
But, at a startup you rarely have that problem early on.
Right now I use a utility to deploy a Docker-based image to a droplet. Super simple and quick. No autoscaling, but when I've had to stand up additional servers, I was able to do so quickly.
Anyway, congrats on your setup!
A good question to ask every single person designing infrastructure.
"Are you Netflix?"
An estimate is that Netflix spends around $9.6 million per month on AWS hosting. This excludes any payroll/salary or contractor costs to code, support, and manage that infrastructure.
Their 2019 revenue was $20 billion, which means their monthly infra spend is about 0.05% of their annual revenue (under 0.6% even annualized).
Do not over-engineer to the point of insanity when you don't even have a single $1 of revenue.
Curious about your SQLite DB and how it handles growth. Given the volumes you handle, it must be pretty big by now. Does that impact query performance much, and how does it behave? I use it for most of my personal projects for simplicity and convenience too, but I'm far from your usage volume, so very curious about this!
Hello @DevMunchies, I read your posts about Crave Cookie here on IH and watched your interview on YouTube.
I have a couple of questions for you; it would help me a lot. I have been coding a Student Information System in Django with SQLite. Most of the users of the system will only read data; in the worst scenario, maybe 25 to 30 concurrent users could write data during a particular period on particular dates (deadlines to submit grades).
How is your experience with SQLite? How many reads/writes do you have per day? Would you recommend it for the kind of job I describe?
Thanks a lot for any help you can give.
https://www.sqlite.org/whentouse.html
You'll have no problems with reading. Writing is only a problem at very high volume (hundreds of writes per second), which you won't have.
SQLite queues any writes to the database, so worst case you wait ~10 milliseconds for the previous transaction to finish.
Just make sure you learn about indexing properly and how to see whether a SELECT/UPDATE statement is using an index (`EXPLAIN QUERY PLAN`), e.g. `EXPLAIN QUERY PLAN SELECT name, grade FROM students WHERE grade = 'A'`. I'd better have created an index on `grade` for that table.
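A quick sketch of what that check looks like from Python (the table and index names are made up for the demo): `EXPLAIN QUERY PLAN` reports a full-table scan until the index exists, after which the same query becomes an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, grade TEXT)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail"
    # column describing how each table is accessed.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name, grade FROM students WHERE grade = 'A'"
no_index = plan(query)
print(no_index)  # a full-table SCAN: there is no index to use yet

conn.execute("CREATE INDEX idx_students_grade ON students (grade)")
with_index = plan(query)
print(with_index)  # now a SEARCH ... USING INDEX idx_students_grade
```

The exact wording of the plan varies a bit between SQLite versions, but the SCAN-vs-SEARCH distinction is the thing to look for.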
Thanks a lot @DevMunchies. I will review the docs about indexes in the Django ORM.
I planned to use indexes, but with your advice it became a must-do.
Yes, very true. It should be simple: solve the problem first and spend as little as possible, depending on the area you are working in.
Impressive finding! At some point, I realized exactly the same and stopped doing all this crazy shit like CI/CD pipelines, Redis caching, and even Docker madness. It's all good, but for big players like Google where complexity is at an extreme.
Can I suggest a better approach, though, which will not require even a DigitalOcean server? Make a static pre-rendered website, put it on S3 or Firebase Hosting (free up to some limit), and route all orders to Airtable through a simple API call. A no-code solution, basically.
Perfect: no additional hassle, an extremely fast solution, and a lot of possibilities for custom analytics in the future at almost no cost at all.
We have a pretty robust "admin" backend for managing all the orders and how that integrates with the order confirmation page, driver availability, cookie flavor availability, business hours, etc. That impacts what the form shows on the frontend.
Airtable would be too simple for our volume now.
What's the general consensus on Heroku here? I love how simple it is, but I've heard it can get pretty expensive pretty quickly. Is that true?
What's expensive is a very big database.
Otherwise it's pretty cheap for a small project given the ease of operation. Free during dev, $7 for the smallest dyno once launched. Same with the DB, then $9 with all the backups/upgrades/forks... And no cost for bandwidth.
In a previous job we used it for a high-traffic website with a $4,000/month bill, and that was a no-brainer.
We used some paid add-ons, and paying through Heroku was handy. A lot of them have free tiers.
Thanks - I'm on the $7 dyno but already feeling pressured to get the next $25 one, and was worried it was going to escalate quickly (and I'm pre-revenue).
Really appreciate this look-see. I am in the midst of building a new productized service and keep reminding myself "simple MVP". That is my new mantra. Keep the wisdom coming please!
I came to the very same conclusion recently. I started my project by actually learning about AWS (which took over a month because I can only work on this in the evenings). I visualized my solution using a few AWS services. Then after a while I realized that my AWS infrastructure itself costs quite a lot of money even when unused, with no traffic (the NAT was $20 a month just for running and holding a public IP). That made me think that after I added more services (like Redis), the costs would probably skyrocket even if I kept everything in the lowest tiers, and that's certainly something that in the beginning would be just a waste of money.
I also decided to go with DigitalOcean and I think I'll have everything running on a cheap virtual box.
Learning AWS was definitely not a waste of time, as I am a professional developer and it is a very popular service, but I did lose a month on something that I can live without for a very long time.
Hey @DevMunchies! That's some great insight.
Looking back on it now, what do you think it was that led you to invest so much time in the complex build pipeline and microservices?
Many here feel that pull, and maybe understanding your reasoning back then could help people here who might be on the way to making the same mistake.