
Buy your own servers

This is a long story. I initially started writing a tweet about this, then realized it deserved to be a much longer post. It's a case for buying your own servers and using commercial data centers instead of the public cloud (Azure, Google Cloud, Digital Ocean, AWS).

Before turning to buying my own servers, I was squarely in the public cloud camp. Instances are cheap to spin up, available across the globe, and easy to scale almost infinitely. This is all true until the costs start to add up, and as an indie hacker without any external investors, those costs are very real.

When I started out I used Google's App Engine and it worked great. I was able to spin up servers that scaled elastically and self-healed. I wouldn't need to worry about getting popular and blowing up (even though I was in a high-touch B2B market, facepalm). It also allowed me to focus on building the software and not worry about server configuration and all that. In this sense, the PaaS that Google was offering was great. Another thing I got was Google Cloud's high-availability Postgres DB. This ensured that the DB was resilient and, being Google Cloud, could scale with us. I could also schedule daily backups and all that. For all this, I barely paid anything at the start. My bills started off at zero (thanks to Google Cloud credits) and slowly grew to a baseline of around $150 a month - which I was more than happy to pay because the business was growing.

Things then took a turn when my front-end servers couldn't respond to traffic fast enough. I started optimizing DB queries and had to start caching (I had delayed setting up caches because the architecture was changing rapidly and I didn't want it to slow me down). Being a distributed system, the cache had to live on a separate machine. I opted for Redis Cloud servers on Google Cloud and encrypted the traffic between my cache and the servers to make sure no user data leaked. With all these efforts, response times greatly improved. Unfortunately, before long, the servers were OOMing a lot and I had to scale up to the next tier. Then the next tier, etc. (See https://cloud.google.com/appengine/docs/standard#instance_classes) By the end of it all, I was sending Google checks that made the $5 or so I started out with look like child's play. The bill was split between growing storage costs, ridiculous compute costs and even worse DB costs.
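
As a rough illustration, the encrypted app-to-cache hookup looks something like the Python sketch below (using redis-py with TLS). The hostname, port, password and certificate path are placeholders, not my actual setup.

```python
# Minimal sketch: connect to a managed Redis instance over TLS so cache
# traffic between the app servers and the cache is encrypted in transit.
# Host, port, password and cert path are placeholders.
import redis

cache = redis.Redis(
    host="redis.example.internal",   # hypothetical managed Redis endpoint
    port=6380,
    password="change-me",
    ssl=True,                        # encrypt traffic in transit
    ssl_ca_certs="/etc/ssl/certs/redis-ca.pem",
)

def get_user_profile(user_id, load_from_db):
    """Read-through cache: try Redis first, fall back to the DB."""
    key = f"user:{user_id}:profile"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = load_from_db(user_id)
    cache.set(key, value, ex=300)    # keep for 5 minutes
    return value
```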

This is the public cloud's bait and switch model. It's great for small loads (Thanks Digital Ocean for running some of my instances for almost 10 years!) but they get you once you're too invested to leave.

It didn't take long for me to realize that I was paying Google more every month than my ThinkPad cost, for less than a tenth of the resources. I get that there's data replication, internet costs and all that, but still - Moore's law, anyone?

I started looking at data center costs and quickly realized I was drinking the marketing Kool-Aid of the giant software companies. I had already written off hosting my own servers before even counting the cost. Sometimes, tech can be an echo-chamber where alternatives are shut down before even being considered. Marketing works.

I ended up buying a used Dell PowerEdge server for about $700 with 120GB of RAM, 32 cores (2 sockets, 16 cores each), redundant power supplies, redundant network ports, etc. I also bought a number of SAS drives. (I didn't even know SAS drives were a thing when I started down this path.)

As for where I placed it, the problem with many data centers is that you need to buy an entire cabinet (space is sold in rack units, or Us). Luckily, I managed to find a reseller who buys space in bulk and sells it to smaller customers at a profit. For 2Us, two-way power redundancy, network redundancy, proximity to a major city's main internet exchange, and a /28 IP block, I only pay $200 a month! The data center is well secured and even requires fingerprint access. I migrated all my servers (I run my own analytics, schedulers, DBs, front end and cache in VMs on the same machine) and spend an additional $10 on Digital Ocean for external Zabbix monitoring. I still keep all my files and backups on Google Cloud Storage because it's actually cheaper. This is my cover for a worst-case scenario. I should be able to get all services back up in a few hours, which is acceptable per my SLA. Maybe someday I'll dockerize everything.
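
The worst-case cover is essentially a nightly job that ships dumps to Google Cloud Storage. A minimal sketch of that kind of job is below; the bucket name and file paths are made up for illustration.

```python
# Rough sketch: push a nightly database dump to Google Cloud Storage as
# off-site, worst-case-scenario cover. Bucket and paths are placeholders.
from datetime import date
from google.cloud import storage

BUCKET = "example-offsite-backups"        # hypothetical bucket name
LOCAL_DUMP = "/var/backups/db-latest.dump"

def upload_nightly_backup():
    client = storage.Client()             # uses application default credentials
    bucket = client.bucket(BUCKET)
    blob = bucket.blob(f"postgres/{date.today().isoformat()}.dump")
    blob.upload_from_filename(LOCAL_DUMP)
    print(f"Uploaded {LOCAL_DUMP} to gs://{BUCKET}/{blob.name}")

if __name__ == "__main__":
    upload_nightly_backup()
```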

The entire migration took two weeks and involved learning about hypervisors, server monitoring tools and maintaining multiple replicas. Services that were compute-only were really easy to migrate and mostly just involved DNS changes. The DB migration was the hardest part.
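
For a sense of what the DB portion involves, here's a rough dump-and-restore sketch using standard Postgres tools (illustrative only - hostnames, users and paths are placeholders, not my exact commands).

```python
# Illustrative Postgres dump-and-restore between hosts, driven from Python.
# Hostnames, user names, DB names and paths are placeholders.
import subprocess

OLD = {"host": "old-cloud-db.example.com", "user": "app", "db": "appdb"}
NEW = {"host": "10.0.0.5", "user": "app", "db": "appdb"}
DUMP_FILE = "/tmp/appdb.dump"

# Dump the old database in Postgres custom format.
subprocess.run(
    ["pg_dump", "-Fc", "-h", OLD["host"], "-U", OLD["user"],
     "-f", DUMP_FILE, OLD["db"]],
    check=True,
)

# Restore into the new database on the colocated server.
subprocess.run(
    ["pg_restore", "--clean", "--if-exists", "-h", NEW["host"],
     "-U", NEW["user"], "-d", NEW["db"], DUMP_FILE],
    check=True,
)
```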

I've been running on the new hardware since November 2019 and I've only had one failed drive (I'm writing this towards the end of April 2020). Let's face it, there's nothing wrong with a 5-year-old server. Server-grade processors from 5 years ago work just fine. Error-correcting RAM too. All I did was get new drives. Unfortunately one of the new drives failed within the first two weeks of installation, but the seller sent a replacement. Thankfully, I was running a RAID 10 setup so everything kept humming just fine. Swapping drives on server hardware is ridiculously simple and doesn't even require a restart! If you're considering this, get actual server-grade hardware for your workload. I've neither regretted buying a used machine nor found it hard to run my own infrastructure. The open-source community has really matured and there are a lot of good alternatives out there for hypervisors and even container management. I use Proxmox and it's been great.

Surprisingly, running my own hardware has actually been more reliable than the cloud. I had about three Google Cloud outages (us-east1) in 2019 alone. At first, all I could tell my customers was that there was nothing I could do. I ended up setting up a mirror in Google's us-central region for use during outages. There were also days with a ton of network congestion on Google Cloud, and customers would complain. Since I moved, there have been zero such complaints. This is partly because all services are co-located within the same machine, but also because most of my customers are in the same geographic region where the server is hosted.

Public clouds are great to start projects with, but beyond a certain size, I believe the cost of setting up your own infrastructure is totally worth it. Hardware costs have really dropped and there are a ton of used servers being sold dirt cheap once companies decommission them. New servers aren't that expensive either. There's a high initial cost to configuring everything though, and YMMV depending on how much you know about server administration and security. TBH, the public cloud comes with a high initial cognitive cost too.

If we manage to grow further and increase our presence in other states, I'm looking forward to setting up a second server, or a series of servers, mirroring each other.

I hope my experience helps inspire anyone at the crossroads. As always, think for yourself. My solution may not be right for your company.

Otherwise, stay safe out there and pray we don't die from this virus. If you read to the end, you're a bona fide hero :)

  1. 4

    Really good read. But I don't think it applies to everyone. There's no silver bullet. I've been running services on VMs, dedicated servers, managing server fleets using Mesos, self-hosted Kubernetes, GKE, App Engine, Cloud Run - the list goes on. And if I've learned anything over the years about running workloads, it would be: "It depends." :)

    If your project is small - just use whatever you're comfortable with.
    If it grows and starts to cause problems (too expensive, too slow) - that's the moment to re-evaluate your choices and make decisions based on your needs.

    1. 1

      Thanks. Couldn't agree more. Each situation is different and calls for different approaches. I just think that people too readily write off buying servers and I wanted to show that it's not too bad sometimes.

  2. 4

    We had a similar, painful lesson-learned experience.

    Beware of managed services between multiple clouds. Make sure you google Egress Data Out Costs -- the costs are scary if you go into your devops willy-nilly.

    If you go with clouds, choose them wisely. The estimated "hosting" prices you see are not the prices you get if you're moving data in/out between services (DBs, servers).

    AWS/GCP... it doesn't matter -- all the big players have the same pricing model, and they'll make you pay heavily as your usage grows, because they want you to keep/move data within their own clouds.

    So if you're spinning up new servers and choosing new managed DBs (e.g., Mongo, Elasticsearch, Redis) -- I'd highly recommend keeping them within the same cloud family. One less egress headache to worry about.

    1. 1

      I noticed this too. There's a huge out-of-family penalty. With some services, especially storage, you can actually get around this since you mostly just send a URL to the client.
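
      To illustrate the URL trick: a hedged sketch of handing the client a short-lived signed URL for a Google Cloud Storage object, so the bytes go straight from storage to the client instead of bouncing through servers in another cloud. The bucket and object names are placeholders.

      ```python
      # Sketch: return a short-lived signed URL so the client downloads
      # directly from Google Cloud Storage rather than through your server.
      # Bucket and object names are placeholders.
      from datetime import timedelta
      from google.cloud import storage

      def signed_download_url(bucket_name, object_name, minutes=15):
          client = storage.Client()
          blob = client.bucket(bucket_name).blob(object_name)
          return blob.generate_signed_url(
              version="v4",
              expiration=timedelta(minutes=minutes),
              method="GET",
          )

      # e.g. hand this URL back to the browser instead of proxying the bytes:
      print(signed_download_url("example-user-uploads", "reports/2020-04.pdf"))
      ```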

  3. 3

    I thought about going down this path for a while. The main issue for me was that I'm not going to host stuff at home, due to reliability issues, and colocation is quite pricey. At least pricey enough that just renting a dedicated server seems cheaper than the colocation fees combined with the server write-off, and renting gives more flexibility.

  4. 2

    That's why I developed my open-source PaaS, coded.sh, which can be installed on any server running Ubuntu. I use it on DO and Vultr servers.

  5. 2

    I partially agree, partially disagree. True, that is their business model. But when you start to manage your own databases, well, take the salary of a DevOps engineer and calculate how much time it costs you to do upgrades, growing disks, monitoring memory etc. Think of their PaaS as your first employee - they do all that maintenance for you. For a price.

    And what happens when your physical server goes down? The PSU burns down? The network connectivity drops? The power fails?

    1. 3

      Modern physical servers have a ton of redundancy built-in. If the PSU burns, there's a second one to take over. If network connectivity drops, there's another card that takes over.

      Depending on the class of the data center, most have multiple independent incoming power lines and lots of power redundancies with separate batteries, generators and power lines. A lot of big Fortune 100 companies host their servers in such colocation centers. Learning about all this was really an eye opener for me.

      I agree that they have to charge for the configuration they've done and all the server upkeep. I just think that it's a bit too much for a bootstrapped indie hacker. If I had lots of investor money and employees, it'd be a different story.

  6. 2

    Thanks for sharing your experience, that's an angle we don't often hear about at this scale. I'm curious what you will say in 1-3 years and if you are happy with that decision.

    It's also amazing how different the server cost can be relative to profits. Usually in high-touch B2B the margins are pretty wide so you must be processing some intense volume.

    1. 2

      I actually had to give some big clients free services to work with them to develop what I have now (still developing) but the other paying clients pay enough to make the whole venture ramen profitable. Super grateful for what it's become but there's still a ton of work to do. AND I've learned that I'm really bad at selling or even finding the right channels to my customers.

      1. 2

        Congrats on the ramen profitability, that's awesome! Selling is hard but I'm sure you'll figure it out.

  7. 2

    Great post.

    How much time are you spending on maintenance?
    How quickly can you scale up and down if needed?

    1. 2

      Thanks!

      As far as maintenance goes, almost none. On the software side it's just like running an additional VM. The hardware has been very stable. The server's firmware has alerting built in, so it's pretty easy to set up monitoring and get alerts when things are looking bad, e.g. high temperature, someone opening the chassis, hard disk failure, etc. The server market has really matured over the years. I've only had the single disk failure I described in the article.

      Scaling up shouldn't be a problem. I already have a ton of extra capacity but it's easy to add NAS for storage or another machine for compute. Most hypervisors allow for multiple server nodes, data-centers, HA setups, fail-over etc. and it's easy to move resources from one node to another or clone etc.

      If it's ever too bad, it's pretty easy to spin up a bunch of public VMs and create a hybrid cloud. This is one of the reasons I still use Google Cloud for storage.

  8. 1

    Cool story! Don't forget to monitor your hardware ;). The Prometheus node exporter is good for that.

    Glad to hear you did the math and decided to take the leap. I love hardware, I just don't get to play with it anymore. I worked in a data center research department for years and years, so many shiny toys!!!

    Feel free to reach out if you ever need any advice on the bare metal datacenter world.

    1. 2

      Thanks! I'm monitoring with Zabbix. Working great so far. I'm actually loving this hardware stuff a lot more than I thought I would :)

      1. 1

        Hardware is surprisingly fun!

  9. 1

    This is exactly why I started Dogger - a cheap Docker-based AWS cloud host with fixed prices, meant for Indie Hackers. It's also super easy to move away from once you've scaled.

    https://dogger.io

  10. 1

    Who was the data center / reseller you decided to go with?

  11. 1

    "Sometimes, tech can be an echo-chamber where alternatives are shut down before even being considered."

    I couldn't agree more. And strangely, the echo chamber usually pushes devs towards making their lives more difficult rather than easier.

    On your main topic of cloud services, I always tend to think of tradeoffs between scalability of availability vs scalability of affordability.

  12. 1

    That's a great post, and it confirms what I've heard before. Everybody speaks about going to the cloud nowadays, and it makes a lot of sense most of the time, but being aware of the drawbacks is very important too.

  13. 1

    You bought that server very cheap, where did you get it from?

    1. 3

      eBay. I looked for a local seller and offered to pick it up for a discount. A lot of big companies decommission a ton of these every year. I think you could build a really amazing business just reselling these machines.

    2. 3

      eBay is a surprisingly good place to get them. There are companies on there that sell fully customizable PowerEdge and other servers for a pretty cheap price.

      I've personally ordered a couple servers and some hardware from OrangeComputers over the last few years, but there's probably a good bit of options out there to explore.
