It's counterintuitive: we're in the cloud era, and the cloud is the default choice, yet it keeps getting more complex (Kubernetes, Docker...) and brings new fears like unexpectedly huge bills.
What if in this new era it was easier and cheaper than ever to bring back the old choice?
I have been thinking about that for a while, and after reading this tweet
and joining some conversations on Reddit where people run a business on a single managed server, I have come to the conclusion that, for the initial phases, it may be a good thing.
My logic is:
I know that there are also more complex cases, but I can't forget a post in which DHH said that with a single server (back then), they paid 4 salaries plus profit.
What do you think?
Hello, I joined specifically to comment on this.
About 15 years ago, I had a problem while working as a contract designer with a major publisher. So I built a solution. Next thing I know, I'd hired a programmer to build it and a system admin to show me the ropes of all this Unix stuff. I didn't do any coding, but I did become a sysadmin for the joy of it.
Here was my setup:
Here are the expenses incurred:
Income was just under $170,000/year.
Eventually, I found that I had so much extra overhead that I started a white-glove hosting service for select businesses. Over time, the publishing industry imploded and whole departments were laid off. The IT department replaced my system with a company-wide DAM at seven figures. Thankfully, my hosting company kept me going for several years until I moved on to other things.
Yes, I spent a lot of time learning how to manage the servers - time well spent on an education that pushed me in a totally different direction and earned me far more money than my old career ever could.
Now I'm getting back into projects and am awed and dumbfounded by how complex things have become. I understand the "cattle, not pets" thing, and my businesses don't need millions of users to be successful, so I may have a different model than others.
I tried learning about containers, the JAMstack, all kinds of things, but it's just not clicking for me. Why do I need GitHub and Netlify and ForestryIO just to publish a static page? Why create a complex container infrastructure before I have 1% load on my servers?
I've toyed with the idea of true bare metal, but it doesn't appeal when I can have redundant KVM images at all the big providers, ready at the push of a button. And if I do need the complex stuff? You can be sure I'm going to hire someone to help.
Agree - I still deploy most of my sites via FTP upload to VPSs or shared hosting. They can easily handle 500k-1M monthly users.
Perfectly valid choice. Just remember, no matter what you choose, you pay for it. Whether that is with your skills, your time, or your dollars.
The complexity you mentioned isn't specific to the cloud. You can have the same complexity on bare metal as well, especially if you want to nicely orchestrate containers. Nothing to do with hardware or the location of servers.
The idea of hosting your own bare metal sounds great, but there are a lot of cumbersome details involved - good internet, public IP addresses, hardware setup and usually a hefty upfront cost.
I think the reality is that something like a VPS in the cloud is just more accessible and cheaper, especially at the early stages. It eliminates most of the complexity while also reducing capital at risk. It brings a lot of other benefits as well - like easier on-demand scaling and allowing yourself to not care about maintenance.
If you're at certain scale already and the compute power you require doesn't fluctuate much, it might be a smart choice to get some servers set up locally, but I think this scenario is very specific and will not apply to the vast majority of people.
It's definitely NOT a "good thing" to use a single managed server to run your business.
You should categorically dismiss the advice of anyone who says that. They are a bonehead. Reddit is full of armchair anthropologists that don't have any idea what they're talking about.
And far be it from your provider's sales rep to tell you not to purchase one of the most expensive and cost-ineffective solutions they offer lol.
1.) It's a single point of failure. Even if you can recover from a catastrophic meltdown (with a proper disaster recovery plan), you're running a "business", and its integrity is shattered the moment your services go down.
2.) Managed servers are expensive, and truly not worth the money. For this reason I am most happy when someone is so naive that they order one from me, lolz.
3.) The hardware that you should be running is probably not affordably justifiable... See #1 above.
4.) It's not your machinery. Owning the equipment is practically free if you think about it, since you get to write off business equipment on your taxes over a three-year period - something your managed services provider (like me) is certainly taking advantage of.
So, you want to run your business on your own stuff? Well, you're on the right track so let's do a little math just to see how cost effective this actually is:
1.) Buy 3 of these 1u ProLiant servers (and one extra for spare parts, just in case). Subtotal = $1200, or about $1600 with the spare at roughly $400 a box. DL360s from two gens ago are generally super powerful AND affordable.
2.) Buy two fully managed, 48 port 1Gbps layer 3 Cisco switches. Subtotal <= $600
This gives you redundancy in case one fails completely - which is very uncommon, but I have seen it. "Everything redundant" should be your immutable policy.
3.) Lease a 42u rack with an XC, 1Gbps transit and 20A power - you can get one with a nice 1Gbps blended service for less than $800/mo. Ask the data center to provide a separate failover power source gratis and they will - it costs them nothing and they make a sale.
Now you have used up 7u of rack space for your business (don't forget the 2u that the redundant PDUs take up), and you have about 35u left for expansion (somewhat less in practice, but you get the gist).
When you purchase your switches you may wish to program them yourself and then ship them to the data center. When you purchase the servers, have them shipped directly to the data center with specific instructions for the NOC engineers as to precisely which switch ports to plug your servers into. You have iLO to do everything else remotely: downloading OSes and mounting virtual DVDs with the install media, configuring your RAID 10 arrays, doing whatever is needed in your BIOS, setting boot devices, etc.
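For what it's worth, the power and boot-order parts of that can be scripted too. Here's a minimal sketch that shells out to ipmitool - that's my assumption, not something required here; iLO speaks IPMI over LAN if you enable it, and its web UI does the same job (virtual media mounting still happens there). The address and credentials are placeholders:

```python
# Hedged sketch: poke a server's iLO/BMC remotely over IPMI.
# Assumes ipmitool is installed and IPMI-over-LAN is enabled on the iLO;
# the address and credentials below are placeholders, not anything from this thread.
import subprocess

ILO = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "change-me"]

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the iLO and return its output."""
    result = subprocess.run(["ipmitool", *ILO, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
ipmi("chassis", "bootdev", "cdrom")         # boot from the mounted install media next
ipmi("chassis", "power", "cycle")           # power cycle to kick off the install
```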
Install what you want. I like Xen, KVM, and VMware - two of which are free - but for the sake of simplicity I'll say install the VMware ESXi hypervisors, then your vSphere 7 management on your first VM, create a high-availability cluster out of the three ESXi servers, and then deploy a virtual machine running https://OpenMediaVault.org - install the iSCSI target plugin and create your $75,000 SAN (for free).
Whatever you wish to use for management gets migrated into the high-availability cluster once you've created it out of your first couple of VMs - OpenStack, OpenNebula, CloudStack, K8s, vCloud Director, or just vSphere 7, whichever free or paid solution(s) or combination thereof you decide upon. You can do all of this with KVM and Proxmox if you like.
Most data centers charge nothing to hot swap out failed hard drives or redundant, hot swappable power supplies. Fans do involve them sliding your server out while still in the rack and opening the case so there will be a small charge there, but certainly nothing compared to the exorbitant cost of renting managed servers.
When you want to expand and scale up just ship another server or servers to the data center and have them rack it up and plug it into the specific port in the switches that you designate for free. Add the new hardware to your cluster, then retire the old box and sell it on eBay.
When you want to retire hardware just replace them one by one if you like, and you can migrate the running virtual machinery around without downing any of those machines. When you purchase a full rack you are usually given unfettered access to it 24/7 if it's local to you.
Pro tip: purchase your rack and transit during the last three days of any calendar month - sales scum are scurrying about like rodents, willing to let things go at cost just to make their numbers and get their monthly sales bonuses. This is very true in this industry, so keep it in mind. lolz.
You should now be sporting about 384GB of RAM and somewhere in the neighborhood of half a petabyte of SAN storage. You can deploy hundreds of VPSes with a shitload of RAM, vCPUs, and disk space in each one. Don't think you're limited to a combined total of 384GB of RAM or the number of physical Xeon cores you have, either - most machines never use all of their resources, and certainly never all of them at the same time, so you can 'oversell' the resources, so to speak, and the hypervisors will manage and delegate out what is actually needed. A machine with 16GB of RAM certainly isn't using all of it most of the time, if ever; anything not currently in use can be used by another VM that does need it.
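To make that overselling point concrete, here's the back-of-the-napkin math as a tiny sketch - the 2:1 ratio and the 4GB-per-VPS size are my own illustrative assumptions, not figures from this comment:

```python
# Rough RAM oversubscription math - the ratio and VPS size are illustrative assumptions.
physical_ram_gb = 384      # 3 hosts x 128GB, as in the build above
oversub_ratio = 2.0        # assumed 2:1 oversubscription; pick what your workloads tolerate
ram_per_vps_gb = 4         # assumed size of a typical small VPS

sellable_ram_gb = physical_ram_gb * oversub_ratio
max_vpses = int(sellable_ram_gb // ram_per_vps_gb)
print(f"{sellable_ram_gb:.0f}GB sellable RAM -> roughly {max_vpses} x {ram_per_vps_gb}GB VPSes")
# prints: 768GB sellable RAM -> roughly 192 x 4GB VPSes
```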
Total cost? $1600 for the servers (we bought the extra ProLiant box with 128GB RAM and replacement drives) + $600 for the layer 3 switches = $2200 up front. Add the first month's $800 rack and you're at $3000 for your first month of operation. The rack runs $9600/year, so with the $2200 initial investment your first year comes to $11,800, and only $9600 every year after that.
If you sell only 16 self-, semi-, or fully managed VPSes at $50/mo you have reached break-even, and it's hard to sell only that many when you easily have the capacity for a hundred more. Need to add another server to the cluster to scale up? Your cost is about $400 shipped, delivered, racked and deployed - all while sitting at your desk at home in your underwear, eating Hot Pockets and drinking Diet Dew.
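Putting those figures in one place (nothing here beyond the numbers already quoted above):

```python
# Cost and break-even figures pulled straight from the build above.
servers = 1600         # 3 production ProLiants + 1 spare, roughly $400 each
switches = 600         # two 48-port layer 3 Cisco switches
rack_monthly = 800     # 42u rack, 1Gbps blended transit, 20A power

upfront = servers + switches              # $2200
first_month = upfront + rack_monthly      # $3000
first_year = upfront + 12 * rack_monthly  # $11,800
ongoing_year = 12 * rack_monthly          # $9600

vps_price = 50
break_even_vpses = rack_monthly / vps_price   # 16 VPSes/mo cover the rack
print(first_month, first_year, ongoing_year, break_even_vpses)
```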
So, you ask, "What should I do with the other three quarters of rack space just sitting there vacant and doing nothing?"
First, remember that you lose a lot when you only purchase a half rack - not just in terms of price, but you usually only get a 10A circuit, have to pay for a redundant 10A circuit you'll hopefully never use, and you're treated with less than VIP status. If the data center is local, you'll also generally have to make appointments to be let in to work on your stuff because the other half of the rack is rented by someone else... So expanding can mean downtime, because your new rack might be on an entirely different floor in buildings like One Wilshire, where real estate is at a premium.
So... I dunno, rent out semi-managed colo space to your friends (meaning they don't have physical access to their own machines). Have them ship their machines to your company name at the data center (make sure you get this agreement from your provider - they're more than happy to do so when making the deal with you, not so much later). Oh, and give them a 100Mbps port speed (you can easily set this on your switches). If they want a gigabit, charge them what you're being charged and get a discount on your next XC.
If they have, say, a disk failure, they contact you and you tell your provider's NOC team to hot swap the drive for free.
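If you'd rather script that 100Mbps port cap than click around on the switch, here's a minimal sketch assuming a Cisco IOS switch reachable over SSH and the netmiko library (my assumption, not something mentioned above); the host, credentials, and interface name are placeholders:

```python
# Hedged sketch: hard-set a subleased customer's switch port to 100Mbps.
# Assumes Cisco IOS over SSH and the netmiko library; all values below are placeholders.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",
    "host": "10.0.0.2",
    "username": "admin",
    "password": "change-me",
}

with ConnectHandler(**switch) as conn:
    conn.send_config_set([
        "interface GigabitEthernet1/0/12",  # the customer's port
        "description colo-customer-1",
        "speed 100",                        # cap the port at 100Mbps
        "duplex full",
    ])
    conn.save_config()                      # write running config to startup
```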
Or just enjoy the savings as you gradually expand - it's really up to you. Blanking panels are generally provided free by the facility to keep the hot aisles hot and the cold aisles cold, but make sure they agree to provide you with those.
You can also rent out dedicated physical servers - semi or fully managed, so as not to have to give strangers access to your rack. Charge your customer about 15% of the server's cost per month so it's paid off within a year, give them a 100Mbps dedicated port speed, and call it a day. Your customer can't just bundle it up and move it to another provider like they can a VM, so in my experience, by the time they're ready for something else they've usually paid about 4x what the box originally cost you.
Okay, so I think the only thing I didn't actually mention is related to the PDUs. Some facilities will provide them for you gratis and others will require you to provide your own. If the latter is the case, ask them about accommodating a vertical type so you don't take up otherwise valuable rack space. And make sure your contract specifies what an additional 20A circuit will cost you later when you need it - I've seen them jack the price for customers who didn't get this in their contracts.
Okay now... Last but not least... Do you want to save even more money? You intimated that you might be able to bring everything on prem and just go with a 300Mbps port speed. If that's the case, then by all means do it! Same hardware, but you can buy an enclosed full-height rack for less than $800, and even less for one as high as an end table. That's a one-time investment in hardware too, not an MRC.
In either scenario cabling costs are negligible, but if you go on prem you really stand to save a lot of money, and I generally recommend this to customers who can benefit from it.
The one thing I've purposely left out of the discussion is IP space. You should know your needs beforehand, be certain about them, and be able to articulate why - then negotiate it (last three days of the month, remember?) as a deal-breaker, last, after they give you everything else you want. On Prem will be different - mostly set costs that are rather inflexible, but by the same token, you're saving soooo much.
Netblocks of IP address space can be all over the place, and I have no idea what forward-facing services you'll need.
I wouldn't deviate from the three-server minimum for your cluster, however, as the difference in cost between two and three hypervisor boxes is negligible when going On Prem for your infra, the performance gains are significant, and if one machine has a complete meltdown in a three-machine cluster you're not left operating in a degraded environment.
On Prem, I would certainly consider an old workstation with an extra NIC running https://OPNsense.org at your network edge. Even a Raspberry Pi could suffice nicely.
I hope that helps!
⛵
What a comment! Found this site in my Google feed, and I dig the discussion. This is a lot of what I do for my own company, but I know it's rare for most people to know how to build and manage the whole stack like this. For me, I like the hands-on and ownership aspect. Luckily my colo has open access and is local, so I can come and go as needed. They also offer quarter racks, which I have packed to the gills. Thanks for the detailed write-up; it's nice to see I'm not alone and also to see differing product recommendations. I run super hard-core data-crunching processes, though, so no room to sublease. I've been a fan of buying previous-gen Dell servers off eBay or Orange Computers. Cisco for switching, and Ubiquiti for routing. Using a StarWind SAN, but I also use Windows Server for certain pieces and was shocked at how well and quickly DFS works after spending a lot of time setting up the SAN. Lots of Windows failover stuff. Most of the backend is Ubuntu Server. HA database on Galera Cluster. Redis cluster. Plus whatever else my products need, always done as either HA or failover. Anyways, thanks for the detailed comment!
Thank you very much @Justin_d3mo, I'm very pleased to know that the info I offered served to validate what many people actually do in a bubble, wondering if they're the exception to the rule.
Yes, PowerEdge servers are just as valid and solid, and I was amused that you not only picked up on, but further validated, the "two gens back" principle of maximizing RoI while still running contemporary hardware in production.
Once again, thanks and you have a great day!
⛵
For most of us the best thing is probably a VPS from a provider like DigitalOcean or Amazon Lightsail.
I never left it. Not really bare metal, but VPSs - really cheap ones ($5 to $20/mo) for each app that I deploy. If it needs to scale I just scale vertically (increase the server specs), and then horizontally once it hits the vertical limit. I don't go with bare metal because I don't want to bother with all the security and virtualization stuff, so I just go with a VPS.
I have been working on a local cloud using VMware ESXi and remote cloud solutions using the same platform. The costs for bare-metal are indeed very low, including for rented hardware.
👍 I specifically second your point on »Less layers and services to rely on« for those of us who don't really need all that fine-grained cloud flexibility.
If you can save a lot of $$$ by switching to bare metal - it makes sense. Otherwise, it's premature optimization.