I'm wondering where all of you deploy your apps and what database hosting you are using.
In the past, I used Heroku to deploy and mLab for MongoDB hosting.
I don't like that I have to pay $7 for each app on Heroku, and mLab charges $15 for database hosting.
So I created my own server on DO to handle this, but I wouldn't exactly call it production-ready.
AWS seems like a logical place but all these cloud platforms scare me because I feel like I could be left with a gigantic bill if I'm not careful.
What is everyone using?
If you are trying to optimize for simplicity and cost at the expense of features, I would suggest using DigitalOcean.
If you are trying to optimize for flexibility, I would choose AWS -- most of their services provide a free tier, which is great for testing out new services/features that they offer -- and nice for a hobbyist/bootstrapped budget. I also feel like AWS has the best corpus of documentation and Stack Overflow questions, which are invaluable for helping you dig yourself out of a hole when you make mistakes or encounter unexpected issues.
In my experience, Google Cloud Platform is super nice and super new, and super fast -- but there are several headaches that come with that. The most annoying thing is the lack of public documentation and inability to search for unfamiliar issues, since it is still relatively new.
I have also worked with massive deployments (petabytes of data and bandwidth) in both AWS and GCP, and in my experience AWS is by far more reliable. I've experienced many network and service outages in GCP, but the hardware is definitely faster/nicer.
I'm a Mac programmer who (besides Objective-C/Swift) is fluent in Python and C++. I haven't done distributed programming, but I have reasonable experience with systems programming.
How can a programmer like me get started with something like AWS and DevOps in general? I have tried dipping my toe in many times but I tend to feel lost. Most of the tutorials online are for non-programmers or newcomers. I mean, I am a newcomer too, but I am familiar with Unix and its innards in general.
Is there a book/website/what-have-you that helps people familiar with OS/systems programming get into DevOps?
I know this is not the right place to ask this question, but I really feel lost. :)
(I should mention, I don't have much experience with networking, other than using sockets here and there. The only direction I can think of is to pick up Richard Stevens's TCP/IP books, which feels like a bit of overkill just to grok AWS/architecture concepts.)
I was in a situation similar to yours a few years ago. If you google for information on “DevOps” you’re probably going to find a ton of stuff on continuous integration and deployment (CI/CD), operating services in production at scale, security automation, and lots of other topics which are crucial for engineers in a well funded, high productivity company. I am in that target audience, and while those kinds of docs are great, unless your goal is to work as a DevOps Engineer or Site Reliability Engineer (separate things, but they go to some of the same conferences) they’re probably not what you’re after.
If you have a specific task in mind (set up a web server to use SSL, install and configure Redis) then I’ve found the smaller cloud providers (Digital Ocean, Vultr, Linode) to have great docs which are generally applicable. “<topic> for sysadmins” sometimes works pretty well, as a lot of what DevOps seems to be about is system administration at scale.
Digital Ocean seems to always have docs around whatever I am trying to do.
Thank you, I will try doing this.
Hey @bibhas I've been thinking about putting together a site with some tutorials and other resources for getting started with DevOps in an accessible way geared towards the audience you're describing. There are a ton of scattered resources but most are attached to specific products or consulting firms. That can lead to a bunch of very opinionated material that isn't always best for someone looking to learn concepts and how to string the technologies together effectively. What do you think? Would that be something you'd use and potentially pay for in the future? I might put a separate post together to look for wider community feedback on IH as well!
Hi Victor! Yes, I'd pay.
Cool! I've already got ideas about some good topics to cover. Gonna put together a post to get more community input. In the meantime please let me know if you hear of a specific DevOps topic that you'd be interested in learning more about.
Ok, will do. Let me think.
I would go through the Docker getting started guide and learn how to build and run your applications in containers. Then it's really easy to build images based off your latest GitHub repo and push the code to production.
This sounds like a great idea. I'd still need to set up EC2 with Docker myself, isn't that right?
If you use AWS Fargate you don't have to manage the underlying EC2 instances, just the networking in the VPC and what's running on the containers. It has a wizard which will set up most of the infrastructure for a simple configuration via CloudFormation for you.
Yeah. I’m using DO currently but same thing.
As long as Docker is installed on the remote machine, you:
- build your image locally,
- push it to your Docker registry,
- SSH into your remote machine,
- pull the Docker image down to the remote machine,
- stop any currently running containers and run the newly pulled image.
That can all be moved into a script that you run after you push code to GitHub (rough sketch below), or you can use something like Jenkins to do it when your GitHub repo changes.
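For what it's worth, here's roughly what such a script could look like (a Node/TypeScript sketch shelling out to the CLI; the image name, registry, remote host, and port mapping are all placeholders, not anything from a real setup):

```typescript
// deploy.ts -- sketch of the build -> push -> pull -> restart cycle described above.
// Assumes Docker is installed locally and on the remote host, and that SSH key auth
// is already set up. All names below (registry.example.com/myapp, user@myserver,
// port 3000) are placeholders.
import { execSync } from "child_process";

const IMAGE = "registry.example.com/myapp:latest";
const REMOTE = "user@myserver";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" }); // stream output, throw on non-zero exit
}

// 1. Build the image locally
run(`docker build -t ${IMAGE} .`);

// 2. Push it to your Docker registry
run(`docker push ${IMAGE}`);

// 3-5. On the remote machine: pull the new image, stop/remove the old container,
//      and start a fresh one from the newly pulled image.
run(
  `ssh ${REMOTE} "docker pull ${IMAGE} && ` +
    `(docker stop myapp || true) && (docker rm myapp || true) && ` +
    `docker run -d --name myapp -p 80:3000 ${IMAGE}"`
);
```

Hook that into a post-push step or a Jenkins job and the whole flow is automated.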
Firebase
Firebase is fantastic. So inexpensive, so easy to use, and provides so much.
I'm trying Firebase too. You get a lot of integrated services. Just transactional emails are missing.
What do you think about / have you used Cloud Functions to run scheduled tasks, other logic that needs to be server side, etc.?
I am not using scheduled tasks, but I have payments, logs, and user analytics details on Firebase Functions (for https://www.watermark.ink). Firebase Functions are easy to write and run.
For scheduled tasks you can even have external triggers to functions through a specific custom URL.
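In case it's useful, this is roughly what such an HTTP-triggered function could look like (a TypeScript sketch with the firebase-functions SDK; the CRON_SECRET guard and the cleanup logic are hypothetical, and you'd still need an external cron service to call the URL on a schedule):

```typescript
// functions/src/index.ts -- sketch of an HTTP-triggered "scheduled" task.
// An external cron service would hit this function's URL on a schedule.
// The CRON_SECRET check is a hypothetical guard, not part of the Firebase API.
import * as functions from "firebase-functions";

export const nightlyCleanup = functions.https.onRequest(async (req, res) => {
  // Reject calls that don't carry the shared secret
  if (req.query.secret !== process.env.CRON_SECRET) {
    res.status(403).send("Forbidden");
    return;
  }

  // ...do the actual scheduled work here, e.g. prune old logs or expire sessions...

  res.status(200).send("ok");
});
```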
Haven’t tried it
I've gone through a couple of setups in the past. When I first started out, I had a single VPS at Linode and ran multiple sites from there on the one database. I learned a lot about servers through that period. Then I moved to different servers for each project for better reliability and separation (usually with DigitalOcean).
Now I use my own service, Codemason. It's like Heroku, but you can bring your own servers and you don't have to pay $7 an app.
The deployment experience is very similar to Heroku (git push, automatic build) and you can add a database with a click. If you'd like to try it out, shoot me an email at ben(at)codemason.io and I'll set you up with a nice long free period because you're a fellow IndieHacker :)
So looking into it a bit more deeply, given the containers that are running, are you orchestrating the containers with Kubernetes under the hood or something else? Additionally, I haven't created my service just yet, but does it support environment variables in the web UI?
Additionally, I feel kind of off deploying my database to a container, just for persistence reasons. When you add services like Postgres or Redis, are they run in containers or are those dependencies actually installed onto the server itself?
Lemme know if there's a better place to ask these questions. :)
EDIT: Finally stopped looking via my phone and got on my computer to check it out, and saw that it does support environment variables. This is so ace and I'm itching to give this a whirl.
Really appreciate that, thank you :)
I go into a bit more detail about this in the docs but all the orchestration is handled by Rancher. Codemason wraps around Rancher which does a lot of the heavy lifting when it comes to orchestrating Docker containers.
I see you've updated your comment but yep, you can manage environment variables through the UI
We're big believers in containerisation (particularly Docker). Basically everything that Codemason runs on your server is a container, and since everything is containerised, it cuts down on the complexity of running things on other people's servers for them.
These days a database running inside a container performs on par with one running on the host [1], and is used in Uber's sizeable database system [2].
For databases/services requiring persistence on Codemason, simply schedule them onto a specific server (Configuration > Scheduling) and mount a volume so your data is persisted.
Alternatively you could do your database as an external service and connect to it (e.g. with one of DO's one-click apps)
[1] https://mysqlserverteam.com/mysql-with-docker-performance-characteristics/
[2] https://eng.uber.com/dockerizing-mysql/
This looks really awesome and I'm going to take a look at it for some projects of mine. Great work!
I think that AWS is a logical choice if you are looking to build a production-ready product and want full control. They do make it tricky to get an overall view of everything you have running, since there's no single dashboard of all your running instances in every category, but if you limit yourself to one region and don't turn on auto scaling you can be reasonably sure that you won't end up with a massive bill. They also show you the estimated bill for the month.
If you are looking for a unified solution that gives you a DB and a way to run backend code without managing servers, you should check out our product https://base.run, which we designed for devs looking to build an MVP quickly that is production ready. One great thing that you get for free in Clay Base is a spreadsheet-like UI that you and others on your team can use to interact with the data in the DB without building an internal admin panel. It's free for our early users and we make it easy to migrate to any cloud provider if you need to in the future.
Netlify has an amazing free tier.
And they're adding new services all the time. They recently opened up the beta for Functions. It's kinda limited right now, especially the 10s execution time limit, but it's great having it rolled in.
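For anyone curious, a Netlify Function is essentially a Lambda-style handler. Here's a minimal sketch (it uses the @netlify/functions TypeScript helper, which may postdate the beta mentioned above; the greeting logic is just a placeholder, and whatever you put in the body has to finish within the execution time limit):

```typescript
// netlify/functions/hello.ts -- sketch of a minimal Netlify Function.
// Deployed functions are served under /.netlify/functions/<name>.
import { Handler } from "@netlify/functions";

export const handler: Handler = async (event) => {
  // Placeholder logic; keep it short enough to stay under the execution limit.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```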
Try putting Dokku on your DO server. It is configured in a very similar way to Heroku.
This is what I use for my own projects. (I actually moved to vultr from DO, but both are fine in my experience).
This is a nice way to go when you have a lot of personal/side projects you work on.
I do not know how stable it is for production use, but it works nicely for me when I quickly want to deploy something (I would use Heroku normally, but the fact that a free dyno stops running after half an hour is sometimes a problem, and paying $7 for everything I want to test out is maybe too much for some).
I wrote about how to set up Dokku on Linode here (https://deployo.me/blog/install-and-manage-dokku-on-linode/).
Disclaimer: I built Deployo but you can of course do the same steps without it.
I've tried deploying React or Vue.js apps on https://surge.sh with https://www.appbase.io/ as the datastore. Great for static sites.
https://medium.freecodecamp.org/surge-vs-github-pages-deploying-a-create-react-app-project-c0ecbf317089
Microsoft Azure, because I know the language and I found it easier to set up than AWS. Admittedly, it's probably too expensive.
I knew I couldn’t be the only one. So rare to hear azure mentioned in this world. But it is an obvious choice for the .net stack.
I agree! AWS is all the rage but I couldn't figure out how to deploy a scraper job on it (is it even possible?). It was easy-peasy with .NET & azure!
Same here. My stack is .NET core/sql server so it's a good fit. I think one of the biggest benefits is the deployment slots and how easy it is to do dev->staging->production deployments without much risk. With the lowest tier standard pricing model I think I pay around $65 for app, database, and documents server and it operates super smooth. More expensive than some of the hosting solutions here but pretty approachable for a MS stack that can scale to enterprise levels.
True! I'm creating a crypto API that I hope can remain on the free tier. I've got a simple web app (containing a .net core web API project) and I'm using CosmosDB (free for 12 months). So far so good, the only downside is that CosmosDB seems quite slow. I have a job that takes 12 seconds, while it only takes 4 seconds on a simple PHP stack.
Heroku is the bill I'm extremely happy paying every month. Never having to think or worry about dev ops is incredibly important for a small team.
Same! If you are making something that generates revenue, paying the extra cost for Heroku is well worth it.
Same. The free plan is great to get something up. And $7 to avoid any dev ops is really worth it. Especially for product validation.
You can set billing alarms in AWS... say, if it goes over $500/month, it will send you an alarm.
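If you'd rather script it than click through the console, creating one could look roughly like this (a sketch using the AWS SDK for JavaScript; the $500 threshold, account ID, and SNS topic are placeholders, and billing metrics only exist in us-east-1 with billing alerts enabled on the account):

```typescript
// billing-alarm.ts -- sketch of a CloudWatch alarm on the estimated monthly bill.
// The SNS topic ARN below is a placeholder; subscribe your email to that topic
// so the alarm actually reaches you.
import { CloudWatch } from "aws-sdk";

// AWS publishes billing metrics only in us-east-1
const cloudwatch = new CloudWatch({ region: "us-east-1" });

cloudwatch
  .putMetricAlarm({
    AlarmName: "estimated-charges-over-500",
    Namespace: "AWS/Billing",
    MetricName: "EstimatedCharges",
    Dimensions: [{ Name: "Currency", Value: "USD" }],
    Statistic: "Maximum",
    Period: 21600, // evaluate every 6 hours
    EvaluationPeriods: 1,
    Threshold: 500,
    ComparisonOperator: "GreaterThanThreshold",
    AlarmActions: ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
  })
  .promise()
  .then(() => console.log("Billing alarm created"))
  .catch((err) => console.error("Failed to create alarm:", err));
```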
I'm using https://nanobox.io, free and really simple.
It serves two purposes for me: separate development environments, and super easy deployment to my own DigitalOcean, AWS, GCloud, etc. account without having to deal with DevOps.
Keep it simple. Heroku in the past and DO now - it sounds like your projects don't yet require all the hocus pocus needed for unlimited scalability.
Forget about cloud, Docker, and all those trendy DevOps gimmicks. Get a bare-metal server from Hetzner or OVH. Concentrate on your product first, or you will end up with a perfectly scaling system without customers.
I run some small Laravel projects and use Linode servers managed by Laravel Forge and deployments with Laravel Envoyer.
They're not limited to Laravel applications, either. This all amounts to about $25/month.
It really depends on what your immediate needs are. AWS is on the expensive side relative to other cloud providers like Google Cloud Platform or DigitalOcean. You can certainly deploy your production workload to DO and operate effectively. If you ever need to transition to AWS, it'll be there waiting for you.
I use Azure, but I'm also a C#/.NET programmer. The best perk of it is using my free monthly MSDN credits for development. You can also apply to Microsoft's BizSpark program to get some Azure credits to spend as well.
I've been running FormAPI (https://formapi.io) on Heroku, but I've spent the last few days trying out Convox (https://convox.com/), which runs on my own AWS account.
I was spending around $75 per month on Heroku, so running a cluster on AWS costs a lot more, but I have autoscaling and much more control over the servers. I also prefer to work with Docker images, instead of Heroku buildpacks.
I'm running a cluster of 3 "t2.medium" servers, plus RDS and Elasticache (both t2.medium). My monthly bill will be around $169.85, plus a few dollars for S3 and CloudFront.
However, I have $5,000 in free AWS credits (thanks to Stripe Atlas!), and they expire in about 2 years. I'm already making enough to pay for the AWS hosting costs, but hopefully I'll have more customers by then and I'll actually need all of those servers.
API Gateway / Lambda / DynamoDB on AWS could fit you well, because you only pay for the compute you use plus a small amount of baseline DynamoDB reserved capacity. The whole thing will autoscale as needed, not require babysitting, and generally "just work". If you use Apex Up, it makes it really easy, including things like environment variables, fast and simple deploys, SSL certs, domains, a built-in reverse proxy so you can use any standard web framework, and centralized log querying, plus great pro features like active warming, alerting, and encrypted environment variables.
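To give a feel for the shape of that stack, a plain Lambda handler behind API Gateway (proxy integration) reading from DynamoDB is about this small (a TypeScript sketch; the table name, key, and route are placeholders, and with Apex Up you'd more likely run a normal web framework instead of writing raw handlers):

```typescript
// handler.ts -- sketch of a Lambda behind API Gateway (proxy integration)
// fetching one item from DynamoDB. Table name and key shape are placeholders.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing id" }) };
  }

  // One read request against the table; no servers to keep warm or patch
  const result = await db.get({ TableName: "my-table", Key: { id } }).promise();

  return { statusCode: 200, body: JSON.stringify(result.Item ?? null) };
};
```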
I use AWS. It's a long learning curve initially, but worth it in the end.
My workflow for all my products is:
- at first, any app is deployed to a (big) DO droplet. It hosts some smaller projects, either just started or still small enough that they don't eat away at the memory/CPU of the other apps.
- next, if a product grows, I move it to Heroku and set up all that is needed, e.g. a PG DB, Redis, and a "deploy street" (staging, production).
The biggest reason for this is costs. The DO droplet is running anyway and can host multiple bigger Rails apps easily.
When switching, it's a matter of setting up Heroku, deploying the app, importing the DB, and changing DNS records.
Since I always use a Git-based deploy workflow, nothing changes on my side when switching.
My default has just been to use Heroku, but I recently got a DO droplet and would like to put everything there. I currently have my blog running off it, though; my server knowledge is still growing, so I'm not entirely sure how to get multiple apps running on that droplet, but your workflow sounds ideal to me!
I'm using Rails with a React frontend single page app. I really like my setup right now - just enough complexity for a good price and continuous deployment. Main parts - DigitalOcean droplet, S3 and Cloudfront from AWS, and Cloudflare for SSL and caching.
I deploy my frontend to AWS, backend to DigitalOcean, and have my domain configured with Cloudflare. Cloudflare is nice because when a user goes to my site, they will hit Cloudflare first and in most cases will get a cached copy of the app - no hitting the server necessary.
As a bonus, I have multiple domains hooked up to a single $5 droplet and have nginx managing the traffic, so I can pretty much deploy as many low-traffic websites to one droplet as I want. Although, since the $5 droplet only has 1GB of memory, I'm enabling memory swapping and limiting a droplet to a max of 2 Rails apps.
All in all, I'm spending about $10/month for 2 droplets to support my multiple projects.
What programming language are you wanting to deploy?
Why would Digital Ocean be less "production ready" than AWS? In either case, you're probably running off of commodity hardware with ubuntu (or something similar) and starting from some reasonable defaults.
AWS has some interesting options like Lambda and RDS that can help with cost reduction and smoothly going through extreme scaling, but neither are remotely required to be production ready and many gigantic operations (including Amazon itself when it was a $100B company) run without them.
GCP: their Docker registry and the gcloud CLI make it easy to use.
AWS: Elastic Beanstalk for Rails, S3 for static websites (landing page, etc.), API Gateway, Certificate Manager for SSL.
That works well, but I think Elastic Beanstalk deploys are really slow and the configuration settings are not well documented, so that's something to watch out for.
If you're deploying Rails, then there's an option to deploy with your assets + gems already compiled/bundled.
In my experience, precompiling assets + bundling gems is what takes the most time each deploy.
Good to know. I've mostly used it with Node. It also definitely varies significantly based on whether you choose rolling updates and how many servers there are to update.
For my clients I always use heroku because I don't want them to depend on me to keep it going if I can't.
For myself it depends. Most projects I start on Digitalocean. If it makes sense to run it in a kubernetes cluster I've run them on AWS. I'd probably use Heroku more if I wasn't using Elixir these days. The cost tradeoff isn't a big deal until you hit scale and the time saved for most setups is pretty big.
I am guessing that you are using Heroku to deploy a Node/Ruby application? You might be able to get away with something like Cloud66 and DigitalOcean. This combo is just as easy to deploy to as Heroku, but it scales much more easily than running your own DO instances or orchestrating things through AWS.
We started using GCP (Google Cloud Platform) and absolutely love it. We got some initial $300 credits that lasted for more than a few months before we had to use our credit card. I would highly recommend using GCP for your website and database needs.
I'm yet to release anything except fun projects for my family, but so far everything has been on Linode - it has similar drawbacks to Digital Ocean however.
I'm currently working on an MVP which I'll throw on Linode initially but if it proves worthwhile will likely go to AWS.
I almost always start with Heroku for the simplicity. Eventually move over to AWS or DO if things require more than the entry level plans on Heroku.
Docker has made most of these transitions to different platforms pretty easy.
Oh, this issue - I cried with happiness when I found Forge and RunCloud for PHP. They both support database management/backup, supervisord/background processes, an API, Bitbucket & Git, one-click SSL setup, and many other features, mainly for PHP. Don't get scared by AWS bills; get scared of server management as a job. I don't know your stack, but there is probably a server management SaaS for you too - find it early and you'll save tons of time.
I am in a similar place. Not that I've rolled my own, but I am using Heroku and mLab. I have been satisfied with both, and I don't mind paying for hosting that's secure and stays up. However, I saw posts about people putting their front ends and back ends in different places and not using Heroku for production, so I am a little nervous about my choice and VERY INTERESTED in other approaches.
Hi, I tried Amazon RDS once; it was good, but very expensive for my low budget, and the same goes for EC2. Later I found some local companies that offer dedicated servers for a much smaller price. I deployed my own Postgres DB and also host some services and standalone processes. Anyway, I pay about 300 euros for a really massive machine, so I'm glad (it's 16 cores, 256GB RAM, etc...).
I'm happy to give you a link if you can't find something similar.
Managing your own database is not so hard, but it takes some time at the start, like a day or two.
If you're using MongoDB or Redis, check out this startup discount for 90% off DBaaS for 12 months. It works on AWS and Azure, and you actually host through your own cloud account so you can use any free credits you have, or Reserved Instances to save a ton on long-term hosting costs. Same deep toolset as mLab and Atlas, but with more management/monitoring tools like full admin access, customizable instance type and replicas, monthly reports, etc. https://scalegrid.io/pricing/offers/startup-program.html
Has nobody used SSD Nodes?
https://blog.ssdnodes.com/blog/comparison-vultr-vs-digital-ocean-vs-linode-vs-ssd-nodes/
I use DigitalOcean. It's simple, flexible, cheap, and has great documentation too. I use Heroku only when prototyping Node.js apps and bots. My main stack is Laravel + React/Vue.js. I found I only need DigitalOcean for https://bajetly.com
If you have the same stack, you can manage your deployment pipeline using Laravel Forge, which integrates seamlessly with AWS, Linode, and DigitalOcean for cheap.
DigitalOcean. Easy to set up. Very predictable expenses. We are running a few servers and the cost is around 100 USD. Support is one of the things we like most about DO, and the documentation is much better than other providers'.
Recently we tried Google Cloud Platform with 300 USD in free credits. It looks great. We started to use Google Cloud Storage for our backups, and it is very cheap. We're only worried about support from Google Cloud. Does anyone have experience with their support?
Yes. I got in touch recently because of something on my bill I couldn't explain, and they were very helpful and effective.
Services like Heroku and mLab are priced that way because they know that companies get tremendous value out of being able to focus on what actually matters for their businesses. In other words, you need to make sure that you will earn at least as much as you are saving by not using those services. Most people can't do that, so they use Heroku (or Google App Engine, or whatever else you need).
I've used DigitalOcean.com for a handful of my projects over the years and it's been pretty good for managing costs. NGINX web server, PostgreSQL database.
Do you ever worry about how to scale on DO?
DigitalOcean and Linode