89 Comments

You Probably Don't Need Servers For Your SaaS

I've built very involved products (including APIs) without booting any servers or paying for any fixed cost infrastructure at all. I don't think I'll be going back to servers any time soon.

If you can build your project without servers at all there are clear advantages:

  • Less upfront IT operations cost
  • Pay for only what you use
  • You don't have to worry (as much) about scaling

There are three basic ways to build a product without servers:

  • Use an app-specific hosting platform (like a WordPress host or a Jamstack service)
  • Use a no-code platform
  • Use FaaS (Functions as a Service) and BaaS (Backend as a Service)

Right now I'm building a no-code platform that I think will be a great choice for new projects but since it's not up yet (follow me to know when it is), here's how I use serverless to run my projects:

I follow the Jamstack pattern where I have a static webpage (no code running on the server when you view the page) backed by microservices. This scales really well, is fast, and is very inexpensive to run.

  • I host the websites on Amazon S3 (a file storage service) with CloudFront (a CDN) in front of it. This costs me pennies a month until the site takes off, then I might be paying dollars! CloudFront also lets me run code on certain events (like a cache miss) to insert dynamic content.
  • I build my sites with Next.js, which lets you build your site with React JS but compile it to static pages.
  • I launch microservices using a FaaS called AWS Lambda, which is practically free until my product takes off. And I use DynamoDB for a database, which is also very inexpensive to start since you pay only for what you use.
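To make the Lambda piece concrete, here is a minimal sketch of one of those microservice handlers. The route and payload are hypothetical, but the event and result shapes follow API Gateway's proxy integration, which is how Lambda typically receives HTTP requests:

```typescript
// Minimal sketch of a Lambda microservice handler (hypothetical route).
// The event carries the request; the returned object becomes the HTTP response.

type ApiEvent = {
  path: string;
  queryStringParameters?: Record<string, string> | null;
};

type ApiResult = {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
};

export const handler = async (event: ApiEvent): Promise<ApiResult> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
```

In a real service this handler would also read or write DynamoDB; it's kept dependency-free here just to show the shape.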

This entire setup costs me < $1.00 a month to host a non-trivial SaaS app.

And if the app takes off, it scales automatically! Of course this is a double-edged sword because my bill will also go up at that point, but I think it is a fair tradeoff: it's cheaper to get started, and when the site is getting no traffic (like in the middle of the night) I pay nothing!

And as a benefit, costs grow in proportion to how many customers I have, which makes predicting the cost associated with each customer much easier. No giant leaps in server cost as I hit scaling milestones. I can confidently say, "at X users my IT costs will be Y".

A year ago I would often run into use cases that I couldn't handle with this pattern, but as time goes on, I run into fewer and fewer of those.

I'm an AWS guy myself, but of course there are other options for serverless. Google's offerings, for instance, I hear are very good, and Cloudflare has some products too.

This setup, of course, requires code and technical knowledge. It's not for everyone. But if you want to do it without code, give me a follow so you'll know when I launch the no-code platform I'm working on.


Who else is building without servers?

Also if you have any serverless questions, leave them in the comments. I'm happy to share what I've learned!

  1. 24

    I would go for servers; FaaS and PaaS don't give enough flexibility and limit the possibility of a whole lot of things. You can't switch languages, nor get an off-the-shelf command-line app to run or process stuff for you. You are restricted to what the provider offers. I think serverless is good for very elementary apps, but the feel of Linux through SSH is something else.

    1. 7

      This article was written for you!

      I used to agree with you fully. Now not as much. So I'm sharing the evolution of my thought process in the hopes of being helpful.

      I am running extremely complex non-trivial backends serverless on AWS with reasonable performance.

      At one point I was lead on a team that handled a decent-sized Kubernetes cluster. However, with AWS EFS (NFS) mounts inside of Lambda, edge functions, and Docker containers as functions, it's a tight race now. Pretty much the only issues I've found that still exist are:

      1. Cold start times (which depending on your use case may or may not be an issue)
      2. Anything that requires a persistent socket connection that isn't Websockets (you can do Websockets now with AWS serverless).
      3. SQL-compatible databases. Yes, Aurora Serverless exists but it has nightmare cold start times.
      4. Specialty databases like ElasticSearch and MongoDB still need provisioned capacity.

      RE #1, for most use cases the customer won't notice the cold start time.

      ---

      Now, to change topics: Regarding SSH, I get it, the control you have with SSH is amazing. But for production workloads you may want to consider ditching it in favor of "immutable infrastructure" and "infrastructure as code". It is harder to set up but much more secure, and once hot fixes are impossible you don't have to worry about them accidentally getting reverted.

      I could write a whole article on Immutable Infrastructure (probably not the right content for Indie Hackers).

      Even when I'm a 1-person team I do immutable infrastructure and infrastructure as code. Just in case one of my projects takes off, it makes PCI, SOC II, and onboarding new team members much easier :)

      If I must have SSH, I actually have an alarm setup that pages on call if anyone SSHes into a machine (that can be temporarily overridden if someone gets authorization).

      Example: When your infrastructure is immutable an attacker can't put a rogue script on your server. Of course that means you also can't hot patch things, but hot patching tends to cause issues in the long run. The 10 minutes you save doing a hot patch instead of a proper release have bitten me enough times.

      I still use SSH daily on dev machines to try stuff out and get my configuration correct but once my config is solid... SSH goes away.

      1. 1

        Could you expand on immutable infrastructure? I just recently started learning about AWS Step Functions and I think they are quite suitable for the type of work I need.

        1. 1

          Yes, of course. As I mentioned, I could write a whole article just on that so I'm going to just summarize here at risk of oversimplifying...

          To mutate is to change. So immutable basically means that once something is deployed, it can't be changed. For example:

          • If you want to change the configuration of a server you don't change the server that is already running, you boot a new server with the new config and shutdown the old one.
          • If you have a small change to the code, you don't SSH in and change it, you deploy a new server (or container if you are using Kubernetes or similar) with the new code.

          If it is part of your policy, one way to enforce it is to:

          • Disable root on your server.
          • Make the filesystem the code and config are on read-only.
          • Disable SSH entirely.

          Why would you do that?

          • It gives you a known good state to roll back to.
          • There is no danger of having an undocumented change (like a hot patch) on your server then losing it when the server reboots.
          • If the immutability is enforced: it means a bad actor can't change your code or config.
          • It makes disaster recovery easier.

          Code running in serverless is inherently immutable since the cloud function runtime is torn down after every single request and once the code is loaded in memory it generally can't be changed.

          In practice, AWS Lambda reuses ("freezes" and "thaws") containers so this isn't quite true but since you can't control when Lambda completely destroys a container, it is effectively true.

          Infrastructure as Code comes in because if you go and directly modify your cloud function in the console, you are mutating it, which breaks best practices. Infrastructure as code makes deploying a new cloud function as easy as manually mutating it, so the quick-and-dirty way loses its appeal.

          The reason I brought it up is that a lot of indie developers don't work with immutable infrastructure because it takes more up-front setup and policy. And in my opinion, for security and stability reasons, they should.

          And if you practice immutable infrastructure, all of a sudden a lot (not all) of the reasons not to use serverless go away, because an immutable server behaves a lot like a cloud function and has a similar deployment flow.

          For further reading, another devops concept that is very much related, and that you can Google when you have time, is "pets vs cattle".

          1. 1

            I am sold. I am terrible at devops. I just need AWS Step Functions triggered by APIs and acting on APIs. Do you think I should use something like Terraform to create Lambda functions and workflows, or is this overengineering?

            1. 1

              Definitely not overkill.

              That's the Infrastructure as Code part. You can do Immutable Infrastructure without Infrastructure as Code but it is, in my opinion, more difficult.

              I personally use AWS SAM + CloudFormation since I am on AWS. I think if you are on AWS, then Terraform may be an unnecessary level of abstraction. But I know a lot of people find Terraform easier to use.

              https://www.serverless.com/ is also very good. Before AWS released SAM I used to use them a lot.

              What I think you should avoid is creating resources directly in the AWS console. The only time I ever do that is in the very rare case where CloudFormation doesn't support a feature.

              1. 1

                Thanks for your thoughts. Really helpful. There are so many options in this field: SAM, CDK, Amplify, Now, Serverless, Terraform… I need to choose one to start learning. At this moment, I am inclined towards using SAM (AWS seems the best for what we will need). I will start reading the docs today. Maybe even looking for an online course.

                1. 1

                  AWS SAM is just CloudFormation with better tooling and build tools (with straight CFN you need to zip up your Lambda project and upload it to S3 manually). If you learn either one, you more or less know how to use the other.

                  If you do straight CFN I'd recommend not building and uploading your zip to S3 directly but rather scripting it with something like Bash or Make.

                  But again, if you learn SAM you pretty much also know CFN so you really can't go wrong.

                  Also, Googling "cloudformation INSERT_AWS_SERVICE_HERE" (i.e. "cloudformation s3") finds the exact documentation you need... and it is extremely good documentation (not too long, but still contains all the important things).
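                  To make that concrete, a minimal SAM template looks roughly like this (a sketch; the function name, runtime, and paths are placeholders):

                  ```yaml
                  AWSTemplateFormatVersion: '2010-09-09'
                  Transform: AWS::Serverless-2016-10-31  # this line is what makes it SAM

                  Resources:
                    HelloFunction:
                      Type: AWS::Serverless::Function    # SAM shorthand, expands to plain CFN
                      Properties:
                        Handler: index.handler           # file "index", exported "handler"
                        Runtime: nodejs14.x
                        CodeUri: ./src                   # "sam build" / "sam deploy" zips and uploads this
                        Events:
                          HelloApi:
                            Type: Api                    # implicit API Gateway REST API
                            Properties:
                              Path: /hello
                              Method: get
                  ```

                  "sam deploy" turns this into a CloudFormation stack, which is why learning one mostly teaches you the other.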

      2. 1

        You should know that although you can do web sockets via serverless, there is a time limit to serverless execution, so it still makes sense to run a VM for this.

        1. 1

          I should have specified I was referring to inbound connections. Inbound WebSocket connections to AWS API Gateway have no time limit. It runs a new function every time you get a message and provides an API to push messages to the connection.

          If you have a use case that requires a long-lived outbound connection, pure serverless is not a good fit for you. Though you can boot a really small/cheap machine(s) just to do that part and do the rest in serverless.

          The downside is, there is a lag compared to a dedicated machine. So if you need very low latency, then a server may still be a good fit there. But for most applications that ~50-100ms delay is pretty reasonable.

          1. 1

            @acurioso you know of any good resources for setting up websockets on NodeJS AWS Lambda with AWS API Gateway? I’d like to try it, but haven’t found enough resources to put a solution together

            1. 2

              @fromtheexchange None that I can vouch for. I tend to fallback on the AWS documentation which is good but can be a lot.

              If you use AWS SAM (and you should be using either SAM or CloudFormation), this example seems pretty good https://github.com/aws-samples/simple-websockets-chat-app

              Basically you have three lambda handlers:

              1. On connect, store the connection info in DynamoDB
              2. On message, read all the connections from DynamoDB and broadcast the message to all the connections
              3. On disconnect, remove it from DynamoDB
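              A sketch of handler 2, the broadcast. The storage lookup and the send are passed in as plain functions here so the logic stands alone; in real code they'd be a DynamoDB query and the management API's postToConnection call (the names here are hypothetical):

              ```typescript
              // "On message": fan a payload out to every stored connection.
              // listConnections and send are stand-ins for DynamoDB + the management API.

              type Sender = (connectionId: string, data: string) => Promise<void>;

              export async function broadcast(
                message: string,
                listConnections: () => Promise<string[]>,
                send: Sender
              ): Promise<number> {
                const ids = await listConnections();
                // A fuller version would catch the 410 a stale connection returns
                // and delete that ID from the table.
                await Promise.all(ids.map((id) => send(id, message)));
                return ids.length; // how many connections were messaged
              }
              ```

              The handler itself stays tiny; all the state lives in DynamoDB, which is what makes the pattern serverless-friendly.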

              Hope this helps, follow me on Twitter and send me a DM if you have any other questions.

    2. 1

      I agree and have the same setup. Yes, my costs start at $5 unless it's a static site on Netlify, but I have all the flexibility of a full VM.

  2. 6

    Lambda and Dynamo are going to bite you in the ass with the pricing once you start to gain any semblance of traction. I'd take a $10/month DO droplet or Render/Heroku any day of the week.

    1. 2

      Can you give an example? I'd be curious to hear it.

      In my experience running large scale workloads on Lambda, $10 a month on Lambda + DynamoDB + API Gateway will get you about 2 - 3 million API calls. That is about 1 to 2 requests a second. Which admittedly isn't a lot. However, you'd also need to consider:

      • For high availability you should be running multiple servers
      • Your traffic is unlikely to be evenly distributed so while that droplet may handle one request per second easily, you're more likely to have spikes. So if you capacity plan for the spikes you end up paying for capacity you don't use.

      Serverless solves those for you.

      If you break it down, can you save money by using servers? Yes, if you know your workload very well and it is extremely predictable, or your system responds very quickly to changes in capacity needs. Otherwise you're likely to be overpaying during certain times of day, which negates any savings you made not using serverless.

      Now, if you had a DDoS attack that is a different story. But there are ways to mitigate that with serverless.

      Edit: To clarify that last point. Serverless will scale instantly to meet the extra traffic demand. If you don't have some sort of WAF in front of it you're going to end up with a big(ish) bill. But that's the same with servers. If you are auto scaling, you're going to end up with a huge bill with servers too. And if you're not auto-scaling you're going to crash and customers will have a service disruption, so it's a tradeoff. And if you're willing to have a service disruption: AWS Lambda lets you set maximum simultaneous executions (scaling limits).
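      For anyone who wants to sanity-check the "2 - 3 million calls for $10" figure, the back-of-envelope math looks like this. The prices are assumed on-demand list prices, so verify them against the current pricing pages; DynamoDB and the free tiers are left out for simplicity:

      ```typescript
      // Rough monthly cost of Lambda + API Gateway (REST) at a given request volume.
      // Assumed list prices: $0.20/1M Lambda requests, $0.0000166667 per GB-second,
      // $3.50/1M API Gateway REST requests.

      const LAMBDA_PER_REQUEST = 0.20 / 1e6;
      const LAMBDA_PER_GB_SECOND = 0.0000166667;
      const APIGW_PER_REQUEST = 3.50 / 1e6;

      export function monthlyCost(requests: number, avgBilledMs = 50, memoryGb = 0.125): number {
        const gbSeconds = requests * (avgBilledMs / 1000) * memoryGb;
        return (
          requests * (LAMBDA_PER_REQUEST + APIGW_PER_REQUEST) +
          gbSeconds * LAMBDA_PER_GB_SECOND
        );
      }

      // monthlyCost(2_500_000) comes out around $9.50, most of it API Gateway,
      // which lines up with "about 2 - 3 million calls for $10".
      ```

      Note that API Gateway, not Lambda, dominates the bill at this scale, which is why people sometimes swap in the cheaper HTTP API flavor.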

      1. 3

        Well said.

        My processing needs are bursty. Sometimes there are no requests coming in, sometimes there are hundreds of requests all at the same time, requiring the server to perform some long-running task.

        My Lambda costs are much higher than a DO droplet, but I don't think I could run the same setup on a VPS. I pay higher fees for the peace of mind that Lambda scales if I need it to.

        Not everyone needs this, but if you're doing any kind of bursty batch processing, it becomes essential.

        p.s. if I had a dollar for every time someone with zero knowledge of my back end said that I "could run it for $50 on DO" I'd be a rich man.

  3. 3

    I'm with you 100%. I've been default serverless for 4 different projects now and I think it's actually a much better way to build. (That being said, if you're faster on servers, stick with what you know.)

    Wrote about it here: https://dev.to/levinunnink/how-to-pick-the-right-tech-stack-for-your-startup-4cgo

    1. 3

      Good article. Thanks for the link.

      Very good point that decisions on infrastructure are often less important than shipping product. Please, I hope no one spends so much time trying to get serverless to work that they miss the forest for the trees.

      With that said, I personally find serverless faster to market in most cases, because things like auto-scaling groups, web server configuration, etc., which are needed if you want high availability, just come by default with serverless. But knowing how to set up a server is always a good skill to have in your tool belt.

  4. 3

    In my latest project we started out serverless - it was great!

    Then over time we realized because we needed 24/7 processing for social scanning it was cheaper to use EC2 for the workers.

    Then, once we set the workers up we realized it was cheaper to piggyback the front end stuff on the worker servers too.

    It's just due to our specific use case. If I was making a general purpose SaaS app that didn't require a fleet of 24/7, always working, workers I would start out serverless for sure (if the app supported it).

    I found these resources very helpful to get a serverless PHP / Laravel site up and running in a matter of days:

    https://serversforhackers.com/c/fathom-analytics-serverless
    https://serverlesslaravelcourse.com
    https://vapor.laravel.com

    1. 1

      Scanning/spidering is a great example of something that is a poor fit for serverless! Assuming you have some sort of stream/firehose and not polling.

      Polling may or may not be a good fit for serverless depending on how often it polls. I've done the math and polling every few minutes is questionably cheaper serverless and polling every second is definitely more expensive serverless.

      Once you already have a server, I get that piggybacking the FE is very often cheaper depending on your use case.

  5. 3

    As an AWS certified Solutions Architect pro and certified developer with a few years consulting under an AWS advanced tier partner, I would also agree with many here who suggest servers.
    FaaS is not the direction I would take a product. The only way I would even consider serverless for a fully functional application would be if you were using AWS AppSync. However, that requires an extremely high overhead to learn.

    Most DynamoDB database schemas I see are incredibly inefficient and are not scalable. Many people see serverless as a cheap way to get an app running, and it's true that costs and deployments are easier to reason about, but I highly suggest, in an official capacity, moving away from that as your primary application layer.

    1. 1

      I agree with you on DynamoDB. My first few DynamoDB implementations were garbage, but it can be done efficiently for many common use cases. But in any case, you can always run a serverless API in front of a database server if you really need SQL or something specialized like ElasticSearch.

      AppSync is interesting. I think a lot of the comments here are very mobile app focused. I've found API Gateway + Lambda to be very effective at running APIs, but that's just my anecdotal experience. Your mileage may vary.

  6. 2

    I'm fully onboard with this. I've built SaaS using similar stacks and setups, and I even wrote a book about building with serverless AWS. Most people are more familiar with servers though and don't know how to shift their development and architectures over to serverless which can result in higher cost. Seems like most people need more insight and patterns to follow for shifting to a newer paradigm. Like any architecture or technology though there are shortcomings, so knowing those and working around them is most important. Even though I am a serverless fan, I know that there are loads of use cases that benefit from using provisioned resources instead.

  7. 2

    @boristane just tagging a founder who has the same idea

    1. 2

      Thanks for the mention @hieunc!

  8. 2

    Lack of connection pooling is an immediate non-starter for me.

    1. 1

      Yeah. As is JS on the backend. Yuck.

      1. 2

        Yeah, I used to be like that until I ran my own business. My negative opinion came from my days building everything with jQuery... Modern tooling is way better and more interesting, especially when you switch to TypeScript. Plus, having to only hire for one language is far easier than hiring for multiple in my experience.

    2. 1

      That's a fair point. It's not for everyone. It's best for connecting to stateless systems.

      As an FYI, some things to consider, though (only for AWS Lambda, others might have similar features but I don't know them well):

      • If you're worried about the overhead of making a connection: Lambda doesn't shut down the container it uses but instead freezes it, so if you don't close the connection at the end of your function and the server doesn't ping or close the socket, it will still be there on the next run (assuming your lambda did not get disposed of -- which happens if it's not called in 5 minutes)

      • If you're concerned with too many open connections to your server and you are using AWS RDS: they do have a solution for this (called RDS proxy). Though it charges per hour and has some limits so you effectively have a server at that point.
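      The first point is easy to demonstrate: anything created at module scope survives the freeze, so the connection cost is only paid on a cold start. A sketch, where makeConnection stands in for a real client constructor (hypothetical):

      ```typescript
      // Module scope survives Lambda's freeze/thaw, so cache the connection there.

      let cachedConnection: { id: number } | null = null;
      let coldConnects = 0; // counts how many times we actually connected

      function makeConnection(): { id: number } {
        coldConnects += 1; // in real code: open the DB/socket connection here
        return { id: coldConnects };
      }

      export async function handler(): Promise<number> {
        // Reused on warm invocations; only rebuilt after the container is recycled.
        cachedConnection = cachedConnection ?? makeConnection();
        return cachedConnection.id;
      }

      export const connectCount = (): number => coldConnects;
      ```

      Invoking the handler twice in a warm container connects only once; you'd still want to handle the server closing the socket in between.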

      1. 1

        It may very well be due to how I architect software, but serverless is a cute concept until you start a serious business that needs to scale.

  9. 2

    I would like to add an additional advantage to the list: security. No database = nothing to hack.

    1. 2

      Well, you could certainly hack a serverless DynamoDB database if the AWS secret gets leaked. But from a network standpoint... no open ports, nothing to connect to. And it shuts down when it's done, so there's no way to have a persistent in-network machine to launch attacks from.

      1. 1

        That’s right. My statement was a blanket one, made without prior experience with DynamoDB. In this case, I assume it’s more or less like Redis but hosted on AWS. And with the database shutting down, you get instant notification rather than letting a hack persist for a long time.

  10. 2

    I'm using this approach exactly. Optimize for initial cost because the project might not even take off.

    • I'm using Cloudflare Workers Sites to host the site and Cloudflare Workers as FaaS that runs on the edge. It is much faster than both AWS Lambda and Google Cloud Functions, both because the roundtrip from the client is shorter and because the first-hit startup time is almost nothing.
    • I'm also using other SaaS such as MailerSend for transactional emails, which is free up to 12k emails.
    • Wasabi is S3-compatible and costs just a fraction of it.
    • Firebase as a DB is free to start. The Cloudflare k/v store is fast and I use it wherever possible.
    • Cloudflare is free to start, or $5 including Workers and k/v when you need to scale.

    1. 2

      AWS free tier is also pretty decent. DynamoDB is one of the AWS products with a lifetime free tier. AWS also has edge functions that you can run on AWS CloudFront but they are more expensive than regular Lambda functions.

      But mostly I use other products on AWS (like SQS, EC2, S3, Route53) and like to keep things all in one place if the prices are in the ballpark for me. And I don't use enough storage for a second vendor for S3 to be worth it with my use cases. For me, in my unique case, the extra $0.50/m is worth it to keep all my security, billing, etc in one place.

      I've heard good things about Cloudflare. I'll need to give it a try. That new product they announced to compete with S3 sounds promising.

  11. 1

    The folks from Fathom Analytics have a nice read on DynamoDB: https://usefathom.com/blog/ditched-dynamodb

    In general I think if you use FaaS instead of servers you will just trade well-known problems for new, unknown ones.
    It is the same with SQL vs NoSQL.
    I mean, SQL servers and their problems have been around for decades.

    If you do not know anything about either of them, you will have a hard time sooner or later anyway.

    1. 1

      DynamoDB certainly has a unique set of challenges because of how limited its key schema is. If you don't (or can't) design your keys right it will be much worse.

      But NoSQL databases have unique scaling and performance properties that relational databases just can't match. So if your use case is appropriate to take advantage of those properties, it's worth looking into NoSQL.

      For example:

      • Many NoSQL databases (including DynamoDB) beat relational hands down on key value pair lookup.
      • Relational beats most NoSQL databases (but not all -- e.g. ElasticSearch) for querying and sorting by fields other than the keys.
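      To illustrate what designing DynamoDB keys "right" means, here is a toy single-table key scheme (the entity names are hypothetical): the profile and the orders share a partition key, so one key-condition query fetches a customer's whole item collection without a scan:

      ```typescript
      // Toy single-table DynamoDB key design: pk groups an item collection,
      // sk orders items within it and enables begins_with() range queries.

      export const customerKey = (customerId: string) => ({
        pk: `CUSTOMER#${customerId}`,
        sk: "PROFILE",
      });

      export const orderKey = (customerId: string, orderId: string) => ({
        pk: `CUSTOMER#${customerId}`, // same partition as the profile
        sk: `ORDER#${orderId}`,       // query with begins_with(sk, "ORDER#")
      });
      ```

      Get this wrong (say, one entity type per table with scans for lookups) and you hit exactly the cost and scaling pain people complain about.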

      Sounds like what Fathom did was start with Dynamo and switch once they were able to take advantage of economies of scale. Which I think is a reasonable thing to do.

      They also moved to another SaaS not managing their own DBs, which sounds like the right move in their case.

  12. 1

    Jamstack is my favorite pattern so far. I don't think I'll move from it except if I really have to

  13. 1

    I really wanted to go serverless and started down that path but ultimately stopped and went another way. I agree that serverless is the cheapest way to go, but the sheer amount of code you have to write goes way up as well. You're paying for scale that may never come with upfront dev/engineering time. As far as actual cost, $1/month sounds wonderful and matches what I saw. However, the alternative is $40-60/month with EC2/RDS. I definitely spent more than that in time with the serverless option.

    For me, it had to do with a few things.

    1. My data model was largely relational and, as such, I ended up having to write a bunch of code to handle things I would have gotten for free with a relational DB. It feels hard to justify writing this sort of code when you really want to focus on the product instead. It's also a lot more difficult to model things correctly, and you kinda have to have a feel for what queries will be important up front. Finally, serverless options for relational databases are universally expensive -- it turns out these are harder to scale :)

    2. I actually like JS/Typescript quite a lot and use it for work a ton. However, I stay away from it for any non-trivial side projects for a few reasons. You're typically dealing with smaller libs/frameworks rather than larger opinionated frameworks (esp w/ serverless). That's fine w/ 40 hours a week and a team to establish/enforce patterns and keep all the dependencies updated, but less so when you have to do all that yourself and it takes away from focus on the product. I could go on, but the gist is it takes more effort/discipline to do things right, and the costs are higher if you don't, since the JS ecosystem moves particularly quickly.

    All of that said, I've run an Express app on serverless w/ PostgreSQL after trying raw Node Lambdas w/ Dynamo DB and it worked just fine. On the next project I'll be going EC2/RDS once again with Phoenix and suspect I'll be able to save $$ with smaller instances -- curious to see how it plays out.

    1. 1

      In my experience it is the same amount of code as a standard from-scratch app. Granted, on servers there are a ton of frameworks you can leverage. But that's just anecdotal.

      1. I agree that some projects are 10x+ more difficult without relational databases. Though I do think Lambda backed by RDS is a viable solution.
      2. I like Node.js / TS and use it for a lot of my projects but lately I've been writing my Lambdas in Go and getting a pretty big performance and cold start time boost. I also know some people who write Lambda in Java and Python. Unfortunately lambda edge functions are still JS only, but I try to keep them small with no NPM dependencies if I can.

      Regardless, good luck on your project!

  14. 1

    I followed a guide utilizing the free tier GCP to deploy/host for couple of my projects.
    I've done it 2 or 3 times now and am close to having a deployment script ready, so getting a deployment set up is relatively quick.

    Granted, it's not automatic deployment. But still, paired with a language (Elixir) that compiles down for the virtual machine (BEAM), things are just packaged up nicely for deployment, sort of WYSIWYG compared to my local environment.

    But I'd totally go down the Jamstack route too if I were more efficient / knew how to do it well.

    At work we've looked at how fast lambda services eat up service costs so I have this inherent fear for it haha.

    1. 1

      "At work we've looked at how fast lambda services eat up service costs so I have this inherent fear for it haha."

      Yeah, that's a downside. The upside to instant scaling is that traffic spikes won't crash your site or slow down the user experience.

      An auto-scaling group (not sure what GCP calls it) that boots up servers as the load increases can also do that to you. But Lambda scales far faster than you can boot servers.

      No matter the architecture, you have to decide what you want to do if too much traffic is ballooning your costs:

      • Scale up to meet demand
      • Slow down (decrease performance)
      • Go into maintenance mode

      What I like to do for mature production services (Lambda or servers):

      • Set up AWS WAF (Web Application Firewall) with some IP rate limiting to shut down IPs that get out of hand. It has a monthly cost but is worth it on production systems. It pays for itself in one attack.
      • Set up alarms to alert me ASAP if something odd is happening.
      • Set up an alarm that triggers AWS to put up a maintenance page (in extreme cases).

      And, Lambda-specific: optimize performance as best you can. My typical billable duration for Lambda is 5ms at 512MB RAM, which is very inexpensive. 100ms is 20x the cost. That isn't much at low volume, but at 10,000,000 hits (not unheard of if you are popular and/or under attack) it adds up.
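      To put numbers on that (assuming the on-demand duration price of $0.0000166667 per GB-second; check current pricing):

      ```typescript
      // Duration cost only (request fees excluded): hits x seconds x GB x price.
      const PER_GB_SECOND = 0.0000166667; // assumed on-demand list price

      export function durationCost(hits: number, billedMs: number, memoryGb: number): number {
        return hits * (billedMs / 1000) * memoryGb * PER_GB_SECOND;
      }

      // 10,000,000 hits at 512MB:
      //   durationCost(10e6,   5, 0.5) is about $0.42
      //   durationCost(10e6, 100, 0.5) is about $8.33 (the 20x)
      ```

      So the optimization matters mostly at high volume, which fits the advice above not to let it delay a launch.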

      Since this is Indie Hackers I feel compelled to also say: don't let performance optimization delay your launch.

      Not sure the equivalent techniques for GCP but I'm sure there are some.

  15. 1

    Good read, thank you for sharing.

    As a stone-aged developer (I had my startup more than a decade ago :)), it's amazing to see how technology has evolved and how much easier it is to start a SaaS or e-commerce business these days.

    1. 1

      Don't say "stone age"... that must make me prehistoric :) But it has been really cool to see the progression through the years. Definitely much easier now... and less upfront capital required. But that's good, it means businesses can focus more on business problems instead of tech.

      RE: e-commerce, I just tweeted about that yesterday https://twitter.com/AndrewCurioso/status/1449039599670603781

  16. 1

    I'm curious why you wouldn't just use Next.js API routes instead of AWS Lambda? Seems like it'd offer about the same results with one less product, and it's free.

    1. 1

      I've answered elsewhere but pretty much: personal preference. I have nothing against Jamstack hosts and if it fits your use case, that's fine. Much better than servers.

      For my SaaS the APIs are themselves a standalone product / microservice and not tied to the website. It is written in Go (vs Javascript) and tied closely to other AWS services like EFS and SQS.

      Even if it weren't for the tight AWS integration, my site would need a paid plan on both Vercel and Netlify, so it wouldn't be free. In fact my AWS bill is 1/20th the cost of either of those (currently, that may change).

      Vercel and Netlify also charge per user, whereas AWS does not.

      I also didn't want anything to learn. My background is in AWS so it was an easy setup for me and I already know all the gotchas and tricks. And since I'm already on AWS it's easier for me to manage security and billing with everything on one place.

      But that's just my particular case.

      If it were just a website or a simpler API (vs a standalone microservice) then I'd probably consider a Jamstack hosting service.

  17. 1

    Well explained. I am a fan of serverless. I used Chalice by AWS to build the backend for generately.ai. It is seamless; I just have to run a command to deploy.

    1. 1

      Chalice looks really cool. I'm not a Python programmer, so it wasn't even on my radar. But if you work in Python, it looks like a great choice!

  18. 1

    I used to run my whole stack on a custom-deployed Kubernetes cluster on fixed-price servers on DigitalOcean, until I decided to move everything to a serverless architecture on Google infrastructure using Google Cloud Run.

    I usually serve 100k unique users per month; it's not a huge number by any means. But running on Cloud Run with Cloud SQL gives me peace of mind that my stuff will keep running overnight, and if anything fails on the IT side, Google has my back.

    Even the cost went down 90%, because I only pay for the seconds spent running my stuff.

    There are some trade-offs you have to make when going serverless. One of them is making sure your app follows certain practices. And the lack of SSH prevents some things you could otherwise do, but even that forces you to fix problems the right way.

    I couldn't agree more with what you wrote.

    1. 2

      I used to run a SaaS that had a Kubernetes cluster. K8s is an amazing tool but it has a lot of overhead and if a cluster node becomes unavailable it can be a nightmare. I may choose k8s for a future project at some point but right now I'm very happy with serverless. Especially now that AWS (not sure about Google) lets you run Docker containers directly as cloud functions. I almost never use that feature but it takes away one of the k8s advantages.

      RE "I couldn't agree more with what you wrote": thank you. Glad to hear it.

      1. 1

        Google Cloud Run and AWS Lambda are mostly the same product from different vendors: running Docker containers serverless, auto-scaled to accommodate traffic, and billed only for the seconds they run.

  19. 1

    I have an API that web scrapes specific sites on demand. I use Lambda and DynamoDB and I'm very happy with it.

    One thing that has hurt a little bit is setting up SQS in front of the Lambda functions to control load on the sites. I don't want a huge burst on a specific site to ever cause problems for the host, so I have reserved Lambda concurrency and an SQS queue in front of each Lambda function. Lambda is constantly polling the queues, and I get charged for each of these polls. With 50 different queues for prod and 50 for dev, I eat through the 1 million free SQS requests in a day or two with no traffic at all.
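    As a rough illustration of how those empty polls add up, here's a sketch assuming one 20-second long poll per queue at a time. That's a lower bound (Lambda's event source mapping can run several pollers per queue), and all the numbers here are assumptions:

```python
# Back-of-napkin estimate of SQS request usage from Lambda's
# built-in pollers, even when no messages ever arrive.
# Assumption: one 20-second long poll per queue at a time; a real
# event source mapping may poll more often than this.

SECONDS_PER_DAY = 24 * 60 * 60
LONG_POLL_SECONDS = 20          # max SQS long-poll wait time
QUEUES = 50 + 50                # 50 prod + 50 dev queues

polls_per_day = QUEUES * (SECONDS_PER_DAY // LONG_POLL_SECONDS)
days_to_burn_free_tier = 1_000_000 / polls_per_day

print(f"{polls_per_day:,} empty polls/day")                     # 432,000
print(f"free tier gone in ~{days_to_burn_free_tier:.1f} days")  # ~2.3
```

    Even at this conservative rate, the free tier is exhausted in a couple of days, which lines up with the experience described above.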

    Kind of a bummer. But it does give me peace of mind that I won't suddenly crash a Secretary of State's site with a large burst of traffic from multiple customers.

    1. 1

      @jordbhansen I've been thinking about this use case. If you send me a DM on Twitter (link in my profile) I may have a solution where you can skip SQS but keep serverless.

  20. 1

    At this moment I'm using Netlify (FaaS) with success, and have been for over 2 years now.

    I'm not going back to servers; everything runs smoothly serverless, with very high availability, low latency, and low cost.

    Just plug and play, as simple as that.

    It does have a small learning curve, but you'll get how functions work after a few simple tests.

    +1 for serverless.

  21. 1

    Nice, but why not use something like Netlify or Vercel to host rather than S3 + CloudFront? You have one service rather than two, it has a generous free tier, and it even gives you an HTTPS domain out of the box.

    1. 2

      Good question. Both are good services.

      My article was primarily a comparison against servers as the alternative, and in my opinion serverless wins hands down. So the takeaway was not meant to be "use AWS" but rather "don't use servers." AWS just happens to be what I know best, and I use other bits of their ecosystem.

      But with that said, if you can fit in the free tier of either of those, go for it. I think the AWS route is more analogous to their Business / Pro tiers (due to the access control, enterprise features, etc. that are built into AWS).

      In fact, Vercel explicitly says you can't use their free tier for business projects.

      Some quick back-of-napkin math:

      • If you have one Pro seat on Vercel and use the maximum "1000 GB-hours of execution," Vercel is much cheaper than AWS (~$20 vs ~$70).
      • If you buy "Provisioned Concurrency" on AWS it becomes a closer race, but Vercel still wins purely on price (~$20 vs ~$30 if your workload is evenly distributed).
      • But since Vercel charges per seat, if you have multiple seats, AWS starts to become cheaper at around 4 users.
      • One more thing to consider (and I'm sure Vercel counts on this): most of the time you won't be maxing out the 1000 GB-hours. If you don't max them out, AWS is cheaper by a mile. For example, with a SaaS I am building, I would need millions of API calls for AWS to start costing me more than Vercel.
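      The napkin math above can be sketched out. The prices are assumptions based on published rates around the time of this thread (AWS Lambda's x86 duration rate, Vercel's Pro per-seat price); verify current pricing before relying on this:

```python
# Sketch of the napkin math above. Prices are assumptions and
# will drift: Lambda x86 duration rate (us-east-1) and Vercel's
# Pro seat price, both as roughly published at the time.

LAMBDA_PER_GB_SECOND = 0.0000166667   # USD per GB-second
VERCEL_PRO_PER_SEAT = 20.0            # USD per seat per month

def lambda_compute_cost(gb_hours: float) -> float:
    """Lambda duration cost only (ignores per-request fees and free tier)."""
    return gb_hours * 3600 * LAMBDA_PER_GB_SECOND

def vercel_cost(seats: int) -> float:
    return seats * VERCEL_PRO_PER_SEAT

# Maxing out Vercel Pro's 1000 GB-hours of execution:
full_load_lambda = lambda_compute_cost(1000)
print(f"Lambda at 1000 GB-hours: ${full_load_lambda:.0f}")   # ~$60

# Per-seat pricing flips the comparison as the team grows:
seats_to_break_even = next(
    n for n in range(1, 20) if vercel_cost(n) > full_load_lambda
)
print(f"AWS becomes cheaper at {seats_to_break_even} seats")  # 4
```

      The duration cost alone lands near $60; per-request fees push the fully loaded AWS figure toward the ~$70 mentioned above.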

      Of course, Vercel is probably easier to use and less setup for most people, which is a huge bonus; you have to consider your time in the equation too.

      But to come full circle, both are great. Both are better than managing servers, in my opinion.

      1. 1

        Thanks for the detailed comparison! I've been surviving on the free tier, but this is good info to know in case I have the enviable problem of needing to switch :D

    2. 1

      You're only supposed to use the free Vercel tier for non-commercial projects, so a SaaS probably wouldn't qualify. I personally just stick with the free tier until it starts taking off, but I think that's technically against the ToS.

  22. 1

    Good points and examples; very convincing. I'm currently on the AWS Free Tier running containers on an EC2 VPS (to avoid accidental charges). My two cents:

    While cumbersome (serverless is likely cleaner and easier, especially for those unfamiliar with Linux/server management), the benefit of a server is no vendor lock-in. Once this AWS free tier trial is up, I'll painlessly switch to Oracle's free VPS service.

    On the flip side, vendor lock-in can be alleviated with IaC tools like Terraform (easily switch cloud providers using just code).

    There are going to be overhead and trade-offs with both servers and serverless; while OP has good points, the choice truly boils down to what developers and teams are comfortable supporting.

    If I had the funds, I'd personally choose serverless, primarily for the peace of mind when scaling (and security). Much easier than maintaining a distributed system such as k8s clusters! Spending more time on the business than the technology seems imperative.

    1. 1

      I personally find that vendor lock-in for cloud functions is not much of an issue anymore. Best practice is to have interface code that is the only code that is vendor specific. Then, as long as the runtimes are compatible, you can have different interfaces for different cloud function providers.
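      A minimal sketch of that interface pattern, with simplified stand-in event shapes rather than the exact AWS/GCP payloads (all names here are illustrative):

```python
# The "thin vendor interface" pattern: only the adapters know about
# each provider's event shape; the business logic stays portable.
# Event formats below are simplified stand-ins, not real payloads.

from typing import Any, Callable, Dict

Handler = Callable[[str], str]   # portable core: body in, body out

def business_logic(body: str) -> str:
    """Vendor-agnostic application code."""
    return body.upper()

def aws_lambda_adapter(handler: Handler):
    """Wrap the core for an AWS Lambda-style (event, context) call."""
    def entrypoint(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
        return {"statusCode": 200, "body": handler(event["body"])}
    return entrypoint

def cloud_function_adapter(handler: Handler):
    """Wrap the same core for a Flask-style request object (GCP)."""
    def entrypoint(request: Any) -> str:
        return handler(request.get_data(as_text=True))
    return entrypoint

# Only this one line changes per provider:
lambda_handler = aws_lambda_adapter(business_logic)
```

      Switching providers then means swapping one adapter, not rewriting the application code.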

      The biggest lock-in is with serverless databases. You can also do interface layers here, but it is difficult since DynamoDB, for instance, is very unique.

      But the way I see it... it's going to take me X hours to set up and manage a server no matter what. As long as I write the code generically enough that I'm not locked in, I save X hours until I need a server.

      RE funds... having managed a k8s cluster, I found serverless to be much cheaper. I would never do a k8s cluster for a startup unless I had a lot of funds or required very few nodes. Here is where AWS gets killed, because they charge for the control plane and things like NAT, so running a k8s cluster has a significant cost floor, whereas serverless can go as cheap as $0. If I were to do k8s, AWS would not be my choice.

  23. 1

    It's okay to go this way, but the problem with this pattern is scaling. The above pattern will definitely lead to a lot of rework when you are scaling the platform.

    It's better to follow known deployment strategies with servers, and scaling in the future won't be a big issue. If you decide to go with FaaS, I am not sure how you will manage a big code base (which will be there in the future, if you scale).

    The best way out here is to use shared hosting servers, which start from $5, and then scale up the machine as requests grow. Initially you can keep all your DBs and code on the same servers.

    1. 1

      If you are running a DB server, you might as well do that (run them on the same machine until you outgrow it). You can use a FaaS with a standard server, but IMO it works best with a serverless database.

      However, for production workloads you also need to consider the cost of downtime. I would avoid running one server in production unless downtime is a non-issue (like a dev environment). My rule is at least three (so, to your point, $15!). That redundancy is virtually free and built in with FaaS.

      When talking about scaling, what are your concerns?

      If you're talking infrastructure, the FaaS deployment patterns are the same as if you were managing immutable deployments, and you can use Infrastructure as Code with FaaS as well. I've scaled up both, and I find them to be comparable. If you don't use IaC and immutable deployments out of the gate and instead start with hand-built servers, scaling is way harder.

      If you're talking code, FaaS pushes you into a microservice pattern. You can run multiple endpoints using the same code, but microservices are very low friction with FaaS. And when growing teams and code, I find microservices much easier than monoliths.

  24. 1

    Static sites are faster than server-based sites, but what about SaaS like customized chatbots? I heard this phrase (serverless) for the first time; can you please educate me about it (or at least send me links to understand it)?

    1. 1

      Serverless is much better for chat bots in my opinion. In fact, if you're writing an Alexa skill you are forced to use serverless.

      When using Functions as a Service (FaaS) you only run your chat bot code when you actually get a message. So if you have 0 messages you pay nothing and if you get 1,000,000 all at once, it scales up for you automatically.

      One thing that is tough with chat bots is state. With FaaS you have to persist your state somewhere if you want to know the context of the message, which is much easier to handle when you have a single server (or multiple servers with sticky sessions).

      Also, if you have a large Machine Learning model it can be trickier with serverless.

      But I still think going the FaaS route is better.

  25. 1

    Thanks for the inspirational message Andrew. I'm building a serverless MVP as well with the stack you recommended. Because of modest I.T. skills, I could not do it without the AWS universe Lambda / DynamoDB/ SAM/ Eventbridge/ SNS/ StepFunctions/ ECS/ ECR/ Comprehend. It's a steep learning curve to fit everything in its proper place on the overall landscape but terrific once you get productive. All the best.

    1. 1

      Awesome to hear. Good luck on your MVP!

  26. 1

    That's one reason why I'm using SvelteKit for my current SaaS project

    SvelteKit compiles all endpoints to cloud functions. Plus, it's platform agnostic.

    1. 2

      I've been meaning to try SvelteKit! My experience is with Next.js but I've heard good things.

    2. 1

      Slightly off topic, but I would like to know what framework you used to easily integrate with the Shopify API? I am building something and I want to do it fast and easy because of time constraints.

  27. 1

    Umm, check out Firebase.
    -Former AWS advocate

    PS most people I know are already doing serverless.

    1. 1

      @swebdev would you recommend firebase over aws amplify, or are they roughly equivalent? I'm interested because I'm about to look into both and would value the perspective of someone with more time in the trenches.

      1. 2

        To second what @swebdev said... Firebase can get you up and running very fast. I haven't heavily used Amplify or Firebase since my use cases are mostly API-first SaaS, and both really shine in mobile apps. But anything AWS has a steep learning curve.

        Two things I'd ask yourself are:

        • Total cost of ownership for your expected use cases
        • Which ecosystem you want to buy into, Google or Amazon. AWS is very robust, and I think its console is easier to deal with than Google Cloud's, but both are great choices.
        1. 1

          Thanks, this is helpful (and @swebdev too). My takeaway is that both are good choices if I want a productive platform to build upon, but more macro-level considerations like TCO and "platform name risk" should be the deciding factors.

      2. 2

        Utilizing Firebase is very fast; they provide almost all the useful services out of the box for setting up and running a SaaS quickly. The speed advantage is very material to me, and that's why I'd recommend it.

        Firebase tends to get expensive as you scale, although that's a good problem to have if the product works out.
        AWS is also very mature, although it may slow you down due to integration effort (not to mention the complexity, but I assume you're already well versed in it).

    2. 1

      Firebase is good. If it suits your needs, go for it. A lot of products can definitely get away with just Firebase, especially if your product is a mobile app. I just happen to like AWS better, especially for a SaaS where your API strategy is often part of your product. But I'm sure you can do most of it with Firebase too. As I pointed out at the end, AWS is not the only game in town.

      Glad to hear most of the people you know are serverless already! I made this post because I personally know a lot of people on Kubernetes and provisioned infrastructure. To each their own; some are further along in the journey than others, and based on the feedback, at least some people found this helpful.

      Edit:
      AWS also innovates a lot. To add an example, AWS recently allowed mounting NFS volumes via EFS (Elastic File System) in Lambda cloud functions. This solved a major backend issue with my SaaS that would otherwise have required a server. Is that possible with Firebase? (I legitimately don't know.)

      1. 1

        I agree to all the things you said; seems like our use-cases are quite non-overlapping :)

  28. 1

    This is very interesting and upvote worthy! Thanks. It's amazing how all of this keeps evolving to remove backend lift and cost.

    1. 1

      Glad you found it helpful.

  29. 1

    I have been doing this at the company I work for, but haven't tried Next.js in production. Is the static site generated by Next.js search-engine friendly? I am building a personal dream project soon, and that problem is bugging me and I have no time to try it. TIA!

    1. 2

      So the static pages it generates (and the SSR, Server-Side Rendered, pages) are HTML, so you can have all the SEO tags you would with any HTML page.

      Likewise, statically compiled pages are really fast, which is great for Core Web Vitals which are getting to be really important for SEO!

      With that said, it did take me longer than I'd like to admit to get it to generate a proper sitemap.xml and robots.txt file.

      1. 1

        thank you. This is noted.

      2. 1

        Curious: does SSR get any advantage for Google specifically? I thought they (Google) do client-side rendering before their search indexing, so in my naive view SSR shouldn't get an SEO advantage there. Please correct me.

        1. 1

          Good question, I'm not sure.

          I was thinking more along the lines of: the generated markup is the same as any other HTML site from an SEO perspective, so all things considered you can SEO-optimize it the same as any other site.

          You are 100% correct that Google executes JavaScript (not all spiders do), but I always feel it is best to have the SEO-related tags immediately accessible without JS, because you never know what tools are being used to index your site. It may be old fashioned of me, but since search engines are black boxes, I figure every bit helps.

          1. 1

            👍Thank you Andrew!

    2. 2

      Yep, it can generate static pages at build time and SSR pages at runtime if you need it.

    3. 1

      I was in the same boat as you. I wanted to try Next.js for the speed on a project, but didn't get time to try it.

      Ended up using Clutch.io.

      It's a great tool to build front-end components in React, and it's especially useful if you're building a serverless infrastructure, as it allows you to connect any backend via Axios and visually build a data-driven web app.
