
From K8s to Serverless and Back Again

Looking for somebody to convince me otherwise. My preferred stack is as follows:

  • Go
  • Svelte
  • Mongodb
  • K8s

I recently built https://trideate.com with:

  • Python
  • FaunaDB
  • Serverless framework
  • AWS lambda
  • API Gateway
  • Svelte

And I have been disappointed, constantly wrestling with Python and Lambda cold start times. The goal was to cut costs and let the project scale from nothing to widespread usage, but so far I am not happy with the stack. I am also considering switching from MongoDB to Postgres. Would love to hear your thoughts on serverless (OK, I know nothing is truly serverless, just like nothing is free) vs. going with a solution like K8s. Thanks!

  1. 3

    I mean, I would just suggest a simple $5 cloud instance (like DO or EC2). Why do you need K8s?

    As for PostgreSQL, that's my main database. There has to be a big reason to use something else.

    I actually teach this exact setup in Deployment from Scratch. Python + PostgreSQL on a simple server. At the end I have a Scaling chapter where I tell people how I think about scaling.

    Scaling to me is a very lean process. 1 server, 2 servers (app + db), 3 servers (app + db + storage), 4 servers (load balancer, app, ...), etc.

    My reasoning is this: for one, I always pay only for what I need. And second, each step adds only a few new pieces, so it never gets really complicated.

    1. 1

      I just love the reproducibility and the hot upgrades that K8s provides. I had a lot of luck with DO's managed K8s.

  2. 3

    Unless you have very specific needs, just use render.com so you can focus on the product and not the infrastructure.

    Also, you couldn't pay me to use Mongo. If you truly need unstructured data, just stick it in a Postgres jsonb column.
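
    If it helps, here's a minimal sketch of the jsonb approach with node-postgres (the table name, JSON shape, and DATABASE_URL env var are made up for illustration):

      // Sketch: keep structured columns relational, dump the messy parts into jsonb.
      import { Pool } from "pg";

      const pool = new Pool({ connectionString: process.env.DATABASE_URL });

      async function main() {
        // Hypothetical table: normal columns plus one jsonb column for data
        // that doesn't have a fixed shape yet.
        await pool.query(`
          CREATE TABLE IF NOT EXISTS events (
            id         serial PRIMARY KEY,
            kind       text NOT NULL,
            payload    jsonb NOT NULL,
            created_at timestamptz NOT NULL DEFAULT now()
          )`);

        // Insert arbitrary JSON without a migration.
        await pool.query(
          "INSERT INTO events (kind, payload) VALUES ($1, $2)",
          ["signup", JSON.stringify({ plan: "free", referrer: "indiehackers" })]
        );

        // Query inside the JSON; add a GIN index on payload if this gets hot.
        const { rows } = await pool.query(
          "SELECT id, payload->>'plan' AS plan FROM events WHERE payload @> $1",
          [JSON.stringify({ plan: "free" })]
        );
        console.log(rows);

        await pool.end();
      }

      main().catch(console.error);

    You get Mongo-style flexibility for the unstructured bits and normal relational columns and indexes for everything else.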

    1. 3

      I second this! Render, Postgres, and just get building!

    2. 2

      Is Render pretty reliable? Looks like a better Heroku. Can the Postgres database and application code be deployed to a VPC?

      1. 4

        (Render founder) Indie Hackers has been running on Render for ~two years.

        Private networking with service discovery is built in, so traffic between your application and DB stays in your VPC.

        Happy to answer questions!

        1. 2

          I have signed up for Render and am giving it a try! Looking forward to it.

  3. 2

    I also built a new service recently called Listeri that I thought would fit perfectly with serverless, using Next.js on Vercel. Halfway through, after battling several issues, I just went with a small dyno on Heroku. Here are the big problems: when you start out you have very little traffic, so you get cold boots all the time (meaning the serverless host puts your code to sleep), and if you do get a lot of usage, you run into trouble with database connections, since none of the major traditional database providers are designed for a system where you connect on demand. For example, Mongo Atlas Serverless still limits you to 500 connections. It was weirdly just faster to deploy to Heroku (a Node.js app) and put Cloudflare in front of it as a CDN.

    1. 1

      Can you expand on some of the issues you encountered with Vercel?

      Currently using it with ~1000 visitors/day, wasn't sure if I should be expecting way more issues down the line.

      1. 4

        Sure. Here are the major problems I faced.

        For static pages, they seem to clear the page out of the CDN fairly quickly. So let's say no one visits your site for 30 minutes. The first person who does will see the HTML document alone take about a second to arrive. Not a deal breaker, but my product has a huge emphasis on speed and reducing friction.

        If you're using the API part of Next.js on Vercel, or pages that have a getServerSideProps function, you have the same problem. If your site doesn't have heavy usage, the functions go to sleep, and the person who has to wake them sees a 5-7 second cold boot. This really sucks. You might have enough usage that this doesn't happen a lot.

        If you have enough usage, the problems above won't really manifest.

        Also, if you're using the API functions and connecting to a database, you will eventually hit connection limit issues with most of the common databases. This is a general serverless issue and not really about Vercel: Postgres, Mongo, and MySQL aren't designed for an app that falls asleep and reconnects every time it wakes back up. If you use Fauna or maybe DynamoDB, this is not an issue. There are also serverless variants like MongoDB Atlas Serverless and Amazon Aurora Serverless that try to hack around these problems, but they have only just launched.
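
        If you do stay on Postgres or MySQL, the usual band-aid is to keep one tiny pool per function instance and reuse it across warm invocations. A rough sketch of that pattern in a Next.js API route (the query, table-less example, and DATABASE_URL env var are placeholders, and it only softens the limit problem rather than fixing it):

          // pages/api/example.ts (hypothetical route)
          import type { NextApiRequest, NextApiResponse } from "next";
          import { Pool } from "pg";

          // Module scope survives between invocations while the instance stays warm,
          // so a warm function reuses its connection instead of opening a new one.
          const pool: Pool =
            (globalThis as any).pgPool ??
            new Pool({
              connectionString: process.env.DATABASE_URL, // assumed env var
              max: 1,                    // at most one connection per instance
              idleTimeoutMillis: 10_000, // release it if the instance sits idle
            });
          (globalThis as any).pgPool = pool;

          export default async function handler(
            req: NextApiRequest,
            res: NextApiResponse
          ) {
            const { rows } = await pool.query("SELECT now() AS server_time");
            res.status(200).json(rows[0]);
          }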

        Finally, a number of people complain that Vercel has really opaque fair-usage limits. People have randomly been contacted saying they've run over them and need to move onto an Enterprise plan. I haven't encountered this myself, but I can see the potential issue.

        1. 1

          By static pages do you mean the pre-rendered ones? We're pretty much pre-generating all of our pages, so if those are being cleared out of cache that's pretty applicable for a bunch of our users.

          I'm seeing a first paint of ~1.8s across all of our pages, with an LCP of ~3s (p75) for mobile on pretty heavy pages -- e.g., https://mylinks.ai/cozygames.

          That seemed OK but not great to me; I'll need to look into how things eventually get cleared out of the cache (I see ~80% of requests are cached).

          Update:
          Saw this from Vercel docs:

          Static files are cached for up to 31 days.

          I can definitely see what you mean by the problems that would come up when you have more users. Agreed, it sounds like you'd get that with any serverless solution and would have to switch.

          1. 3

            Yeah, I remember seeing the 31 days, but it's not exactly true in terms of the edge cache. So try this: open Chrome DevTools before you go to your site, then look in the Network tab and scroll up to find the first entry, which is the browser fetching the HTML document. If the cache is warm it will come back in under 100ms, but if it's cold it will take 500ms+ and the headers will show a MISS for x-vercel-cache. Refresh and it will always be warm, giving you the page super fast even if you force a hard refresh. Look at the details and it's not that the connection time is faster, it's just not waiting on Vercel anymore.

            The other web vitals metrics might be more dependent on page content than the hosting platform. So that's something to investigate. I'm only holding Vercel responsible for getting the content to the browser fast!
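
            If you'd rather script that check than click around DevTools, a tiny Node 18+ snippet along these lines (the URL is just a placeholder) prints the timing and the x-vercel-cache header:

              // Fetch a page and report whether Vercel's edge cache was warm for it.
              const url = process.argv[2] ?? "https://example.com/"; // placeholder URL

              async function check(): Promise<void> {
                const start = Date.now();
                const res = await fetch(url, { headers: { accept: "text/html" } });
                await res.text(); // drain the body so the timing covers the full document
                console.log({
                  status: res.status,
                  ms: Date.now() - start,
                  cache: res.headers.get("x-vercel-cache"), // e.g. HIT or MISS
                  age: res.headers.get("age"),
                });
              }

              check();

            Run it twice in a row and you should see a MISS turn into a HIT with a much smaller ms number.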

              1. 1

                I was about to ask you to write a blog post about it. You can also write about current Vercel issues.

              2. 1

                I only saw cache hits in the headers, but overall there's a ~79% cache hit rate, so that's something to check out.

                And nice, that was fast! I definitely feel like other audiences would benefit from hearing about your experiences specifically with Vercel.

  4. 2

    For 'scale to zero cost' I like a stateless Google Cloud Run Node.js server in front of Firebase. The database is a bit limited, but for indie hacker stuff it has a lot of advantages in terms of very minimal upkeep.

    The only part that costs anything in that setup is about $0.02 a month in Docker image storage, because I keep a few versions as backups, which adds up to about 5 GB of images.
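
    For anyone who hasn't seen the pattern, a stateless Cloud Run-style service is really just an HTTP server that listens on the PORT the platform injects and keeps no local state. A minimal sketch (illustrative only, not this commenter's actual code; the handler is a placeholder):

      // Minimal stateless server in the Cloud Run mold: listen on $PORT,
      // keep all state in the database (Firebase/Firestore in the setup above),
      // and let the platform scale instances from zero to N.
      import { createServer } from "node:http";

      const port = Number(process.env.PORT ?? 8080); // Cloud Run injects PORT

      createServer((req, res) => {
        // Any instance can serve any request, so scaling to zero is safe.
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify({ ok: true, path: req.url }));
      }).listen(port, () => {
        console.log(`listening on ${port}`);
      });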

  5. 1

    I went with Render.com based on the feedback and I am absolutely loving it. Super simple to deploy a Go app paired with a Postgres database. I noticed a huge speed-up, and I am just starting off with the base options. Thank you all for the valuable feedback!

  6. 1

    Do what you know well. If you prefer k8s, run with it. Sounds like serverless wasn't your cup of tea (not mine either really). Go + k8s is a great option if there isn't too much overhead managing it.

  7. 1

    With Lambda you are pretty limited down the line when you need to add things to the product.

    Here is a solution to keep costs down -

    • k3s
    • EC2 t4g spot instance
    • gp3 EBS storage
    • MongoDB - it's not the best, but it's free until you have enough customers to pay for RDS.

    P.S. - There is a risk of losing the spot instance, so keep the cluster config handy (e.g. Helm charts) to spin it back up quickly.

    In the end, none of this matters if no one pays for the product, so "Ship it!!" at any cost.

  8. 1

    Hey, I want to share something that I'm a bit ashamed of. I've been changing stack a lot in the past few years and focusing on the technology instead of the business.

    Here's the path I've been through.

    Wordpress (Free Theme) / GoDaddy → Fullstack PHP / GoDaddy → Wordpress (Custom Theme) / GoDaddy → Wordpress → Vue.js / Firebase → React.js / Gatsby / CloudRun / Firestore → React.js / Gatsby / CloudRun / Netlify / Firestore → Next.js / Vercel / FaunaDB

    Problems I encountered:

    • Cold Starts
    • Limitations of backend
    • Spaghetti code
    • High fees
    • SEO tags support
    • Wasting time on CI/CD

    But I think that's it: Next.js + FaunaDB is the stack I'm moving forward with, and for good reason. I spend zero time on CI/CD, it's super fast and flexible, and SEO support comes right out of the box.

    There are a few limitations, e.g. WebSocket support and running binary executables, but there are reliable workarounds.

    The learning journey was very rewarding tech-wise, but it has to come to an end. I think there is great value in being really comfortable with your stack if you want to move forward with your product.

    1. 1

      Tech stack jumping is a problem of mine too :D. Although I am happy to admit I have landed on a consistent language (Golang) for most projects.

  9. 1

    If you are used to K8s and writing your own YAML, throw in Knative: 'serverless'-style deployments that scale down to zero, or keep an instance warm.

    I find serverless awesome for ops tasks and internal functions, but I can't quite find a good reason to use them for primary API endpoints.

    As for the DB, Postgres is a powerhouse (and has decent JSON support nowadays), and if the need ever arises it has tons of tuning and extensibility options. I never really liked MongoDB and used to make fun of 'mongodb is webscale', but if it works for your app, you might as well go for it.

  10. 1

    I don't know much about your stack, but I think AWS Lambda is really good. Why are you thinking about changing databases?
