23 Comments

Measures for DDoS attacks when using Serverless

Hi,
What measures do you usually take when building products with a serverless backend architecture?

Or can we totally ignore this initially?

  1. 2

    Ay,

    I worked with GCP Functions before moving to AWS Lambda and API Gateway. GCP Functions are, at least for me, not advanced enough without Apigee (which costs a lot, if I remember correctly).

    API Gateway from AWS offers a lot of options for your functions; the interesting ones are rate limiting and quotas, which I use to "protect" my functions a little.
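
    In case it helps, here is a rough sketch of how those throttle and quota settings can be attached through a usage plan with the AWS SDK for JavaScript v3; the API id, stage name, and limits below are placeholder values, not a recommendation:

    ```typescript
    import {
      APIGatewayClient,
      CreateUsagePlanCommand,
    } from "@aws-sdk/client-api-gateway";

    const client = new APIGatewayClient({ region: "us-east-1" });

    // Placeholder API id and stage; replace with your own deployment.
    const plan = await client.send(
      new CreateUsagePlanCommand({
        name: "basic-plan",
        apiStages: [{ apiId: "abc123", stage: "prod" }],
        throttle: { rateLimit: 10, burstLimit: 20 }, // steady rate and burst, per second
        quota: { limit: 10000, period: "MONTH" },    // hard cap per API key
      })
    );

    console.log("Created usage plan:", plan.id);
    ```

    Note that usage-plan throttling and quotas apply per API key; for anonymous traffic you would also set the default method throttling on the stage itself.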

    1. 2

      Yea, unfortunately Apigee is too costly for bootstrapped companies like us.

      1. 1

        Ay,

        Take a look at AWS API Gateway then. I'm starting to use it right now for one of my side projects, and it looks promising and not too expensive.

        1. 1

          All our code is on GCP. Do you still think using Amazon API Gateway would help?

          1. 1

            Ay,

            On my side, yes. Keeping GCP Functions was not a solution, as we needed more options for ingress to our functions (auth, quotas, routing, canary, ...).

            At this time, GCP Functions provides too few options for that without using Apigee.

  2. 2

    Initially, ignore it. Then use some API gateway that has rate limiting or even DDoS protection.

    1. 1

      Totally respect this decision. Sometimes an API gateway can come to the rescue. Unfortunately, there's no solid API gateway available on GCP currently.

      1. 1

        Well, they have at least two different ones: Apigee and Cloud Endpoints, and you can put Istio, Gloo, or Ambassador (or whatever) into GKE.

  3. 1

    @ayyappa99 I realize this is an old thread, but did you ever look into Google Cloud Armor? https://cloud.google.com/armor/

    1. 1

      Unfortunately Armor is too costly.

  4. 1

    I was just thinking about the same thing.
    I would try to use Cloudflare and research it more. I read that if you only use security rules to protect your Firestore, you will still be charged even if the request is rejected.
    For example, if you use "allow read: if request.auth != null", you will still be charged for a read when you get a request without any authentication. It will be a permission denied, but it still shows up as a read in the usage info. So I imagine a DDoS-er could massively hurt your wallet on the pay-as-you-go plan, and there are only Spark and Blaze... so this is a good question...

    1. 1

      Recently I was checking Cloudflare Workers, which can act as a first layer, like an API gateway. Would be great if you could share some info after having a look.

    2. 1

      Totally with you. I know how you feel, as you seem to use Firebase. We use Firebase Functions, which are no exception here. We are worried that adding a Cloudflare layer might add extra latency, which may not be acceptable. Any thoughts?

      On another note, having a Redis store for rate limiting seems like a good solution, but again, as you said, the calls (Cloud Functions) are still chargeable.
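
      For what it's worth, the Redis idea could be as simple as a fixed-window counter checked at the top of each function; ioredis, the window size, and the limit below are assumptions for illustration, not a tested setup:

      ```typescript
      import Redis from "ioredis";

      // Assumed connection string; falls back to a local instance for the example.
      const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

      const WINDOW_SECONDS = 60; // length of each counting window
      const MAX_REQUESTS = 100;  // allowed calls per window per client

      // Returns true while the caller is still under the limit.
      export async function allowRequest(clientId: string): Promise<boolean> {
        const window = Math.floor(Date.now() / 1000 / WINDOW_SECONDS);
        const key = `rate:${clientId}:${window}`;
        const count = await redis.incr(key);
        if (count === 1) {
          // First hit in this window: let the counter expire with the window.
          await redis.expire(key, WINDOW_SECONDS);
        }
        return count <= MAX_REQUESTS;
      }
      ```

      The catch is exactly what you said, though: the function invocation is billed before this check ever runs, so it protects the Firestore reads but not the invocations themselves.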

      1. 1

        I read that Firebase is tightly integrated with Google Cloud Platform, so maybe Google Cloud Armor is the way to go.

        1. 1

          Funny thing is, Armor costs more than the Cloud Functions/Firestore pricing itself. In that case it's better to invite the DDoS without Armor :|

          I still don't know on what basis they priced Armor.

  5. 1

    Cloudflare has a free tier that includes DDoS protection. You could put it in front of your serverless deployment with relative ease and handle most bad actors.

    1. 1

      Recently I was checking Cloudflare Workers, which can act as a first layer, like an API gateway. Need to try it out, as it comes with DDoS protection plus API gateway-style rate limiting features.
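
      If it helps, a first-layer Worker is essentially just a fetch handler that rejects bad traffic and forwards the rest to the origin; the origin URL and the header check below are made up for illustration:

      ```typescript
      // Minimal Cloudflare Worker acting as a thin gateway in front of a backend.
      // ORIGIN and the x-api-key check are placeholders, not a real configuration.
      const ORIGIN = "https://example-backend.cloudfunctions.net";

      export default {
        async fetch(request: Request): Promise<Response> {
          // Reject obviously bad traffic before it reaches the backend.
          if (!request.headers.get("x-api-key")) {
            return new Response("Forbidden", { status: 403 });
          }

          // Forward everything else, keeping method, headers, and body.
          const url = new URL(request.url);
          return fetch(new Request(ORIGIN + url.pathname + url.search, request));
        },
      };
      ```

      You get Cloudflare's network-level DDoS protection in front either way; the per-client rate limiting part looks like the harder bit (see the replies below).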

      1. 1

        FYI, Cloudflare Workers are not a good solution for rate limiting. Workers KV is limited to one write per second per key and doesn't guarantee propagation within a few seconds. Rate limiting requires high-throughput reads and writes, which makes it unfeasible.

        If you are interested, I’m building a product to handle exactly this issue: https://getlightfront.com

          1. 1

            https://developers.cloudflare.com/workers/about/limits/#kv

            Relevant bits:

            Up to one write per second per key.

            And:

            Workers KV is an eventually consistent system, meaning that reads will sometimes reflect an older state of the system. While writes will often be visible globally immediately, it can take up to 60 seconds before reads in all edge locations are guaranteed to see the new value.

            1. 1

              Isn't it possible to just use memory alone instead of storing in KV storage?

              1. 1

                Couple of issues there:

                • Workers don't persist beyond the request context AFAIK.
                • Even if they do, each worker would have its own instance of memory. So if one worker tracks something in memory, another worker instance wouldn't know about it, resulting in inconsistency. A rough sketch of what I mean is below.
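
                To make that second point concrete, here is roughly what an in-memory counter would look like; a single Worker instance enforces it, but two instances never see each other's counts (the limit, window, and pass-through are invented for the example):

                ```typescript
                // This map lives only inside one Worker instance; another instance
                // (another edge location, or the same one after eviction) starts empty.
                const hits = new Map<string, { count: number; windowStart: number }>();
                const WINDOW_MS = 60_000;
                const LIMIT = 100;

                export default {
                  async fetch(request: Request): Promise<Response> {
                    const ip = request.headers.get("cf-connecting-ip") ?? "unknown";
                    const now = Date.now();
                    const entry = hits.get(ip);

                    if (!entry || now - entry.windowStart > WINDOW_MS) {
                      hits.set(ip, { count: 1, windowStart: now }); // new window for this IP
                    } else if (++entry.count > LIMIT) {
                      return new Response("Too Many Requests", { status: 429 });
                    }

                    return fetch(request); // pass surviving requests through to the origin
                  },
                };
                ```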
                1. 1

                  True that each worker has its own memory space and the cache is limited to it alone.
                  Just trying to understand if it fits my current problem. Let's say I'm using it just to have rate limiting by IP address; it shouldn't be a problem, right?
