How Facebook is Trying to Recover the Money It Lost During the Outage

Facebook lost approximately $60 million in revenue during the outage, most of which would have come from advertisers.

While reading Facebook's apology, I noticed an interesting sentence:

...some advertisers may see accelerated delivery as our services recover from the outage.

Accelerated delivery? In simple terms, Facebook will try to spend advertisers' money faster for the day (and possibly the days ahead, depending on your budget settings).

According to Facebook, this "accelerated delivery" will apply to some advertisers. Is that "some" big enough for many advertisers to notice?

I did some additional research and came across this AdWeek article with a first-hand account from a media buyer who manages multiple ad accounts:

“One media buyer noted Facebook’s delivery numbers have gone up dramatically this morning as the platform pumped up delivery to make up the losses.”

And what happens when you show too many ads too fast? You see a drop in conversions:

“The agency is seeing a drop of at least 50% in conversions compared to what it normally expects, across roughly half a dozen clients. For some clients, it’s more than 50%.”

Is there a malicious intent here? Maybe it's the FB algorithm at play, maybe it's someone at Facebook saying "speed things up and try to recover our revenue ASAP". Who knows.

What you can do about this: Pause ads that you started running during the Facebook outage. It's far better to spread out the money you were meant to spend during the 6-hour outage than to listen to Facebook and spend it all at once.

Btw, if you liked this insight and want to receive other growth-related insights on what to do (and not do) when marketing your startup, feel free to subscribe:

  1.

    I work at a company that makes money through advertising, similar to Google or Facebook Ads.

    I can say with near certainty that there is no maliciousness behind this. The algorithms are designed to spend advertisers' monthly/weekly/etc budgets as evenly as possible over time and to spend all their budget. Traffic and so many other things are inconsistent day to day, so the algorithm is constantly raising and lowering bids to keep ad campaigns on track to spend all their budget by the end of the campaign period while also not running out of budget early. Missing a day of traffic would naturally cause bids to go up a bit because there is now an additional day of budget to spend in the remaining month.
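    The even-pacing behavior this commenter describes can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual implementation; the function name and numbers are made up for the example:

    ```python
    # Toy sketch of even budget pacing: spread what's left of the budget
    # over the days remaining in the campaign period.
    def daily_target(total_budget: float, spent_so_far: float, days_remaining: int) -> float:
        """Per-day spend target that keeps the campaign on track to spend its full budget."""
        if days_remaining <= 0:
            return 0.0
        return (total_budget - spent_so_far) / days_remaining

    # Normal pacing: $300 monthly budget, $90 spent, 20 days to go.
    normal = daily_target(300.0, 90.0, 20)        # 10.5 per day

    # Lose a full day of delivery to an outage and the same remaining budget
    # must now fit into 19 days, so the per-day target (and thus bids) rises.
    after_outage = daily_target(300.0, 90.0, 19)  # ~11.05 per day
    ```

    The same mechanism, applied at Facebook's scale right after a six-hour gap, would look exactly like "pumped up delivery" to a media buyer.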

  2.

    I remember once trying a Google universal app campaign (UAC). For those who don't know, in a UAC basically you hand over your wallet and trust that Google's machine learning algorithm will spend your ad money optimally.

    One time, UAC decided to spend a quarter of our budget in India, despite the fact that we were a U.S.-based app and had specifically chosen the option to target only the US. That led to zero conversions.

    Another time, UAC decided to spend half our money on YouTube ads, which also led to zero conversions. We couldn't figure out how to opt out of YouTube (I don't think you can), so we just terminated the campaign. Maybe Google's AI knew something we didn't, but 0% ROI wasn't going to work for us.

    It's mind boggling to me how much trust you're supposed to put into these ad networks when their incentives are clearly not aligned with yours.

  3.

    I don’t doubt your hypothesis but there are other explanations as well. In distributed systems, it’s actually a fairly common symptom for certain processes to be accelerated for some time following an outage. Distributed systems use A LOT of queues or queue-like systems and outages often lead to these systems accumulating a backlog. When things come back online, the consuming processes have more goodies to process than normal and things can happen quickly. Not trying to defend these guys but there could be an innocent explanation.
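    The backlog effect this commenter describes can be shown with a toy consumer. This is a deliberately simplified in-memory sketch (real systems would use Kafka, SQS, or similar); the names and rates are illustrative:

    ```python
    from collections import deque

    # Toy queue consumer: when a backlog has piled up during an outage,
    # a consumer with spare capacity drains it faster than the steady-state rate.
    def deliveries_per_tick(backlog: deque, normal_rate: int, max_rate: int) -> int:
        """Process up to max_rate items while a backlog exists, normal_rate otherwise."""
        rate = max_rate if len(backlog) > normal_rate else normal_rate
        processed = 0
        while backlog and processed < rate:
            backlog.popleft()
            processed += 1
        return processed

    queue = deque(range(100))  # items that accumulated while consumers were down
    burst = deliveries_per_tick(queue, normal_rate=10, max_rate=30)  # bursts at 30
    ```

    From the outside, that post-recovery burst is indistinguishable from a deliberate decision to "speed things up."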

  4.

    Did we ever hear what caused it? I mean, actually what caused it. Because the story they put out is clearly bullshit.

    1.

      The last word sums up Facebook altogether.

    2.

      There is a pretty good post about the outage on the Cloudflare blog, if you are interested in the technical side of things: https://blog.cloudflare.com/october-2021-facebook-outage/

    3.

      "I'm amused at how 'a configuration change to routers took Facebook down' is scarcely believable to a lot of people outside of tech, yet is rightfully one of the first guesses of people working in tech."

      1.

        I think this is quite an arrogant / smug / ignorant stance by Vallery Lancey (but what else would you expect from Twitter?). I do work in tech, and have done for well over a decade. Not only that, but I work in a large team of skilled engineers, and we all have doubts about this.

        No one doubts that a config change can bite you in the ass, BUT what my peers and I have questions over are as follows:

        • How can one config change bring down Facebook, Instagram, Messenger, and WhatsApp, PLUS Facebook's internal systems, including their in-office security / swipe systems?

        • If that was even the case, why was it not immediately reversible?

        • Surely a change with that volume of catastrophic ramifications should be closely monitored, logged, and audited, and not be the responsibility of one person (and thus easy to track and revert, as in point 2)?

        1.

          These questions have all been credibly answered by the Cloudflare post and the FB engineering followup.


  5.

    You've got to please the shareholders...

  6.

    It seems like they don't care about customers or users, only about pleasing the investors and Wall Street.
