🚨 AWS went down today, and it turned into a chance to make the app start up more responsively the next time a similar outage hits.
This morning, TestFlight users reported that JollyTango (my real-time audio travel guide) was taking longer than usual to load. Normally, it’s up in just about a second — today, it lingered on the splash screen for quite a bit longer.
After some digging, I discovered that the AWS issues had affected one of the third-party services I rely on. The app itself wasn’t broken; it was just waiting, very patiently, for a slow request to time out before proceeding.
🧩 The Root Cause
A helper buried deep inside one of the app’s dependencies was making a synchronous network call during initialization. When AWS slowed down, the app dutifully waited for a response before letting users in. Perfectly logical, but not great for the user experience.
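For context, here’s a minimal sketch of what that kind of blocking startup looks like, assuming a Flutter-style entry point (the ThirdPartyService stub and everything in it are hypothetical stand-ins, not the actual SDK):

import 'package:flutter/material.dart';

// Hypothetical stand-in for the real dependency.
class ThirdPartyService {
  Future<void> initialize() async {
    // Imagine a network round trip buried in here.
    await Future<void>.delayed(const Duration(seconds: 10));
  }
}

final thirdPartyService = ThirdPartyService();

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();

  // The app waited right here, transitively on AWS, before ever
  // calling runApp. Hence the long splash screen during the outage.
  await thirdPartyService.initialize();

  runApp(const MaterialApp(home: Scaffold()));
}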
⚙️ The Fix
I adjusted the initialization logic to load that data asynchronously:
await thirdPartyService.initialize(
  ...
  loadDataAfterLaunch: true, // 👈 the one-line improvement
);
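With that flag, the heavy fetch happens after the first frame instead of before it. Here’s a rough sketch of the deferred step as I think of it (loadData is a hypothetical placeholder, not the service’s real API), with a timeout so a repeat of today degrades gracefully instead of hanging in the background:

import 'dart:async';

// Runs after launch; loadData stands in for whatever the service
// actually fetches at startup.
Future<void> warmUpServiceData(Future<void> Function() loadData) async {
  try {
    // Cap the wait so a future outage costs a few seconds of stale
    // data, not a stuck task.
    await loadData().timeout(const Duration(seconds: 5));
  } on TimeoutException {
    // Fall back to cached or bundled data and retry later.
  }
}

The timeout is the real safety net: the app shows something immediately, and fresher data arrives whenever AWS does.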
Then I ran through the flow to make sure the change didn’t affect any of the subsequent logic or sequencing.
Thankfully, the fix is in ahead of the app’s October 30th launch.
Sometimes, a platform outage can be the best kind of stress test. 🛠️
Anyone else turn today’s AWS chaos into a chance to harden their stack?