
Speed Test - Improving site speed 48%. CDN, HTTP2, and more!

Hey Everyone!

Last week I had a Reddit post that gained some momentum and drove a decent amount of traffic to my site.

/r/minimalism post

I had a boost of 500 visitors come to the site in a day.
[Screenshot: analytics showing the traffic spike]
You can see it was a large increase over what the site was getting previously, though still relatively small in website terms.

Slow load times were an issue.

I kept hearing that the site was slow and took forever to load initially. At first, I didn't want to believe what I was hearing. For me, the site came up on mobile or desktop in a second or two. I blinked and I was there. The Lighthouse numbers said slow, but the site still felt fast to me, nothing like what the analytics were showing. I also had a loading screen which appeared even earlier than this and hopefully grabbed users' attention for a split second before the full site popped in.

[Screenshot: the loading screen]

But still the slow load times were an issue...

Since the site is built as a Single-Page Application ( SPA ), a heavier initial load is one of the drawbacks of the architecture. The whole site is downloaded and cached when a user first visits an SPA. That increases the initial load time, but navigation to any other page in the site is immediate since everything is already there, no network needed. With SPAs there's no double dipping of network bandwidth to serve the other pages. After the user initially navigates to the site, any subsequent loads are pulled from their browser's cache, and your server can focus on the business logic and moving app data back and forth.

But still the slow load times were an issue...

After putting my emotions aside, I set out to address the problem and see what I could do to make things better. I laced up my big boy boots and jumped in!

Why was it slow?

For me it wasn't, things were fast. But I knew other people might be using different hardware to access the site and, more importantly, different network connectivity and bandwidth. This was especially true for people in other countries... Hmm, other countries? Yes! Users connecting from all over the world had to reach the site's single server in New York directly. Was this a bottleneck? Possibly.
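A quick way to test the geography theory, assuming you have access to a machine ( or a friend ) far from your server, is curl's timing breakdown. If the connect and time-to-first-byte numbers balloon the further away you run it, round trips to a single distant origin are a likely bottleneck:

curl -o /dev/null -s \
  -w 'dns: %{time_namelookup}s connect: %{time_connect}s tls: %{time_appconnect}s ttfb: %{time_starttransfer}s total: %{time_total}s\n' \
  https://stor.guru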

I also didn't want to write off the Lighthouse results quite yet, though I was very familiar with them and had already done multiple iterations of their recommended fixes. I opened the developer tools and ran another report.

[Screenshot: Lighthouse report]

HTTP2 caught my eye. I had seen this line in previous reports but didn't know how to fix it. I researched the difference between HTTP 1.1 and HTTP 2 and saw that the switch could yield real performance improvements. The total number of requests, 145, was also concerning. I knew I could do better.

[Screenshot: Lighthouse report detail]

Scanning the Lighthouse report once more, I found that I could also save some time by preconnecting to URLs, about 0.2 seconds by its estimate. This might be another easy fix.

Finally, I went over to the network tab of the browser developer tools, filtered requests by size, and noticed a few files that were not mine. They ranged from about 132KB to 300KB each, which is a considerable amount of data for something I hadn't written and didn't immediately recognize. After some digging I saw it was the YouTube video I had embedded: hovering over the requests showed my video's cover image, plus other images and JS files from YouTube. On top of that, the video kinda stunk. It needed to be redone and didn't provide much value to the user. That might need to go.

Using a CDN

Thinking more about international users, and even users in my own country who were not close to the server in New York, I decided to put the site behind a CDN. The first one that came to mind, and really the only one I had heard of before, was Cloudflare. They have a free plan that provides their CDN service, which was really all I was looking for at the time. I decided to give it a go and sign up.

Cloudflare

Following their instructions, it was very easy to get configured. The biggest change I had to make was updating the name server records for the site. This was as easy as heading over to GoDaddy and changing:

ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com

to

eric.ns.cloudflare.com
lorna.ns.cloudflare.com

And that was it. Once the records propagated, the site was being served through the CDN, while all subsequent requests and data transfer still went to the server in New York. For me, this meant my WebSocket connections worked as expected. Just the initial load of the site would come from the closest CDN server, which made things faster and helped get us to the end goal.
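If you want to confirm the cutover actually happened, dig is a quick sanity check ( stor.guru here is just my domain, substitute your own ):

dig +short NS stor.guru    # should list the cloudflare name servers
dig +short stor.guru       # should return cloudflare proxy IPs, not your origin server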

CDN Side Effect

After the CDN was up, I hit an unintended side effect. Once I thought it through, it made sense and was quickly resolved. I typically connected to the New York server over ssh via the domain name, or hostname, of the machine. For example:

ssh <user>@stor.guru

Note the stor.guru after the @ symbol.

This was now breaking for me. I could not connect! Then I realized I had just changed my name servers: the domain now resolved to Cloudflare's proxy servers instead of my droplet, so my SSH traffic was never reaching the machine. I tried connecting via the direct IP address and got through no problem. That meant all I needed was a quick fix in my hosts file ( /etc/hosts on Linux ) and I could again connect using the hostname I expected.

127.0.0.1        localhost
88.247.12.31     stor.guru
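You can actually watch the mismatch that caused this. dig goes straight to DNS, while getent honors /etc/hosts, which is exactly what the fix above relies on ( the proxy IP shown is just an example ):

dig +short stor.guru      # cloudflare proxy IPs, e.g. 104.x.x.x -- ssh to these goes nowhere useful
getent hosts stor.guru    # honors /etc/hosts, returns 88.247.12.31 -- ssh works again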

HTTP2 Fix for NGINX

Looking into the HTTP2 fix, I saw that I needed to add a simple 'http2' string to the listen directives inside the site's nginx config file. Seems pretty easy, right? There's a catch!

server {
     listen 80 http2;
     ...
}

By making the change above and redeploying the site, I ran into the following issue. I was using CertBot to take the site from http to https, and when requesting certificates, CertBot complained that my site was speaking HTTP2 where it expected plain HTTP ( 1.1, not 2 ). The catch is that over an unencrypted channel, plain http rather than https, the HTTP2 protocol is effectively a dead letter: most clients today, browsers included, only speak HTTP2 over TLS. You can speak it, but no one will listen. That was the case with CertBot.

What I ended up needing to do was turn on HTTP2 after I had used CertBot to get the certificates. Okay, now it's easy! I used the following sed command to get things fixed:

sed -i 's/listen 443 ssl;/listen 443 ssl http2;/' <website_nginx.conf>
nginx -s reload

And you can see that the website's nginx conf file was changed. I ran another Lighthouse report and saw improvement!

server {
     listen 443 ssl http2; # Managed by Certbot.
     ...
}
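If you want a second opinion beyond Lighthouse, curl can report which protocol version it actually negotiated ( assuming your curl was built with HTTP2 support ):

curl -s -o /dev/null -w '%{http_version}\n' --http2 https://stor.guru
# prints 2 once the new listen directive is live, 1.1 before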

Minifying libraries

The other half of that HTTP2 issue was the 145 different requests being made. Each request carries its own header payload and requires acknowledgement messages going back and forth ( since this is TCP, not UDP ), and when you multiply that by 145, you get a slowdown. The number of requests could be cut down significantly by bundling and minifying my JS into a single file. This is a common practice; it has been explained plenty of times by others and can be done in many different ways, I am just here to explain what I did. My internal JS, the code that I developed, had already been packaged into a single minified file ( sg.min.js ), so I was good, right?

Well, the other half of this was that all of the 3rd party libraries I included were loaded in separate script tags. They were the minified versions of the libraries, but they were still separate tags, which meant each one had its own header payload and acknowledgements going back and forth. This is the point most people forget: after minifying there is still more you can do!

So I used terser to combine all of my 3rd party JavaScript into one file, served from a single script tag, which saved lots of extra requests for the same data.
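For reference, the invocation looks roughly like this; the vendor filenames are placeholders for whatever 3rd party libraries you actually ship, and order matters if one library depends on another:

npx terser vendor-a.min.js vendor-b.min.js vendor-c.min.js --compress --mangle -o vendor.bundle.min.js

Then the handful of separate script tags collapse down to a single tag pointing at vendor.bundle.min.js.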

Preconnecting to URLs

Another fix in the Lighthouse report that was easily addressed was preconnecting to URLs. I will not bore you with everything preconnect provides ( see here for more details ), but basically it tells the browser to set up connections to other origins early, DNS lookup, TCP handshake, and TLS negotiation, so those resources are ready sooner and the page loads more smoothly.

I added the following, then reran the Lighthouse report. The issue was resolved and page speed increased. Nice.

<!-- PRE CONNECT URLS -->
<link rel="preconnect" href="https://m.stripe.com">
<link rel="preconnect" href="https://storage.googleapis.com">
<link rel="preconnect" href="https://www.google-analytics.com">

Dropping 3rd Party resources

The last major issue I wanted to address, and this is an important one because it is very much a judgement call that can save you valuable load time, is the use of 3rd party resources, especially embedded code.

For me this meant removing a YouTube video from the lower half of the main page. The video itself was a little dated and no longer the center of attention like it had been in the past, yet it was still draining valuable time away from the user getting to the rest of the page. These decisions are the hardest because you have to weigh the value each resource provides. If the video were front and center, showing the most recent version of the software ( which we plan on doing ), I would consider adding it back in, but for now it was just dead weight that could be trimmed.

Takeaway

The biggest thing I learned was that if your users say it's slow, it's probably slow, and taking the time to hear them out is one of the best things you can do for your site.

Here is my improved Lighthouse report!

[Screenshot: improved Lighthouse report]

Please let me know if it is still slow for you!

Keep in mind that the site is a SPA and is fully loaded after you see the main page. All other pages should be pretty snappy and can even be navigated to offline!

Stor.Guru - https://stor.guru

I have more ideas on how to improve things like implementing the Shadow DOM API and potentially turning what I have into Web Components.

I really need feedback on the site to make it better. Please let me know what you think; it is greatly appreciated!

Thanks for your time!

Comments

1. We also have a Demo you can check out!

2. Again, any feedback would be greatly appreciated! We would love to get better and build a product that is useful!
