While working on my new project, PLUMA, I spent a couple of days testing and comparing our content infrastructure with the most popular static hosts and some other options.
I thought it would be nice to share these results publicly as I found them somewhat surprising and illuminating.
Note: this article has been updated a number of times. See the changelog at the end.
- AWS S3 + CloudFront
- Cloudflare Workers Sites
- Digital Ocean Object Storage (with and without CDN)
- Firebase Hosting
- GitHub Pages
- Netlify
- Stackpath
- Vercel (previously ZEIT)
AWS S3 is served from a single bucket in the US with CloudFront in front. I imagine some AWS guru could configure cross-region replication with Route 53, Lambda@Edge, and whatnot, but that totally escapes me.
I have no idea how Firebase, Netlify, or Vercel are serving their static content. I imagine they have storage buckets in a couple of locations and are obviously using their own CDN on top of those buckets.
Digital Ocean Object Storage is not really for serving static websites but I thought it would be cool to include results from a bucket in NY with no CDN to compare. I also included results using their CDN to get more data points.
PLUMA is not really a static host either, since it's serving dynamic content, but I've included the results here for future reference.
First I crafted a very simple index.html and uploaded it to all those providers.
Then I used Turbobytes Pulse to get results from a number of worldwide locations. Turbobytes actually runs these tests using agents on end-user networks, which is nice. See here if you'd like to host a Pulse agent on your network.
Unfortunately, not all Pulse agents are online at all times. I could never catch online agents in Latin America or the West Coast of the US, and there were far more active agents in Europe. Since this most likely biased the global results, I've included regional numbers too. I think this is still preferable to getting results from datacenter servers, which in many cases would produce latencies of less than 1ms.
To get cached and uncached CDN results I ran the tests at random times over a number of days. I did the same number of tests on all providers to get a similar percentage of cached requests.
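As an aside, one way to tell cached responses from uncached ones is to inspect the cache-status header the CDN returns. Header names and values differ per provider, so the names below are just common examples (CloudFront uses `X-Cache`, Cloudflare uses `CF-Cache-Status`); this is a rough sketch, not what Pulse does:

```python
def classify_cache(headers):
    """Best-effort guess at whether a CDN response was a cache hit.

    Header names vary by provider; the ones checked here are
    common examples, not an exhaustive list.
    """
    h = {k.lower(): v.lower() for k, v in headers.items()}
    for name in ("x-cache", "cf-cache-status", "x-cache-status"):
        value = h.get(name, "")
        if "hit" in value:
            return "hit"
        if "miss" in value or "expired" in value:
            return "miss"
    return "unknown"
```

For example, a CloudFront response carrying `X-Cache: Miss from cloudfront` would classify as a miss.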
I only used the default domains given by the hosting providers to avoid any DNS shenanigans. I also kept the default caching times for each provider which is probably the most common use case. All tests were done over HTTPS.
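The measurement itself is just timing to the first byte of the response. A minimal sketch in Python (not Pulse's actual implementation) could look like this:

```python
import http.client
import time

def measure_ttfb_ms(host, port, path="/", use_https=False):
    """Return the time in ms from issuing a GET request until the
    first byte of the response is available.

    http.client opens the connection lazily on the first request,
    so the timer also includes DNS/TCP/TLS setup, which is what
    an end user experiences on a cold connection.
    """
    cls = http.client.HTTPSConnection if use_https else http.client.HTTPConnection
    conn = cls(host, port, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("GET", path)
        resp = conn.getresponse()  # blocks until the status line arrives
        resp.read(1)               # ensure at least one body byte landed
        ttfb = (time.perf_counter() - start) * 1000
        resp.read()                # drain the body so the connection is clean
        return round(ttfb)
    finally:
        conn.close()
```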
Enough preambles! Let's get to the plat de résistance:
All values measure time-to-first-byte (TTFB) and are rounded to the closest millisecond.
|Digital Ocean CDN|10ms|26ms|136ms|234ms|3,701ms|
|Digital Ocean CDN|29ms|49ms|178ms|239ms|2,862ms|
|Digital Ocean CDN|10ms|22ms|130ms|131ms|3,701ms|
It's worth pointing out that in the North America region, most requests were made from New York, Florida, and Kansas.
|Digital Ocean CDN|10ms|16ms|67ms|198ms|1,451ms|
|Asia & Oceania|2829|10ms|54ms|165ms|336ms|3,935ms|
Global vs regional
The big difference between regions is most likely caused by the locations of the origin files and the different strategies adopted by each provider.
Cloudflare Workers seem the least affected by regional variations. That makes sense, as Cloudflare runs in many locations all over the world and, AFAIK, Workers are pushed to most of those locations.
Which static hosting provider would I recommend?
After collecting all this data, and (I think) extracting every bit of useful information, my conclusion is: the more traffic you have, the less it matters in terms of performance. If you look at the TTFB values of cached responses (min and median) all providers have exceptionally good results.
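Concretely, the per-provider numbers are just order statistics over the raw TTFB samples, plus the share of "fast" requests. Something along these lines, where the 100ms threshold and the sample values are hypothetical examples:

```python
import statistics

def summarize(ttfb_ms, fast_ms=100):
    """Min/median/max of TTFB samples (ms), plus the percentage of
    'fast' requests under an arbitrary example threshold."""
    s = sorted(ttfb_ms)
    fast = sum(1 for t in s if t < fast_ms) / len(s)
    return {
        "min": s[0],
        "median": statistics.median(s),
        "max": s[-1],
        "fast_pct": round(100 * fast, 1),
    }

# Hypothetical cached-response samples for one provider:
samples = [10, 12, 11, 9, 15, 250, 13, 11]
print(summarize(samples))
```

Looking only at min and median hides tail latency, which is why percentiles and the fast-request share tell a more complete story.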
In a previous version of this article I recommended Netlify as the best go-to option, based purely on the numbers of these benchmarks. I've changed my mind and decided to just present the data as is. Each use case is different, and each provider has different pricing and conditions.
Cloudflare Workers Sites
If you've ever used cloud functions, I think you will agree it's impressive that Workers can compete neck and neck with static files. In these results I see no trace of the infamous cold starts that still plague cloud functions from AWS, Google, and Azure.
I gotta admit I was very surprised by the performance of Digital Ocean too. I expected results without the CDN to be much worse considering roundtrips from around the world to a bucket in NY.
Maybe I should do another test with all the object storage providers without using any CDN. 🤔
I didn't expect their CDN to be that good either. I wrongly assumed this service would be a hobby project for them; after all, Digital Ocean is mostly known for its compute droplets. With such great results, and priced at $0.01 per GB of bandwidth (worldwide), it is a fantastic offer. Hopefully they will allow object storage to host websites in the future.
Well, this is certainly not the ultimate benchmark. Alas, we do the best with what we have available.
In any case I hope this has been interesting or at least entertaining for you.
If you have any comments hit me up on Twitter or send me an email to yo@ at this domain.
If you're curious, here's a repo with the raw and parsed data.
Update: 19th June 2020
- Added GitHub Pages.
- I updated the data and conclusions after finding a data parsing error which ignored all requests above 1 second. This error represented only 1.2% of all requests but it did impact some providers negatively.
- I've had to remove Stackpath results since I just noticed many requests were actually HTTP errors which were biasing their results. I will redo those again at a later date.
Update: 6th July 2020
- Added Stackpath back.
- I ran a second round of 5,000+ requests. The presented results use the data from all requests made. The results varied slightly but the conclusions remain the same.
- New graphic with percentage of fast requests.
- I removed the rankings section, since the average time does not paint a complete picture of what's going on.