
For Static Sites, There’s No Excuse Not to Use a CDN - dwalkr
https://forestry.io/blog/for-static-sites-theres-no-excuse-not-to-use-a-cdn/
======
ajnin
Is this an ad for Netlify? It's cited 10 times in the post.

There are plenty of reasons not to use a CDN, not the least being that you
might not want to give a third party access to your traffic. Even static
information might be sensitive; accessing some forbidden data can put people
at risk. The central position CDNs are increasingly taking on the Internet
makes them worryingly nice targets for snooping by surveillance bodies.

~~~
saas_co_de
There is also very little benefit to putting pages on a CDN.

CDNs are beneficial for images and static asset files because pages often have
dozens of those so the small speed up of going through a CDN is multiplied
many times over.

Moving a single file (the primary page) to a CDN is unlikely to yield any
measurable benefit overall unless you specifically run a site with a very
geographically distributed user base.

Even for sites with a NA/European user base you can host in either US or
Europe and it makes no difference. The latency is not significant compared to
other performance issues.

~~~
gamegod
And HTTP/2 further reduces the benefit of a CDN for small assets (vs. no CDN),
since requests are asynchronous and multiplexed, giving you better
performance.

------
ireflect
CDNs make a lot of sense for improving load times and for the overall
efficiency of network utilization, but I worry that the current ones
(Cloudflare et al.) contribute too much to the recentralization of the
Internet.

What we need is for gateway.ipfs.io to use geo-dns so that it can serve this
purpose, with IPFS gateways run all over the world by regional ISPs,
universities, and even individuals.

~~~
prophesi
Yeah, +1 to this. I'd use a CDN if it wasn't all owned by one company. I run a
static site on IPFS, and just have two nodes located in NYC & San Francisco
that my local gateway is always connected to.

It does a pretty good job for USA visitors (which is where the majority of my
traffic comes from). But it'd be ideal for IPFS to use geo-dns, along with
Filecoin support, so that I can pay for nodes in other countries to keep my
content pinned.

Also, my site uses Service Workers to cache all of the content + TurboLinks,
which drastically speeds up every subsequent page transition. I'm not sure if
that's doable if you're using an external CDN like Cloudflare for all of your
resources.

~~~
aspett
Do you have a blog post on setting this up, per chance? I'm interested in
reading more about IPFS (haven't looked at it since I first heard about it
months and months ago). Their website seems a bit light on the details, bar a
link to a whitepaper I haven't dared to look at yet ;)

~~~
niutech
Here is one: [https://medium.com/@merunasgrincalaitis/how-to-host-your-ipf...](https://medium.com/@merunasgrincalaitis/how-to-host-your-ipfs-files-online-forever-f0c56b9b5398)

------
londons_explore
CDNs also ruin HTTPS security.

As a domain owner, you either have to give your HTTPS private keys, and all
your users' private data (authentication cookies, passwords, etc.), to your
CDN, or you have to do a lot of careful work dividing the "static" resources
from the dynamic stuff onto different domains, serving them with different
certs.

As a web user, some CDNs like Cloudflare offer 'HTTPS', but with plain HTTP as
the backhaul to the origin server. That tricks the user into thinking their
connection is secure, which is, IMO, immoral.

~~~
timc3
You do realise they are talking about static sites with no user private data?

~~~
gnode
What public data you access is in itself private data. Even if you don't care
about your personal privacy, I think we should all care about the chilling
effects of people not being afforded privacy to look at public information.

That people can browse Wikipedia, and that their ISP / government is prevented
from knowing whether the article being read is about knitting or a political
issue, is a good thing.

~~~
timc3
Ok good point.

------
blueflow
My site is not using a CDN and is faster than forestry.io.

Mostly due to the fact that it's 2 static resources that can be loaded in
around 0.3 seconds.

Forestry.io can load the HTML and the CSS in the same amount of time, but then
another 8 megabytes of JavaScript and imagery follows.

Optimizing for ping time is premature optimization when website obesity is the
elephant in the room.

~~~
JBReefer
0.3 seconds seems really long for that - are you storing the files on disk or
in memory?

I mean, I really agree with you and think more people should do exactly what
you're doing, but c'mon, let's bikeshed this at least a little :)

~~~
blueflow
Both - on disk, but the kernel caches it in memory. Not everyone is blessed
with a fast internet connection.

------
alanfranzoni
Unless you don't want to give up at least partial control of your domain and
content to an intermediary.

------
seba_dos1
I see plenty of possible excuses, starting with the easiest one to come up
with being not wanting any intermediaries so the TLS connection is truly end-
to-end.

------
dorfsmay
"Subscribe to our newsletter to get the posts directly in your inbox."

No! Of course not!

I don't understand why every damn site adds those annoying popups. I would
love to know how many subscriptions sites receive from those popups.

~~~
pwg
Mobile Firefox plus uBlock origin set to block all 3rd party scripts and no
'subscribe' popup appeared for me.

------
tw1010
Laziness, premature optimization, better to get your idea out there instead of
obsessing over tools. I can think of several.

------
JeanMarcS
Doesn’t HTTP/2 resolve the latency problem (after the first connection, of
course)?

You open one connection and then the rest flows. For static sites it might be
enough.

Of course, in the case of a worldwide audience it might be a problem, but with
DNS anycast can’t you resolve this by putting your website on local providers?

~~~
organsnyder
> ... with DNS anycast can’t you resolve this with putting your website on
> local providers ?

Then you're effectively rolling your own CDN. Definitely an option for some
cases, but overly complex and costly for most.

------
mrb
I have a very good excuse for NOT hosting my site on "a CDN": one CDN is a
central point of failure. Instead I host it on 3 geographically redundant dumb
servers from 3 different hosting providers (and all my DNS records resolve to
3 IPs.) As a result my site has had 100% uptime since its deployment years
ago, despite many individual outages at these hosters. There have also been
multiple occurrences where an entire swath of the web was down because of an
outage at $CDN, while my site was chugging along just fine.

I think the most likely outage I might encounter would be due to operator
error, eg. accidentally pushing a bad web server config to my 3 servers.

More details: [http://blog.zorinaq.com/release-of-hablog-and-new-design/](http://blog.zorinaq.com/release-of-hablog-and-new-design/)
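
For reference, that kind of redundancy is just multiple A records on one
name; a zone-file sketch (the IPs below are documentation addresses, not my
real servers):

```
; three A records -> resolvers hand out all of them, and browsers
; fall back to the other IPs if the first one they try is down
www  300  IN  A  192.0.2.10      ; provider 1
www  300  IN  A  198.51.100.20   ; provider 2
www  300  IN  A  203.0.113.30    ; provider 3
```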

~~~
owaty
I read your blog post and some of the comments, and I don't see how you can
justify this claim:

> As a result my site has had 100% uptime since its deployment years ago

Assuming your individual hosts did go down occasionally, how do you know how
many of your visitors waited 2 or 3 minutes for the browser to try the
alternative IP?

~~~
mfringel
If you have no service level requirements, you definitionally have 100%
uptime.

------
727374
Every site that needs a CDN probably already has one, because it's a quick
win. For every site that doesn't need one (small user base, small asset size,
etc) it's likely not worth the added complexity.

------
LinuxBender
There is a middle ground, which I have done for hobby sites that sometimes get
popular by accident.

I set up dozens of caching reverse proxies, distributed on a few VPS
providers. Each VM then uses strongswan to route to my primary origin servers.
In some cases there is no extra bandwidth cost: if the caching VMs happen to
be in the same datacenter as the origin servers, I can use the private
interfaces for my strongswan traffic.

If I want to roughly mimic the geographic DNS behavior of CDNs, I can use
split views in DNS to send people to a closer caching proxy. It isn't perfect,
but then neither are CDNs.
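
A minimal BIND sketch of such split views (the client range and zone file
names here are hypothetical):

```
// named.conf: answer with a nearer proxy based on the client's source IP
view "europe" {
    match-clients { 203.0.113.0/24; };   // hypothetical EU resolver range
    zone "example.com" { type master; file "zones/example.com.eu"; };
};
view "default" {
    match-clients { any; };              // everyone else gets the US proxy
    zone "example.com" { type master; file "zones/example.com.us"; };
};
```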

To take this a step further, I can use multiple domains with TLS SNI from
different registrars to provide some take-down resistance.

The advantage to this model is that one CDN or VPS provider does not have
control over my content.

The drawback is that I have to manage these nodes myself. Nowadays that isn't
too bad, because each VPS provider allows for making API calls to spin up VMs
with pre-built images. Ansible also allows for adding new nodes dynamically.
There are community playbooks for most VPS providers.

------
yakcyll
I don't think I'm up to date enough with the Web to understand the selling
point of this article. Isn't it the case that most uses of static sites aren't
concerned with latency or bandwidth at all, but rather with simplicity and
presentation? My perspective is limited in scope, mostly to personal projects,
so any additional insight would be much appreciated.

~~~
0xCMP
By static sites they mean more than just a bundle of markup: a website created
using a Static Site Generator (Jekyll, Hugo, etc.). Thanks to GitHub Pages
it's become popular to set things up so that simply pushing to a git repo with
the markup for one of these tools triggers a task to pull the repo, compile
the markup, and serve up the result.

GitHub Pages is free and highly scalable, but unless you're a paying user
GitHub has some limits. So for things like very popular projects you could
simply move your stuff to a server+nginx or s3+cloudfront. However, these cost
money, so it's hard for many to justify doing that when GitHub Pages is free.
In comes Netlify, which provides the "better" GitHub Pages experience, with
the ability to use whatever tool you want that can be installed via go,
python, ruby, or node, and the resulting markup is distributed via a CDN, all
for free.

With such a nice deployment story the question becomes: why not build every
website like this? Mainly because it's all based on using Git to edit markdown
files, which non-technical people usually have issues with. Netlify offers
NetlifyCMS as a solution to this, and Forestry offers a more robust version
with a more powerful and clean editor.

------
nzoschke
CDNs are great in general.

I find myself putting CloudFront in front of pretty much everything to unlock
speed, security and now even functionality like auth thanks to Lambda@Edge.

I have a CloudFront add-on for Heroku that, in some cases, can double
performance without any application changes.

[https://www.mixable.net/blog/making-heroku-fast/](https://www.mixable.net/blog/making-heroku-fast/)
[https://elements.heroku.com/addons/edge](https://elements.heroku.com/addons/edge)

------
sebringj
The only part I had an issue with, in terms of a CDN, was expiring content
fast enough when making changes. Worrying about that seemed to be a headache;
I can't count how many times a customer asked "should I refresh?" But I guess
it should be a dynamic site if changes are frequent enough. Using Netlify
seemed to handle this expiration problem for me. I believe they are using AWS
and handling the details of caching and expiry headers etc., so I would
recommend using a service that removes worrying about that part.

~~~
cortesoft
You can purge content from most CDNs through an api.
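
As a sketch of what that looks like: Cloudflare, for example, exposes a
purge endpoint per zone that takes a list of URLs. The zone ID below is a
placeholder, and the actual POST (plus the `Authorization: Bearer <api-token>`
header) is left out; this just shows the shape of the request.

```python
import json

API_BASE = "https://api.cloudflare.com/client/v4"

def build_purge_request(zone_id, urls):
    """Build the endpoint and JSON body for a Cloudflare cache purge.

    Sending the request (e.g. with `requests.post`) and authentication
    are omitted from this sketch.
    """
    endpoint = f"{API_BASE}/zones/{zone_id}/purge_cache"
    return endpoint, {"files": list(urls)}

endpoint, body = build_purge_request(
    "0123abcd",  # placeholder zone ID
    ["https://example.com/css/styles.css"],
)
print(endpoint)
print(json.dumps(body))
```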

~~~
sebringj
Absolutely true, but try telling that to your designers using CloudFront.

------
andreareina
Page authors: for content sites, there's no excuse to auto-focus the search
bar; it breaks keyboard navigation (and given the content, you can expect that
a large portion of your audience uses it).

------
lowbloodsugar
I used Amazon Cloudfront for this tiny blog describing how to set up a tiny
blog using Cloudfront. [1]

[1] [http://www.jamiebriant.com](http://www.jamiebriant.com)

------
enriquto
I like to do statistics on user agents and IP localization. If the files are
hosted on a CDN, the HTTP logs are not easily available. For me this is the
biggest reason for self-hosting.

------
yoz-y
For static sites of a huge size and with lots of visitors, maybe. But hey, if
you publish a full RSS feed then you don't even need to care.

~~~
zrail
A full RSS feed is a really good candidate for putting on a cdn though. It
doesn’t change often and is really big.

~~~
yoz-y
Yes, but in the current landscape you can punt it off to Feedbin, Feedly and
other services like that, since they're the ones who will do the bulk of the
serving.

------
russh
And if you have people in China who use your site, some of the CDNs are
blocked and can't be reached from behind the Great Firewall of China.

------
starchy
CDNs are great, but can we stop upvoting abusive headlines?

------
patrickg_zill
The total size of the HTML on that page (including stuff that loads JS and
tracking images) is 30KB. It's all the "other" stuff that makes the site feel
slow IMHO... perhaps for marketing purposes they want to track everything; but
for actually transferring information, a CDN would not help them given how
quickly a few hundred KB can be served.

------
rf1331
Any site that cares about SEO should definitely use one.

~~~
enz
May I ask why?

