What Experts Love and Hate About CDNs (maxcdn.com)
90 points by kawera on Oct 22, 2015 | hide | past | favorite | 47 comments

MaxCDN user here. I have a love/hate relationship with CDNs in general, but one thing I hate specifically about MaxCDN is that once you create a "pull zone", they will continue to charge you for it even if it's not being used. In fact, even if you delete a pull zone, it remains active for billing in the background, and you have to call them to terminate it completely. That is shady. Other than this, it's a pretty good service.

Hi - Chris here from MaxCDN. That's a really good point and unintentional! Will get it fixed ASAP. Thanks for the feedback. Drop me a line if anything else comes up. chris at maxcdn

Thx Chris. That will be great. I am a happy customer overall but this one thing bugged me. I didn't realize until I started getting all those invoices and auto payments.

Are you Chris from MaxCDN or chris at maxcdn? Sorry I am case sensitive...

Pretty clearly just providing his email address in a way intended to avoid scrapers.

does that even work nowadays?

I'm surprised no one mentioned the obvious security issues arising from the use of CDNs, especially when used to deliver JavaScript source code.

When you serve your JavaScript over a CDN rather than directly from your server, the CDN has access to pretty much everything your users have access to - the CDN can read cookies and local tokens, take actions on behalf of users (such as updating profiles, messaging users, deleting content, making payments), read out financial/sensitive information, and more. It's a permanent and impossible-to-fix XSS attack vector that can be abused by your CDN, or by anyone who hacks into your CDN.

This is in addition to the security issues caused when the CDN proxies requests to your main hostname and not just your static files (requiring you to surrender your SSL keys to them).

While working on Bitrated, a Bitcoin service that deals with users' private keys and funds on the client-side, this stood out as a very serious issue. We opted to not serve any content from 3rd-party providers, at all (including Analytic services, which suffer from the very same issue), and configured a strict Content-Security-Policy that forbids anything other than hostnames controlled directly by us over SSL.

Edit: I'm aware of SRI, but it's brand new and not yet supported by the majority of browsers in use, so it's not really a realistic solution just yet.
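For reference, a strict policy along the lines described might look like this (hostnames are illustrative, not Bitrated's actual config) - everything is denied by default, and only first-party hosts over HTTPS are allowed:

```http
Content-Security-Policy: default-src 'none'; script-src https://static.example.com; style-src https://static.example.com; img-src https://static.example.com; connect-src https://api.example.com
```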

What you can do with some CDNs is have a self-served loader that pulls the files from the CDN using XHR (requiring CORS headers...), verifies the content hash, and then injects them into the page if they look good. I have a PoC on GitHub: https://github.com/ryancdotorg/verifyjs - the same sort of thing could be modified to support signed files rather than hashed, which SRI can't do.

That's a pretty cool PoC, Ryan. Do you know @jdorfman? You guys should hook up.

I do not - I mainly do arcane infosec work and only in the past two years or so (I've gotten better with javascript in that time) have I been doing non-trivial client side stuff. Consequently, most of my professional contacts are in infosec.

cool. I asked him to drop you a line

This article is quoting "CDN experts", in other words people with a financial interest in CDNs.

It's not in their interests to point out how the proliferation of CDNs results in centralisation of the internet and hence makes them tempting targets. Or how it gives a few companies the power to effectively shut out certain groups of users from large parts of the web (like CloudFlare did to Tor users a while back).

It's really disappointing to me to see the CDNs swallow the web whole. But at the same time I understand it, it's not really possible to protect against DDoS attacks for example without that kind of scale.

One thing to watch out for with some CDNs (CloudFront and Fastly, for example) is per-request pricing. It was the biggest component of our CloudFront bill because we have a lot of small resources (JS/CSS files) that had to be served with short expiry times. We moved to EdgeCast and it was cheaper and faster, but recently they seem to be having ISP/PoP-level outages which we are not able to pinpoint.

So we are on the lookout for a good CDN. Came across https://www.keycdn.com/ which checks a lot of boxes, but I can't find any reviews of it. Has anyone here used it?

Here using Fastly with great results. Consistently fast, including SSL negotiation. Truly fast invalidation. A ton of control available with Custom VCLs (they run a custom Varnish version). E.g. you can generate an "Expires:" header automatically on the CDN edge based on the Cache-Control header. You can serve a static resource if the backend times out. Best of all stale refresh and soft purge means that you can update resources "in the background" on the CDN and avoid exposing your clients to "miss latency" (backend server latency).
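For what it's worth, the stale refresh behavior described above maps onto the standard Cache-Control extensions from RFC 5861, which Fastly honors; an origin might send something like this (values illustrative):

```http
Cache-Control: public, max-age=300, stale-while-revalidate=60, stale-if-error=86400
```

That tells the edge it may serve a stale copy for up to 60 seconds while refetching in the background, and for up to a day if the backend is erroring, so clients rarely see backend latency or failures directly.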

I feel that the ultimate answer is a multi-CDN solution. If your assets are critical and must always be available, I think you are going to want to diversify your CDNs. The challenge comes in how to load balance and/or fail over across CDNs - whether that's easy or not depends on your use case. Doing so automatically at the time of trouble is a challenge too. Dyn just rolled out a product called Internet Intelligence that monitors all the CDNs' performance (with end-user perf measurements) and can help you determine whether one of your CDNs is slow (and whether it's slow only in certain parts of the world). Theoretically, you can then combine geo-aware DNS with congestion-aware CDN selection and BAM, diversified, less-congestion-sensitive content delivery.

My faith in Fastly was shaken earlier this year, after we experienced connections intermittently failing (HTTP 5xx) to any service behind Fastly (ours and third-party) in our Amsterdam datacentre. We contacted their support, but they told us repeatedly it must be our fault (like a proxy issue). Long story short, after investigation it was confirmed to be their fault - most likely a machine gone bad in their PoP, with the intermittent nature related to balancing across machines. This was not resolved until we pinged their team on IRC and they woke up. It shook my confidence, given the obvious lack of monitoring or an on-call system.

My experience with Fastly as an end user is not great. RubyGems is hosted on Fastly and it times out a lot on some PoPs (Singapore). Also, Fastly has per-request pricing, which is something I want to avoid.

Another multi CDN solution is from cedexis - http://www.cedexis.com/openmix/multi-cdn.html

We have used a couple of CDNs, including KeyCDN. I cannot comment on performance as we have not done any tests yet, but the team is very responsive and helpful. They have a good UI and a good set of tools available for their customers.

One of the best APIs I've used is the one from Rackspace on top of their Cloud Files offering. Their documentation is really rather good too. They are simply sitting on top of Akamai and you get a nice REST API to talk to.

If I remember right, when you use Cloud Files you don't get access to all of Akamai's PoPs.

I love the flexibility and speed that CDNs provide; however, I hate the fact that if you are outside the US, CDNs get very flaky very quickly. Most of the large providers don't have a Point of Presence (PoP) in Africa, and large parts of Europe, Russia and Asia are not covered (i.e. don't have a close PoP). CloudFlare and Akamai are the only companies that have PoPs in Africa, and it's unfortunate that small players can't really use Akamai. Aside from the location problems, the lack of and/or pricing for HTTPS/TLS really sucks.

> it's unfortunate that small players can't really use Akamai

Small players can't use Akamai directly, but there are providers (Netlify and probably some others) built on Akamai that are more accessible to the downmarket crowd.

The most prominent example being Rackspace, which claims to offer the full Akamai network with simpler pricing & APIs:


Not the full Akamai network - only 219 edge locations out of many thousands on Akamai's full network:



Cache hit ratios at different PoPs and for different types of resources (don't lump my .js files with my mp4s; let me give you patterns/routes to group by).

Fine-grained latency numbers (95th, 99th, 99.9th percentiles) per region, per resource type.

This is a really good idea.

lol, somehow no one even mentions privacy issues.

Given that they don't, would you explain the topic?

Given that CDNs are used on many different sites, it's easy for people running CDNs to track other people around the internet.

There are also security issues: if you're referencing some JavaScript from a CDN, you're trusting the people running the CDN to actually serve the JavaScript you request.

Do not trust the CDN:

link(href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css', rel='stylesheet' integrity='sha256-MfvZlkHCEqatNoGiOXveE8FIwMzZg4W85qfrfIFBfYc= sha512-dTfge/zgoMYpP7QbHy4gWMEGsbsdZeCXz7irItjcC3sPUFtf0kuFbDz/ixG7ArTxmDjLXDmezHubeNikyKGVyQ==' crossorigin='anonymous')

script(src='//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js' integrity='sha256-Sk3nkD6mLTMOF0EOpNtsIry+s1CsaqQC1rVLTAy+0yc= sha512-K1qjQ+NcF2TYO/eI3M6v8EiNYZfA95pQumfvcVrTHtwQVDG+aHRqLi/ETn2uB+1JqwYqVG3LIvdm9lj6imS/pQ==' crossorigin='anonymous')

To be fair, SRI is brand new and doesn't exist yet in most browsers (it just went live in Chrome in the last month or so?). It isn't realistic to expect most people to already be using it.

You can check a particular browser for SRI support using https://ejj.io/sri/

Despite minimal browser support, it's still nice to start seeing projects like this that make it very easy to implement SRI (for Wordpress users, in this case):


So integrity checking is just that: checking that the code is actually what you have uploaded. Having worked closely with the great people who made the SRI specification (www.w3.org/TR/SRI/), I can guarantee that any CDN supporting integrity checking wants to ensure you get the correct content.

If they didn't care about delivering the correct content, they would be pushing back against the specification rather than using it.

This is why I have pushed Bootstrap to start using integrity checking: to prevent this form of code injection. That said, there isn't a requirement for you to use this with a CDN either (I get that while the browser support isn't there, it perhaps wouldn't be ideal on a bank site - this has always been the case unless you are in charge of the CDN content). What it does do is inform me that they won't be delivering variable content of any kind whatsoever, as SRI would break their code if they tried to load malicious content.

Justin, who has also replied to this message, works closely with developer outreach at MaxCDN and cares greatly about their product 'doing the right thing'™.

Let me know if you have any further questions about SRI, as I can probably answer them for you.

Jonathan Kingston

Here's the W3C spec on SRI -- http://www.w3.org/TR/SRI/ -- if you still prefer not to use it, you can opt to remove the 'integrity' attribute, it's being provided as a convenience.

link(href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css', rel='stylesheet' crossorigin='anonymous')

script(src='//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js' crossorigin='anonymous')

Subresource integrity keeps CDNs honest. If a CDN is "incompatible" with subresource integrity, they're doing a MITM attack on your users and your site.

As @ejcx said, it is Sub-Resource Integrity (SRI), and we (BootstrapCDN/MaxCDN) should be clearer about what SRI is so it doesn't cause any panic. Thanks for the feedback. If you have any questions or concerns, you or anyone can email me: jdorfman at maxcdn .com

> There are also security issues: if you're referencing some JavaScript from a CDN, you're trusting the people running the CDN to actually serve the JavaScript you request.

In the HTTP/2 world, it will likely be better to not do this.

The old trick of relying on the asset already being in the browser cache doesn't get around the fact that the connection to a third-party server is expensive, while HTTP/2 gives you a nice open connection to your own server already... I would bet that a site is faster after taking as many third-party or other-domain assets as possible and putting them back on the same domain and same server.

> I would bet that a site is faster by taking as many third-party or other domain assets and putting them back on the same domain and same server.

There are other advantages to using a CDN beyond 'optimizing' browser cache usage: getting assets closer to your users reduces latency (and therefore increases throughput), which can be a noticeable benefit if you're hosting in San Francisco and serving Oceania/SE Asia.

HTTP/2 won't change that: it will just mean that distributing assets over multiple domains (to circumvent browsers' per-host connection limits) will make less sense, but putting one asset domain behind a CDN will still be beneficial.

Ah, I should clarify... if you already have your whole site on a CDN, and that CDN supports HTTP/2, then you are probably best off not sending requests to lots of other domains if you are able to serve them from your own domain.

The connection overhead of DNS + connection + TLS is still significant... and HTTP/2 means you'll already have a connection to your server, and the assets could already be being pushed to you, but at the least you'll be able to grab them from the same open connection.
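On the push point: many servers and CDNs will initiate an HTTP/2 push (or at least an early fetch) for assets announced via a preload Link header on the main document's response. A sketch, with an illustrative path:

```http
Link: </js/app.js>; rel=preload; as=script
```

With the asset on your own domain, the browser either receives it over the already-open connection or requests it there, with no extra DNS + TCP + TLS round trips.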

Assuming you serve the CDN content with good caching headers, there's no connection required in the case of a cache hit.
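For a fully versioned path, "good caching headers" basically means telling the browser the file will never change, so it never re-requests it. An illustrative response header for something like /libs/foo/2.1.4/foo.min.js:

```http
Cache-Control: public, max-age=31536000
```

A one-year max-age on an immutable, versioned URL means repeat visits hit the local cache with no connection at all.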

True, but not all of those JS CDNs do this (or provide full versioning in paths), and because you're not serving it you have no control over it or ability to configure it.

Are there many CDNs that serve static content without good caching headers? I haven't seen that myself, except in the case of the automatic latest version feature that some have/had.

Also, you have to be very careful not to share your app's cookies - or to share only the right cookies, if necessary - with the CDN.

> you're trusting the people running the CDN to actually serve the JavaScript you request

Subresource Integrity [1] lets you fix this, and is in FF 43 and Chrome 45.

[1] http://www.w3.org/TR/SRI/

crossorigin='anonymous' solves the privacy issue, and SRI will solve the security issue. If you use JS blocking like NoScript, which doesn't appear to support per-site access restrictions, I'd suggest you switch to something like uBlock Origin's advanced mode, which lets you allow individual sites to access specific CDNs.

Why people use shitty CDNs like Akamai or Highwinds (kinda) or MaxCDN, is beyond me. Does nobody do testing? Pick someone with a sane API and you don't have these problems. I happen to like CDN77.

What don't you like about those CDNs? In particular Akamai?

Almost every single one has borderline nonsensical services, leading to some very "creative" APIs or workarounds for weaknesses.

The pitches that I have heard regarding a lack of initiated prewarming, synchronization/transfer methods, or architecture optimizations are pathetic at best and fraudulent at worst (Highwinds CDN, shame on you). Akamai was inconsistent or failed to meet any sort of standard on any given test run from 2005 to 2008 (the last time I was considering them), and Akamai continues to be opaque and stonewalls any attempt to assess why individual failures occur. Whatever your particular needs are, there is absolutely NO REASON you should have to do any kind of workaround without a discount in price where you can measure a tradeoff. As per my original comment, "shitty CDN" is not an objective assessment. Being unhappy with your CDN because of its API is just plain laziness: I don't want to go to a CDN host's website, ever. The API should be robust and performant. Storage, caching, availability by geo, pricing - these are solved problems. If you are using a CDN and are unhappy, go find a new one and move. If you can't do that, you have larger problems.
