

An introduction to JavaScript-based DDoS - hepha1979
https://blog.cloudflare.com/an-introduction-to-javascript-based-ddos/

======
TazeTSchnitzel
SRI is only part of the solution. What you really want is hash-and-name-based
_lookup_ with a fallback. That way:

* Loading common resources is fast - already loaded by other sites

* If resource isn't cached, the browser doesn't need to open a second connection to a CDN, it can grab the file from the same site

* Proliferation of multiple CDNs ceases to be an issue, as each does not have separate browser cache

* Evil CDNs cannot provide bad JS as integrity is checked and the site isn't relying on a CDN anyway

* If a CDN goes down, your site isn't broken, because the fallback can be to a local file

Would work something like this:

    
    
      <script src="hash-lookup:jquery-1.10.2.min.js,sha256-C6CB9UYIS9UJeqinPHWTHVqh/E1uhG5Twh+Y5qFQmYg=,static/jquery-1.10.2.min.js"></script>
    

Browser checks cache for a file with a matching name and hash, ignoring site
(this looks in a special cache for files loaded with hash-lookup, you can't
check for arbitrary web resources). If there is one, it's used. If not, it
loads it from the fallback URL, checks the hash, and caches it.
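
A rough sketch of what that lookup-with-fallback step might look like, written
as TypeScript rather than anything a browser actually implements; the
hash-lookup: scheme, the dedicated cache, and the resolveHashLookup function
are all part of the hypothetical proposal, not a real API:

    // Hypothetical sketch only: the "hash-lookup:" scheme, the dedicated cache and
    // resolveHashLookup() follow the proposal above, not any real browser API.
    import { createHash } from "crypto";

    type CacheEntry = { name: string; digest: string; body: Buffer };
    const hashLookupCache: CacheEntry[] = []; // shared across sites, keyed by name + hash

    async function resolveHashLookup(src: string): Promise<Buffer> {
      // src looks like "hash-lookup:<name>,<algo>-<base64 digest>,<fallback URL>"
      const [name, integrity, fallbackUrl] = src.replace(/^hash-lookup:/, "").split(",");
      const dash = integrity.indexOf("-");
      const algo = integrity.slice(0, dash);      // e.g. "sha256"
      const expected = integrity.slice(dash + 1); // base64 digest

      // 1. Check the dedicated cache by name + hash, ignoring which site stored it.
      const hit = hashLookupCache.find(e => e.name === name && e.digest === expected);
      if (hit) return hit.body;

      // 2. Not cached: fetch the fallback URL from the embedding site itself.
      const res = await fetch(fallbackUrl);
      const body = Buffer.from(await res.arrayBuffer());

      // 3. Verify the hash before caching or executing anything.
      const actual = createHash(algo).update(body).digest("base64");
      if (actual !== expected) throw new Error(`integrity mismatch for ${name}`);

      hashLookupCache.push({ name, digest: expected, body });
      return body;
    }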

Now, this would create potential for hash collision attacks. However:

* If hash algorithms found to be weak, browsers can disable them

* Unlikely to be effective given users will usually already have a file cached

~~~
kpcyrd
This was already proposed and got rejected because of cache poisoning issues.

IF you're able to create a hash collision and IF you're able to deliver (for
example) jquery first, your malicious version would be cached and injected
into every page that uses the targeted jquery version and makes use of this
feature.

This isn't simple, but it's still an attack vector with huge impact if
successful.

Also:

* If you keep the file forever, you've poisoned this hash forever. If you clear it sometimes, there's a short time window in which you can insert your malicious version.

* If you target an old version of jquery, you're increasing the chance that the browser hasn't seen this file yet, or has already forgotten about it (evicted it to mitigate the poison-forever issue).

~~~
ctz
If this was even a remotely feasible problem with modern cryptographic hashes,
DNSSEC, TLS, SSH, package management systems, most authentication systems,
etc. would all be dramatically broken.

If I had this capability I wouldn't waste it on injecting javascript into web
pages. I'd create forged browser upgrades and go from there.

~~~
duaneb
Cache poisoning is not necessarily breaking the hash, it just means they snuck
something in there _somehow_ (e.g. social engineering techniques).

~~~
Dylan16807
In this context it's cache poisoning that does require breaking the hash.
Which is not a realistic reason to reject the concept.

~~~
duaneb
...how would social engineering NOT work to distribute javascript
vulnerabilities or backdoors via jquery??

~~~
Dylan16807
Every website using jquery will have the real hash, so you can poison all the
mirrors you like and it won't matter.

The only way to get the wrong hash onto sites is to actually publish it on the
authoritative server. That's not cache poisoning, that's a malicious official
version.

------
mangeletti
Subresource integrity (SRI) is a really great idea, especially considering
that many websites' resources are hosted across several different CDNs, each
of which is a potential attack vector.

Here is the SRI working draft -
[http://www.w3.org/TR/SRI/](http://www.w3.org/TR/SRI/)

Here is a tool for generating SRI hashes -
[https://srihash.org/](https://srihash.org/)
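
For illustration, a small Node/TypeScript sketch of roughly what such a tool
does: hash the file you intend to reference and emit the integrity value. The
file path and CDN URL below are placeholders, not anything from the spec:

    // Minimal sketch of generating an SRI digest for a local copy of a script,
    // roughly what srihash.org produces; the file path and CDN URL are placeholders.
    import { createHash } from "crypto";
    import { readFileSync } from "fs";

    const body = readFileSync("static/jquery-1.10.2.min.js");
    const digest = createHash("sha384").update(body).digest("base64");

    // crossorigin="anonymous" is needed on cross-origin resources so the browser
    // is allowed to check the bytes it fetched against the integrity value.
    console.log(
      `<script src="https://cdn.example.com/jquery-1.10.2.min.js"
            integrity="sha384-${digest}"
            crossorigin="anonymous"></script>`
    );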

------
userbinator
You don't even need JS to perform a "DDoS" - simply link to an image or some
other resource on the target site from another site with high traffic, and you
accomplish essentially the same effect. Forums that allow linking to
images/etc. in inline posts and/or signatures are another way to "gather a
crowd", and I've accidentally DDoS'd sites this way too... not unlike the
"slashdot effect". In fact, it could be argued that HTTPS makes DDoS even more
effective, due to the additional overhead it introduces on each connection.

I think per-IP rate limiting is one of the best ways of preventing DDoS: stop
answering clients that issue too many requests in a short time, and they will
probably choke on their own open-connections limit very soon as the
connections start timing out, since the traffic comes from browsers initiating
ordinary TCP connections and not from a dedicated packet-spewing program.
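
A minimal sketch of that idea, assuming a bare Node HTTP server; the
20-requests-per-second limit, the fixed window, and the in-memory map are
illustrative values, not recommendations:

    // Minimal per-IP rate-limiting sketch on a bare Node HTTP server.
    // The limit and the in-memory map are illustrative only.
    import { createServer } from "http";

    const LIMIT = 20;        // max requests per IP per window (assumed value)
    const WINDOW_MS = 1000;  // fixed one-second window
    const counts = new Map<string, { windowStart: number; hits: number }>();

    createServer((req, res) => {
      const ip = req.socket.remoteAddress ?? "unknown";
      const now = Date.now();
      let entry = counts.get(ip);

      if (!entry || now - entry.windowStart > WINDOW_MS) {
        entry = { windowStart: now, hits: 0 }; // new client or expired window
        counts.set(ip, entry);                 // real code would also evict stale IPs
      }

      entry.hits++;
      if (entry.hits > LIMIT) {
        // "Stop answering": leave the request hanging instead of replying, so the
        // abusive browser's own open-connection limit fills up and requests time out.
        return;
      }

      res.end("ok");
    }).listen(8080);

Leaving the excess requests hanging, rather than closing them, is what pushes
the browser into its own per-host connection limit as described above.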

~~~
hn_
I remember "hot-linking" was a big problem in the 1990s web.

Also, one time a myspace template linked to pictures on some guy's website and
became really popular. He decided to goatse the myspace profiles that were
hot-linking his photos.

~~~
kpcyrd
[http://ascii.textfiles.com/archives/1011](http://ascii.textfiles.com/archives/1011)

~~~
timboslice
Risky click of the day after seeing Goatse mentioned above. It was actually a
pretty good read, thanks for sharing!

------
rakoo
It's especially important to realize that Cloudflare offering free TLS for
everyone isn't enough to prevent javascript-based DDOS, if you don't make sure
that third-party resources are _also_ behind TLS.

~~~
parryjacob
It helps a lot, considering that most resources on a page served over HTTPS
also have to be served over HTTPS.

~~~
thirsteh
All of them, if you want to do it right. The biggest reason inactive mixed
content is allowed at all is that blocking it would break too much.

------
jetm9
I don't get the severity of this attack. For this attack to work, you have to
include a JS file from a CDN that has been compromised. If you are using a
high-profile CDN, the compromise would probably be noticed sooner. In any
other case, that's plain DDoS, not necessarily a JS-based DDoS.

