Hacker News
What I learned from suffering my first and last XSS attack (livesshattack.net)
101 points by WillieStevenson on Feb 29, 2016 | hide | past | favorite | 41 comments



Some modern tools to mitigate XSS/make XSS virtually impossible:

- Content-Security-Policy https://developer.mozilla.org/en-US/docs/Web/Security/CSP/In...

- The sandbox attribute on iframes: https://developer.mozilla.org/en/docs/Web/HTML/Element/ifram...

XSS is one of the hardest things to keep under control at scale.


I've gotten into the habit of starting off every new project of mine with the header "Content-Security-Policy: script-src 'self'" on all pages, and whitelisting domains and 'unsafe-eval' as I need them. ('unsafe-eval' isn't so bad. 'unsafe-inline' is the really unsafe one that you want to avoid because it undoes the XSS protection. Eval has legitimate uses and eval-based vulnerabilities are not anywhere near as easy to accidentally create and never notice as a regular XSS vulnerability. I'd bet that everyone who has really written Javascript+HTML without CSP has created a regular XSS vulnerability at some point.)

There's no excuse to not start new projects off with a CSP rule like that. Doing it from the start means you won't ever have to comb through your codebase later to remove all bits of inline javascript that you depend on to get the benefits.
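As a sketch of that habit (the middleware shape and `buildCsp` helper here are illustrative, assuming an Express-style app; they are not from the comment):

```javascript
// Build a Content-Security-Policy header value from a directive map.
// Starting from script-src 'self' and widening deliberately keeps the
// "secure by default" property described above.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(" ")}`)
    .join("; ");
}

// Express-style middleware applying the strict default to every page.
function cspMiddleware(req, res, next) {
  res.setHeader(
    "Content-Security-Policy",
    buildCsp({ "script-src": ["'self'"] })
  );
  next();
}
```

Whitelisting a CDN later is then a one-line change to the directive map rather than a codebase-wide audit.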


A much better way to mitigate XSS/HTML injections is not using string functions to generate HTML.

CSP prevents an attacker from executing scripts, but the attacker can still corrupt your HTML.


CSP rules are a good defense-in-depth measure that will most likely limit the reach of vulnerabilities. They're not a cure-all that means you don't have to understand HTML/text encoding.

It's like what ASLR and stack canaries do for buffer overflow vulnerabilities in C/C++. You still should try to avoid writing buffer overflow bugs, but if/when they do happen, the protections mean that the issue will probably just be a denial-of-service issue rather than an issue that lets an attacker execute code on your systems.


You can use CSP to prevent content exfiltration as well, not just script execution. There are a number of directives suited for that: img-src, form-action, child-src, etc. Full list on MDN: https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CS...
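For example (a hypothetical policy, to be tightened or loosened per application), directives like these limit where injected markup can send data:

```javascript
// Directives that limit where a compromised page can send data.
const exfilPolicy = [
  "default-src 'none'",
  "script-src 'self'",
  "img-src 'self'",      // blocks exfiltration via injected <img> beacons
  "form-action 'self'",  // blocks injected forms posting elsewhere
  "child-src 'none'",    // blocks injected frames
].join("; ");
// e.g. res.setHeader("Content-Security-Policy", exfilPolicy);
```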


Yep. XSS isn’t at all hard to prevent if you’re using tools that are safe by default. Unfortunately, popular ones like jQuery aren’t. (This, more than any other, is a reason to prefer the DOM API.)


I'm not sure the DOM API is safer. innerHTML vs html()


If you're assigning untrusted text to a property called `innerHTML`, you shouldn't be surprised to learn that you're vulnerable to XSS.

APIs like jQuery are worse because you end up treating strings as HTML without even realizing it. Example from the article:

  $("#tail-here").prepend(newlines);
There's no hint on that line that the `newlines` variable is interpreted as HTML instead of text.
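To make the distinction concrete (`escapeHtml` is a hypothetical helper written for illustration, not a jQuery API):

```javascript
// What "treating a string as text" means: the characters significant
// to the HTML parser get replaced with entities, so the browser
// renders them literally instead of parsing them as markup.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[c]));
}

const newlines = '<img src=x onerror="alert(1)">'; // attacker-controlled
// $("#tail-here").prepend(newlines)             → parsed as HTML: script runs
// $("#tail-here").prepend(escapeHtml(newlines)) → rendered as literal text
```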


Pfft. Straight from the documentation of the method:

  content
  Type: >>htmlString<< or Element or Array or jQuery
You could argue that the API is bad for allowing HTML strings as arguments, but it's not really surprising that a function does what it is documented to do. Especially since these are DOM manipulation, not string manipulation functions.


> You could argue that the API is bad for allowing HTML strings as arguments

That is exactly what I’m arguing, yes.

> but it's not really surprising that a function does what it is documented to do

It doesn’t have to be surprising or undocumented to be stupid.


  > You could argue that the API is bad for allowing HTML strings as arguments
  That is exactly what I’m arguing, yes.
And how does the native DOM help there?


There are relatively few DOM APIs which take a trusted string. innerHTML and outerHTML are two, and they clearly state that they take HTML, so it's no surprise for the stuff you give them to be interpreted as such. But if you use e.g. textContent or createTextNode to insert text into your document, they will correctly treat it as plain text.

jQuery has text()[0], but because most of its API takes strings to start with, it's very convenient to do the wrong thing and shove untrusted strings into unsafe methods.

[0] http://api.jquery.com/text/#text2


Yes, the API takes strings, but it calls the native DOM in the end anyway, so I still don't get what your point is.


> yes, API takes strings, but it calls native DOM at the end anyway

jQuery calls unsafe APIs which should only be handed trusted strings, but makes it easy and convenient to give them untrusted strings and thus introduce exploit vectors.

Furthermore, it's also significantly more difficult to audit the code: using the regular DOM there are only a couple of properties to check, whereas pretty much any jQuery method call is a potential security hole.

tldr: jQuery makes doing things wrong very easy, much easier than doing things right.


Everything in jQuery is DOM manipulation. (Well, except for the little extra parts that aren't like $.ajax.) Tons of websites' code is primarily just jQuery calls because DOM manipulation is what makes things happen on the web.

The only native DOM API calls that can get you an XSS vulnerability are assignments to `innerHTML`, `outerHTML`, and maybe a few others. Most of the methods for getting things done have no possibility of introducing an XSS issue. With jQuery, many methods, including every method capable of inserting elements, are also capable of introducing an XSS issue depending on the types passed to them. So when you're reviewing for XSS issues, you have many more places to check, and analyzing the calls to jQuery methods for XSS potential is much more difficult because you have to trace backwards from every single call to see what types end up in the variables passed to the methods.


Some examples detailing the dangers of HTML corruption: http://lcamtuf.coredump.cx/postxss/.


jacobparker: Indeed. I added related headers to my web server conf after the fact (https://github.com/WillieStevenson/conf-files/blob/master/ng...). I should have mentioned it in my post.


One of the projects I'm working on aims to make it easier to integrate CSP headers into web applications.

https://github.com/paragonie/csp-builder

I'm working this week to integrate it into another project we're developing. (It's MIT licensed, so have fun with it.)


Hi CiPHPerCoder. Can I PM you?


On Twitter? Yes, my DMs are open. Feel free to email me too:

    security@ my company's domain name
I'm pretty sure HN doesn't have PMs.


The HTML framework I created for personal projects has a special string type that cannot be output (nor even compiled) without specifically selecting which format you want the string encoded as (eg HTML, URL, plain text, etc). While this does produce some arguably uglier code and creates a little additional development overhead (ie code fails to compile by default), it has caught numerous instances where I would have accidentally left myself open to this kind of attack. So I've found it to be highly effective in the long run.
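That design can be sketched in JavaScript too (the class and method names below are invented for illustration; the commenter's framework isn't shown, and a runtime error stands in for the "fails to compile" behaviour described above):

```javascript
// A string wrapper that refuses to be used until an encoding is chosen.
// toString() throws, so accidentally interpolating the raw value fails
// loudly instead of silently emitting unescaped markup.
class Untrusted {
  constructor(raw) { this.raw = raw; }
  toString() { throw new Error("choose an encoding: asHtml() or asText()"); }
  asHtml() { // encode for an HTML context
    return this.raw.replace(/[&<>"']/g, (c) => ({
      "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
    }[c]));
  }
  asText() { return this.raw; } // for non-HTML sinks (logs, plain text)
}
```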


Python's Jinja templating language has the concept of "safe" and "unsafe" strings. By default all strings are unsafe (meaning they will be escaped wherever they are rendered) and you need to explicitly mark them as safe in order to be able to treat them as HTML.

React and Angular (and maybe Ember too?) similarly auto-escape any strings to be rendered and only provide very explicit ways to circumvent that (in React it's even called "dangerouslySetInnerHTML" and takes an object with a property called "__html" rather than a string[0]).

You can avoid XSS in dynamically typed languages. You just need to make the safe thing easier to do than the unsafe thing.

[0]: I really like React's approach to naming APIs you should think twice about using. For legacy reasons ReactDOM is still bundled as part of React and exposed with a property but the name of that property is "__SECRET_DOM_DO_NOT_USE_OR_YOU_WILL_BE_FIRED" (presumably to prevent React maintainers / Facebook employees from using it).


I wasn't implying dynamically typed languages were worse than statically typed languages. I was just discussing the approach I used to mitigate human error when building web UIs, in the hope it might inspire other people (or that other people might offer additional feedback that inspires improvements in my own framework).

It is interesting to see that other frameworks are using similar ideas to my own though. Maybe not surprising; but at least reassuring to know that my own quirky API design isn't as leftfield as it felt when I was developing it.


The "dynamic typing" remark was directed more at the sibling comment than yours directly.


> Python's Jinja templating language has the concept of "safe" and "unsafe" strings. By default all strings are unsafe (meaning they will be escaped wherever they are rendered) and you need to explicitly mark them as safe in order to be able to treat them as HTML.

That's good, but it's not ideal, as it conflates "unsafe" with "needs HTML-escaping". An unwary developer could output untrusted text "safely" and still open up an XSS vulnerability – because they could be outputting into JavaScript embedded within HTML. Yes, Jinja will HTML-encode it by default, rendering it "safe" as far as the HTML parser is concerned, but that still doesn't make it actually safe.

Example:

    alert("Hello, {{user.name}}!");
By default, Jinja will encode this as HTML, meaning you can't trick the HTML parser into injecting your own HTML. But the output is still vulnerable to exactly the same class of attack, this time by tricking the JavaScript parser.

We should recognise what's actually the issue at play here. It's not really about whether data is safe or not. It's about correctly encoding data in a manner that's suitable for the context in which it is being output. Sometimes that's HTML, sometimes it's JavaScript, sometimes it's CSS, etc.
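A sketch of the context mismatch, using an inline event-handler attribute as the JavaScript context (both helpers are illustrative: `escapeHtml` stands in for Jinja's default encoder, and `decodeEntities` mimics what the HTML parser does to an attribute value before the JS engine ever sees it):

```javascript
// Jinja-style HTML escaping, reimplemented for illustration.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => (
    { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&#34;", "'": "&#39;" }[c]
  ));
}

// The HTML parser decodes entities in attribute values back to raw
// characters, undoing the escaping before the JS parser runs.
function decodeEntities(s) {
  return s.replace(/&(amp|lt|gt|#34|#39);/g, (m) => (
    { "&amp;": "&", "&lt;": "<", "&gt;": ">", "&#34;": '"', "&#39;": "'" }[m]
  ));
}

const userName = "'); alert(document.cookie); //";
// Template: <button onclick="greet('{{ user.name }}')">
const attr = escapeHtml(userName);      // HTML-safe on its face
const seenByJs = decodeEntities(attr);  // the quote survives intact,
// terminates the JS string, and the injected alert() runs on click.
```

The fix is encoding for the innermost context (here, a JS string literal), not just the outermost one.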


Yes, Jinja's "safe"/"unsafe" are just "html"/"text" by another name.

One could create the more comprehensive "html"/"css"/"javascript"/"text" set in dynamic languages too, but Jinja doesn't.


Strongly-typed languages FTW.


I have always been nervous about clicking on links to sites that say 'we were hacked, but we're safe now.' When that link also includes the hubristic 'and we'll never be hacked again,' which paints a target on you big enough for the entire darknet to see... I'm sorry but I'm not clicking on that link.


The site's giving me a 503 Service Unavailable now, so you might be onto something.


Is there a tool that tries XSS and other common attacks on a web page?

Similar to Google PageSpeed Insights and SEO tools, which detect errors such as "You haven't minified your CSS", "You haven't set a title and description for your page", etc.


Vulnerability scanners are usually premium (paid) products. Some offer free trials such as Qualys: https://www.qualys.com/forms/freescan/

Other well-known tools are Nessus and Acunetix: https://www.tenable.com/products/nessus-vulnerability-scanne..., http://www.acunetix.com/vulnerability-scanner/

These scanners are usually used by security consultants to test a complete internal network of a client.

For a full list, some of which are open source, check this link: https://www.owasp.org/index.php/Category:Vulnerability_Scann...


Thank you!


Check out https://www.tinfoilsecurity.com. It looks for XSS and about 60 other types of attacks.

(edit: full disclosure, I'm one of the founders)


Thanks, I got an error message. I'll check it out once the site is up!


Tinfoil discovered a few errors on my site. Thank you! I wish you'd hired a designer though :)


Ha, feedback is always appreciated. What aspects did you think needed more design work?


What is that tool you used for realtime traffic monitoring?


Google analytics?


Thanks. I never really programmed for the web


You're welcome.


Anyone know of static analysis tools that find potential XSS/etc. issues in your frontend (such as the innerHTML issue discussed above) and backend code?



