
Recent Facebook XSS Attacks Show Increasing Sophistication - tswicegood
http://theharmonyguy.com/2011/04/21/recent-facebook-xss-attacks-show-increasing-sophistication/
======
blantonl
Dealing with user-submitted content on a 200+ million user platform must
really try the patience of Facebook's security researchers.

It would be great if Facebook had a security research group that openly
_published the results of their findings_, at least those that don't expose
corporate secrets. They've almost certainly seen it "all."

------
kragen
The parse_url vulnerability is probably a good example of why you don't want
to use blacklists for filtering out malicious input; you want to use
whitelists, and then you want to reconstitute the thing you parsed into a form
that can be parsed unambiguously.

parse_url(" javascript:alert('hello')") yields

    
    
        Array
        (
            [path] =>  javascript:alert('hello')
        )
    

which clearly does not have a URL scheme on any whitelist you might apply.
Even if it had incorrectly claimed the scheme was "http", the reconstitution
step would give you a URL like
<http://localhost/%20javascript:alert('hello')>, which would avoid the problem.

------
Joakal
It's not mentioned, but the lessons are:

1\. Use a unique CSRF token in a hidden form field that only logged-in
Facebook users can get, so that only they can submit posts to their own
walls; a sketch follows this list. (More:
[https://secure.wikimedia.org/wikipedia/en/wiki/Cross-site_re...](https://secure.wikimedia.org/wikipedia/en/wiki/Cross-site_request_forgery) )

2\. Regarding video: don't allow JavaScript code to be submitted and rendered
anywhere. The code used to link to videos was unescaped JavaScript (with a
leading space) followed by a video link.
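
A minimal sketch of lesson 1 in PHP (the session-based scheme and field name
here are illustrative assumptions, not Facebook's actual fb_dtsg
implementation):

    <?php
    session_start();
    
    // Illustrative per-session CSRF token: a random secret tied to the
    // session, embedded in the form as a hidden field, checked on submit.
    if (!isset($_SESSION['csrf_token'])) {
        // A random secret that an off-site attacker cannot guess.
        $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(16));
    }
    
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        // Reject any post whose token doesn't match this session's.
        if (!isset($_POST['csrf_token'])
                || $_POST['csrf_token'] !== $_SESSION['csrf_token']) {
            header('HTTP/1.1 403 Forbidden');
            exit('Invalid CSRF token');
        }
        // ... handle the wall post ...
    }
    
    // Emit the token as a hidden field inside the form.
    echo '<input type="hidden" name="csrf_token" value="'
        . htmlspecialchars($_SESSION['csrf_token']) . '">';

As nbpoole points out below, though, a token like this stops classic CSRF
but not XSS: injected script already runs in the page's origin and can
simply read the hidden field before submitting.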

~~~
rorrr
Seriously, why are they not using tokens on every request? A simple
md5(user_id + salt) would be more than enough.

~~~
nbpoole
They do, and have for some time (although I can't vouch for the particular
algorithm used). fb_dtsg is the usual parameter name. Unfortunately, an XSS
vulnerability by its very nature allows an attacker to subvert those
protections.

------
xtacy
What would be a (simple) abstraction from browsers that would just thwart all
these attacks? The main problem here seems to be the use of heuristics for
identifying malicious content.

Another root problem is the mixing of code and data. Say there were a new
HTTP header that told the browser to disable inline scripts; would that help
solve the problem?

~~~
nbpoole
You mean like
[https://developer.mozilla.org/en/Introducing_Content_Securit...](https://developer.mozilla.org/en/Introducing_Content_Security_Policy)
? ;-)
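
For the curious, sending that header from PHP looks roughly like this (a
sketch using Mozilla's experimental X-Content-Security-Policy syntax, which
may change as the spec evolves):

    <?php
    // Inline scripts and javascript: URLs are blocked by default under CSP;
    // only scripts loaded from the page's own origin ('self') may run.
    header("X-Content-Security-Policy: allow 'self'");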

