
XSS Twitter in minutes; Why you shouldn't store important data with 37signals - antonovka
http://brian.mastenbrook.net/display/36
======
tptacek
I'm usually the first person here to jump to 37s' defense. I know some of
their people, they're hometown heroes, and I use and like their products.

This is hard to defend, guys.

It is literally the-simplest-thing-not-to-fuck-up. Nobody's asking you not to
have security vulnerabilities. In fact: nobody's even asking you to _fix_
vulnerabilities. We just need a reliable way to communicate with you about
them.

If you're selling accounts on a web app, you need:

* A security page
* With a PGP key
* And an email contact
* of someone who will write back
* who knows what a security vulnerability is
* and who will write back quickly.

That's it. Do that, and you're not a punch line. If someone dumps zero-day
about you onto Twitter, you're already two steps ahead in the PR war, because
you had a reasonable process, and the researcher ignored it.

Bonus points --- things that are trivial to do, but that nobody's even asking
you to do:

* You can assign special issue numbers to vulnerabilities, to make the researcher feel like an XSS disclosure isn't the same thing as a bug in your online help.

* You can thank researchers privately, and let them know that you'd really like them to keep disclosing things to you --- you could even give them (wait for it) a phone number.

* You can do what every vendor with a real security team does, and keep a public web page thanking people who have discreetly disclosed vulnerabilities to you.

~~~
Manfred
Actually it's pretty easy to defend.

I never used to have travel insurance, but after my suitcase broke on my first
intercontinental flight, I learned the importance of having it.

In a perfect world your support team would be able to distinguish between
someone rambling about bytes and an actual security issue. That's hard without
a lot of technical knowledge.

I believe that's why the Rails security team responded rather quickly and the
37signals support team didn't. I'm sure they will do better in the future.

~~~
tptacek
The problem isn't that 37S tech support doesn't know how UTF-8 works. The
problem was that security reports were routed to tech support in the first
place. Again, the solution to this problem is a single web page with just a
couple pieces of information on it.

------
jeremykemper
Brian, I'm on the receiving end of security@37signals.com and
@rubyonrails.org. I read your post with great dismay, to put it mildly. You're
understandably pissed: we whiffed on our response to you by changing venue to
Rails security without keeping you in the loop.

This is my fault. I identified it as a Rails issue and requested that you
forward your findings to the Rails security team so we could investigate in
concert.

Craig here at 37s narrowed down a root fix with Michael, Rails' security
ombudsman, who then enlisted Manfred's help to track down and repair the root
cause. What you see today is the end result of those efforts. The security
process worked, but you only saw the Rails arm of it. The apparent 37signals
arm of it amounted to runaround. Completely not OK.

We now have a security-only email and PGP key at
<http://37signals.com/security>. Next time, no runaround.

~~~
moomins
There are still a couple of issues Brian brought up you haven't addressed.

The main one being the hubris of the copy on your security page. Declaring
users' data to be uncompromisable, then justifying this by listing mostly
_physical_ restrictions on the datacenter, rather ignores the larger
security issues for web-based applications. A firewall and the latest security
patches do not make one immune.

His other peeve seems to be the perceived bullshit of your support team saying
they replied to his initial complaint when it appears (at least to him) that
they had not, then putting the blame for this on his spam filters.

~~~
ashleyw
@tptacek:

Second Google result for "apple security":
<http://www.apple.com/support/security/>

~~~
tptacek
Yes, and that's a good page. Now tell me how to navigate to it on the OS X
site, and note how much security marketing fluff you'll see before you ever
find it.

I don't even think Apple is a bad example of the form. I think it's entirely
reasonable for them to market security on their main pages, and leave the
researchers to find their support page on Google. There are tens of
researchers, and millions of customers.

Apple has a lot of really smart people working in security research and
software security. Some of them are friends of ours. And some of those people
are frustrated with Apple for any number of reasons. But none of them --- in
fact, nobody I know that works in software security --- is particularly upset
about <http://www.apple.com/macosx/security>. It is what it is.

------
cpr
Funny, but I've always thought the 37signals team was bipolar.

On one hand, they're pretty much "our way or the highway" about any support
concerns/feature requests.

But, once caught out in public about any issue, they're all over it.

I suggest a very strong dose of self-administered humility for their founders
would go a long way. That doesn't mean being un-opinionated; it just means
being realistic about human nature and its vagaries as it applies to
themselves.

------
snprbob86
For all the horrible, terrible, awful things about Internet Explorer
(namely: standards support and UI) ... they are innovating heavily in
security: [http://www.microsoft.com/windows/internet-
security: [http://www.microsoft.com/windows/internet-
explorer/features/...](http://www.microsoft.com/windows/internet-
explorer/features/safer.aspx)

~~~
tsally
Microsoft is actually decent at security now. The AV they released for free
was actually on par with a few commercial products out there (all AV at the
moment is pretty bad though, if you're curious). The real difficulty with
Microsoft and security is countering the reputation for bad security that
they earned over the past several years.

~~~
snprbob86
That's because Microsoft is good at anything that they throw resources at.
Unfortunately, we haven't thrown enough resources at determining what to throw
resources at.

~~~
tsally
Well, good enough at least. ;-) Honestly the AV was a good move. Give stuff
like that away for free for long enough and the public perception of Microsoft
and security may change.

~~~
mattmcknight
On the other hand, over the years people have hated them and filed antitrust
claims against them for including free stuff in their releases. Norton used to
make a file manager before the Windows one was any good...now their AV product
faces a built-in competitor.

------
mncaudill
I read through the code, but I don't really know Ruby that well, so I'm not
100% sure of what the exploit was.

Looking at the patch, it appears that their "check for UTF-8" function wasn't
perfect. Is this correct? If so, how is this exploitable?

~~~
antonovka
As I understand it, Rails' string escaping would treat an invalid byte
sequence (eg, 0xFF, 0x3C) as a single multi-byte code point, and thus not
filter it, even though 0x3C (which is '<') should have been escaped.

The browser, however, would correctly treat 0xFF as an invalid initial byte,
and then interpret the next byte, 0x3C ('<'), independently.

So, you could pass arbitrary characters through Rails' string escape functions
by prepending an initial invalid byte sequence, and thus cause the browser to
interpret arbitrary JS/HTML.
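
A rough Ruby sketch of this class of bug (hypothetical code, not Rails' actual escaper): a byte-walking filter that assumes anything resembling a multi-byte lead byte starts a valid UTF-8 sequence, and so copies the following byte through unchecked.

```ruby
# Toy escaper operating on an array of integer bytes. The flaw: on
# seeing what looks like a multi-byte lead byte (>= 0xC0), it copies
# the "continuation" byte through without inspecting it -- so an
# invalid 0xFF prefix smuggles a following '<' past the filter.
def naive_escape(bytes)
  out = []
  i = 0
  while i < bytes.length
    b = bytes[i]
    if b >= 0xC0                    # looks like a UTF-8 lead byte...
      out << b                      # ...so copy it and the next byte
      out << bytes[i + 1] if bytes[i + 1]
      i += 2                        # through without inspecting them
    elsif b == 0x3C                 # '<'
      out.concat("&lt;".bytes)
      i += 1
    else
      out << b
      i += 1
    end
  end
  out
end

naive_escape([0x3C])        # => "&lt;".bytes -- escaped as expected
naive_escape([0xFF, 0x3C])  # => [0xFF, 0x3C] -- the '<' slips through
```

A browser that discards the invalid 0xFF and interprets the 0x3C as '<' then sees markup the filter never saw.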

~~~
mncaudill
Thanks for the response. That does make sense.

------
laktek
Can someone help compile a good security policy and guideline for web apps?

I guess every web app should have a page dedicated to security similar to
privacy policy and terms of service.

What essential information should go there? A dedicated email for reporting
security issues? A PGP key for encrypting emails? What else?

~~~
lupin_sansei
Don't trust any data that comes from outside your app (URLs, query strings,
HTTP request headers, databases, files) and you'll be right.

Perl's Taint Mode enforces this automatically. Don't know if any other
languages have it? <http://www.webreference.com/programming/perl/taint/>

~~~
jrockway
I don't think taint mode is helpful for this. All data comes from outside of
your program, but you somehow still have to display it. As we've seen
recently, this is hard to get right. Escaping HTML is one thing, but what if
you want to let a user type in a URL? Make sure to exclude javascript: URLs.
Even if you whitelist only <http://...> URLs, how do you know that a browser
bug won't allow an attacker to inject JavaScript, compromising any account
used by a user of that web browser?

Basically, web browsers need taint mode. The programming language that
produces the web page is a whole other issue.
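
For the user-typed URL case, a minimal scheme-allowlist check in Ruby might look like this (a sketch only; the function name is mine, not from any framework):

```ruby
require "uri"

# Parse the URL and accept it only if the scheme is explicitly
# permitted. Rejecting by default catches javascript:, data:,
# vbscript:, and anything else unanticipated.
ALLOWED_SCHEMES = %w[http https].freeze

def safe_link?(url)
  uri = URI.parse(url)
  ALLOWED_SCHEMES.include?(uri.scheme&.downcase)
rescue URI::InvalidURIError
  false
end

safe_link?("http://example.com/")  # => true
safe_link?("javascript:alert(1)")  # => false
safe_link?("JaVaScRiPt:alert(1)")  # => false
```

As the comment above notes, even this doesn't help against browser parsing bugs; it only rules out schemes the application never intended to link to.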

------
jrockway
I think what we've really learned from this is that the current JavaScript
security model is not good for what people are using the web for these days.

We really need something like a "<without-scripts>" block, where anything
inside could never use JavaScript (including links that are to javascript:
URLs). This would make life a lot easier for web developers.

~~~
antonovka
Then XSS attacks would insert </without-scripts> before <script></script> =)

What we need is literal separation of control statements (eg, <script>) from
content such that neither can be easily misinterpreted, but that would be a
significant departure from existing design.

~~~
tptacek
That shouldn't work for the same reason that you can't escape a bound SQL
query parameter in a pre-parsed query.
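
The bound-parameter analogy can be sketched with a toy "prepared statement" in Ruby (illustrative only, not a real database driver): the query's structure is fixed before any data arrives, so a bound value can only ever land inside a literal, never become new syntax.

```ruby
# Toy prepared statement: the template is split on placeholders first,
# so the statement's structure is decided before any value is seen.
class PreparedQuery
  def initialize(template)
    @parts = template.split("?", -1)  # structure fixed here
  end

  def bind(*values)
    raise ArgumentError, "wrong arity" unless values.size == @parts.size - 1
    # Each value is quoted as a string literal; embedded quotes are
    # doubled, so a value can never terminate the literal it sits in.
    quoted = values.map { |v| "'#{v.to_s.gsub("'", "''")}'" }
    @parts.zip(quoted + [nil]).flatten.compact.join
  end
end

q = PreparedQuery.new("SELECT * FROM users WHERE name = ?")
q.bind("alice")         # => "SELECT * FROM users WHERE name = 'alice'"
q.bind("x' OR '1'='1")  # => "...name = 'x'' OR ''1''=''1'" -- one literal
```

A real driver binds values after the server has parsed the query, which is stronger still; the point is the same -- data bound into a pre-parsed structure has nowhere to escape to.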

~~~
fhars
A modernized, preferably static version of perl's taint mode might work. You
need two types of strings, one for trusted and one for untrusted strings; all
your output functions accept only trusted strings, all your input or request
parsing functions return untrusted strings, and all naive string manipulation
functions return untrusted strings if at least one of their arguments is
untrusted. Then the possibly vulnerable code is limited to a few statically
identifiable routines that take untrusted strings and return trusted ones.
They still may be buggy, but at least you know which parts of your code may
introduce vulnerabilities and need special attention.

And with modern type systems, these types might even be phantom types
incurring no runtime overhead. Although things like the difference between
"safe for passing to a browser" and "safe for passing to my SQL server" might
complicate the architecture.
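
A dynamic Ruby stand-in for this trusted/untrusted split (Ruby has no static phantom types, so this sketch enforces the boundary at runtime; all names here are mine):

```ruby
# Input always arrives wrapped as Untrusted; output helpers accept only
# Trusted; the only way across the boundary is the audited escaper.
Untrusted = Struct.new(:raw)
Trusted   = Struct.new(:text)

# The single routine that converts untrusted -> trusted. Bugs are
# confined to here, which is the point of the scheme.
def escape_html(untrusted)
  escaped = untrusted.raw.gsub(/[&<>"']/,
    "&" => "&amp;", "<" => "&lt;", ">" => "&gt;",
    '"' => "&quot;", "'" => "&#39;")
  Trusted.new(escaped)
end

def render(value)
  raise TypeError, "untrusted string reached output" unless value.is_a?(Trusted)
  value.text
end

input = Untrusted.new("<script>alert(1)</script>")
render(escape_html(input))  # => "&lt;script&gt;alert(1)&lt;/script&gt;"
render(input) rescue :blocked  # raises TypeError: boundary enforced
```

A static version would reject the last call at compile time instead of at runtime, but the shape of the design is the same.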

~~~
sho
Ruby has a taint mode for String too. I think it's used in Rails, or maybe
they rolled their own, but the concept is definitely there.

Problem is, at some point you have to be able to display user-entered data.
You indeed mark it as tainted, or equivalent, then escape it as best you can.
The issue here was a bug in the escaper. Tainting was working as planned.

------
oink
You found a security exploit, feel special. Finding an exploit isn't a voucher
to rant against the people responsible for it. The bottom line is that nobody
can be 100% sure that their data is secure after they've put it in the hands
of a third party.

~~~
bmastenbrook
I think my point was a lot more nuanced than you give it credit for.

~~~
pvg
There's an interesting tangent towards the end -

"Web application security is still an immature field, and many of the layers
are sufficiently poorly designed that issues like this will pop up for a good
long while. Just like buffer overflows have been a weak spot for C security as
long as the Internet has been around, escaping issues will continue to be a
weak spot for web security for as long as we're afflicted with this particular
architecture."

It seems like a field not only in its infancy but also oddly unglamorous and
under-reported. There's no repository (that I know of, at least) of
vulnerability reports of major web apps, for instance, yet it's easy to look
up an exhaustive history of Flash vulnerabilities down to the seventeenth
decimal sub-version. And yet the various XSS/CSRF/etc vulnerabilities are
easily as dangerous and as exploitable. Twitter's dreams of a billion users
and a new internet were not exposed by a buffer overflow, after all.

~~~
tptacek
I think you're probably wrong about that; more security practitioners are
familiar with OWASP than with any other security advocacy/advisory group.

~~~
pvg
That's possible, especially since I'm not a 'security practitioner' and I'm
essentially talking about a subjective personal impression - that it's taken
less seriously, is less reported, and instances of specific vulnerabilities or
exploits in specific apps are not tracked in the way they are for operating
systems and major applications. This may, in part, be because in the case of
web apps fixes are immediately available to all users. On the other hand, you
can head to the RoR download page right now and click your way to downloading
the current vulnerable version of RoR. At no point will you get a suggestion
to check for recent security advisories or patches.

