
Why XSS is serious business (and why Tesco needs to pay attention) - troyhunt
http://www.troyhunt.com/2012/08/why-xss-is-serious-business-and-why.html
======
nicholassmith
Top tip for any and all companies: when someone with a good reputation and
background says you're doing it _really_ wrong, and then half of the nerdy
internet agrees, listen to them. It'll save a lot of trouble later on.

I wish I was surprised that Tesco haven't fixed the XSS vulnerability, but it's
probably languished in management hell for a few weeks until it's deemed
necessary. Bolt the stable door before the horse does a runner.

~~~
romaniv
More than that, even if a single person without any reputation reports a
security issue, there should be some standard mechanism in the company to
evaluate it and fix the problem if it's real.

------
SwaroopH
I also have a feeling sometimes the security team never gets to read it. The
guy managing their Twitter account probably has no clue what's even going on.
His supervisor is too lazy to look into it further and asks him to send a
template "we are secure" response. I really feel like security@website.com
should become a standard, so researchers are at least able to talk to someone
who understands what they are doing.

~~~
mgkimsal
Perhaps with some formalized amnesty? If I go to the trouble of contacting you
to tell you you've got a security problem, do _not_ threaten to call the
police or FBI _on me_. I've had that happen _once_, and I've tended to stay
away from contacting places even when I saw they had a clear security
violation, because

a) I'm not sure how they'll react (will they claim I'm trying to extort them?)

b) there's never any clear way to contact a company beyond calling/emailing
'customer support'.

Occasionally I've tried to poke around my linkedin network to see if I had a
connection at company X to reach out to their dev team and notify them of
something, but I've never been successful that way (but also haven't pushed
the issue much).

~~~
simonbrown
I think the best solution would be for more companies to publish a page on
their website telling people what to do if they find a security problem, like
GitHub and 37signals do (Google and Facebook also offer a bounty).

I guess a problem with this is stating where the line is drawn. It might be
difficult to promise not to sue well-intentioned researchers without reducing
their ability to sue people with malicious intentions.

~~~
nikcub
Companies that have security pages and contact details are also companies that
understand the importance of security issues. The problem here is that most
companies do not understand the issue and tend to react defensively.

One solution might be using an agent - set up a clearing house for security
issues run by a couple of trusted people. You log the issue, the clearing
house gets in touch with the company and gives them access to the issue
details.

It keeps the person reporting the issue one step away from the company and any
potential trouble.

Once the company acknowledges and fixes the issue, it is made public with an
optional credit.

If they don't acknowledge the issue it becomes public anyway after x days.

~~~
simonbrown
What stops them checking their logs to find your IP?

~~~
nikcub
Hiding your real ISP IP isn't that big a deal for pen testers - either VPN or
Tor.

I don't know many (I certainly don't) who use their real IP when probing sites.

------
delinka
Why is anyone putting anything in a cookie besides a [protected] session ID? I
can store your account details (the ID my store uses to identify you, your
first name for display on pages) in the session object on the server and
access them when my store page scripts need them. Why does anyone need any of
this in a cookie? I don't get it...

It's a fundamental property of user-facing apps that you can't trust anything
provided by a user (or anything that can impersonate a user). Got some input
that you're gonna store in the database? Use the database sanitizing functions
to clean it up before putting it in the database. Going to display some of
that back to the user? Pull it out of the database and HTML encode it before
putting it in the page.
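Roughly, in Python (a sketch only - SQLite and the names here are
illustrative; parameterized queries stand in for "database sanitizing
functions" on the way in, and `html.escape` does the encoding on the way out):

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "<script>alert('xss')</script>"

# on the way in: a parameterized query, so the driver handles SQL escaping
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

# on the way out: HTML-encode before putting it in a page
(name,) = conn.execute("SELECT name FROM users").fetchone()
safe = html.escape(name)
print(safe)  # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```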

Is it just blind trust in users that keeps web dev people doing this? Or is it
truly lack of comprehension? Either way it's incompetence.

~~~
tedunangst
It appears to be how things are done in a number of frameworks. In a bizarre
parallel to mgkimsal's comment, whenever I've asked about it, I've been told I
just don't get web programming.

To my mind, the term cookie implies some sort of opaque handle. I give it to
you, you give it back to me. It doesn't contain data, it's just a handle.

A base64 encoded 128-bit number also solves the problem of "omg, our cookies
are so long we need a separate domain for static assets to reduce our
bandwidth".
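Such an opaque handle is a few lines in Python (the function name is just
illustrative):

```python
import base64
import secrets

def new_session_id() -> str:
    # 128 bits of CSPRNG output, URL-safe base64 with padding stripped:
    # an opaque handle that carries no data at all, just 22 characters
    return base64.urlsafe_b64encode(secrets.token_bytes(16)).rstrip(b"=").decode()

print(new_session_id())  # e.g. 'mJ3vX0q2ZbQ0aX9yQ7w1kg'
```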

~~~
pmahoney
Rails claims its CookieStore will "greatly increase the speed of the
application" relative to other stores (the guide mentions a file store and a
database store) [1]. The cookie is a base64 encoded hash serialization (i.e.
completely reversible by the client), but "to prevent session hash tampering,
a digest is calculated from the session with a server-side secret and inserted
into the end of the cookie."

I've neither seen nor performed any measurements of these claims. I have
seen it suggested that CookieStore is the default because it is simpler in not
requiring a database table, and not for any performance benefit. Without
measuring, I can only guess the difference is dwarfed by the typical page load
time.

[1] <http://guides.rubyonrails.org/security.html#session-storage>
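A rough Python sketch of what "completely reversible by the client" means
(real Rails serializes with Ruby's Marshal and signs with an HMAC digest; JSON
and the digest placeholder here are just to show the shape):

```python
import base64
import json

# toy cookie in the CookieStore shape: base64(payload) + "--" + digest
session = {"user_id": 42, "cart": ["sku-123"]}
payload = base64.b64encode(json.dumps(session).encode()).decode()
cookie = payload + "--" + "d" * 40  # the digest only prevents tampering

# any client can read the session contents without the server-side secret
data_b64, _, _digest = cookie.partition("--")
print(json.loads(base64.b64decode(data_b64)))  # {'user_id': 42, 'cart': ['sku-123']}
```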

------
mgkimsal
Possibly OT, but a security story/rant all the same. I worked on a project
where web session information was managed by a Java/PHP dance. PHP serving the
front end would ask a Java web service for a session token (or pass a session
token in to validate it). The Java web service would generate a new session
token by adding a row to a database and returning a string of characters. The
string of characters was _the auto increment ID of the database row_,
encrypted with a key. When the Java web service got the token again, it would
decrypt it to see if it matched a valid row ID in the database. The token was
sent down in a web cookie.

I was then asked to also use this same 'session management' approach on
something with far more sensitive financial information (the original use case
was arguably already something sensitive, although not directly tied to
financial records). I refused, and explained why. I was told I didn't
understand security. I suggested tying the row to a random ID and using that.
I was told that 'random generators aren't really random - you need to
understand how computers and processors work to know that'. It took me 5
months from when I first brought this up as a security concern to _someone_
above me actually understanding the problem - not necessarily agreeing with my
conclusion that it was unsafe, but at least understanding why it _could_ be
unsafe. BTW, none of my nominal colleagues took it - or me - seriously,
'cause after all, I only did _PHP_ - it doesn't even have threads (that was an
actual quote from someone).

The problem was if someone managed to decrypt the cookie value, they'd
immediately see it was a number like 109826374. Trying again a few minutes
later, they'd see 109826379, etc. Yes, potentially hard to decrypt in the
first place (or to know that it was encrypted), but if done, devastatingly
easy to assume another identity.
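A toy sketch of why that's devastating (the store layout and numbers are
illustrative): once an attacker sees one sequential ID, enumerating its
neighbours finds every live session, while the same guessing budget against
random 128-bit tokens finds essentially nothing.

```python
import secrets

# one store keyed by an auto-increment row id, as in the story,
# one keyed by a 128-bit random token
sequential_sessions = {109826374 + i: f"user{i}" for i in range(5)}
random_sessions = {secrets.token_hex(16): f"user{i}" for i in range(5)}

# an attacker who decrypts one token and sees a counter just walks it
known_id = 109826374
hits = [sid for sid in range(known_id - 10, known_id + 10)
        if sid in sequential_sessions]
print(len(hits))  # 5 - every live session found in 20 guesses

# the same 20 guesses against random tokens come up empty
guesses = [secrets.token_hex(16) for _ in range(20)]
print(any(g in random_sessions for g in guesses))  # False
```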

I was continually told I didn't understand web security, or computers, or
programming, because I was new to the company, and developers _far_ more
senior than me had developed this (at this point I'd been doing web
development for almost 8 years, and had already made my fair share of
stupid/dumb security mistakes, so I recognized this one instantly).

_Finally_ someone 'got' it when I pointed out that dozens of people had come
and gone, had access to the code, but more importantly, had had access to the
encryption key. It hadn't been changed. Ever. I asked how they could ensure
that people weren't already using that key to decrypt and take over accounts -
there'd be no way they could actually _identify_ if that was even happening. I
think the immediate answer was to plan to rotate the encryption key on a
regular basis, but I left soon after that (for more reasons than just this,
but this was illustrative of how I was interacting with the place). Having
values in a cookie that are tied to directly manipulatable database values
simply isn't a good idea, even if the cookie is encrypted, and you certainly
wouldn't want financial records stored with this as the primary (only?) means
of security.

Was some of the failure my fault? Possibly. I may have come across as
arrogant, or 'know it all', or whatever. I'd sent detailed tech emails to
people outlining the issue. I'd had friendly lunch conversations. I'd
demonstrated how it was possible. Nothing worked. And that was frustrating,
and led me to coin my own phrase - "It doesn't matter what you say, it matters
how you're seen when you say it".

~~~
benmmurphy
I think the main problem with this scheme is encrypting without
authentication. You need to authenticate the data, not encrypt it! (You can
use an HMAC scheme for this.) Encrypting is useful if you don't want clients
to see the ID, but that is not really a problem. For comparison, Rails uses
this scheme by default for its session storage.
<http://guides.rubyonrails.org/security.html#session-storage> Also, it is kind
of pointless to have an HMAC cookie scheme and still use a database :) The
point of an HMAC cookie scheme is to avoid using a database.

But the problem of not rotating keys is also something that is commonly
overlooked. Rails doesn't even have an option to rotate keys, which is
slightly annoying. It would be nice to introduce a new key without
invalidating every session. For example, someone might leave the company. You
could introduce the new key for a day, then revoke the old key, which would be
much more seamless [but more risky if you believe they are going to attack
straight away, since they could continue to use a compromised session after
the old keys have been revoked].
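One way to get that seamless rotation: sign with the newest key but verify
against every key still in the list (a sketch under those assumptions, not
Rails' actual API, which has no such option):

```python
import hashlib
import hmac

# assumption: newest key first; old keys kept only for the rotation window
KEYS = [b"new-secret", b"old-secret"]

def sign(value: bytes) -> bytes:
    return hmac.new(KEYS[0], value, hashlib.sha256).digest()

def verify(value: bytes, mac: bytes) -> bool:
    # sessions signed under an older key stay valid until that key is dropped
    return any(
        hmac.compare_digest(hmac.new(key, value, hashlib.sha256).digest(), mac)
        for key in KEYS
    )

old_mac = hmac.new(b"old-secret", b"user_id=42", hashlib.sha256).digest()
assert verify(b"user_id=42", old_mac)            # still accepted mid-rotation
assert verify(b"user_id=42", sign(b"user_id=42"))  # new key works too
```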

~~~
AnIrishDuck
The biggest problem with the scheme described is what you point out:
_encryption is not authentication_. They are designed to provide two entirely
different guarantees (confidentiality vs. integrity).

To provide a more concrete attack scenario: if they used CBC encryption and
stored the IV with the encrypted value (both very common practices with block
cipher crypto), it would be trivial to spoof any user ID by flipping IV bits.

On a more general note, the main problem appears to be a basic lack of
cryptographic knowledge. These "experienced developers" need to buy a crypto
book and read it. Talented mathematicians have already solved their "problems"
with academic rigor (in this scenario, using HMAC).

~~~
tptacek
Flipping bits in CBC ciphertexts works just fine without the IV.

~~~
AnIrishDuck
Right... I was assuming in this case that the message size would be < 1 block
considering it's just a database id. In that case you'd have to modify the IV
to flip id bits.
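A runnable sketch of that IV bit-flip in Python, with a toy Feistel network
standing in for a real block cipher (the attack doesn't depend on the cipher,
only on CBC's structure: P1 = D(C1) XOR IV):

```python
import hashlib
import os

BLOCK = 16  # bytes; one block holds a short id like "id=109826374"

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# toy 4-round Feistel cipher - illustrative only, not a real cipher
def _round(half: bytes, key: bytes, r: int) -> bytes:
    return hashlib.sha256(key + bytes([r]) + half).digest()[:BLOCK // 2]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in range(4):
        l, r = r, xor(l, _round(r, key, rnd))
    return l + r

def decrypt_block(block: bytes, key: bytes) -> bytes:
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in reversed(range(4)):
        l, r = xor(r, _round(l, key, rnd)), l
    return l + r

def cbc_encrypt_1block(plaintext: bytes, key: bytes) -> bytes:
    iv = os.urandom(BLOCK)
    return iv + encrypt_block(xor(plaintext, iv), key)  # IV shipped with ct

def cbc_decrypt_1block(token: bytes, key: bytes) -> bytes:
    iv, ct = token[:BLOCK], token[BLOCK:]
    return xor(decrypt_block(ct, key), iv)

key = os.urandom(16)
orig = b"id=109826374\x00\x00\x00\x00"   # padded to one block
token = cbc_encrypt_1block(orig, key)

# the attacker knows the plaintext layout but NOT the key: since
# P1 = D(C1) XOR IV, xor-ing the IV with (orig XOR target) rewrites P1
target = b"id=000000001\x00\x00\x00\x00"
iv, ct = token[:BLOCK], token[BLOCK:]
forged = xor(iv, xor(orig, target)) + ct

assert cbc_decrypt_1block(forged, key) == target  # identity spoofed, no key
```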

------
UK-AL
When I applied to Capgemini (a big IT company), the graduate recruitment
website returned my password in plaintext over email, which shocked me because
they're meant to be IT experts that do government and corporate IT projects.
Although this was a year ago, is there a way to formally complain about this?

How can I make people aware of this?

------
rickdeaconx
Security is an afterthought; startups need to treat it as a foundation.

