
Cookies Lack Integrity: Real-World Implications - adamnemecek
https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/zheng
======
agl
VU#804060 is talking about "cookie forcing". This is not a new discovery.
Here's Chris Evans (not the actor) talking about it in 2008:
[http://scarybeastsecurity.blogspot.com/2008/11/cookie-forcin...](http://scarybeastsecurity.blogspot.com/2008/11/cookie-forcing.html)

The best solution is to preload HSTS on a domain and include all subdomains,
and we've been saying that for years. That prevents any HTTP connections,
although it's obviously not an easy solution in many cases.

The USENIX paper does suggest some unilateral changes to cookie semantics to
address this issue, but any such changes have eye-watering compatibility
concerns and could only be deployed after a lot of testing.

~~~
chetanahuja
_" The best solution is to preload HSTS on a domain and include all
subdomains"_

That's the great thing about HTTPS/SSL security. Every attack, every
vulnerability, every problem with performance is met with "just make sure your
server enables XYZ and blah blah blah is updated, and make sure the clients
are only connecting from Chrome while standing on one leg and singing the
national anthem while looking at a picture of the Pope." So yeah, it's
actually really secure.

When are we going to accept that it's a fool's errand, that SSL is a hopeless
case, and design something better for today's world?

~~~
grey-area
We'd all love to see the plan.

~~~
chetanahuja
Here's one:
[http://cr.yp.to/tcpip/minimalt-20130522.pdf](http://cr.yp.to/tcpip/minimalt-20130522.pdf)

~~~
grey-area
Thanks.

------
LoSboccacc
"man in the middle can do nasty thing, here is a castle of vulnerabilities we
built on top of already having pwnd the communication channel"

related
[http://blogs.msdn.com/b/oldnewthing/archive/2006/05/08/59235...](http://blogs.msdn.com/b/oldnewthing/archive/2006/05/08/592350.aspx)
"It rather involved being on the other side of this airtight hatchway"

~~~
jacobwcarlson
I can't for the life of me find it, but there was a paper a few years ago
about evading App Store policies by purposely introducing vulnerabilities in
your app. The idea being that e.g. you have a buffer overflow and the app
makes a specific network request that exploits it, resulting in code execution
free from Apple's policies. So that Old New Thing post is a bit short-sighted
in thinking that bugs which don't result in privilege escalation aren't
"security holes."

~~~
mbubb
Coincidentially I heard about this at a recent talk - is it this:

[http://www.imore.com/jekyll-apps-how-they-attack-ios-securit...](http://www.imore.com/jekyll-apps-how-they-attack-ios-security-and-what-you-need-know-about-them)

Jekyll apps in App store

~~~
jacobwcarlson
That's it, thanks!

For anyone wanting to read the paper itself: the link in the imore.com article
is broken. Use this one instead:
[http://www.usenix.org/system/files/conference/usenixsecurity...](http://www.usenix.org/system/files/conference/usenixsecurity13/sec13-paper_wang-updated-8-23-13.pdf)

------
PaulHoule
I am more afraid of the man at the browser and the man in the browser than of
man-in-the-middle attacks.

For instance, a user might be trying to use your paid service for free or
otherwise get information they are not supposed to, and if you are using
cookies for authentication, authorization, or application state, the user
could modify the cookie and break your system. Not to mention the cookie is a
vector for XSS, buffer overflows, and other troubles.

So if you are sending anything in a cookie that you don't want people to
tamper with you should cryptographically sign it, or alternatively, send them
a single opaque random identifier that points to a session or request record
inside the server. There are way too many cookies on web requests now, and
just from the viewpoint of speed, the opaque reference is a performance win in
the age of Hazelcast.

Replay attacks are still possible, but there are many countermeasures, such as
adding a timestamp and a nonce.
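The opaque-identifier approach described above might be sketched like this
(the store, names, and token size are illustrative assumptions, not a
reference implementation):

```python
import secrets
import time

# In production this would be a shared store (Redis, Hazelcast, etc.);
# a dict stands in for it here.
SESSIONS = {}

def create_session(user_id):
    # 256 bits of randomness: unguessable, and the token carries no
    # meaning for the client to tamper with.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id, "created": time.time()}
    return token

def lookup_session(token):
    # A forged or modified token simply fails the lookup.
    return SESSIONS.get(token)

token = create_session("alice")
assert lookup_session(token)["user_id"] == "alice"
assert lookup_session("forged-token") is None
```

Because the cookie is just a pointer into server-side state, there is nothing
in it worth modifying, which is the point of the comment above.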

This defends against the major threat (the would-be user who wants to abuse
your web site) versus a more hypothetical one (a sophisticated outsider who
wants something).

~~~
MichaelGG
What you're talking about is defending the site. That's great and of course
you should do that. What the paper talks about is defending the user. The
example they give is being able to inject cookies then override the GMail chat
widget with the attacker's account.

Such an attack doesn't put the site at risk - Google is fine. But it puts
Google's users at risk, as they are signed in, yet chatting on another
account.

~~~
derefr
The point the GP was trying to make, I think, is that if the site's operator
has cared enough about _their own_ security to cryptographically sign their
cookies, then this provides security to the users as a free benefit, because a
MITM wanting to attack _the user_ doesn't have the site's signing key either.

~~~
MichaelGG
Cookie signing doesn't fix this. Attacker will just login, take his own
signed, valid, session cookies, shove them into Victim. Now Victim uses
whatever.com and Attacker can see.

The example they gave was being able to do this to Gmail. Victim is logged
into Gmail, but the Gmail Chat widget is logged in as Attacker.

------
AdmiralAsshat
I haven't dug around my browser config or the extensions store in a while.
Does anyone happen to know if IE/Chrome/Firefox can be configured to not
accept cookies from non-HTTPS sites?

~~~
paulsutter
Chrome plug-ins have a lot of control over network requests through the
webRequest API[1]. They're easy to write - email me if you have questions.

[1]
[https://developer.chrome.com/extensions/webRequest](https://developer.chrome.com/extensions/webRequest)

"Use the chrome.webRequest API to observe and analyze traffic and to
intercept, block, or modify requests in-flight"

------
nostromo
Browsers should consider not showing a green lock unless a site uses HSTS.

~~~
MichaelGG
But even with HSTS, if it wasn't preloaded, an attacker could inject cookies
on the first visit (then take user to real HTTPS site). Or, if the domain
doesn't have full HSTS for all subdomains, there's still a potential vuln as
you can inject from non-HSTS subdomains. The paper notes that Google and some
other big properties have technical issues in deploying global HSTS.

------
ademarre
This exposes a weakness in the "double-submit cookie" CSRF defense technique.

[https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_\(CSRF\)_Prevention_Cheat_Sheet#Double_Submit_Cookies)
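The weakness can be seen in a minimal sketch of the double-submit check (the
function and token names are illustrative):

```python
def csrf_check(cookie_token: str, form_token: str) -> bool:
    # The defense only verifies that the cookie value and the submitted
    # form value match; it assumes an attacker cannot write cookies into
    # the victim's browser.
    return len(cookie_token) > 0 and cookie_token == form_token

# Normal flow: the server-issued token appears in both places.
assert csrf_check("r4nd0m", "r4nd0m")

# With the cookie-injection attack from the paper, the attacker plants
# his own token as the cookie AND submits it in the forged form, so the
# check passes even though the server never issued that token.
assert csrf_check("attacker-chosen", "attacker-chosen")
```

Once the attacker can set cookies, the "match" invariant no longer proves
anything about who issued the token.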

------
adamnemecek
Here's the research paper presented at USENIX
[https://www.usenix.org/system/files/conference/usenixsecurit...](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-zheng.pdf)

------
techscruggs
Another great argument for using signed cookies.

~~~
daave
Exactly. A better, or at least more general, mitigation than enabling HSTS
(though that's a good idea anyway), is to not design your web application in
such a way that a modified cookie in the client creates a vulnerability. Since
cookies are stored in the client, they are always going to be susceptible to
malware on the user's machine. So, trusting that the contents of a cookie were
authored by the service receiving them is a bad idea in general. Cookies
should be stored along with some additional information that verifies their
authorship.

A relatively simple way to accomplish this is to have your application include
an HMAC in the cookie contents, and verify it whenever the cookie is received.
E.g., if you are storing $session_id in a cookie, change your cookie contents
to be "$session_id:$hmac_of_session_id", and verify the HMAC every time a
cookie is presented.
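The "$session_id:$hmac_of_session_id" scheme might look like this in a
minimal Python sketch (the key handling and hex encoding are assumptions for
illustration):

```python
import hashlib
import hmac

# Kept server-side only; never sent to the client.
SECRET_KEY = b"example key, illustrative only"

def sign_cookie(session_id: str) -> str:
    mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}:{mac}"

def verify_cookie(cookie: str):
    # Split on the last ':' so session IDs containing ':' still work.
    session_id, _, mac = cookie.rpartition(":")
    expected = hmac.new(SECRET_KEY, session_id.encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the MAC via timing.
    return session_id if hmac.compare_digest(mac, expected) else None

cookie = sign_cookie("100042")
assert verify_cookie(cookie) == "100042"
assert verify_cookie("100043:" + "0" * 64) is None  # tampered ID rejected
```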

Now a user, or malware, or a MITM, is not in the position to take over or
modify a different user's session simply by altering the cookie, since they
will not be able to produce a valid HMAC (the key is never shared with the
user).

If even storing the key in your web frontends is too risky, you could use RSA
or DSA signatures, only store the public key in the web frontend that verifies
cookies, and store the private key in a more hardened cookie-signing service
that isn't directly exposed to external networks. This service can be invoked
when new sessions are created or upon user login, if applicable.

On top of this, if the client supports ChannelID, you should include the
ChannelID in the message that is HMAC'd, so that stolen cookies cannot be
reused on other machines.

~~~
sbov
> Now a user, or malware, or a MITM, is not in the position to take over

I fail to see how this fixes this issue. I can just set my cookie to
$their_session_id:$hmac_of_their_session_id, or I can set their cookie to
$my_session_id:$hmac_of_my_session_id

Sure, I can't modify signed cookies. But I'm still in a position to take over
their session.

~~~
daave
> I can just set my cookie to $their_session_id:$hmac_of_their_session_id

If you can steal somebody else's cookies (which are not Channel-bound) then
that's true. If you can only steal or predict somebody else's session IDs,
the HMAC provides protection.

It's not atypical for session IDs to be simple counters that get incremented
for each new session. If your session ID is 100042, it's a pretty good bet
that 100041 and 100043 are valid session IDs as well, and without HMAC, a user
could take over these sessions trivially.

The even better mitigation to cookie theft, which I also mention, is TLS
ChannelID. ChannelID creates a unique private/public keypair for each new TLS
connection, and sends the public part along in the TLS handshake. Then, when
you resume sessions from the same machine, you can prove that you have the
private part and the server can accept your existing cookies. With this
approach, cookies are no longer bearer tokens and stealing cookies becomes
worthless.

This can be hardened even against local malware running as the same principal
as the user doing the browsing if the browser's ChannelID implementation
generates and stores the private key inside a TPM or HSM.

~~~
ademarre
> _If you can steal somebody else's cookies (which are not Channel-bound)
> then that's true. If you can only steal or predict somebody else's session
> IDs, the HMAC provides protection._

Session fixation. You don't need to steal any cookies. The attacker can plant
his own session ID cookie in the victim's browser using the OP exploit. Using
signed cookies doesn't change this attack at all.

------
nchelluri
I think of the following:

- store only session ID in cookie

- regenerate session ID upon privilege escalation (login, what else?)

- destroy session upon logout

That being the case, is this really capable of doing much damage? Especially
once you enable HSTS.

~~~
Eridrus
This is still vulnerable to the same kinds of attacks you can do with login
CSRF:
[http://seclab.stanford.edu/websec/csrf/csrf.pdf](http://seclab.stanford.edu/websec/csrf/csrf.pdf)

Though the attack scenarios for that are always very tenuous.

------
ck2
_solution: Deploy HSTS on top-level domain_

yeah easy for you to say

~~~
blfr
Why? What's hard about HSTS? Though by default it doesn't cover subdomains.

~~~
McRask
There is still the issue of trust on first use. Unless the domain is preloaded
in the browser as HSTS there is no assurance that there won't be at least one
HTTP request.

~~~
pas
DNSSEC, TLS DANE is the proper solution for this.

~~~
MichaelGG
DNSSEC means creating irrevocable CAs that would be under essentially direct
control of major governments. No thanks. At least with the current system, if
a CA fails to act properly, they can get smacked back. With DNSSEC, if .com
starts issuing *.com certs, there's no recourse.

~~~
pas
Huh? Your site can already be completely hijacked by the same actors. Your
browser trusts a lot of CAs, so VeriSign (or whoever operates the .com zone)
can already issue .com certs. And your only hope is to preload your cert
(which is a Chrome-only thing, and pretty inefficient and inflexible).

The Convergence Project with notaries is an even better solution.

But using the DNS as the authoritative source of data and using external
parties to keep an eye on that would both lead to efficiency (performance,
flexibility) and security (as in from the State).

------
coldcode
So is this a real issue or a theoretical one? Are people actively using this
to do harm, or is it something someone could do?

------
paulschreiber
Why isn't cert.org using HTTPS by default?

