
Race conditions on the web - type0
https://www.josipfranjkovic.com/blog/race-conditions-on-web
======
matheweis
I was once involved in helping diagnose a race condition in a website whereby
a user could change another user's password if they changed their password at
approximately the same time. When looking at the code it was not at all
obvious that there was even potential for the issue until you knew it was
going on...

Some on this thread are acting like these are trivial problems to avoid. I
think they are wrong...

~~~
homero
How's that work when you set passwords according to id?

~~~
lsaferite
Yeah, this is my question exactly. I've dealt with MANY race conditions over
the years and the idea that two different user requests could update the same
user account password seems... unlikely at best. Not saying it's impossible,
but more detail would have been appreciated.

~~~
yardstick
I've seen this before in a Java system. An MVC controller populated the HTTP
params into a bean, assigned it to an instance field, and then used that field
subsequently to grab the value to update (I don't think it was the password,
might have been the email address IIRC). This would work fine if the
controller was request scoped, but it turned out to be a singleton, so the
field was shared across all requests!
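That pattern can be sketched in a few lines (Python for brevity; the class and
field names are invented, and events stand in for the scheduler preempting a
request mid-flight):

```python
import threading

class AccountController:
    """Singleton controller: one instance serves every request."""
    def __init__(self):
        self.new_password = None  # shared across requests -- the bug

    def handle(self, user, password, wrote=None, resume=None):
        self.new_password = password      # this request's param lands in shared state
        if wrote:
            wrote.set()                   # simulate being preempted mid-request
        if resume:
            resume.wait()
        return (user, self.new_password)  # reads whatever is in the field *now*

controller = AccountController()          # application-scoped, not request-scoped
wrote, resume = threading.Event(), threading.Event()
results = {}

def request_a():
    results["a"] = controller.handle("alice", "alice-secret", wrote, resume)

t = threading.Thread(target=request_a)
t.start()
wrote.wait()   # request A has written its password and is paused mid-flight
results["b"] = controller.handle("bob", "bob-secret")  # request B overwrites it
resume.set()
t.join()
print(results["a"])  # ('alice', 'bob-secret'): alice's account gets bob's password
```

With a request-scoped controller each request gets its own field, and the
interleaving is harmless.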

------
Blaine0002
It always amazes me when developers building crucial transaction systems don't
know about or implement some form of locking. I personally use a cluster of
Consul servers for distributed locking ( [http://consul.io](http://consul.io)
by HashiCorp ), although I don't use the rest of Consul's features. I'd sure
like to learn them someday :)
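Consul specifics aside, what any lock (distributed or not) buys you is a
critical section around a check-then-act sequence. A minimal local sketch,
with `threading.Lock` standing in for the distributed lock and invented
names throughout:

```python
import threading

balance = {"amount": 100}
lock = threading.Lock()  # in production this would be a Consul session/lock

def withdraw(amount):
    # Without the lock, two concurrent requests can both pass the balance
    # check and both debit, overdrawing the account.
    with lock:
        if balance["amount"] >= amount:
            balance["amount"] -= amount
            return True
        return False

threads = [threading.Thread(target=withdraw, args=(60,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance["amount"])  # 40: exactly one withdrawal of 60 succeeded
```

A single-process `Lock` only protects one server, of course; the distributed
version exists so the critical section holds across a whole cluster.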

~~~
spriggan3
You can build what you think is the most robust and secure system in the
world; someone somewhere will figure out how to break it. I don't think it's
fair to insinuate that the people who wrote the code were "incompetent",
especially given the size of Facebook's codebase. And given their audience,
they'll be far more exposed to hackers than whatever you're working on that
isn't Facebook and doesn't have the same audience.

~~~
Noseshine
Exactly. When I write my code I am aware of a high number of things that could
go wrong, and that I deliberately don't check. If it goes wrong I'll let it
crash, or I make the deliberate decision to see if that condition ever
actually happens in the real world. I'm not talking about security! It can be
things like only checking whether function parameters are what I expect them
to be for _some_ functions where I think it's important, or being aware that
if some functions are called with timings other than I expect, something could
happen. The problem is I'm not writing the final app; I'm writing some sort of
library (sort of), so I have no control over how it is going to be called in
the end. I'll just add it to the documentation, but I make few attempts at
catching all or even most of such errors.

I would blow up my code at least tenfold if I tried to take care of all the
possible conditions, creating a lot more of them in the process. Writing code
feels amazingly fragile to me, and yet it works well. Note that that code has
had several reviews from other developers, so I'm not talking about really bad
code.

After having delved deep into medical topics out of curiosity (hundreds of
hours of anatomy, physiology, neuroscience, biochemistry, and lots of
statistics), I'm even less concerned. The ways things go wrong in a biological
system are orders of magnitude more numerous, and nature's approach is "fix it
when it happens" (or start by creating a new instance).

I think that the more complex our human-made systems become, the more we'll
have to use nature's method. We are already doing it everywhere, in
electronics and in software.

I see two competing forces:

a) The human attempt to make systems more "provable", for example by
formalization/"mathematization",

b) Nature showing us that complex systems can only be done with a relaxed and
laissez-faire attitude ("shit happens") after putting in a reasonable effort.

The balance shifts towards b) for systems in rapidly changing environments,
and to a) for systems in static conditions.

So discussions about the subject should never be just about the system (piece
of software) itself; they _must_ include the environment it is to operate in.

------
et1337
I would love to hear how these were all patched. Wrap everything in a
transaction? Use some kind of MQ? Make sure these critical DB calls are not
cached in any way? I'm sure it's different for each case.

~~~
ejcx
Having patched one of this researcher's race conditions while I was at
LastPass, I did exactly that.

This only applies to the coupon redemption one.
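For a redemption-style race, one common shape for such a fix (a sketch under
assumptions, not necessarily what was actually deployed; table and column
names are invented) is to fold the check and the decrement into a single
atomic statement, so the database rather than application code arbitrates
concurrent requests:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coupons (code TEXT PRIMARY KEY, uses_left INTEGER)")
conn.execute("INSERT INTO coupons VALUES ('SAVE10', 1)")
conn.commit()

def redeem(code):
    with conn:  # runs the statement inside a transaction
        cur = conn.execute(
            "UPDATE coupons SET uses_left = uses_left - 1 "
            "WHERE code = ? AND uses_left > 0", (code,))
        # 0 rows updated means the coupon was already fully redeemed:
        # two concurrent requests can't both decrement past zero.
        return cur.rowcount == 1

print(redeem("SAVE10"))  # True
print(redeem("SAVE10"))  # False: the second attempt finds uses_left = 0
```

The same idea works with `SELECT ... FOR UPDATE` or optimistic version
columns; the essential part is that check and update are one atomic unit.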

~~~
corobo
Which one?

------
paulmd
I used to use a camera shopping website that had a race condition in their
product listing/search functionality. Their site was quite slow, and if you
tried to have more than one tab open to their site, you would get mixed-up
results. The tabs might have the same content, mixed content (e.g. the
breadcrumbs from the other tab), or sometimes nothing at all. This would occur
even across a significant period of time, not just if you made the requests at
the same time.

I struggle to understand how that could even have happened. Were they storing
the results of the DB lookup referenced against my IP somehow?

~~~
flukus
I'm guessing a lot of them stem from using session variables and from caching.
I've seen everything from an NHibernate cache being stored in the session,
serializing and deserializing a huge chunk of the database on every request,
to developers not being aware that the session variable they get is already
user specific (they access it via Session[UserId]).

Caching in particular seems to be something that's applied without much
analysis; I've seen it slow implementations down more often than it sped them
up.
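The session-variable failure mode can be sketched in a few lines (names
invented; the session dict stands in for a per-user server-side session that
every tab shares):

```python
session = {}  # one session per *user*, shared by all of that user's tabs

def search(query):
    # each tab's request stores its results in the same session slot
    session["last_results"] = f"results for {query}"
    return session["last_results"]

def breadcrumbs():
    # rendered later, this reads whatever the *most recent* request stored
    return session["last_results"]

search("tripods")     # tab A
search("lenses")      # tab B fires while tab A's page is still rendering
print(breadcrumbs())  # 'results for lenses': tab A's page shows tab B's trail
```

Keying the slot by a tab id instead would make each tab read its own results.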

What I'd really like is for the browser to send a tab id on every request, so
we can scope variables to a tab as well as to a user.

~~~
robocat
FYI: one place to store a tab id is in the window.name variable:

* it is persistent across refreshes

* it remains even if you leave page to another domain then return to page on same tab

* it disappears if the tab is closed

* it doesn't change as you navigate back/forward

* it would be most useful with XHR requests or as a hidden field in form POST requests.

Downsides:

* no origin security

* no obvious nice way to use it with plain page GETs, so the server can't easily tell which tab a page request came from.

~~~
throwawayReply
sessionStorage has all of those properties:

[https://developer.mozilla.org/en/docs/Web/API/Window/session...](https://developer.mozilla.org/en/docs/Web/API/Window/sessionStorage)

localStorage is often used for its persistence across closing/reopening the
browser, but when you don't want storage to bleed across tabs or to survive
closing the browser, sessionStorage is much better. And unlike a cookie it
doesn't automatically get attached to requests, so there aren't XSRF worries.

(You still need to be careful of XSS of course.)

------
vinchuco
Off topic: FarmVille on a satellite connection could let you plant fields on
top of each other even though they weren't supposed to intersect.
Unintentional vertical farming.

------
breatheoften
Bug bounty programs are interesting. I wonder if it wouldn't be a good use of
taxpayer dollars to pay people at the NSA or similar organizations with
computer security responsibilities to churn away at these programs. As in
"spend one month a year doing bug bounty programs, collect your normal salary,
keep whatever you earn" ...

------
spriggan3
Interesting. It would be even more interesting to know how these problems were
patched on Facebook's side.

