
How Hackers Stole 200,000 Citi Accounts Just By Changing Numbers In The URL - mmavnn
http://consumerist.com/2011/06/how-hackers-stole-200000-citi-accounts-by-exploiting-basic-browser-vulnerability.html
======
raganwald
This is unacceptable, obviously. It's a massive failure of their security
testing protocols. I'm not particularly surprised that a vulnerability like
this would get written into the code, it's an easy mistake for an
inexperienced developer to make. But I'm not going to pile on the developer.
We laud "separation of concerns" in our architecture, and this pattern applies
to the organization of software development teams.

I don't expect every developer to be aware of every vulnerability. But I do
expect that a financial institution has a specialist somewhere that audits the
code before it is sent for testing ("white box"), and then I expect them to
have an independent audit team probe for vulnerabilities ("black box").

After the inexperienced developer has had his code rejected for various flaws,
he will become quite aware of the obvious ways things like this can go wrong.

Don't get me wrong, I expect that the vast majority of developers wouldn't
make this mistake in the first place. But if you aren't a specialist, it is
pure hubris to think that you write code that is hardened against _all_ of the
attacks out there. And if you have a vulnerability, it really doesn't matter
if it's an embarrassingly simple vulnerability or one that requires
sophisticated techniques to uncover and exploit. Either way, you're road kill.

High-value targets like banks need two security specialists (the code audit
and the penetration test) to accompany the development specialists. That's
simple separation of concerns, and it works as well in team organisation as it
does in code organisation.

~~~
tptacek
I don't know anyone at Citi, but I do know several firms that do audit &
pentest work there, and I'm told it's high-volume --- in other words, it's
likely _every_ app gets tested. This is a startling miss.

When you engage a software security firm to check out an app, you're trusting
that their report is reasonably complete. As a practitioner, this is basically
your nightmare scenario. You know in the back of your head that it's always
possible you're going to miss things, but a vulnerability this trivial seems
like the kind of thing you find on the first day of testing.

~~~
joe_the_user
This is the kind of vulnerability that should be caught on the whiteboard
before a single line of code is written.

I know this isn't what you're talking about, but I hope that an institution
that's protecting my money would formally specify its protocols rather than
haphazardly putting its app together with "Test Driven Development". But maybe
that's how they managed to be this stupid.

~~~
tptacek
Though there are no doubt very, very smart developers scattered across Citi,
please bear in mind that the wiring-up-form-fields-using-J2EE jobs in large
financials are among the least prestigious in our field. These are the types
of apps that are among the first to get outsourced.

Your idea that the input and output paths through a typical banking J2EE app
were carefully whiteboarded seems mythologized, based on what I've seen at
fisrv companies.

~~~
joe_the_user
That's hair-raising. I have only a small amount of financial firm experience.
I'm just used to ordinary software development being white-boarded first.

~~~
tptacek
People forget that statistically the devs hanging around on HN are in the top
brackets. The overwhelming majority of devs are 9-5 paycheck players with
exactly the level of competence required to execute J2EE and .NET CRUD apps
that pass internal acceptance testing.

If you are an enterprise dev with a knack for making web apps that aren't
horribly insecure, and you don't have a phantasmagorically awesome comp
package already, more advice: start looking for new gigs. You're
underemployed.

~~~
auzieo
What's the lower bound for a phantasmagorically awesome comp package? I am not
good at evaluating the fairness of my current salary.

~~~
jacquesgt
In markets like the New York, Boston, or Seattle areas, with 5+ years of
experience, a top-tier software engineer should be able to pull down $100k
salary plus 401(k), a good health insurance package (i.e., something better
than a high-deductible plan plus HSA), and long-term/short-term disability
insurance. Adjust the salary number for your local cost of living.

------
mgkimsal
In 2000(?) I employed this same 'hack' against Ameritech's online bill viewer,
but couldn't get anyone's attention. I called several people at Ameritech, but
couldn't get through to anyone who understood anything I was saying.

I tried to get ahold of the news media, but realized afterwards that the links
I was sending had a session timeout associated with them, so by the time a
reporter clicked a link, they got nothing.

Finally, I managed to get in touch with someone at 'fuckameritech.net' (IIRC)
- a consumer watchdog (I hesitate to say 'group' - I think it was just one
guy) who said "I'll take care of it". He made some contacts - I think got it
to a reporter in Chicago, and _that afternoon_ Ameritech's online bill view
and pay was taken down (a wednesday IIRC) and it wasn't brought up again until
Monday.

The 'fix' was not much - they were now hashing the account number in some
massively long (128 char?) ID instead of just your account number. But it was
all still visible in the URL, which was the bigger problem to start with,
because it encouraged 'hackers' like me to change my account number by one
digit.

I suspect others had noticed this before, tried to contact Citi, and couldn't
get in touch with anyone who understood what the caller was saying.

Companies need separate 'web vulnerability' hotlines to call/contact to report
issues like this - perhaps just hidden in the 'view source' - if you're good
enough to find the info, you know what you're doing enough to report a
problem. Too low a bar?

~~~
FooBarWidget
I imagine that publicizing a web vulnerability hotline would cause more
trouble than it would solve. Normal people really don't understand computers.
If you somehow give off the message that your system is not perfectly secure
or bug-free they would get scared and run off to competitors who are just as
bug-ridden but at least _appear_ to be more secure.

~~~
cosgroveb
Eh I dunno. Google does this with its Vulnerability Reward Program [1] and
people seem to be fine with sharing almost all of their private data with
Google from all their e-mails to their credit card numbers (Google checkout),
etc., etc.

Also Facebook has a form for reporting vulns [2] and people are still happy to
share their personal info there. I'm sure there are other companies that have
"hotlines" but these are just a few I can think of.

I don't think having an avenue for responsible security bug disclosure gives
anyone the impression that their data is unsafe.

[1]
[http://googleonlinesecurity.blogspot.com/2010/11/rewarding-w...](http://googleonlinesecurity.blogspot.com/2010/11/rewarding-
web-application-security.html)

[2]
[https://www.facebook.com/help/contact.php?show_form=white_ha...](https://www.facebook.com/help/contact.php?show_form=white_hat)

~~~
daeken
Mozilla and CCBill as well: <http://www.mozilla.org/security/bug-bounty.html>
[http://www.ccbill.com/developers/security/vulnerability-
rewa...](http://www.ccbill.com/developers/security/vulnerability-reward-
program.php)

------
StrawberryFrog
A security guy weighs in on it here:

[http://idunno.org/archive/2011/06/14/citibank-hacked-
ndash-d...](http://idunno.org/archive/2011/06/14/citibank-hacked-ndash-dumb-
developers-dumber-security-consultants.aspx)

 _"This was not sophisticated or ingenious, as reported, this was boringly
simple. ... OWASP has had Insecure Direct Object references on it’s Top 10
list for years. It’s in the SDL Threat Modeling tool. Any security firm worth
its salt checks for this"_

Yes, there's a good description of this kind of trivial "hack" in the Open Web
Application Security Project Top 10:
<https://www.owasp.org/index.php/Top_10_2010-A4>
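
The remediation that OWASP page recommends, indirect object references, fits
in a few lines. A sketch (function names and data are made up for
illustration): each session gets opaque per-user tokens, so the URL never
carries a raw account number that can be incremented:

```python
import secrets

def build_reference_map(user_account_ids):
    """Map an opaque per-session token to each account the user owns."""
    return {secrets.token_urlsafe(8): acct_id for acct_id in user_account_ids}

def resolve(reference_map, token_from_url):
    # A foreign or tampered token simply resolves to nothing.
    return reference_map.get(token_from_url)
```

At login the map would be stored in the session; handlers resolve the URL
token through it, so enumerating account numbers buys an attacker nothing.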

~~~
wccrawford
Yeah. When that first report said 'experts' said how hard it would be to have
predicted and prevented this, I choked. Amazingly lame.

~~~
peterwwillis
Citi has an internal web application pen-testing group. My guess is they were
only ever attacking the outward facing apps and not the ones after successful
authentication. Even if so, they may have hundreds of apps to test which
constantly change, and sometimes attackers just get lucky and find a hole as
soon as it pops up.

~~~
tptacek
My understanding, just to clarify: Citi has a web app pentesting _operation_ ,
which engages and is among the largest customers for several well-known
pentesting firms. Just in case the impression was that Citi has a couple guys
in a room doing this stuff.

As for constantly-changing apps, let me speak against my own direct financial
interests here (we have a product coming out that addresses that problem, so
I'd like it to be a big one). Many, if not most, of the large financials we
work with or have talked to have a fairly strict process for deploying new
code, and the process gates on security review. Not deploying unreviewed code
comes pretty close to being part of the due care standard at large modern
banks. If I had to gamble on this, I'd bet that this specific code _did_ get
reviewed.

~~~
m0nastic
I can only think of three scenarios which would result in this breach (but I
could just be lacking imagination):

1.) This app has been sitting around in production and was never tested.

2.) This app was part of the normal testing procedures (which usually means
it's tested annually) and somehow this vulnerability was missed in every test.

3.) This vulnerability was not present the last time the application was
tested, and somehow this version was deployed before it was signed off on.

I've been around too long in this industry to claim that scenario 1 or 2 are
impossible, but knowing the particulars, they seem exceedingly unlikely.

That leaves me to think it was the third scenario, which is still aberrant
behavior on their part.

I feel bad when I hear about situations like this. As you mentioned in another
comment, this is pretty much what we fear the most.

~~~
tptacek
I don't know Citi at all, but at our fisrv customers I think (2) is more
likely than (3) (neither is a mortal lock). I also think that this is a hazard
of working with high-volume Big-4 type firms... but I want to tread lightly
with that thought for obvious reasons.

~~~
m0nastic
No no, I absolutely agree with you (about the hazard). I worry about any
company that puts all its app-testing eggs into one large contract with a big
firm (a statement which I'm sure would make one of my salespeople cringe). I
find that the places that use multiple app-testing companies alongside their
internal teams seem to fare much better.

For this specific vulnerability, I find it shocking that even the most
rudimentary assessment wouldn't have caught it; but my own personal
befuddlement might be biasing me against thinking that (2) is likely.

------
patio11
I'm kind of torn on this. On the one hand, yeah, it is a trivial flaw.

On the other hand, so is waving a gun at a teller. That attack has been around
for decades and still works a few dozen times a year, because the cost/benefit
analysis says that after hardening the banks a little it is easier to just
lose a few tens of thousands of dollars every once in a while than it is to
give them the Secret Service's attention to physical security.

That is hardly the only systemic vulnerability in the banking system. For
example, let's suppose I want to compromise your account number and
credentials sufficient to take you for every penny you possess. You know what
I need? A check of yours. Any will do. Everything I need to create a demand
draft against your account is on every check you have ever written. Every
employee of every business you have ever paid by check got the keys to your
financial kingdom.

You may not be aware of it, but since those credentials are assumed
compromised, the security is in a) catching me when I use the demand draft to
suspiciously drain your account and b) failing that, making you whole out of
the bank's pocket. The numbers have been crunched: it is vastly, vastly more
efficient to treat fraud as a cost of doing business than it is to tighten the
screws 100%.

The attack surface on software the size and complexity of a bank's is like the
Death Star, except any single rivet being out of place will eventually result
in this headline.

(Step #1 in tightening the screws would be to turn off public-facing websites,
because inexpert users plus compromised machines mean that no banking website
will ever be secure, even without coding errors. This will never happen,
because the provable cost savings of moving customers to online banking
roflstomp over the marginal fraud risk.)

~~~
ctdonath
This isn't akin to waving a gun at a teller. This is akin to handing the
teller a huge stack of withdrawal forms with random account numbers on them,
and the teller dutifully checking each one, ignoring the ones with invalid
numbers (most of them), and handing over the account contents for all the
valid ones. That doesn't work at a branch, because the teller acts as
security: there is no reason one person should be submitting requests for
lots of account numbers, especially when most of them are invalid, so she'd
have alerted management and security within seconds of seeing the bizarre
request.

Your subsequent analogy/justification/complaint is only valid if ONE doctored
URL were used. My bank, Chase, does in fact implement security against such a
"lots of random account numbers" attack: not only must the account match, but
the MAC/IP address, browser/cookie, and other under-the-hood identifiers must
line up; any mismatch between account number and access tools initiates
emailing or texting a verification code to a known address/phone, which then
must be submitted to close the loop of verification and, only then, allow
access. Not running some kind of "one account per access device" sanity check
is insane.

It's not about one rivet being out of place - such vulnerabilities are
understandable. It's about having an uncovered vent lead straight to the
reactor core - that's stupid.

~~~
stef25
> Not running some kind of "one account per access device" sanity check is
> insane

So I would need different accounts for my personal computer, my laptop, office
computer and the computers of my parents? I regularly work on all of those.

~~~
ctdonath
It's a sanity check, not an absolute limitation. My bank DOES (as I detailed)
require positive-feedback verification that any attempt to use more than "one
account per access device" is in fact authorized by the account holder. Any
time I use a new access device, they email/text to a known address/phone a
verification code which I must feed back before the login proceeds. The sanity
check is: if the account is being accessed from a device not used before, the
legitimacy is suspect until confirmed.

This in contrast to the lead story, where some 200,000 accounts were accessed
from a very small number of computers clearly not authorized by the account
holders - achieved because not even a basic sanity check was performed. Heck,
the servers didn't even notice that no login process was performed for the
accounts, much less track which devices the account holders tended to use.
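
The new-device flow described above is only a few lines of logic. A sketch
(store layout and function names are hypothetical, not any bank's actual
implementation):

```python
import secrets

# Devices each account holder has previously verified (hypothetical store).
known_devices = {"alice": {"home-pc"}}
pending_codes = {}  # (user, device) -> code sent out of band

def begin_access(user, device_id):
    """Allow known devices; challenge unknown ones with an out-of-band code."""
    if device_id in known_devices.get(user, set()):
        return "allow"
    code = f"{secrets.randbelow(10**6):06d}"  # emailed/texted to the holder
    pending_codes[(user, device_id)] = code
    return "verify"

def confirm_access(user, device_id, code):
    """Close the loop: the correct code marks the device as the holder's."""
    if pending_codes.get((user, device_id)) == code:
        known_devices.setdefault(user, set()).add(device_id)
        del pending_codes[(user, device_id)]
        return True
    return False
```

The same table also gives the server an obvious tripwire: thousands of
accounts suddenly tied to one unknown device is exactly the anomaly the Citi
attack would have triggered.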

------
mrcharles
How is this even possible? In my very first website I built from scratch using
PHP, I paid attention to the possibility of this. I can't say for certain that
I fully protected against it, but I _tried_. That little trick would not have
worked.

How is it that a bank, of all places, pays money for a web infrastructure, and
manages to employ people who don't even think about the most basic of attacks?
I've been changing info in URLs since I started using the internet.

~~~
rmc
Diaspora had this problem when they first released their software, and it was
written in Rails, a modern framework:
[http://www.kalzumeus.com/2010/09/22/security-lessons-
learned...](http://www.kalzumeus.com/2010/09/22/security-lessons-learned-from-
the-diaspora-launch/)

~~~
spdy
You cant really compare this with a bank like citygroup. Security is
everything the should care about in the first place. They have to protect your
money and nobody really cares about fancy features when they just want to wire
some money, everything should just be rock solid.

I would love to get more information about this breach it sounds to simple to
be true.

~~~
patio11
_They have to protect your money and nobody really cares about fancy features
when they just want to wire some money, everything should just be rock solid._

Those two desires are in direct competition. For example, I bank at Citibank
in America _precisely_ because their website will allow me to initiate a US to
JPY international wire transfer without my physical presence in the US. That
class of activities is just about the most dangerous thing a consumer-grade
banking website could allow you to do. (International wire transfers are
practically non-reversible. If you are induced to send one to a fraudster or
they compromise your online account and send one on your behalf, and the bank
doesn't catch on within a few seconds, you're pretty much screwed.)

Accordingly, many banks do not offer online international wire transfers and
will laugh in the general direction of adding it to their feature lists,
despite it being technically not rocket science.

The rock solid feature that lets me eat on a semi-regular basis is also an
attack vector against almost every other HNer with a Citi account, most of
whom will never send money overseas. So, what should Citi do? Optimize for
security and shut down that feature from their website, or optimize for being
able to just wire some money in a rock solid fashion?

~~~
Osiris
Why not require a separate PIN code for certain transactions, like wire
transfers? That wouldn't help against a keylogger, but it would help if the
account was compromised in this way.

~~~
shabble
The 2-factor approach my bank takes (bit of detail at
<http://news.ycombinator.com/item?id=2634730>), is what I consider a decent
security/usability tradeoff.

When your phone has an NFC reader and can talk to your bank smartcard to
handle it, even better. (Well, higher risk of intrusion because it's a
multipurpose device, but way ahead in terms of usability.)

------
thibaut_barrere
You'll surely enjoy the quote from the linked article:

=======

The method is seemingly simple, but the fact that the thieves knew to focus on
this particular vulnerability marks the Citigroup attack as especially
ingenious, security experts said.

=======

Sorry but no, this isn't ingenious - it's really the basics!!!

~~~
john2x
Yeah, the NYTimes article really made it look like the hackers went to a lot
of trouble to breach the bank's security.

~~~
webjprgm
Of course they did; if they made it look simple, everyone would freak out (or
they should, and there'd be a mass exodus away from Citi). People are used to
hackers finding occasional, difficult exploits.

------
mbreese
If this really was the "hack", you can be sure that Citi has opened
themselves up to a whole world of negligence lawsuits. This is the same as
having a vault where any customer could walk in and just browse around the
safe deposit boxes. Sure, it might be tough to be authenticated to get into
the vault, but once you're there...

This is something that should cause the immediate dismissal of the CIO, but
sadly, probably won't.

------
aw3c2
"Gawker media blog spam".

Visit
[http://www.nytimes.com/2011/06/14/technology/14security.html...](http://www.nytimes.com/2011/06/14/technology/14security.html?pagewanted=all)
instead

~~~
sili
I was actually disappointed by the NYT article. They interview security
experts who call the attack "ingenious", "hard to prepare for" and performed
by exploiting a vulnerability in a browser. This understates how incompetent
the bank's website design is.

~~~
mmavnn
That was why I linked to this article rather than the NYT. The article linked
to, while shorter, contains all of the relevant information and less
completely random cluelessness.

~~~
chriserin
The NYTimes story is the 2nd crime, because they're whitewashing Citicorp's
responsibility. With all the hacking that's been going on recently, the NYT
needs a full time cyber security reporter.

------
yuvadam
A naive question here:

Suppose I accidentally stumble upon a gaping security hole in my bank's online
service (or any other online service for that matter).

Am I legally obliged to notify them of that security bug? Can I offer the bank
my assistance, for hire, in solving the bug without it constituting blackmail?
(i.e. I'd be happy to help you solve this at a $300/hr rate)

~~~
ctdonath
Legally obligated? No. It's not your system, it's not your problem in that
sense.

Sure you can offer to fix it, but since you don't know squat about the system
(save for a small flaw at the surface) and they have teams of developers who
do, they won't be interested in paying you to fix it. Offering to explain the
bug for a fee won't be blackmail unless you threaten to reveal the bug to
others if they don't pay up.

Be a decent chap. Send 'em a nice letter explaining the problem. It's your
bank, remember, and they're humans like you; work with your service providers
to improve the service. Assume you're not the only one who knows about the
problem, that someone who also knows isn't as nice as you, and it's YOUR bank
balance that is at risk.

~~~
rapind
But it's a corporation, not _humans like you_. I'm sure they employ lots of
decent people. I'm sure they employ lots of douchebags too. It's irrelevant.
It's a corporation whose only motivation is to make money. It's not your
neighbour.

If you like their service and feel like telling them then go for it. If you
want to try to charge them, then go for it (no it doesn't make you evil to
charge for a service). If they don't want to pay you, feel free to say
nothing.

Now if it's a mom and pop shop down the street, then yes, please be a good
neighbour and help them fix it (although you can still charge for it, but
avoid douchebaggery like $300 / hour unless that's your usual rate).

~~~
billybob
Hmmm. How successful does the mom and pop have to be before you feel no
obligation to tell them?

~~~
rapind
As a general rule for me it's once a company goes public.

At that point the company is subject to practically anonymous shareholders
through many levels of abstraction. Voting is then done based purely on
financials and often short term gains, which really puts a company at odds
with their customers. So I have no customer loyalty to a company that's
publicly traded.

Many private companies can still maintain my loyalty though based purely on
their actions if the owner(s) / investors aren't completely disconnected from
their customers. That's more of a case by case. The smaller they are, the more
likely they care about their customers (there are always exceptions though).

------
nextparadigms
How can a bank this large have such poorly designed security? It's ridiculous.
Hopefully, all these latest hacks get everyone else to treat security more
seriously. There could be a lot of other banks that do the same thing as
Citigroup. So if one gets hacked, at least the others will remember to review
their security policies, so it doesn't happen to them, too.

------
leftnode
What's worse: every customers ID in the database was stored in the URL or that
_there was no ACL to test against_? If a user is logged in, you have their
account ID stored in a session. If they navigate to a page that their account
ID can't see (like another person's account), then kick them out. Astoundingly
simple.
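
That check really is astoundingly simple in any language; a sketch in Python
(the session shape and names are made up):

```python
# The session is established at login and holds the authenticated account ID.
def view_statement(session, requested_account_id, statements):
    # ACL: the only account this session may see is its own.
    if session.get("account_id") != requested_account_id:
        raise PermissionError("kick them out")
    return statements[requested_account_id]
```

One comparison before touching the database is the entire difference between
this breach and a non-story.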

------
quattrofan
Seriously how do people stay in their jobs allowing crap like this to happen?
The CIO or CTO at Citi should get the boot. Until companies like this and Sony
start making examples of people, this kind of sloppiness that gives our
industry a bad name will continue.

~~~
groby_b
Except the examples that will be made will not involve the CIO/CTO. And even
if it did, the guy almost certainly has a golden parachute - what does he care?

As long as it's cheaper to clean up after the debacle than prevent it in the
first place, that's what people will opt for.

~~~
gnosis
_"And even if it did, the guy almost certainly has a golden parachute - what
does he care?"_

If he really was fired and the company made a public stink about how much he
screwed up, he may well care.

Many people at that level care about their prestige, and care about being
shamed in front of their peers, even when they don't need to worry
financially.

------
TheCondor
I've been ranting about this stuff for a couple of years now to my friends.
There are some alarming trends. First off, a pen test is often treated like an
attorney-client relationship. If the test turns up particularly costly bad
news, I've seen a handful of testers have the relationship essentially
severed, receive some hard language from a lawyer about talking about it, and
then receive a check from a private account, as if the company doesn't want to
leave any trace that they actually knew about the problems. (I'm not joking;
some medium-sized companies have done this.)

With some of the regulations, the big missing piece is openness; there is no
transparency into it at all. Any audited company should say who audited them,
and after some period of time, 180 days maybe, the audit should be made
public. The business risk is that customers will leave, but in many cases,
like the PlayStation Network, customers effectively can't leave: they've
already invested in something and there isn't an alternative. In many other
cases it's not typically going to be widely publicized. If the customers can't
leave en masse, there is no business pressure for security, and without any
transparency the regulations will simply be gamed.

------
earlyriser
I discovered that my bank, Banque Nationale, used GET to delete transactions
from the history. Somebody could then send an email to the bank's clients with
an image linked to this GET action and delete a client's transactions if he
was logged into the bank and reading his email at the same time. It wasn't a
big risk, but I don't understand how this went live. I mean, if a bank can't
get that POST is for C_UD and GET is for _R__, then who can?

------
lucraft
I'd argue that this wasn't even a hack. This is publicly visible information.
No security measures were circumvented.

------
fmavituna
This is, like, the first thing you learn in web app security (defender or
attacker), and you don't even need to write a script; a tool such as
<http://code.google.com/p/fm-fsf/> will scrape the data quickly.

Even though it's insanely easy to spot and exploit, it's also easy to miss
while coding. But any decent pen-tester will find it. Regardless, it's
unacceptable for a finance company.

~~~
dave1010uk
All you need is curl:

    curl http://example.com/user/[1-100]

------
snorkel
I know exactly how this occurs, because I recently met a "Senior Web
Developer" at an established business who was basically their acting
architect, because he was their first coder and therefore his non-technical
bosses regarded him as some kind of genius who knows how to unjam the office
printer. He didn't know a lick of Unix, didn't understand load balancing, and
had very weak SQL skills. He was your typical framework junkie who couldn't
imagine writing even the simplest web app without a framework to do all the
heavy lifting. All he wanted was for me to recommend an even simpler web
framework so he wouldn't have to write any SQL at all. No doubt some day his
code will be generating headlines like this one; he will blame whatever
framework he used, and his bosses will simply mandate a switch to a more
secure framework pronto and promote this boob to Senior Architect to lead the
project.

~~~
lawnchair_larry
That actually sounds like the right way to do it to me. He shouldn't be
writing frameworks. Frameworks are reviewed by a lot of people, and there is a
far greater chance of them being bug free than something home-rolled.

------
lifeisstillgood
The big question is why the structure of the IT department lent itself to
doing something so stupid.

You can fire the CIO, you can replace the offshore developers with onshore, or
vice versa, but experience says it won't matter.

I looked in amazement at googletesting's dependency graph test suites
yesterday, and realised that the playing field is not flat at all.

Reading and writing code is the literacy of the 21st C.

In the end, most big companies are like newspapers owned and managed by
illiterates.

It does not matter how you rearrange the structure or the hierarchy; when the
chips are down, decisions will be made on what the illiterate management
understands to be the best way to work. As such, it is vanishingly unlikely
that those decisions will support what a literate person would decide.

Until a generation of coders grows up, or all illiterate companies go
bankrupt, this will merely be one of a myriad of pathologies exhibited by
large companies run by the illiterate.

------
yaix
I have real problems believing that this could be true. Not even a first-year
student would be stupid enough to expose a user ID in the URL, read it back
without any access-rights check, and use it for access to the related account
data. How would they even get the idea to do such a thing?

And as for the "hackers", I guess legally this was not even a break-in. At
least in Germany, to legally count as a break-in, a computer system must be
"specially secured with the intention of preventing access". Well, this system
wasn't.

...still, I have a hard time believing that it could be true.

~~~
prodigal_erik
There's nothing wrong with having a user id in a URL. That's very RESTful (my
account is distinct from your account, so they really ought to have different
URLs), and can make a lot of sense in cases where users may delegate
permission to manage each others' records. It's just _trusting_ that user id
without an authorization check that's idiotic.

~~~
yaix
Its RESTful yes, but with the wrong item of data.

To identify a logged in user and give the user access to their private account
data, ONLY EVER use a unique and temporal random string. Nothing else. Ever.

Storing that random string in the URL may be done but is more insecure,
because it will remain in the browser history. Not good if the user in on a
public PC. Better store it in a cookie.
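
Concretely, the scheme described above is just a server-side session table; a
sketch (storage and names are hypothetical):

```python
import secrets

sessions = {}  # token -> user id, held server-side only

def login(user_id):
    # Unique, temporary, unguessable; sent back in a cookie, not the URL.
    token = secrets.token_urlsafe(32)
    sessions[token] = user_id
    return token

def current_user(token):
    return sessions.get(token)  # None for expired or forged tokens
```

Because the token is random rather than derived from the account number,
there is nothing adjacent to guess, which is exactly what the Citi URLs got
wrong.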

~~~
prodigal_erik
"Who am I", "which users/accounts may I access", and "which user/account do I
want to access right now" are different questions, and it's only the latter
that belongs in the URL. I agree that secrets in particular must not be
exposed in the URL.

------
shaydoc
Its a basic error. Tech Architect should be seeing this in milliseconds. Its a
total design flaw. Should not be going straight to SQL with just paramters in
a querystring, there should at least be authenticated user account
verification checking.

Also doesn't say much for the company doing security review, its a basic
check. Furthermore to not have a user/onwer id to join on there (no doubt sql
back end) is shameful. I mean I can see it now :

select x from accounttable where accountnumber = @val

how about simply :

select x from accounttable where accountnumber = @accno and ownerid = @ownerid

------
bskari
What worries me the most is that one expert that the Daily Mail interviewed
said "It would have been hard to prepare for this type of vulnerability." The
same expert "wondered how the hackers could have known to breach security by
focusing on the vulnerability in the browser."

[http://www.dailymail.co.uk/news/article-2003393/How-
Citigrou...](http://www.dailymail.co.uk/news/article-2003393/How-Citigroup-
hackers-broke-door-using-banks-website.html)

------
sixtofour
I've never worked for a bank, or any company that held sensitive information.
I've only worked for companies that sold products to be used internally.
Grains of salt are on the table to your left.

What this looks like, in the context of all the other serious recent breaches
like Sony and the IMF, and from the point of view of someone who's never had
to fight this particular battle but knows a little code, is that these corps
deployed online apps in the early days when this wasn't a major part of their
corporate face. Practices and points of view evolved from an initial
environment where there just wasn't as much motivation for criminals to crack
apps, because there wouldn't be that much of a market for what they stole. So
corps could get away with deploying almost anything, relying on both security
through obscurity and security through rarity (breaches were rare due to low
profit). People in corporate offices who even knew their corps had these apps
would be rare, because the prestige of managing these people and apps would be
low.

The apps we have today would then be direct descendants of the old insecure
apps, and in many cases would be built directly on those old apps. Layers of
mud, and you can't change the inside layers because old mud is brittle.

And now the corps are going up against, not people who are merely exploring or
looking for bragging rights, but people working for criminal enterprises that,
while not having the global scope of banks, are large enough and _focused_
enough to directly challenge the technical power of the banks. And the banks
are working with old, dry mud.

Again, grains of salt, but I suspect I'm in the right salt mine.

------
holdenc
I have a feeling there's more to it than a clear account number reference in
the URL. It was probably a base64-encoded account number or an unsalted hash
of the account number (i.e. rainbow-table-reversible), and the quality
assurance analysts probably never questioned this.

Disclaimer: I worked in product development making banking software and simple
URL hacking was always a standard test.

------
jcoder
"security experts": The attack was "especially ingenious" and "would have
been hard to prepare for."

The experts are certainly part of the problem.

------
tnorthcutt
I really hope that the reporter who wrote the NYT article either misunderstood
the facts, or talked to an ignorant "expert".

------
teyc
I wouldn't necessarily blame this on the guy who programmed it. However, the
person who spec'd the application would be due for a quick demotion. The
problem with antiquated bank systems is that the teller is trusted with access
to any account. So when it came to web-enabling the old teller application,
someone did some screen scraping as a prototype without having a concept of
restricting access.

There is probably no concept of linking an authenticated account to a
restricted set of bank accounts. Instead, they've probably wired it up to CICS
directly to retrieve account details. This is why the quick fix appears to be
obfuscating the account number in the URL.

Is there a public report anywhere? Aren't companies required to report all
privacy breaches?

------
jdp23
Historically, companies like Citi haven't faced any meaningful consequences
for putting their customers at risk by not doing "security 101". Will things
be any different this time?

------
defdac
The same bank also complains if you want to use more than 9 characters in your
password... That kind of hints at how they store passwords in their
database...

~~~
runningdogx
Sadly, other financial services companies do not seem to understand, either.

Etrade's password system requires 6 to 32 characters with at least one number
according to the password change instructions. They don't mention punctuation,
but if you try to include some, they're deemed to be "invalid characters". Go
figure.

This stuff makes me sad that I signed up for etrade:

"Thank you for your message regarding enabling HTTP Strict Transport Security.
I have sent a request to our Product Development Team to have this feature
added. Due to the high volume of requests, there is no guarantee that this
will be implemented." -- etrade representative

"E*TRADE does not allow certain characters to be used when establishing a
password online. There is no specific E*TRADE publication providing
information on why certain characters are not allowed, such as special
characters used for punctuation, etc. We appreciate your feedback concerning
the online passwords. I forwarded this suggestion to the Product Development
Team for future consideration and implementation. I can not guarantee when or
if this change will be able to be made..." -- another etrade representative

------
Shenglong
It's funny that this wasn't discovered sooner. I suppose everyone figured a
security flaw like this would never exist, and never bothered to try. Irony...

~~~
wladimir
Most likely this was discovered sooner. But people kept the vulnerability to
themselves and possibly even profited from it.

These kinds of vulnerabilities in high-profile sites can go unreported for
ages, even to the point that they are common knowledge in certain groups.

------
narad
That's worse than SQL injection. Didn't they build an ACL?

~~~
romaniv
An ACL is probably part of the problem here. Most ACLs are very inflexible and
are "opt in". They probably had an ACL to block unregistered users from
visiting the page, but it didn't deal with individual accounts.

------
nandemo
> Think of it as a mansion with a high-tech security system -- but the front
> door wasn’t locked tight.

It's more like an apartment building with high-tech locks on the front door
and the apartment doors. But after you unlock the front door, you can unlock
any apartment's door with your key! The keys are all identical; only the
number printed on the label is different.

------
anty
Did anyone else check whether the article was meant to be satirical? This is
unbelievable!

------
evro
I was reading about this yesterday and about an hour later, Citi called me
trying to sell me their fraud protection. I replied "Did it help the 200,000+
accounts that were stolen from you?" and hung up.

~~~
khafra
It might have--I mean, when Citi sells you fraud protection for your Citi
account, that's basically insurance against either you or them being careless
with your account info.

------
gnosis
I wonder what Citibank is going to do about this now? Are they going to change
their customers' account numbers?

That's the minimum that should be done for those accounts which were
compromised.

------
Sukotto
I use citibank for my family accounts so may have been impacted. Has anyone
posted a list of compromised account numbers someplace I can check against?

------
tannerc
Is there anything Citi has done right over the past few years? I'm asking
honestly and not with sarcasm, because I just haven't seen it.

------
smackfu
I wonder where they got the account numbers from. Citi doesn't show them in
the web interface, just last 4 digits.

------
Arxiss
How the hell could this happen? I think a 6-year-old kid could figure out URL
injection... wait a second... citi is a bank???

------
stef25
wouldn't a simple if ($url_id !== $logged_in_id) die("No access"); fix this?

~~~
slavak
If they saved the $logged_in_id in the session, then they wouldn't need to
pass it as a parameter in the URL. No, this probably indicates their entire
architecture is ass-backwards.
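A minimal sketch of that point (the store names here are hypothetical): the logged-in identity is resolved from a server-side session store, then compared against the account's owner, so nothing client-controlled decides authorization.

```python
# Hypothetical server-side stores for the sketch:
# session token -> logged-in user, and account number -> owning user.
SESSIONS = {"tok-abc123": "alice"}
ACCOUNT_OWNERS = {"1001": "alice", "1002": "bob"}

def can_view(session_token, account_number):
    """Authorize an account view using only server-side state.

    The user's identity comes from the session store, never from the URL;
    the URL only says *which* account is being requested.
    """
    user = SESSIONS.get(session_token)
    if user is None:
        return False  # not logged in at all
    return ACCOUNT_OWNERS.get(account_number) == user
```

With this shape, tampering with the account number in the URL just yields a denial, because the comparison on the right-hand side never trusts client input.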

------
noduerme
So... the first login had to have valid credentials. Someone needed a Citibank
account to start the scraper-bot. Wonder if the FBI's talked with that guy
yet.

~~~
adolph
I think the key phrase would be "Someone needed _access to_ a citibank
account..." If some Citibank account(s) was compromised in some way, such as
phished, then a third party would have the necessary access. It would be bad
enough to be a victim of phishing--imagine being a pawn in a large scale
attack like that. It would suck.

