Capital One’s breach was inevitable, because we did nothing after Equifax (techcrunch.com)
467 points by Corrado 26 days ago | 157 comments



This is a bigger issue than 'credit agencies have poor security'. This is an issue of 'standard authentication in the US is negligently weak'.

Knowledge of an SSN and other public information should never be enough to authenticate any person. That means no credit issued based on that, no tax returns filed or viewed based on that, no checks sent based on that.

The solution is not better security with credit companies. The solution is some form of actual authentication. Preferably done by an organization dedicated to that (public would be best, private could work); not outsourced to organizations that are mostly geared towards determining credit worthiness.

For the public system, assign every participant a truly unique identifier, rather than the SSN, which was explicitly never meant to be used as such.

For those citizens that do not want to register in this way, allow for physical authentication at physical locations.

In Europe this is far less of an issue since our population registration is a lot more comprehensive.


To this point: does “identity theft” really exist, or is this simply a reframing of banks, etc., completely failing at authentication?


Identity theft is an amazing PR term, not-so-subtly shifting blame onto the individual whose identity was fraudulently used.

* The PII wasn't stolen from me, it was negligently exposed by services I contract with (and pay!) and others that I have no formal relationship with (like Equifax).

* It wasn't defrauding me, it was defrauding services I contract with (and others) who failed to verify my identity.

And yet somehow I'm obligated to do the cleanup myself.


Yup. The quickest way to stop these sorts of things from happening is to make the banks responsible for accepting/using stolen information (i.e. facilitating identity theft). For some odd reason, it's the person's responsibility now that the bank used fraudulent information.


The concept of "identity theft" (i.e. the hacker stole your identity vs. the hacker used false credentials to steal from the bank) is probably the single greatest feat of social engineering of our times.


Point. Identity hacking would be more accurate.

In that the attacker is creatively operating the system, rather than really possessing magic knowledge.


Today, you are not you, you are your data, a persona. And you are somehow responsible for it or anything that casts a similar shadow.


In the US especially, there is a very good reason to oppose any such measure - there is no political will to implement restrictions on how private companies could use that system.

The setup for these breaches is entirely due to companies being able to require your SSN for whatever purpose, and indefinitely store it basically however they'd like. Either the government should have never assigned a mandatory unique identifier to every individual, or there should have been strict laws about what purposes it could be requested/used for, how it could be stored, and steep statutory liability for screwing those up.

But the political attitude in the US is to have the government do the bare minimum and private companies will take up the charge. However for many subjects the resulting mix is the worst of both worlds - given the tiniest hook into governmental power, the private sector eagerly implements totalitarian solutions for which there is no opting out.

Presently, the naive legal mandates of SSNs, driver's license numbers, and license plates are being heavily abused to enable pervasive corporate surveillance. These existing identifiers already make all-too-convenient keys for cross-linking every other ill-gotten datum on a person. The main thing that keeps every single business from demanding these identifiers is people's vague unease about handing them out, due to their technical shortcomings. Imagine going to a grocery store and having to present your national electronic ID to get the sale prices, with no alternative.

previously: https://news.ycombinator.com/item?id=19880374


I agree, SSNs are a poor form of authentication. What's missing from these conversations is realistic approaches to fixing it. It's a lot like healthcare: plenty of people want to get rid of Obamacare, but they fail to explain what will replace it.

> For the public system, assign every participant a truly unique identifier, rather than the SSN, which was explicitly never meant to be used as such.

This will work for a time, but what happens when the next breach occurs? How do people renew their UUIDs? Expire compromised ones?

> For those citizens that do not want to register in this way, allow for physical authentication at physical locations.

Physical authentication probably means fingerprints, face data, correct? These are already compromised. Worse yet, they cannot be changed.

CCTV cameras are everywhere, and getting better resolution each day. Face authentication can be easily duplicated - some of the early versions of FaceID (by Apple) were broken by 3-D printing a mask [2]. Furthermore, some organizations are already compiling a list of "face data" that can be used to fool sensors and other biometric tools. By the time "face readers" are widespread, hackers will already have large pools of face data to use to hack into these systems.

There are other cases where fingerprints have been reproduced with a 3-D printer to break the security of mobile smartphones [1]. What's to say whatever government-issued terminal won't be broken in a similar way? Furthermore, it's not reasonable to expect people to guard their fingerprints: every glass they drink from at a restaurant will carry them. I don't expect to shed my SSN whenever I order a pint at my favorite pub.

[1]: https://www.theverge.com/2019/4/7/18299366/samsung-galaxy-s1...

[2]: https://www.wired.co.uk/article/hackers-trick-apple-iphone-x...


> I agree, SSNs are a poor form of authentication. What's missing from these conversations is realistic approaches to fixing it.

SSNs are a poor form of authentication because they're ostensibly secret but re-used everywhere. It's just like a password in that regard: no password is secure against reuse, no matter how strong it is on paper.

A minimal change would be to allow/encourage single-use SSN-equivalents, generated on demand by a central authority. That is, someone would give a different "SSN" to their employer, their bank, the IRS, and their cable company (for credit check).

That still provides a point of vulnerability, but that is far better than the current system where a single credit application form is a global compromise. If a single-use number is compromised, it could be easily revoked without affecting the person otherwise. Likewise, numbers could easily be generated with short expiry dates to make use from stored credentials impossible.
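A minimal sketch of how such a per-relying-party scheme might work, assuming a hypothetical central authority that holds a master secret (the names and derivation below are illustrative, not any real system):

```python
import hmac
import hashlib

# Hypothetical: only the central authority ever holds this key.
MASTER_KEY = b"authority-held secret"

def derive_alias(ssn: str, relying_party: str) -> str:
    """Derive a distinct pseudonym per relying party (employer,
    bank, IRS, ...). Compromise of one alias reveals nothing
    about the others or about the underlying SSN."""
    msg = f"{ssn}|{relying_party}".encode()
    return hmac.new(MASTER_KEY, msg, hashlib.sha256).hexdigest()[:16]
```

Revocation could then be per-alias: the authority blacklists one derived value without touching the person's underlying identity or any other alias.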


> This will work for a time, but what happens when the next breach occurs?

The UUID shouldn't be assumed to be private information - authentication should be built around the assumption that this identifier is a public identifier - like a name, but guaranteed to be unique.

> Physical authentication probably means fingerprints, face data, correct? These are already compromised. Worse yet, they cannot be changed.

Even if those are compromised, that doesn't mean it has to be easy to impersonate you. The solution may be low-tech: you may have to physically present yourself to a human who assesses whether you are indeed who you say you are before opening an account. A higher-tech physical authentication might require something akin to chip-and-PIN or a (revocable) token generator a la YubiKey.
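For the token-generator option, the standard building block is a time-based one-time password (RFC 6238), which a chip card or YubiKey-style device can compute; a minimal sketch:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238-style one-time code: proves possession of a
    revocable secret without ever transmitting the secret itself."""
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The relying party holds the same secret and checks the code for the current time step; what authenticates you is possession of a revocable secret, not knowledge of a number printed on a card that hundreds of databases have already leaked.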


Careful what you wish for with the low-tech solution. One of the most effective vectors for phone number port-out scams is just showing up to a local cell phone shop and presenting a fake ID. Often this is completely free for the attacker, since they can just opt to have a new phone added to the account on credit too.


The assessment doesn't need to be based on a potentially counterfeit ID the subject brings, in my hypothetical scheme. It'd probably be better to do out-of-band verification with the private/public provider of the ID (the State or DMV, for example).

edit: if the value of identity were elevated, then physical security at these locations would be increased to the level of banks or cash-handling facilities, raising the cost of failed impersonation attempts (to a level similar to attempted cash heists). In fact, the local phone shops should be barred/disincentivized from doing auth badly themselves and should outsource this function, just as they do with creditworthiness.


> The UUID shouldn't be assumed to be private information - authentication should be built around the assumption that this identifier is a public identifier - like a name, but guaranteed to be unique.

In that case, we already have this today: at the state level, most citizens have a driver's license or state ID, both of which carry a unique number. At the federal level, all US passports have a unique passport number. Granted, not all citizens have a passport, but that system is in place to grant citizens unique identifiers.

And yet we still have identity issues. So this is only part of the solution.

> Even if those are compromised, that doesn't mean it has to be easy to impersonate you. The solution may be low-tech - you may have to physically present yourself to a human who assesses if you are indeed who you say you are before opening an account. The higher tech solution physical authentication might require something akin to chip-and-pin or a (revocable) token generator a la Ubikey

This is a great idea. I believe France's healthcare system requires every citizen to have a card [1], which uses chip-and-PIN tech to authenticate the person with their doctor. This could be used for online services or over the phone too.

What the US needs is a branch specifically for administering these "identity cards". The Social Security Administration could be rebranded as an "Identity Administration" or something, which would then manage the distribution and revocation/recycling of these national ID cards.

But for some reason Americans get spooked when you say the words "National ID". Something about how "socialism is bad" and all that.

[1]: https://en.wikipedia.org/wiki/Carte_Vitale


The Real ID Act [0] effectively made all state IDs into national IDs. All but 6 states are already compliant, and the last 6 will likely become compliant by next year, lest their citizens become unable to use domestic air travel without a passport.

[0] https://en.wikipedia.org/wiki/Real_ID_Act


> What's missing from these conversations is realistic approaches to fixing it.

Do you really want to mandate that?

Valuing someone's personal information at $100,000 per person and then fining the snot out of companies that lose it seems like a much more "market driven" solution.

It also means that companies will work really hard to minimize any personal information at all--which is really what you want in the first place.


> I agree, SSNs are a poor form of authentication. What's missing from these conversations is realistic approaches to fixing it.

> This will work for a time, but what happens when the next breach occurs? How do people renew their UUIDs? Expire compromised ones?

The unique identifier would be an identifier only, not something for authentication. But before you can authenticate an identity, you need a way to identify it; hence I consider that a base requirement. Then we need to build a system of authentication points around this identifier. Heck, if SSNs were unique, just re-purposing those as the ID would work fine.

> Physical authentication probably means fingerprints, face data, correct? These are already compromised. Worse yet, they cannot be changed.

No, I mean going to a physical desk and authenticating however you already can today: something like a valid government-issued ID and a birth certificate. Essentially, whatever is needed to get a passport; use the same system here, because that is essentially your weakest link already. I added this option to appease the American fear of government tracking.

As for a proposal to fixing it, I would point to two systems.

* The Estonian system, where every citizen is given an ID card that is also a smart card with a public key.

* The Dutch system, which I am most familiar with.

Let me expand on how the Dutch system (called DigiD) works. I should note the system has flaws, and there are valid criticisms; however, it hasn't had any big failures. The system works as follows:

Anyone can apply for an account, at which point the government will mail you instructions for setting up a simple username-password based authentication. Key to this system is the 'Basisregistratie Personen' (base register of persons): a national database, maintained by the municipalities, of all legal inhabitants and some information about them. Most importantly for this system, that includes an address, which is what makes it possible for the government to send mail to a citizen.

To my mind, the mail step above could/should be replaced by a visit to the municipal administration, where your ID card is verified. (Notably, everyone over the age of 14 must have a valid government-issued ID.)

Obviously, implementing something like this in the US would be hard, mostly because mandated ID cards and a government database of addresses would not be politically acceptable. I don't know the details of the Estonian system; maybe that would require less invasive tracking of citizens.

I'm guessing most European countries have similar systems of government-based authentication.

Really though, these systems start with knowing who your citizens are and being able to identify them. And should this not be a basic requirement of a government?


The Belgians have a tool called "itsme", which acts as an authentication manager/digital signature tool with authorized partners.

After validating your ID, you can use the app to do 2FA with most major services in the country, including almost every bank and financial institution.

https://www.itsme.be/en/

A program like this could go a long way in the US to help cut down on the issue you describe.


Thanks. Another example is the Dutch DigiD: https://www.digid.nl/en/about-digid/


You should look into the Estonian ID card program. It's exactly what should be done.


I really wish the mainstream media would pick up more on this and, instead of framing all these breaches as identity theft, reframe them as credit companies offloading the burden of risk onto consumers. Most people don't even have the context for what is really going on here. We should all assume that our SSNs and PII are splattered across hundreds if not thousands of databases, all in various states of protection, and we should not be held liable for the lazy credit companies whose business is built on not making it hard to get instant credit for all those emotional purchases...


> For the public system, assign every participant a truly unique identifier, rather than the SSN, which was explicitly never meant to be used as such.

How do you make this proposed new unique identifier more secure than the (admittedly very unsecure) SSNs?


Only use it as an identifier, not as part of authentication. The issues with using an SSN as an identifier (username) are:

1) Explicitly not meant as an identifier

2) Not unique

If not for 2, then the SSN could simply be repurposed to be this identifier.


There is no reason to spend one dime on infosec after the big Equifax breach and the numerous Facebook hacks/intentional spreading of data. They already lost the most important data for every American, and both companies are doing better than ever. Nobody went to jail; everyone gets to keep making money.

You should worry about lightning strikes and solar flares disrupting your business before you worry about cybersecurity. Why should any enterprise risk manager waste their time on an issue that has no consequences?


Part of this problem is that Congress has simply stopped functioning for the past ten years or so. They're pretty much just keeping the lights on while social conservatives refuse to compromise with anyone else. When's the last time you remember high-profile federal legislation being passed with the intention of protecting or aiding constituents?


> When's the last time you remember high-profile federal legislation being passed with the intention of protecting or aiding constituents?

July 1st, 2019

H.R. 3151

"Taxpayer First Act

This bill revises provisions relating to the Internal Revenue Service (IRS), its customer service, enforcement procedures, cybersecurity and identity protection, management of information technology, and use of electronic systems."

https://www.congress.gov/bill/116th-congress/house-bill/3151...

Just because actual legislation is too boring for cable news and NPR doesn't mean it's not happening.

You can find all the bills that passed into law here: https://www.congress.gov/advanced-search/legislation?congres...

PS: I believe the 9/11 first responders bill would be more recent, but I figured people would take issue with that as a celebrity bill.


What makes you say this bill does more than call for a minimalist response to tax return fraud (giving people an "identity protection ID number" to use with ID theft cases and a single phone number to call about tax-related identity theft) and make updates correcting obvious flaws in the tax code (aka keeping the lights on)? The provisions in this law will not make it less likely that someone will file a tax return with your stolen info, nor will they make it easier to get things made right if that does happen.

At least the 9/11 first responders bill was about allocating resources to do something, but the main reason it doesn't serve your point is that it stood for 18 years as an example of our government's incompetence and inability to do basic, non-controversial things.


I could quote from the bill, but the individual subsections all make material changes in the way the IRS runs.

Reducing everything to "creates a new department or abolishes an existing one" or "does nothing and just keeps the lights on" isn't a useful rubric.

Good law is acreted over time, in the same way bulletproof code is.


You are asking me to trust that the people making law now know what they are doing and are slowly moving things in the right direction instead of slowly in the wrong direction, while not contesting the claim that this law does nothing to accomplish its stated goal of reducing the public burden of tax return fraud.

US tax law has been dysfunctional and getting worse for decades, to me that says there are issues with how the system is designed and meaningful progress beyond “keeping the lights on” will require restructuring the law and the agency, not adding a new office here and giving taxpayers more notifications there. Those types of measures, as you correctly point out, have to be looked at as part of a larger plan for the organization that meaningfully addresses a problem, not in isolation. Except here we have a collection of measures that doesn’t coherently address a problem, so there is no way left to look at them except in isolation.


I was pointing out passed law that I consider answers "When's the last time you remember high-profile federal legislation being passed with the intention of protecting or aiding constituents?"

The law has many stated goals, as set forth in the quote I posted.


The Consumer Financial Protection Bureau just barely makes the mark. Of course, Trump started gutting it almost immediately after joining office.


The constituents of social conservatives favor this approach to governing. Those constituents don't want to punish corporations using the law because they view government regulations as bad for the economy.


> They're pretty much just keeping the lights on while social conservatives refuse to compromise with anyone else.

Why is your assumption that the social conservatives are the ones that have to compromise? Shouldn't both sides be compromising?


That is hilariously backwards. "Social conservatives" have done plenty to try and protect or aid constituents, but it's held up by the Grand Old Party in the Senate.


"Social conservatives" usually means the GOP when referring to American politics.


Compromise implies both sides have something to give. One side doesn't have anything the other side wants, so there can be no compromise.

Let's take freedom of speech. One side believes it's unlimited, and the other doesn't. How do you compromise on that? You can't. How do you solve that problem? You don't, it's not the government's job to solve all of society's problems. That's the point most people don't understand: quit trying to control the behavior of others.


Is it the same for small and medium-sized companies though? We spend quite a lot on infosec on the premise that a breach could put us out of business, since we can't afford to tarnish our reputation and lose key clients.

Though I kind of agree that with Equifax and Facebook there weren't many consequences; to me they fall into the (unfortunately) too-big-to-fail category, i.e. most Facebook members don't care, banks still need the credit score, and there isn't much competition in that sector.


> they fall into the (unfortunately) too big to fail category

They're absolutely not too big to fail. Equifax or FB? Other than the unfortunate employees and their families, does anyone give a shit? No. No one suffers. To the contrary, thinning sick herd members helpfully invigorates the health of the surviving individuals.

What these organizations are is in too many pockets: not too big to fail, but too big to jail. It's a trope but it's a fact, Jack.

Edits for herd reference and reduced snark


The failure of Facebook might well be stimulative to employment and the economy. Imagine the wave of startups and new initiatives from other companies trying to compete in all the areas Facebook's in now. And unlike banks no significant part of the broader economy is at risk if facebook.com and instagram.com start 404ing forever tomorrow. A hiccup in the "influencer" economy, such as it is, which is negligible anyway, and they'll all have new homes one place or another (or several) inside a week and be building their followings back up.

And yeah, ditto Equifax. There are two other credit agencies already. They're all same-ish as far as I can tell. Pretty sure there are only three so they can pretend there's competition, not for any actual purpose. Again, it might be an opening for a new competitor anyway, while causing minimal short-term harm.


That just sounds like breaking windows to ensure more work for glassmakers to me. You're also ignoring the value Facebook ads provide to every business who advertises on that platform.


The shattering noise is the sound of windows breaking themselves.


My post is obviously sardonic to some extent, there are lots of situations where you need to worry about security.

I think I'm just more angry about these "too big to fail" companies that are apparently immune from any oversight or consequence today. We should really just convert the FTC building into a homeless shelter and fire all its employees; they have done absolutely nothing for decades, so this would be a great way to actually use the space effectively.


Did nothing? Oh no, don't forget the great service to the nation represented by the recently approved T-Mobile and Sprint merger!


Depends on the market. Do your clients talk to each other?

In a small market with few potential customers, reputation loss could kill you. OTOH if you sell to general population, then even a scandal involving you being featured on national TV won't hurt your company directly (related lawsuits might, though).


The consequence is that I won't use them and, as much as possible, others won't either. It's not the same as no consequence, but I get what you mean. The government plays so nicely with business that we shouldn't expect even a day's worth of business profits in related fines.


I wish I had the option of not using Equifax (or Experian, or Transunion, or the secret telecom one), but apparently there is no way to opt out of all your most personal data being the product they sell in these private systems.


LexisNexis? There are a bunch of opaque databases that can contain your info.


> The consequence is I won't use them...

Yeah, I fully agree... But what about orgs like Equifax, where you and I are not really given a choice about their role in our lives? Even before all the shenanigans around Equifax, would I have willingly allowed Equifax into my life? Heck no! So while I 100% agree that we should vote with our wallets/purses, sometimes that isn't enough... and that makes me a sad panda. </sigh>


I am pretty sure the vast majority of people don't even know about these breaches, and even if they do it's news that passes them by quickly. Maybe a rant or two on Facebook and then onto the next thing. Almost everyone will continue to use their Capital One credit card and the company will barely see a blip in their revenue.


When the news is mainly about people dying, someone selling stolen rolodexes sounds insignificant in comparison


It does to me.

100M people losing a minute of their life to handling this (let's call it nanodeath) is 190 continuous years of wasted time that could have been spent with your family, napping, reading, or other pleasurable life things.
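That figure checks out as a quick back-of-envelope calculation:

```python
# 100M people each losing one minute, expressed as continuous
# person-years of wasted time.
minutes_lost = 100_000_000
years = minutes_lost / 60 / 24 / 365  # minutes -> hours -> days -> years
```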

That’s the kind of math that, say, an insurer, should do to determine whether an intervention should be paid for.

But usually you just get called a monster for doing that kind of math.


I agree that we've been given clear signals that losing consumer data won't result in any negative repercussions from government. Unfortunately, none of us are customers of Equifax; the companies we share our data with are. And those companies don't care, and they include every bank.

For Capital One, or other companies with which we directly do business, I think there's likely to be more direct ramifications - namely people refusing to do business with them. Letting anyone access your customers' data is a good way to lose those customers.


Wells Fargo did much worse with actually defrauding their customers, ruining some lives, and only got a slap on the wrist.


I, unfortunately, am forced to do business with Wells Fargo. I recently purchased a house using a different lender and they sold my mortgage to Wells Fargo. Nothing I can do about it. I now have to have a Wells Fargo account to pay my mortgage and am just dreading the day when they start adding features to it that I didn't sign up for. About a month after learning they'd be taking over our mortgage, they got slapped on the wrist for some sort of mortgage-related fraud. It's really infuriating. At some point I may refinance just to get it moved to a different bank, but it wouldn't make financial sense to do it just yet.


I'm considering putting up a disclaimer before signing up to my sites: "Hey there, this is the internet. Even if we're not bad people, any data you give us may be hacked, sold, publicized, etc. So don't do the dumb thing and tell us your best-kept secret."


In my opinion organizations still don't rely enough on "defense in depth" techniques to protect sensitive data. Breaching the WAF and gaining access to S3 files shouldn't suffice to gain access to the raw data. Personal data that is not required for transactional use should be either encrypted, pseudonymized or anonymized. I couldn't find information about the exact use case of the data but as it was stored in S3 I would guess that it was "set aside" for future use in analytics or machine learning, maybe? If so there's really no reason to store the raw data.

Any single IT system is hackable and will eventually be hacked. The probability that an adversary will be able to hack multiple, independent systems is much lower though, and would in many cases prevent data breaches like this one.
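As a toy illustration of that second, independent system (the names here are illustrative; a real deployment would use a hardened tokenization service or KMS), direct identifiers can be swapped for opaque tokens before a record ever lands in the analytics bucket:

```python
import secrets

class TokenVault:
    """Toy tokenization service: raw PII lives only here, while
    the analytics store (e.g. the S3 bucket) keeps opaque tokens.
    An attacker must breach both systems to recover the data."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, value):
        token = secrets.token_hex(16)
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]

def strip_pii(record, vault, fields=("ssn", "name")):
    # Replace direct identifiers before the record leaves the
    # transactional system for long-term storage.
    return {k: (vault.tokenize(v) if k in fields else v)
            for k, v in record.items()}
```

Stealing the stored records alone then yields random tokens; the raw values are only recoverable through the separately secured vault.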


I find that in large organizations, the business only cares about the business. Maybe because they can't be bothered with IT, security, or any of the geeky disciplines. I'm pretty sure it's all about soft skills: they just can't handle dealing with folks who lack soft skills, and the geeky, nerdy folks running the technology stack lack soft skills and only ever ask to spend money...

If you, tech geek, learn enough to speak well to The Business, you have another challenge: the market is at a place where incentives matter. You can articulate, in the right language, the need to clean up the company's security posture, but you can't articulate an incentive. Users can't sue because the ToS says 'mediation'; there's no regulatory agency that will really threaten our profits; we can afford a $10MM fine when they get around to levying it after three years of investigation... what's the incentive to spend half a million dollars this year on additional employees and licenses, when that's money destined for high-level bonuses this year, and by the time that fine arrives, this executive team will have moved on?

>In my opinion organizations still don't rely enough on "defense in depth" techniques...

This flies in the face of 'easy money.' 'Easy' meaning we, The Business, comprehend the purpose of a particular budget line item. Spending money is bad. But spending money in some places is a necessary evil, and only acceptable when it is in a place that is directly reflected in the price to the customer. Acquiring, manufacturing, assembling parts in the final product? Fine. Marketing to acquire a customer? Sure. Attaining regulatory approvals? Bah, ok. After we've articulated the costs and padded an acceptable margin, the only thing left is the self-congratulatory bonuses for executives!


I've found that at large companies, employees touted as having great soft skills often lack the ones that I consider key for productivity: communication, integrity, responsibility, and work ethic.

Meanwhile, engineers possessing all of the above traits as well as hard skills are told to develop their other soft skills (i.e. positive attitude, courtesy, and professionalism) to make themselves more palatable to the inept.

In a vicious cycle, the feeling that everything is focused around appeasing those that contribute the least is enough to erode many engineers' soft skills.

Enter the Dead Sea effect.


Ridiculously accurate assessment of the situation. Incentives matter.


The small and medium companies that seem to be rethinking IT security are the ones that have been hit by a cryptolocker.


> Personal data that is not required for transactional use should be either encrypted, pseudonymized or anonymized.

Sorry, just to nitpick: creating anonymized data isn't that easy, and I'm worried something like that would get misused, like some password breaches ("the passwords were encrypted with MD5, so they're still secure"). A company might store all this customer data without consideration because it thinks it has anonymized something that isn't actually anonymized.

Here's a blog post I made on the topic: https://gravitational.com/blog/hashing-for-anonymization/
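To illustrate the point from that post: an unsalted hash of an SSN is trivially reversible, because the input space is only about a billion values and can be enumerated outright. The sketch below searches a deliberately small slice of that space:

```python
import hashlib

def hash_ssn(ssn: str) -> str:
    # The naive "anonymization": hash the 9-digit SSN directly.
    return hashlib.sha256(ssn.encode()).hexdigest()

def invert(target: str, start: int, stop: int):
    """Brute-force the preimage by enumerating candidate SSNs.
    The full space is only 10^9, feasible on commodity hardware."""
    for n in range(start, stop):
        candidate = f"{n:09d}"
        if hash_ssn(candidate) == target:
            return candidate
    return None
```

Salting doesn't save you here either: a per-record salt stored alongside the hash just means one enumeration per record instead of one for the whole table.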


Not even close to the same.

Granted, a misconfigured firewall is surprisingly close to data with no AuthZ/AuthN, but the Equifax breach was an operation.

This should be punished, but the level of ignorance on both sides highlights just how immature the community is and how little concern we have for handling PII.

Thermodynamics: make the path of least resistance more secure. I feel laws approximate that by following the money, which they seemingly tried to do with Equifax.

Surprisingly small fine but on the other side, I have seen many enterprises with numerous processes/controls in place where it wasn't so easy to identify the security through obscurity that was going on.

There's a longer conversation to be had here.


It's actually a lot harder than you'd imagine to break into security considering how much outrage and demand there seems to be in the press and on forums.

Maybe this WAF wasn't the greatest software though. Simply buying something and squeezing it into your tech stack isn't enough. You have to know how it works or it could be the thing that gives a foothold to an attacker.


I spend a considerable amount of time pentesting. I understand it very well.


Not that. Break into it as a job.

The talent pool is woefully underfilled.


Ah, gotcha. Sorry about that, I misread it.


Wasn't this a private S3 bucket that she somehow gained access to? Does anyone know the full details of how this came to happen?

As for mitigation: does S3 encryption happen at the user access level (each GET) or at the S3 system level? Basically, does each GET call pass in the decryption key? That would mean an attacker needs another piece of information. More encryption wouldn't hurt here. This goes for Equifax too.


Basically, I gathered from the indictment that there was a 'WAF misconfiguration', which I take to mean SSRF allowing her to obtain temporary AWS credentials from the metadata endpoint; those credentials carry the WAF role they talked about, which had sufficient permissions to list buckets, download files, etc.


This is precisely my read as well. It could be credential disclosure through a stack dump or the like, but most likely SSRF.


I'll preface this by saying that I haven't seen any official resources confirming that it was an S3 bucket issue (although the statement from the hacker mentioned releasing "buckets", so it very well could be).

S3 provides server side encryption that encrypts the files at rest. This is done entirely on the server side and does not require any additional keys from the client. However, it is possible to do your own file encryption using your own keys ahead of time, but I imagine that a very small subset of AWS users actually do that.

The way these hacks usually happen is that someone configures the bucket to enable public access, either to the entire bucket or certain files within it, and then someone stumbles upon the bucket's endpoint.

The mitigation is simply not to configure your buckets to be publicly available. That used to be relatively more difficult than it sounds because of a confusing S3 UI, but AWS has recently pushed a number of changes that try to address this issue, including putting a very clear "Public" label next to buckets with these settings, sending emails to users with public buckets, and providing configurations that allow account owners to prevent users from setting buckets to public access.
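As a sketch of that last mitigation, here is what the bucket-level guardrail looks like, expressed as the Block Public Access payload (the four flag names are AWS's; the checker function is my own illustration):

```python
# The four S3 Block Public Access flags, as passed to boto3's
# put_public_access_block or a CloudFormation PublicAccessBlockConfiguration.
# Together they prevent any later ACL or bucket-policy mistake from
# exposing the contents.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,        # reject attempts to add public ACLs
    "IgnorePublicAcls": True,       # treat any existing public ACLs as inert
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # cut off public and cross-account access
}

def is_locked_down(config: dict) -> bool:
    # A bucket is only safe from accidental exposure if every flag is on.
    return all(config.get(flag, False) for flag in PUBLIC_ACCESS_BLOCK)
```

Account owners can also apply the same configuration account-wide so individual teams can't opt out.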

Some articles are referencing a WAF configuration issue, in which case the above may not fully apply here. The commenter below me mentioned the use of temporary AWS access keys, which can be obtained from an internal AWS service known as the metadata endpoint. Typically this endpoint is only accessible from EC2 nodes (or services that rely on EC2 like Lambda or CodeBuild) and allows AWS to deliver short-lived credentials to the node that can rotate frequently. If this truly was the issue, then it's possible a WAF issue allowed the remote attacker to query the internal endpoint from an external source and obtain credentials that were previously only available to the node itself. From there, the attacker could make AWS API calls to the S3 buckets and download the files.
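If the SSRF theory holds, the missing control is a fetch-side (or WAF-rule) check that refuses to proxy requests toward the metadata service. A rough sketch, with names of my own invention, of what such a guard might look like:

```python
import ipaddress
from urllib.parse import urlparse

# Hostnames commonly used to reach cloud instance-metadata services.
METADATA_HOSTS = {"169.254.169.254", "metadata", "metadata.google.internal"}

def is_ssrf_risky(url: str) -> bool:
    """Reject fetch targets pointing at the instance metadata service or any
    link-local/private/loopback address: the channel through which temporary
    credentials would be obtained."""
    host = urlparse(url).hostname or ""
    if host.lower() in METADATA_HOSTS:
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A DNS name: a real proxy must re-check after resolution too,
        # or an attacker can point a hostname at 169.254.169.254.
        return False
    return addr.is_link_local or addr.is_private or addr.is_loopback

# The kind of URL an SSRF attacker requests to mint temporary credentials:
cred_url = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```

AWS has also since pushed IMDSv2, which requires a session token and so blunts simple one-shot SSRF against the endpoint.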


> it's possible a WAF issue allowed the remote attacker to query the internal endpoint from an external source

But then it's literally a configuration issue, right? WAF is just Rule -> Block/Allow. It doesn't proxy traffic or anything, it just attaches to a load balancer, API Gateway or CloudFront.

More puzzling, what is the WAF-Role they're talking about? WAF doesn't use IAM roles, so is this just a role they used to configure the WAF (and also had S3 permissions?)


Yeah, that part is confusing. Since I posted the above a few more details came out, but it seems like the WAF may have been involved because it wasn't configured to block requests to the IAM instance metadata endpoint, which would have allowed the attacker to operate in the scope of the instance, which seems to have had the S3 permissions. But again, entirely conjecture on my part at this point.


That's not a "full disclosure", that's spam.


I added that because CapitalOne has an open source tool that has similar functionally, but point taken and post edited.


It's so funny that with all these breaches, Equifax is the winner and gets 150M customers. Same thing happened with Desjardins (Quebec bank) recently.


The really bonkers part about the Equifax settlement is that they're being permitted to "pay" the fine by giving away their own credit monitoring solution. They value it at something like $15/month, but it likely costs them pennies to run. A good portion of those users will probably convert to paid users at the end of it all; I strongly suspect they'll wind up profiting overall.

They should've been forced to cover another company's credit monitoring solution, preferably a direct competitor's.


First American Financial Corporation is another company that appears to have been extremely naive about securing non-public personal information and didn't act until their customers went public. That wasn't even a platform security vulnerability; they were allowing unauthenticated access to their customers' documents. Brian Krebs reported on 24 May that 885 million mortgage documents had been exposed. In its reporting since then, First American says it has narrowed that down to 32 consumers whose information was exposed, and has provided them with complimentary credit monitoring. That's an awfully big discrepancy between the security community's numbers and the company's.


After my identity was stolen, likely from Equifax, I closed my Capital One account and advised everyone I knew to do the same. I was shocked to discover how easy it was to get past the forgot password screen: you are immediately logged in with full privilege if you answer all the easy to guess PII questions! No email notification. No text. No 2 factor. Full logged in access!

Lesson learned: always check the forgot password/trouble logging in feature on sites where security matters.


I said this on the other HN thread about CapitalOne but I found it ridiculous that Aaron Swartz was facing a hefty sentence and the culprit behind this hack last I checked is facing up to 5 years??? What the heck?

Every person exposed in this hack is another victim, not to mention the numerous indirectly affected people at small businesses. By comparison, Aaron Swartz hacked some ebooks, harming only a school.

Can the punishment for crimes stop being absurd? I am only reserving further outrage because those are the current charges against the hacker; we know more can pile up as they learn more.

Really though, if that kind of PII can give you access to ruin someone's financial life, then it should be made harder to get credit cards. If you don't have a driver's license and other things to show, you shouldn't get a credit card.


Prosecutor discretion exists. Furthermore, AFAIK (IANAL, especially not a US criminal justice lawyer), US sentencing guidelines take into account first-party financial damages (low for CapitalOne), not the diffuse third-party damages of the kind that will be suffered by the 100M people whose PII was lost.


CapitalOne disclosed that this hack is going to cost them between $100mm and $150mm, which is a lot more than JSTOR would have lost from Aaron Swartz's "hack" of academic humanities papers.


It seems likely that were the accused to be convicted, a fine will be part of the sentencing, and Capital One may opt to pursue separate civil legal remedy against the convicted person to seek damages given their cost is stated as being >$100M. Unclear to me if this somehow isn’t allowed under U.S. law because the federal prosecution may be where it all gets determined, and a civil case may be “double punishment”?

So after coming out of prison and finding a job, which is likely to be hard for a felon, and may not be highly paid, what wages they make will be garnished to pay the criminal/civil financial judgements.

For most people who’ve worked in technology, the potential punishment here would qualify as “destroying your life”. Enormous impact.


Right, but it wouldn’t have happened if they hadn’t had such lax security, and I would argue that capital one are liable here for failing to adequately safeguard consumer data. If you properly secure your stack, you don’t get hacked.

If they had fallen victim to some undisclosed zero-day, I’d feel bad for them - but in this case it appears to be misconfigured VPC SGs. Their error. Inadequate processes.

We are also all labouring under the assumption that she was the only person to make off with this data.

I’m willing to bet that she’s just the first one daft enough to talk about it.


"If you properly secure your stack, you don't get hacked." That's absolutely not true. You reduce the chances of being hacked, you might reduce the time it takes to discover a breach, and you will be able to contain it more quickly.


You vastly reduce the chances. It’s the difference between bothering to close the bank vault’s door when you go home at night or not.


> Right, but it wouldn’t have happened if they hadn’t had such lax security, and I would argue that capital one are liable here for failing to adequately safeguard consumer data. If you properly secure your stack, you don’t get hacked.

If the system was designed by humans, it can be hacked.


Especially if the bad guy used to work for your vendor.


> Prosecutor discretion exists.

Which means she should've been held personally responsible/impeached over what she did to Swartz. But instead Obama protected her, just like he did with all of his other government criminals, as well as Bush administration's criminals, too.

"We need to move forward." and "No abuses were found." and other such BS needs to end when it comes to government criminals. No wonder more riots are popping up and the hatred towards authorities is increasing every year.


> If you dont have a drivers license and other things to show you shouldnt get a credit card.

In Europe everyone has to possess a personal ID card or a proper passport, and it is required to be presented to the bank agent (or a verification service). Yes, we do have some problems with faked ID cards and lately by fraudulent video identification, but still - not remotely comparable to the laughable "security" in the US.


For many, it's a goal to avoid having a national ID, for privacy-from-the-government reasons. The ACLU has a decent writeup about the issue: https://www.aclu.org/other/5-problems-national-id-cards


Similar to constant surveillance, the psychological implications of mandatory ID are horrifying. It tips the scale from "You are born free, but you must fulfill certain obligations to cooperate with others" to "You exist first and foremost through the lens of the government. You are not permitted to live outside the bureaucratic abstraction of you."


Well then, easy solution: you can refrain from opting in to a personal ID card, but then you're responsible for all fraudulent and other activity done under your name that could have been prevented by having an ID card lock down your identity.


Holding victims civilly liable for fraud is not a fair or liberal alternative to government monitoring. Should I also have no recourse when my house is burglarized if I fail to install cameras that send a feed to the police?


> Should I also have no recourse when my house is burglarized if I fail to install cameras that send a feed to the police?

That comparison is wrong on multiple levels:

1) you may be automatically at fault already when not doing what the government is requiring you to do for public safety reasons (for example, your car has a rusty brake pipe, you do not do the yearly mandatory checkup, pipe explodes, car crashes into something)

2) contrary to a feed to the police, showing your federal ID card at a bank or car dealership when applying for a loan does not cause the government to know you were at the bank or the car dealership. Therefore it is not surveillance.


Go cry about it to Rousseau. If you want to continue enjoying the benefits of a modern society, you are, to some small degree, going to have to play its little games.


> "You exist first and foremost through the lens of the government. You are not permitted to live outside the bureaucratic abstraction of you."

Aren't we already at that point?


Sort of, but generally not on the minute-to-minute basis that is being forced to carry government identification at all times. And in cases where that may de facto be the case, e.g. being a minority near a border, that's already abhorrent. That whole paradigm should be reversed rather than further generalized.


> being a minority near a border, that's already abhorrent.

That's abhorrent because they are targeted, which forces them to carry it.

Isn't a driver's license a government identification anyway? Sure, no one is forced to have one, but not much would change if everyone did.

I'm not arguing for forcing people to carry it either, just about whether it would be bad if it existed.


That's not the case in the UK - we don't have any single government issued identity document/card that everyone has to have.


The UK is a special case and will not be "Europe" for long besides. Homogenisation of rules can take a while, especially when there is a cultural aversion to them. In this case I'd say there simply has not been enough time for this to happen.


Barring a rather spectacular feat of engineering, the UK won't be relocating itself from Europe any time soon (much to the chagrin of those who seem to want to plonk us next to Singapore ...)


Ireland also does not have mandatory ID, nor do the Nordic countries. I don't think it's as clear cut as you make it out to be.


The Irish (PSC) Public Services Card is getting close to being a de facto ID card at this point.


Agreed - it's turned out to be a privacy and security fiasco. Hopefully, the ICCL challenges will put an end to it.


England will not be less a part of Europe just because she leaves the EU. The EU was never perfectly cohesive to start with [0], and is made up of extremely disparate nations and cultures. Plus, England never adopted the Euro (which is probably for good reason, seeing as interest rates are now even further down because the ECB can't properly conduct monetary policy).

The EU was never comparable to, say, America in terms of unity. Europe had too much history for it to work perfectly - every one was on what had been at some point some one else's land.

[0] https://europedirectemn.files.wordpress.com/2018/02/treaties...


"The EU was never perfectly cohesive to start with"

You could say that about the UK - which lost a significant chunk last century and could well lose rather more this century.


But your identity is verified through some means when opening an account, even if there is no unique document, no?

Example https://www.tsb.co.uk/current-accounts/faqs/identity/


Yes it is. Electoral roll.


To get on it you need an address to live at. To get one of those, you need at least a bank account (though in 99.9% of cases that alone is not enough), otherwise no agency is going to rent you a house.


> To get on it you need an address to live at. To get one of those, you need at least a bank account (though in 99.9% of cases that alone is not enough), otherwise no agency is going to rent you a house.

At what age do you become eligible for the electoral roll? At least in the states most people register to vote before they leave the house of their parents.


Can't find a quick answer on gov.uk, but I somewhat loosely recall that they let you register when you're 16.


One of many basic cultural differences between the UK and the EU. In the EU you must give up your biometrics (fingerprint) by law. Doesn't surprise me that they are leaving.


I think surveillance techniques and invasion of privacy are often spearheaded by the UK. I remember the CCTV cameras were everywhere long before other countries leveraged them at that scale.


Then great for the EU. God willing, y'all will roll back some of that stuff if she leaves.


Please show which law this is, I've never had to give my biometrics to anyone but the US government when visiting there.


All the UK passports and non-national resident ID cards are facial biometric.


> In the EU you must give up your biometrics (fingerprint) by law

Generic and incorrect statement.

Also, I'm not a UK citizen and I'm forced to give up my biometrics (face) whenever flying out of a UK airport. Or when flying into the US.


I think that says a lot about how much vested interest powerful people have in enforcing copyright law.


That hefty sentence commonly mentioned for Swartz was using the whale sushi number [1], not a number he actually had even a remote chance of getting [2] [3].

[1] https://www.popehat.com/2013/02/05/crime-whale-sushi-sentenc...

[2] https://www.popehat.com/2013/03/24/three-things-you-may-not-...

[3] http://volokh.com/2013/01/16/the-criminal-charges-against-aa...


A corporation worth multiple billions of dollars couldn't keep our information secure against one individual.

That is the message here.

IMHO, the "punishment" trajectory should aim toward Capital One. After all they are the ones who ultimately fucked up.

Frankly, "oh dear, my ex-employee, or someone 'trusted', was pissed off and we didn't think we did anything to piss them off" is not an admissible excuse.

Why anyone should weep for a multi-billion dollar company while crowing "throw the bitch in jail" for exposing their lacking security practices is beyond me.

Who is the criminal: a large mega-corp who could not keep their shit straight, or an individual who proved their security invalid and then told us?

Cry me a river...


> Can the punishment for crimes stop being absurd.

The absurd punishment was Swartz’s, and was 8 years ago. Are you saying that, out of fairness, the punishment for all future computer crimes should scale up to make this one tragic event seem more reasonable?


We don't really know enough yet. My read is that Capital One was horrifically negligent, and none of the data was actually resold or used...


> none of the data was actually resold or used...

Yet.


How long before owning a breach database requires a license in order to avoid a criminal charge? I have to imagine that access to breach info from other sources greatly reduces the work necessary to pull off another breach, as users typically reuse the same "highly secure password" across most, if not all, of their online and work accounts. All you need is one breach with weak password hashing (or no hashing or encryption at all) to hand intruders an electronic skeleton key. The more breaches occur, the more I think we'll see.

Just curious: if a prior breach, for example the Equifax breach, yields data that enables a future breach like Capital One's, can Equifax be held liable for damage to Capital One?


> The Equifax incident should have sparked a fire under the credit giants.

I get what the author is trying to say, but based on the entire remainder of this article, the large credit firms are doing exactly the right thing (for their shareholders) by not spending tons of money on security.


> the large credit firms are doing exactly the right thing (for their shareholders) by not spending tons of money on security.

Isn't this due to the fact that there are no serious penalties for losing customer data, aka regulation?


Yeah, as much as I would like to see a market-based solution to this, I'm not sure how it would work exactly.

Equifax seems to be the exception to the rule that most of the data lost in most of the breaches we hear about was given voluntarily; the customers are the ones getting screwed and they still willingly hand over their data to anyone who offers a small discount or even just a newsletter sign-up.

It seems like most people don't care about privacy, at least not enough to pay more for it.


"we" did nothing?

I'm annoyed at the use of first person plural pronouns in such articles. It's particularly obnoxious in a story about identity theft which, as other posters on this thread have pointed out, is a linguistic con-job banks pull on customers.


I have been part of 12 data breaches that I have been informed about in the last 5-ish years. At this point I read about it and then move on. I have a sick feeling credit monitoring with insurance is going to become the norm, just like house and car insurance. I am not sure why Progressive and State Farm don't have it on your policy yet (maybe they do).

I wonder if these companies are like one of the places I work at and have checkbox cybersecurity as opposed to real cybersecurity... If you have ever had to ask your cybersecurity department "you really want me to loosen the permissions on those files so it will pass the scan?", then you know what checkbox security is...


What exactly can "we" do other than the government creating some financial penalty for this?

I soundly believe that in most of these cases some line level security person told middle management there might be an issue, but it wasn't dealt with because of time/money considerations ("Just Ship It") or there are many legacy things that never received a proper audit/fix schedule because of lack of people/experts to even see the issue.

One time financial penalties won't fix that, because I'd bet it might be cheaper to pay it. Criminally penalizing executives may not fix it, because some of these decisions likely never made their desk.


What guarantees does Amazon sell to AWS clients regarding the security of their data?


From all reports, it was caused by an internal Capital One employee and was allowed due to misconfiguration on Capital One’s side.

AWS preaches the “Shared Security Model” and emphasizes what it is responsible for and what you are responsible for.


Lots! Tons and Tons and Tons! S3 is super secure and CAN NOT be hacked when properly configured and used according to our standard!

You got hacked? You must have configured it wrong because we already told you it was unhackable; Good luck proving it was our fault not yours.


> Good luck proving it was our fault not yours.

Seems like it would be incredibly easy to prove that an S3 bucket was misconfigured in such a way that the data was publicly accessible. In fact this has been the case in the recent high-profile cases that I can recall.


The S3 bucket was not public.

The hacker got ephemeral keys by remotely exploiting the WAF. The WAF had no reason to have privileges to read from S3, that was a mistake.

I'm unclear whether the data in the bucket was encrypted at rest, but I guess if you get keys that can read it, it's a moot point.


Can you actually substantiate a S3 security problem that wasn't user error? Because I've yet to hear of one.


not sure if serious


They promise not to fuck up their side of things.

They make no promises about your side.


A lot. They probably set the bucket as world-readable with no encryption, which AWS warns you about; their engineer just ignored the warning.


Up front, I worked at AWS on the CloudHSM team, don't work at AWS any more. I am not a lawyer. This is all my opinion from a very brief analysis.

The customer agreement[4] states:

> 3.1 AWS Security. Without limiting Section 10 or your obligations under Section 4.2, we will implement reasonable and appropriate measures designed to help you secure Your Content against accidental or unlawful loss, access or disclosure.

> 10. Disclaimers. THE SERVICE OFFERINGS ARE PROVIDED “AS IS.”

It goes on with the usual shouty disclaimers.

The service terms[3] state (I'm specifically citing IAM here because it's how you handle a ton of authentication):

> 19.3 You are responsible for maintaining the secrecy and security of the User Credentials (other than any key that we expressly permit you to use publicly). You are solely responsible, and we have no liability, for any activities that occur under the User Credentials, regardless of whether such activities are undertaken by you, your employees, agents, subcontractors or customers, or any other third party.

My read on it:

AWS generally gives you tools to secure your data, and it's largely up to you how you want to do it.

The docs state that if you set IAM to allow or deny access to a service to an authenticated entity, then IAM will do that. If you set up a VPC and shut off a port through a security group, it's going to be locked down.

AWS has a slew of services, and these things can interact in surprising ways. So reading the permissions, you're often wondering[1], "what permissions do I need" and it's not always clear what a permission grants.

To summarize, then, the AWS documentation at a low level gives you some very technical instructions, and at a high level will generally recommend best practices[2].

I will say that IAM is good stuff and works, the issue is the sheer complexity of configuring it all, and a few footguns thrown in for good measure. But AWS should look at adding "security agreements" similar to their Service Level Agreements that guarantee availability.

[1]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_perm...

[2]: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practi...

[3]: https://aws.amazon.com/service-terms/

[4]: https://aws.amazon.com/agreement/
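To ground the "footguns" point: the breach reportedly hinged on a role with far broader S3 rights than its job required. A least-privilege policy for a role that genuinely needs S3 looks more like the sketch below (the bucket name is hypothetical, and the toy evaluator is mine; it ignores Deny statements, conditions, and most of real IAM's semantics):

```python
import fnmatch

# Hypothetical least-privilege policy: one action, one bucket. Contrast with
# a role that can list buckets and read objects across the whole account.
SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyOneBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-assets/*",
        }
    ],
}

def allows(policy: dict, action: str, resource: str) -> bool:
    # Toy check: does any Allow statement's action/resource pattern cover
    # the request? (Real IAM evaluation is far richer than this.)
    return any(
        stmt["Effect"] == "Allow"
        and any(fnmatch.fnmatch(action, pat) for pat in stmt["Action"])
        and fnmatch.fnmatch(resource, stmt["Resource"])
        for stmt in policy["Statement"]
    )
```

Permission boundaries or SCPs can enforce the same ceiling org-wide, so one mis-scoped role can't reach every bucket.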


I'd really like to see someone target federal politicians. Get their data and just destroy their finances. Now that would get their attention and result in rapid changes.


Thoughts and prayers


Has anyone seen any material on what to do if you have Capital One accounts, besides updating passwords? My credit is already frozen.


"we did nothing"

Who?

These companies get sued, that is a reaction.

Congress? Well, if you make it twice as illegal, I'm sure that will make it stop /s.

No one wants to be hacked, let's not pretend there is no fallout from ignoring security.


Oh come on, do you think these companies are doing everything to protect our data? Why the hell are our credit card applications hosted online anywhere after they've been processed anyway? And for 14 years?

No mate, making it doubly illegal (such as actually fining and imprisoning leadership for the negligence of choosing forgiveness over permission) would undoubtedly help. There are plenty of ways to keep our data secure, and they didn't do enough.


They probably have approved vendors for their data, and SOPs in their DCA with information on how to configure it for the cloud, and there are signatures and so on. But it's unclear whether they took into account rogue internal threats.

Be it on S3 or on your private assets, without proper controls for internal threats these things are likely to happen.


After 14 days it should be encrypted independently of any AWS encryption, as someone mentioned in the other Capital One thread, and the key should not be stored in an S3 bucket or some other obvious service that can be easily compromised.

Keeping all your eggs in one basket (the cloud) is never a good idea. If you have to do it, try to give yourself as much control as possible by encrypting sensitive data that no longer needs to be accessed.


More practical would be the removal of the Board of Directors and the CEO of the corporation, with the forfeiture of any unpaid future compensation and the ineligibility to serve as a director or officer of any other corporation. They are responsible for setting the policies and providing the resources to secure the corporation's data, and they have failed.


This line of thinking doesn't work. I want to agree with you, but I can't. An executive could do all the right things by promoting and pushing for security in their organisation and still be hacked. Should he/she face jail now?


The OP called out "negligence," which would leave some wiggle room for the executive in your scenario. Promoting and spending directly on security would be proof that you're at least making a conscientious effort.


Problem is, executives don't understand those things. Of course it's very simple to point a finger at them, but they rarely are tech savvy, and they are there to run the company, not micromanage every decision every department makes.


Hiring people who don't know what they're doing isn't a reasonable excuse: take Susan Mauldin, the ex-CSO of Equifax, with a bachelor's degree in music and no technical or security-related education or training.


Then they had better start hiring replacements soon..


I'm fine for a judge and jury to decide what is and isn't actually criminal conduct and whether the EO was negligent in protecting customer data. It needs to be explicitly illegal first, though.


Utopian solutions aren't really helpful as ideas.

It would be great if companies had unlimited resources to spend on security and didn't screw their customers with fees.

Let me remind you, even Apple has had their phones hacked. More laws won't make mistakes go away.


It doesn't take unlimited resources to destroy sensitive transient information past its time. The opposite really.


I am not sure this complies with KYC laws.



