Knowledge of an SSN and other public information should never be enough to authenticate any person. That means no credit issued based on that, no tax returns filed or viewed based on that, no checks sent based on that.
The solution is not better security with credit companies. The solution is some form of actual authentication. Preferably done by an organization dedicated to that (public would be best, private could work); not outsourced to organizations that are mostly geared towards determining credit worthiness.
For the public system, assign every participant a true unique identifier, rather than the SSN, which the card itself explicitly states should not be used as such.
For those citizens that do not want to register in this way, allow for physical authentication at physical locations.
In Europe this is far less of an issue since our population registration is a lot more comprehensive.
* The PII wasn't stolen from me, it was negligently exposed by services I contract with (and pay!) and others that I have no formal relationship with (like Equifax).
* It wasn't defrauding me, it was defrauding services I contract with (and others) who failed to verify my identity.
And yet somehow I'm obligated to do the cleanup myself.
In that the attacker is creatively operating the system, rather than really possessing magic knowledge.
The setup for these breaches is entirely due to companies being able to require your SSN for whatever purpose, and indefinitely store it basically however they'd like. Either the government should have never assigned a mandatory unique identifier to every individual, or there should have been strict laws about what purposes it could be requested/used for, how it could be stored, and steep statutory liability for screwing those up.
But the political attitude in the US is to have the government do the bare minimum and private companies will take up the charge. However for many subjects the resulting mix is the worst of both worlds - given the tiniest hook into governmental power, the private sector eagerly implements totalitarian solutions for which there is no opting out.
Presently, the naive legal mandates of SSNs, driver's license numbers, and license plates are being heavily abused to enable pervasive corporate surveillance. These existing identifiers already make all-too-convenient keys for cross-linking every other ill-gotten datum on a person. The main thing that keeps every single business from demanding these identifiers is people's vague worry about just handing them out, due to their technical shortcomings. Imagine going to a grocery store and having to present your national electronic ID to get the sale prices, with no alternative.
> For the public system, assign every participant a true unique identifier, rather than the SSN, which the card itself explicitly states should not be used as such.
This will work for a time, but what happens when the next breach occurs? How do people renew their UUIDs? Expire compromised ones?
> For those citizens that do not want to register in this way, allow for physical authentication at physical locations.
Physical authentication probably means fingerprints, face data, correct? These are already compromised. Worse yet, they cannot be changed.
CCTV cameras are everywhere, and getting better resolution every day. Face authentication can be easily duplicated: some of the early versions of Face ID (by Apple) were broken by 3-D printing a mask. Furthermore, some organizations are already compiling lists of "face data" that can be used to fool sensors and other biometric tools. By the time "face readers" are widespread, hackers will already have large pools of face data to use to hack into these systems.
There are other cases where fingerprints have been reproduced using a 3-D printer and have broken the security of smartphones. What's to say whatever government-issued terminal won't be broken in a similar way? Furthermore, it's not reasonable to expect people to guard their fingerprints: every glass they drink from at a restaurant will carry them. I don't expect to shed my SSN whenever I order a pint at my favorite pub.
SSNs are a poor form of authentication because they're ostensibly secret but re-used everywhere. It's just like a password in that regard: no password is secure against reuse, no matter how strong it is on paper.
A minimal change would be to allow/encourage single-use SSN-equivalents, generated on demand by a central authority. That is, someone would give a different "SSN" to their employer, their bank, the IRS, and their cable company (for credit check).
That still provides a point of vulnerability, but it is far better than the current system, where a single credit application form is a global compromise. If a single-use number is compromised, it could easily be revoked without affecting the person otherwise. Likewise, numbers could easily be generated with short expiry dates to make use of stored credentials impossible.
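A minimal sketch of what such a central authority could look like. Everything here (class and method names, the token format, the default expiry) is an assumption for illustration, not part of any real proposal:

```python
import secrets
import time

class IdentifierAuthority:
    """Toy model of a central authority issuing per-relying-party 'SSNs'."""

    def __init__(self):
        # token -> (person_id, relying_party, expires_at)
        self._tokens = {}

    def issue(self, person_id, relying_party, ttl_seconds=30 * 86400):
        token = secrets.token_hex(8)  # unguessable, unlike a 9-digit SSN
        self._tokens[token] = (person_id, relying_party, time.time() + ttl_seconds)
        return token

    def verify(self, token, relying_party):
        entry = self._tokens.get(token)
        if entry is None:
            return None
        person_id, issued_to, expires_at = entry
        # A number leaked from one relying party is useless at any other,
        # and a breach is contained by revoking only the affected tokens.
        if issued_to != relying_party or time.time() > expires_at:
            return None
        return person_id

    def revoke(self, token):
        self._tokens.pop(token, None)
```

A bank and an employer would each hold a different token for the same person; compromising the bank's copy reveals nothing usable anywhere else.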
The UUID shouldn't be assumed to be private information - authentication should be built around the assumption that this identifier is a public identifier - like a name, but guaranteed to be unique.
> Physical authentication probably means fingerprints, face data, correct? These are already compromised. Worse yet, they cannot be changed.
Even if those are compromised, that doesn't mean it has to be easy to impersonate you. The solution may be low-tech: you may have to physically present yourself to a human who assesses whether you are indeed who you say you are before opening an account. A higher-tech physical authentication solution might require something akin to chip-and-PIN or a (revocable) token generator à la YubiKey.
edit: if the value of identity were elevated, then the physical security at these locations would be increased to the level of banks or cash-handling facilities, raising the cost of failed impersonation attempts to something similar to attempted cash heists. In fact, local phone shops should be barred/disincentivized from doing auth badly themselves and should outsource this function, just as they do with creditworthiness.
In that case, we already have this today: At the state level, most citizens have a Drivers license or State ID, both of which have a unique ID. At the federal level, all US passports have a unique Passport Number. Granted, not all citizens have a passport, but that system is in place to grant citizens unique identifiers.
And yet we still have identity issues. So this is, at best, part of the solution.
> Even if those are compromised, that doesn't mean it has to be easy to impersonate you. The solution may be low-tech: you may have to physically present yourself to a human who assesses whether you are indeed who you say you are before opening an account. A higher-tech physical authentication solution might require something akin to chip-and-PIN or a (revocable) token generator à la YubiKey.
This is a great idea. I believe France's healthcare system requires every citizen to have a card, which uses chip-and-PIN tech to authenticate the person with their doctor. This could be used for online services or over the phone too.
What the US needs is a branch specifically for administering these "identity cards". The Social Security Administration could be rebranded as an "Identity Administration" or something, and it would manage the distribution and revocation/recycling of these national ID cards.
But for some reason Americans get spooked when you say the words "National ID". Something about how "socialism is bad" and all that.
Do you really want to mandate that?
Valuing someone's personal information at $100,000 per person and then fining the snot out of companies that lose it seems like a much more "market driven" solution.
It also means that companies will work really hard to minimize any personal information at all--which is really what you want in the first place.
> This will work for a time, but what happens when the next breach occurs? How do people renew their UUIDs? Expire compromised ones?
The unique identifier would be an identifier only, not something for authentication. But before you can authenticate an identity, you need a way to identify it; hence I consider that a base requirement. Then we need to build a system of authentication points around this identifier. Heck, if SSNs were unique, just repurposing those for the ID would work just fine.
No, I mean going to a physical desk and authenticating however you already can do this. This would be something like a valid government-issued ID and a birth certificate. Essentially, whatever is needed to get a passport, have the same system here. Because that is essentially your weakest link already. I added this option to appease the American fear of government tracking.
As for a proposal to fixing it, I would point to two systems.
* The Estonian system, where every citizen is given an ID-card that is also a smart-card with a public key.
* The Dutch system, which I am most familiar with.
Let me expand on how the Dutch system (called DigiD) works. I should note the system has flaws, and there are valid criticisms; however, it hasn't had any big failures. The system works as follows:
Anyone can apply for an account, at which point the government will mail you instructions for setting up simple username-password based authentication. Key to this system is the Basisregistratie Personen (base register of persons): a national database, maintained by the municipalities, of all legal inhabitants and some information about them. Most importantly for this system, an address. This is what makes it possible for the government to send mail to a citizen.
To my mind, the above system of mail could/should be replaced by a visit to the municipal administration, where your ID-card is verified. (Notably, everyone over the age of 14 needs a valid government-issued ID)
Obviously, implementing something like this in the US would be hard, mostly because mandated ID cards and a government database of addresses would not be politically acceptable. I don't know the details of the Estonian system; maybe that would require less invasive tracking of citizens.
I'm guessing most European countries have similar systems of government-based authentication.
Really though, these systems start with knowing who your citizens are and being able to identify them. And should this not be a basic requirement of a government?
After validating your ID, you can use the app to do 2FA with most major services in the country, including almost every bank and financial institution.
A program like this could go a long way in the US to help cut down on the issue you describe.
How do you make this proposed new unique identifier more secure than the (admittedly very unsecure) SSNs?
1) Explicitly not meant as an identifier
2) Not unique
If not for 2, then the SSN could simply be repurposed to be this identifier.
You should worry about lightning strikes and like solar flares disrupting your business before you worry about cyber security. Why should any enterprise risk manager waste their time on an issue that has no consequences?
July 1st, 2019
"Taxpayer First Act
This bill revises provisions relating to the Internal Revenue Service (IRS), its customer service, enforcement procedures, cybersecurity and identity protection, management of information technology, and use of electronic systems."
Just because actual legislation is too boring for cable news and NPR doesn't mean it's not happening.
You can find all the bills that passed into law here: https://www.congress.gov/advanced-search/legislation?congres...
PS: I believe the 9/11 first responders bill would be more recent, but I figured people would take issue with that as a celebrity bill.
At least the 9/11 first responders bill was about allocating resources to do something, but the main reason it doesn’t serve your point is the fact it stood for 18 years as an example of our government’s incompetence and inability to do basic, non controversial things.
Reducing everything to "creates a new department or abolishes an existing one" or "does nothing and just keeps the lights on" isn't a useful rubric.
Good law is accreted over time, in the same way bulletproof code is.
US tax law has been dysfunctional and getting worse for decades. To me that says there are issues with how the system is designed, and meaningful progress beyond "keeping the lights on" will require restructuring the law and the agency, not adding a new office here and giving taxpayers more notifications there. Those types of measures, as you correctly point out, have to be looked at as part of a larger plan for the organization that meaningfully addresses a problem, not in isolation. Except here we have a collection of measures that doesn't coherently address a problem, so there is no way left to look at them except in isolation.
The law has many stated goals, as set forth in the quote I posted.
Why is your assumption that the social conservatives are the ones that have to compromise? Shouldn't both sides be compromising?
Let's take freedom of speech. One side believes it's unlimited, and the other doesn't. How do you compromise on that? You can't. How do you solve that problem? You don't, it's not the government's job to solve all of society's problems. That's the point most people don't understand: quit trying to control the behavior of others.
Though I kind of agree that with Equifax and Facebook there weren't many consequences, to me they fall into the (unfortunately) too-big-to-fail category, i.e. most Facebook members don't care, banks still need the credit score, and there isn't much competition in that sector.
They're absolutely not too big to fail. Equifax or FB? Other than the unfortunate employees and their families, does anyone give a shit? No. No one suffers. To the contrary, thinning sick herd members helpfully invigorates the health of the surviving individuals.
What these organizations are is in too many pockets, which makes them too big to jail. It's a trope but it's a fact, Jack.
Edits for herd reference and reduced snark
And yeah, ditto Equifax. There are two other credit agencies already. They're all same-ish as far as I can tell. Pretty sure there are only three so they can pretend there's competition, not for any actual purpose. Again, it might be an opening for a new competitor anyway, while causing minimal short-term harm.
I think I'm just more angry about these "too big to fail" companies that are apparently immune from any oversight or consequence today. We should really just convert the FTC building into a homeless shelter and fire all its employees; they have done absolutely nothing for decades, so this would be a great way to actually use the space effectively.
In a small market with few potential customers, reputation loss could kill you. OTOH if you sell to general population, then even a scandal involving you being featured on national TV won't hurt your company directly (related lawsuits might, though).
Yeah, i fully agree...But, what about orgs like equifax where you and I are not really given a choice about their role in our lives? Even before all the shenanigans around equifax, would i have willingly allowed equifax into my life? Heck no! So, while i 100% agree that we should vote with our wallets/purses, sometimes that isn't enough...and that makes me a sad panda. </sigh>
100m people losing a minute of their life to handling this (let's call it nanodeath) is 190 continuous years of wasted time that could have been spent with family, napping, reading, or other pleasurable life things.
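The arithmetic behind that figure, spelled out:

```python
# 100 million people each losing one minute, converted to continuous years.
minutes_lost = 100_000_000
minutes_per_year = 60 * 24 * 365
years = minutes_lost / minutes_per_year
print(round(years))  # prints 190
```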
That’s the kind of math that, say, an insurer, should do to determine whether an intervention should be paid for.
But usually you just get called a monster for doing that kind of math.
For Capital One, or other companies with which we directly do business, I think there's likely to be more direct ramifications - namely people refusing to do business with them. Letting anyone access your customers' data is a good way to lose those customers.
Any single IT system is hackable and will eventually be hacked. The probability that an adversary will be able to hack multiple, independent systems is much lower though, and would in many cases prevent data breaches like this one.
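A back-of-the-envelope illustration of that point (the probabilities are made up for the example):

```python
def breach_probability(p, n):
    """Chance an attacker breaches all n independent layers,
    each of which falls with probability p."""
    return p ** n

# One layer breached with 5% probability vs. three independent layers.
print(breach_probability(0.05, 1))           # prints 0.05
print(round(breach_probability(0.05, 3), 6)) # prints 0.000125
```

The caveat is in the word "independent": layers that share credentials, admins, or software are correlated, so treat this as an upper-bound intuition, not a guarantee.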
If you, tech geek, learn enough to speak well to The Business, you have another challenge: the market is at a place where incentives matter. You can articulate, in the right language, the need for cleaning up the company security posture, but you can't articulate an incentive. User can't sue because the ToS says 'mediation;' There's no regulatory agency that will really threaten our profits - we can afford a $10MM fine when they get around to levying such after three years of investigation ... what's the incentive to spend half a million dollars this year on additional employees and licenses when that's money destined for high-level bonuses this year, and by the time that fine arrives, this executive team will have moved on?
>In my opinion organizations still don't rely enough on "defense in depth" techniques...
This flies in the face of 'easy money.' 'Easy' meaning we, The Business, comprehend the purpose of a particular budget line item. Spending money is bad. But spending money in some places is a necessary evil, and only acceptable when it is in a place that is directly reflected in the price to the customer. Acquiring, manufacturing, assembling parts in the final product? Fine. Marketing to acquire a customer? Sure. Attaining regulatory approvals? Bah, ok. After we've articulated the costs and padded an acceptable margin, the only thing left is the self-congratulatory bonuses for executives!
Meanwhile, engineers possessing all of the above traits as well as hard skills are told to develop their other soft skills (i.e. positive attitude, courtesy, and professionalism) to make themselves more palatable to the inept.
In a vicious cycle, the feeling that everything is focused around appeasing those that contribute the least is enough to erode many engineers' soft skills.
Enter the dead sea.
Sorry, just to nitpick: creating anonymized data isn't that easy, and I'm worried something like that would get misused, like some password breaches ("the passwords were encrypted with MD5, they're still secure"). A company might store all this customer data without consideration because it thinks it has anonymized something that isn't actually anonymous.
Here's a blog post I made on the topic: https://gravitational.com/blog/hashing-for-anonymization/
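A quick illustration of the "hashed but not anonymous" trap described above: the SSN keyspace is only a billion values, so an unsalted hash of an SSN can be reversed by exhaustive search. The example value and the narrowed search window here are purely for the demo:

```python
import hashlib

def pseudonymize(ssn):
    # Naive "anonymization": an unsalted MD5 of the SSN.
    return hashlib.md5(ssn.encode()).hexdigest()

leaked = pseudonymize("078051120")  # example value only

def deanonymize(target_hash):
    # A real attacker scans all 10**9 candidates, which takes minutes
    # on commodity hardware; we narrow the window so the demo is instant.
    for n in range(78_051_000, 78_052_000):
        candidate = f"{n:09d}"
        if pseudonymize(candidate) == target_hash:
            return candidate
    return None

print(deanonymize(leaked))  # prints 078051120
```

Salting per record and throwing the salt away (or using a keyed hash with a guarded key) is what actually breaks this search, which is the point the linked post makes.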
Granted a misconfigured firewall is surprisingly close to data with no AuthZ/AuthN but the Equifax breach was an operation.
This should be punished, but the level of ignorance from both sides highlights just how immature the community is and how little concern we have for handling PII.
Thermodynamics: make the path of least resistance the more secure one. I feel laws get at that by following the money, which they seemingly tried to do with Equifax.
Surprisingly small fine. But on the other side, I have seen many enterprises with numerous processes/controls in place where it wasn't so easy to identify the security-through-obscurity that was going on.
There's a longer conversation to be had here.
Maybe this WAF wasn't the greatest software though. Simply buying something and squeezing it into your tech stack isn't enough. You have to know how it works or it could be the thing that gives a foothold to an attacker.
The talent pool is woefully underfilled.
As for mitigation, does S3 encryption happen at the user access level (per GET) or at the S3 system level? Basically, does each GET call pass in the decryption key? If so, an attacker needs another piece of information. More encryption wouldn't hurt here. This goes for Equifax too.
S3 provides server side encryption that encrypts the files at rest. This is done entirely on the server side and does not require any additional keys from the client. However, it is possible to do your own file encryption using your own keys ahead of time, but I imagine that a very small subset of AWS users actually do that.
The way these hacks usually happen is that someone configures the bucket to enable public access, either to the entire bucket or certain files within it, and then someone stumbles upon the bucket's endpoint.
The mitigation is simply not to configure your buckets to be publicly available. That used to be relatively more difficult than it sounds because of a confusing S3 UI, but AWS has recently pushed a number of changes that try to address this issue, including putting a very clear "Public" label next to buckets with these settings, sending emails to users with public buckets, and providing configurations that allow account owners to prevent users from setting buckets to public access.
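For reference, those account-level protections can also be applied programmatically. A sketch (the bucket name is hypothetical, and the commented-out boto3 call needs real AWS credentials to run):

```python
def public_access_block_config():
    # The four S3 "Block Public Access" flags AWS added for exactly
    # this failure mode.
    return {
        "BlockPublicAcls": True,        # reject requests that add public ACLs
        "IgnorePublicAcls": True,       # treat existing public ACLs as private
        "BlockPublicPolicy": True,      # reject bucket policies granting public access
        "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
    }

# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="example-bucket",
#     PublicAccessBlockConfiguration=public_access_block_config(),
# )
```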
Some articles are referencing a WAF configuration issue, in which case the above may not fully apply here. The commenter below me mentioned the use of temporary AWS access keys, which can be obtained from an internal AWS service known as the metadata endpoint. Typically this endpoint is only accessible from EC2 nodes (or services that rely on EC2 like Lambda or CodeBuild) and allows AWS to deliver short-lived credentials to the node that can rotate frequently. If this truly was the issue, then it's possible a WAF issue allowed the remote attacker to query the internal endpoint from an external source and obtain credentials that were previously only available to the node itself. From there, the attacker could make AWS API calls to the S3 buckets and download the files.
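That class of bug, server-side request forgery, is why any component that fetches attacker-influenced URLs should refuse internal addresses. A minimal sketch of such a check (deliberately incomplete; real code must also resolve hostnames and re-validate the resolved IP):

```python
import ipaddress
from urllib.parse import urlparse

def is_forbidden_target(url):
    """Reject URLs pointing at loopback, private, or link-local addresses,
    which includes 169.254.169.254, the EC2 instance metadata endpoint."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not a literal IP: real code must resolve it via DNS
        # and re-check the resolved address before fetching.
        return False
    return ip.is_loopback or ip.is_private or ip.is_link_local

print(is_forbidden_target("http://169.254.169.254/latest/meta-data/"))  # True
print(is_forbidden_target("http://8.8.8.8/"))                           # False
```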
But then it's literally a configuration issue, right? WAF is just Rule -> Block/Allow. It doesn't proxy traffic or anything, it just attaches to a load balancer, API Gateway or CloudFront.
More puzzling, what is the WAF-Role they're talking about? WAF doesn't use IAM roles, so is this just a role they used to configure the WAF (and also had S3 permissions?)
They should've been forced to cover another company's credit monitoring solution, preferably a direct competitor's.
Lesson learned: always check the forgot password/trouble logging in feature on sites where security matters.
Every person exposed in this hack is a victim to be counted, not to mention the numerous indirectly affected people at small businesses. By comparison, Aaron Swartz downloaded some academic papers, harming only a school.
Can the punishment for crimes stop being absurd? I am only reserving further outrage because those are the current charges against the hacker. We know more can pile up as they learn more.
Really though, if that kind of PII can give you access to ruin someone's financial life, then it should be made harder to get credit cards. If you don't have a driver's license and other things to show, you shouldn't get a credit card.
So after coming out of prison and finding a job, which is likely to be hard for a felon, and may not be highly paid, what wages they make will be garnished to pay the criminal/civil financial judgements.
For most people who’ve worked in technology, the potential punishment here would qualify as “destroying your life”. Enormous impact.
If they had fallen victim to some undisclosed zero-day, I’d feel bad for them - but in this case it appears to be misconfigured VPC SGs. Their error. Inadequate processes.
We are also all labouring under the assumption that she was the only person to make off with this data.
I’m willing to bet that she’s just the first one daft enough to talk about it.
If the system was designed by humans, it can be hacked.
Which means she should've been held personally responsible/impeached over what she did to Swartz. But instead Obama protected her, just like he did with all of his other government criminals, as well as Bush administration's criminals, too.
"We need to move forward." and "No abuses were found." and other such BS needs to end when it comes to government criminals. No wonder more riots are popping up and the hatred towards authorities is increasing every year.
In Europe everyone has to possess a personal ID card or a proper passport, and it is required to be presented to the bank agent (or a verification service). Yes, we do have some problems with faked ID cards and lately by fraudulent video identification, but still - not remotely comparable to the laughable "security" in the US.
That comparison is wrong on multiple levels:
1) you may be automatically at fault already when not doing what the government is requiring you to do for public safety reasons (for example, your car has a rusty brake pipe, you do not do the yearly mandatory checkup, pipe explodes, car crashes into something)
2) contrary to a feed to the police, showing your federal ID card at a bank or car dealership when applying for a loan does not cause the government to know you were at the bank or the car dealership. Therefore it is not surveillance.
Aren't we already at that point?
That's abhorrent because they are targeted, which forces them to carry it.
Isn't a driver's license a government identification anyway? Sure, no one is forced to have one, but that wouldn't change much if everyone had it.
I'm not arguing about forcing carrying it either. Just about whether it would be bad if it existed.
The EU was never comparable to, say, America in terms of unity. Europe has too much history for it to work perfectly: everyone is living on what was, at some point, someone else's land.
You could say that about the UK - which lost a significant chunk last century and could well lose rather more this century.
At what age do you become eligible for the electoral roll? At least in the states most people register to vote before they leave the house of their parents.
Generic and incorrect statement
Also, I'm not a UK citizen, and I'm forced to give up my biometrics (my face) whenever flying out of a UK airport. Or when flying into the US.
That is the message here.
IMHO, the "punishment" trajectory should aim toward Capital One. After all they are the ones who ultimately fucked up.
Frankly, "oh dear, an ex-employee, or someone 'trusted' who was pissed off even though we didn't think we'd done anything to piss them off" is not an admissible excuse.
Why anyone should weep for a multi-billion dollar company while crowing "throw the bitch in jail" for exposing their lacking security practices is beyond me.
Who is the criminal: the large mega-corp that could not keep its shit straight, or the individual who proved their security perfectly invalid, and then told us?
Cry me a river...
The absurd punishment was Swartz’s, and was 8 years ago. Are you saying that, out of fairness, the punishment for all future computer crimes should scale up to make this one tragic event seem more reasonable?
Just curious: if a prior breach, for example the Equifax breach, yields data that enables a future breach like Capital One's, can Equifax be held liable for damage to Capital One?
I get what the author is trying to say, but based on the entire remainder of this article, the large credit firms are doing exactly the right thing (for their shareholders) by not spending tons of money on security.
Isn't this due to the fact that there are no serious penalties for losing customer data, aka regulation?
Equifax seems to be the exception to the rule that most of the data lost in most of the breaches we hear about was given voluntarily; the customers are the ones getting screwed and they still willingly hand over their data to anyone who offers a small discount or even just a newsletter sign-up.
It seems like most people don't care about privacy, at least not enough to pay more for it.
I'm annoyed at the use of first person plural pronouns in such articles. It's particularly obnoxious in a story about identity theft which, as other posters on this thread have pointed out, is a linguistic con-job banks pull on customers.
I wonder if these companies are like one of the places I work at and have checkbox cybersecurity as opposed to real cybersecurity. If you have ever had to ask your cybersecurity department "you really want me to loosen the permissions on those files so it will pass the scan?", then you know what checkbox security is.
I firmly believe that in most of these cases, some line-level security person told middle management there might be an issue, but it wasn't dealt with because of time/money considerations ("Just Ship It"), or there are many legacy things that never received a proper audit/fix schedule because of a lack of people/experts to even see the issue.
One time financial penalties won't fix that, because I'd bet it might be cheaper to pay it. Criminally penalizing executives may not fix it, because some of these decisions likely never made their desk.
AWS preaches the “Shared Security Model” and emphasizes what it is responsible for and what you are responsible for.
You got hacked? You must have configured it wrong, because we already told you it was unhackable. Good luck proving it was our fault and not yours.
Seems like it would be incredibly easy to prove that an S3 bucket was misconfigured in such a way that the data was publicly accessible. In fact this has been the case in the recent high-profile cases that I can recall.
The hacker got ephemeral keys by remotely exploiting the WAF. The WAF had no reason to have privileges to read from S3, that was a mistake.
I'm unclear on whether the data in the bucket was encrypted at rest, but I guess if you can get the keys to read it, that's a moot point.
They make no promises about your side.
The customer agreement states:
> 3.1 AWS Security. Without limiting Section 10 or your obligations under Section 4.2, we will implement reasonable and appropriate measures designed to help you secure Your Content against accidental or unlawful loss, access or disclosure.
> 10. Disclaimers. THE SERVICE OFFERINGS ARE PROVIDED “AS IS.”
It goes on with the usual shouty disclaimers.
The service terms state (I'm specifically citing IAM here because it's how you handle a ton of authentication):
> 19.3 You are responsible for maintaining the secrecy and security of the User Credentials (other than any key that we expressly permit you to use publicly). You are solely responsible, and we have no liability, for any activities that occur under the User Credentials, regardless of whether such activities are undertaken by you, your employees, agents, subcontractors or customers, or any other third party.
My read on it:
AWS generally gives you tools to secure your data, and it's largely up to you how you want to do it.
The docs state that if you set IAM to allow or deny access to a service to an authenticated entity, then IAM will do that. If you set up a VPC and shut off a port through a security group, it's going to be locked down.
AWS has a slew of services, and these things can interact in surprising ways. So reading the permissions, you're often wondering, "what permissions do I need" and it's not always clear what a permission grants.
To summarize, then, the AWS documentation at a low level gives you some very technical instructions, and at a high level will generally recommend best practices.
I will say that IAM is good stuff and works, the issue is the sheer complexity of configuring it all, and a few footguns thrown in for good measure. But AWS should look at adding "security agreements" similar to their Service Level Agreements that guarantee availability.
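On the footgun point: much of the risk is in roles granted broader access than the component needs. A hypothetical least-privilege policy for something like the "WAF-role" discussed above might look as follows (the bucket name and prefix are made up for illustration):

```python
def least_privilege_policy(bucket):
    # Grant only the single action on the single prefix the component
    # demonstrably needs: no s3:List*, no s3:*, no bucket-wide access.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyOneKnownPrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": ["arn:aws:s3:::{}/edge-config/*".format(bucket)],
            }
        ],
    }

print(least_privilege_policy("example-bucket")["Statement"][0]["Action"])
# prints ['s3:GetObject']
```

A role scoped like this would have been useless for bulk-downloading customer data, even with stolen ephemeral keys.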
These companies get sued, that is a reaction.
Congress? Well if you make a law twice as illegal, I'm sure that will make it stop /s.
No one wants to be hacked, let's not pretend there is no fallout from ignoring security.
No mate, making it doubly illegal (such as actually fining and imprisoning the negligent leadership that chooses forgiveness over permission) would undoubtedly help. There are plenty of ways to keep our data secure, and they didn't do enough.
Be this on S3 or on your private assets, without proper controls for internal threats these things have a likelihood to happen.
Keeping all your eggs in one basket (the cloud) is never a good idea. If you have to do it, try to give yourself as much control over sensitive data as possible, e.g. by encrypting data that no longer needs to be accessed.
It would be great if companies had unlimited resources to spend on security and didn't screw their customers with fees.
Let me remind you, even Apple had their phones hacked. More laws won't make mistakes go away.