What I really wanted to see was some discussion about how vulnerable the entire US identity system is in the first place. The data is mostly valuable to hackers because there's an assumption that anyone with a birthday, social security number, etc. is the person they claim to be.
A verified twitter account shouldn't be more secure than the best the government can offer. Governments need to step up with a modern identity system. Having one secret social security number for life is ridiculous. Having to do financial transactions in person at a bank so you can present your driver's license and sign with a pen is ridiculous.
You shouldn't need to use a social security number directly. There should be a token system where you authorize companies to reference your data. Just like using twitter to sign into another service. You verify that you're actually the owner of that identity.
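The token idea above can be sketched in a few lines. Everything here is hypothetical (the provider, the key handling, and the field names are all illustrative, not any real system); it's a minimal stdlib sketch of scoped, expiring identity tokens signed by the issuer, so a company stores a revocable reference instead of the SSN itself:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical identity provider's signing key. In a real system this would
# live in government-controlled infrastructure (e.g. an HSM), not in code.
PROVIDER_KEY = b"demo-signing-key"

def issue_token(citizen_id: str, audience: str, ttl_seconds: int = 3600) -> str:
    """Issue a scoped, expiring token referencing an identity record.

    The relying party (e.g. a bank) stores this token, never the SSN."""
    payload = {
        "sub": citizen_id,    # opaque reference, not the SSN itself
        "aud": audience,      # only this company may use the token
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str, audience: str):
    """Return the payload if authentic, unexpired, and scoped to this
    audience; otherwise None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["aud"] != audience or payload["exp"] < time.time():
        return None
    return payload
```

The key property: a token leaked from one company is useless to another (wrong audience) and goes stale on its own (expiry), unlike a lifetime SSN.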
The US needs to learn that social security numbers were never meant to be used as identifiers. It's awful. Just fix the real problem.
The Real ID act was aimed at making driver's licenses more useful as authentication/proof of identity. Whether it constructively did that is a separate question.
Real ID was basically just adding citizenship info to your driver's license so you didn't need to also carry your passport. It didn't fix any financial fraud issues.
If you're born in a US hospital, you kind of are born with an SSN, through a program called Enumeration at Birth. It's not mandatory, many people aren't born in a hospital, and certainly a great many people aren't born in a US hospital.
Enumeration at Birth wasn't around when I was born; I believe my parents had my siblings and me enumerated at the same time, I'd guess when it was requested for school enrollment.
If you replace the signature with a CSR presented along with the driver's license/passport, the bank can then give the new account holder a certificate. The account holder can then use it for future online transactions.
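One way the certificate could then be used online is challenge-response login: the bank sends a fresh nonce and the customer proves possession of the enrolled key. This is only a sketch under a loud simplification: a real deployment would use the asymmetric X.509 key pair from the CSR, whereas here an HMAC shared secret stands in for the key material to keep the example stdlib-only:

```python
import hashlib
import hmac
import secrets

# Stand-in for the key material bound to the certificate issued at the bank.
# A real deployment would use the X.509 key pair from the customer's CSR;
# an HMAC shared secret is used here only to keep the sketch stdlib-only.
customer_key = secrets.token_bytes(32)
bank_records = {"acct-1001": customer_key}  # what the bank stores at enrollment

def bank_issue_challenge() -> bytes:
    """Fresh random nonce per login attempt, so responses can't be replayed."""
    return secrets.token_bytes(16)

def customer_sign(key: bytes, challenge: bytes) -> str:
    """Customer's device proves possession of the enrolled key."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def bank_verify(account: str, challenge: bytes, response: str) -> bool:
    """Bank checks the response against the key enrolled in person."""
    expected = hmac.new(bank_records[account], challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Unlike an SSN, nothing reusable crosses the wire: each login signs a one-time challenge, so an eavesdropper learns nothing they can replay.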
Also, social security numbers were never intended to be used as unique identifiers. There's no check digit or anything: add one to yours and you get someone else's number. The only thing hackers are doing is matching numbers with other identifiers. It's odd to me how much we rely on SSNs when they are so bad.
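That point can be shown concretely by contrast with credit card numbers, which embed a Luhn check digit: flip one digit of a card number and it becomes invalid, while "validating" an SSN is nothing more than a format check. A small illustration (the SSN shown is the famous 1938 Woolworth sample number, long since retired):

```python
def looks_like_ssn(s: str) -> bool:
    """All an SSN offers: a format check. There is no check digit,
    so any nine digits look equally 'valid'."""
    digits = s.replace("-", "")
    return len(digits) == 9 and digits.isdigit()

def luhn_valid(number: str) -> bool:
    """Credit card numbers, by contrast, embed a Luhn check digit:
    a single-digit change makes the number fail this check."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0
```

So `looks_like_ssn("078-05-1120")` and `looks_like_ssn("078-05-1121")` are both true (add one, get another plausible number), while the Luhn check rejects any off-by-one card number.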
In countries with modern identity solutions, this is not possible.
Belgium has a digital ID card. It's a smart card that contains fingerprint info. So you can put the card in a fingerprint reader to verify you match the card. Cryptographically secure so you can only get them from the government. Second, they have an online identification service and mobile app that can be used to verify your identity to banks online. There's also an under 12 kids version of the card that includes emergency contact info, etc.
Looking up info for Switzerland has a bit of a language barrier. It looks like it's still mostly just a national number similar to a SSN and national identification card. Documents from as far back as 2016 have their government planning a biometric id card and an electronic id. Some docs said they were aiming for a 2019 rollout but I couldn't find any recent updates.
Although I know having a unique compulsory personal identity number (as many countries in the world have done) will have these so-called "privacy advocates" in the U.S. screaming at the top of their lungs. So either deal with it or leave it.
> The former Countermeasures Manager at Equifax served in an acting and then permanent capacity from 2016 to 2017. He believed Equifax suffered a cybersecurity breach because of a “sophisticated” and “highly motivated” adversary. He added that, “if asset management was a perfect silver bullet then perhaps this may not have happened.” He told Subcommittee staff that he does not think the Countermeasures team could have done anything differently in response to the March 2017 vulnerability. He was as surprised as anyone that Equifax suffered a breach because of the “combination of the sophistication of the attack and the talent at Equifax. We had rock stars at Equifax who were de facto pillars in the field.” The former Countermeasures Manager believes the response to the vulnerability was “not only defensible, but justifiable.”
The real problem is that Equifax treats data which rightfully belongs to individuals as its own, and the solution is to give individuals control over their own data.
The real problem is that we use publicly available, non-secret data to establish identity. Instead, we need a secure, unique personal identification system, as well as tougher punishments for people that commit identity-based fraud and theft.
That is precisely the data that rightfully belongs to individuals. It should be much, much harder for companies to collect it and sell it.
"Data is a toxic asset." — Bruce Schneier
Pretending that a "secure" system can be devised which will protect that data is irresponsible, because it is impossible. The data will leak eventually.
When it does, under the current regime, the damage inflicted on individual victims is wildly disproportionate. Equifax is the analogue to a gross polluter dumping dioxin into a community's backyards. But getting damages proportionate to the harm out of Equifax is hopeless, because US laws do not acknowledge the toxicity of data.
The US needs something like the GDPR in order to protect us from grotesquely harmful institutions like the CRAs. We are in the early days of the internet, akin to the early days of the industrial revolution — before the harm that toxic chemical pollution causes was recognized and regulated.
That is precisely what it's not. Your existence isn't a secret. Things like your address, phone number, and legal information are freely available from a variety of sources. When you do business with banks or other lending institutions, you are explicitly granting them the right to exchange your payment information with other institutions and CRAs. When you seek or attain employment, you are told if the company makes use of some service like The Work Number for previous employment and salary tracking.
> Pretending that a "secure" system can be devised which will protect that data is irresponsible, because it is impossible. The data will leak eventually.
This is sadly true.
> grotesquely harmful institutions like the CRAs
Without CRAs, how would companies judge your credit worthiness? Access to credit and loans would be much more difficult, and would be much more expensive, without them. Or some similar mechanism to share that sort of information.
Again, the harm doesn't come from the CRAs. A CRA doesn't grant an identity thief access to your accounts, or open credit in your name. That's the banks, CC companies, utilities, etc. who will open accounts with insufficient proof of identity.
It's not the data that's toxic, it's the misuse of it. Allowing accounts to be opened or accessed with easily searched or guessed information is where the real problem is.
It’s not like we have a choice. Every such business participates. I suppose it’s possible to avoid credit reporting by not having a bank account or utilities or a job, but that’s not a very practical way for most people to live.
I don’t see the point in blaming banks instead of credit reporting agencies. They’re all part of one big system that works together. All parts of the system make it the way it is. It’s no more sensible to try to blame an individual component of the system than it is to debate whether to blame a murderer’s hands or his legs.
Think about how the process works. A criminal goes online and applies for a CC, using your personal information. The CC company goes to a CRA to check credit worthiness. The CRA says "Sure, mikeash is a responsible person who pays his debts on time". The CC company says "Great!" and hands the criminal an account in your name. It's the CC company that didn't do the due diligence to verify that it was really you.
However, since the Equifax hack and the resulting backlash, the CRAs have started offering much greater control to consumers. Equifax has a free service that lets you keep your information locked and actually notifies you in real time when a request is made, which you can block or allow as you wish. Basically 2FA for credit. If the other CRAs don't have that service, they soon will.
A third party I’ve never interacted with gives out my private information without first asking for my permission. This is a crucial step in a common pattern of fraud. If they asked me for permission before they gave out my private info, it would shut down this entire type of fraud. They are fully aware of all of this, and continue with business as usual anyway. How is any of this ok?
But they don't give out private information. They don't get anything but some aggregate data about how well you pay your bills, i.e. how many accounts are open, how many have been paid late in the last 30 days, etc.
> If they asked me for permission before they gave out my private info, it would shut down this entire type of fraud.
That is now possible with the two CRAs:
Experian and Innovis: I didn't see anything similar, but I'd be shocked if it wasn't imminent for them as well.
These locks just provide a way to shut up the loudest outlier complainers and a rationale for victim-blaming everyone else. ("You could have locked your credit, you know...") By what right do the CRAs demand individual consumers' time and protection money?
It is incredibly frustrating to see these feckless, disingenuous half-measures cited as progress. It only confirms to me the futility of collaborating on market-based solutions with market players whose interests would be harmed by meaningful solutions.
I wish it didn't come down to imposing regulations, because regulations truly harm market efficiency. But what choice are we left with?
But that's what we need. And what do our "leaders" think when people rant and rave without showing they even understand the problem?
Citizen: I need you to do something about identity theft!
Lawmaker: Yes, it's clear we need to get some regulation in this space.
C: I'm tired of getting turned down for loans, or having credit cards opened in my name! We need to shut down the CRAs!
L: ... Uh, you know that CRAs don't actually do that? It's your bank or credit card company that opens the account, they're the ones that need to verify identity to make sure that people are who they claim to be...
C: Don't try to defend them! They kept me from buying a car!
L: ... Thank you, citizen. I'll get right on it. Vote Quimby!
It’s possible with two of them. Great. Let me know when it’s mandatory for all of them.
Anyone warehousing my data needs to tell me what, how, why.
PS- You and rectang are doing yeoman work in this thread, thank you.
You already have all of this from CRAs:
* They are required by law to tell you what they have related to you every year, for free.
* These reports include the source of the information.
* The why is well known, as it's their business model.
Edit: not only that, but the law restricts who can actually request access to any of this data, under "permissible purpose".
First, I have to ask. I'm advocating default opt-out. For everything.
Second, they're always wrong and hard to correct. Been there, done that.
In this system, you'd never get credit. You'd apply for a loan or CC, they'd ask you for permission to check with a CRA and share your info, you'd say no, and they'd show you the door.
> Second, they're always wrong and hard to correct. Been there, done that.
No, they're not. Sometimes they're wrong, either because bad info was reported, or there's a problem at the CRA. Sometimes the errors are hard to correct. I've never had any bad information on my reports. That doesn't mean it doesn't happen, any more than you seeing something wrong on your report means they're "always wrong". It just means that no system is perfect.
That's like saying strychnine is harmless unless you swallow it.
The credit reporting system is built atop the tortured bodies of the victims of identity theft. Every time an individual's personal data is transmitted, that individual is put at risk. The CRAs profit from the profligate dissemination of sensitive data, but the individuals who were compelled to participate and then came to harm are systematically prevented from obtaining recompense.
Well, yeah... of course.
> The CRAs profit from the profligate dissemination of sensitive data,
Everyone profits. The CRAs who are the aggregators, the companies that share the information, and the consumers who get access to cheap and easy credit.
Victims of identity theft exist. Even if you are content to sacrifice them for the greater good, they are real people.
All I'm saying is that the system we have now provides a great many benefits to people, and that we should look at how to fix it, rather than eliminate it. The CRAs are just 3 companies, out of the thousands that have had data stolen over the years, and will continue to do so forever, because it is impossible to have perfect security.
People have cars. Sometimes they get stolen. Should we eliminate cars, so that we can eliminate car theft?
Counting on Equifax to solve this problem is absurd. Equifax doesn't care about individual consumers because those consumers are not their customers. Combating identity theft is a cost center; CRAs cannot differentiate themselves in the marketplace through better tools to prevent an externality.
Consumers have to fight for their own interests because the CRAs will only be dragged along kicking and screaming.
Who cares if people know person X's first name is Mark.
When you start combining that data with address, phone number, age, SS#, employment history, mother's maiden name, etc., and that set of data also happens to be globally misused by the industry as auth data, it becomes dangerous.
If they can't secure the combined data set (and you seem to agree that they can't), they certainly shouldn't both be proposing its use for authentication (for which they offer many costly services to business and government) and also be creating this huge data store on the population.
The net here is that this data is unsuitable for auth -- but at the same time it IS USED for auth -- and yet it is gathered and stored as if it were not credential material.
I also agree that this review/investigation is barking up the wrong tree -- the most serious crime here is that this information (even without any major mental gymnastics) can and will be used to identify likely covert government employees by any adversary that has access to it.
Secure. Unique. Personally identifiable. Pick any two.
It was stolen, not made "publicly accessible". You or I could never have accessed it, it took a sophisticated team months to do it.
Don't forget the Yahoo! and Marriott breaches, both of which were considerably larger than the data "Equifax made [...] publicly accessible".
It takes teams months to get to the moon too, but that doesn't mean it's not out there for anyone to access.
The moon thing is a great analogy. Everyone knows the moon is there, but the process of getting to it is an enormous task beyond the reach of all but the most determined, skilled, and well-financed parties.
I know for sure that I'd rather take on extra burden for any future credit checks, but in my experience it isn't even possible. I haven't been able to consistently freeze my credit with all three major agencies, and not for lack of trying (several rounds of very slow correspondence claiming "insufficient identifying evidence"). The fact that it's drastically harder for me to freeze my credit than it is to open a new credit card at a brand new address is just batshit insane.
There are alternatives to (economic) slavery and death.
For personal credit scoring, in particular, I'd also accept either Sweden (https://en.wikipedia.org/wiki/Credit_score#Sweden) or Switzerland (https://en.wikipedia.org/wiki/Zentralstelle_für_Kreditinform...) strategies.
Both are civilized, transparent, accountable. Alas, no unnecessary corporate profiteering.
You get your cut when you apply for cheap, fast credit and get it.
> There are alternatives to (economic) slavery and death.
How is any of this "slavery and death"?
So instead your proposal is turning it over to the government? Interesting.
My cheap, fast credit comes from my high earnings, high savings, and low burn rate. I receive zero benefit from Equifax (et al). They, however, benefit from my data.
My data is me. Anyone profiting from my data owes me my cut.
Further, aggregating and warehousing my data is done on the expectation of future profit. So, again, I should get paid.
AKA royalties and licensing, respectively.
> My cheap, fast credit comes from my high earnings, high savings, and low burn rate. I receive zero benefit from Equifax (et al). They, however, benefit from my data.
And how would you prove those things to a prospective creditor?
> My data is me. Anyone profiting from my data owes me my cut.
It's not yours. It's the data that belongs to the credit grantors that provide it to Equifax.
> Further, aggregating and warehousing my data is done on the expectation of future profit. So, again, I should get paid.
The "future profit" is CRAs establishing themselves as trusted aggregators of this information for their customers, and therefore getting repeat business. And again, it's not your data.
You trust corporations, but not governments, to maintain a list of liens (unpaid debts)?
"It's not yours. It's the data that belongs to the credit grantors..."
Who owns your medical history? The lab company? Your care providers? Big Pharma? The hospital IT dept?
Is your medical data yours? Do you have some righteous sovereignty over it, how it's used, who gets to use it?
If yes, how are medical and financial data different?
Just to make my position clear: We need a better way of establishing identity. We need tougher standards for credit grantors to use these better standards. We need better control over financial data misuse. The systems we have are insufficient.
Apparently all it takes to lift a freeze with them is your name, address, SSN, and phone number. They do not ask for the PIN that was set up. They don't do any other verification. That's all you need.
At least, that was the case when I did a temporary lift the other day. Imagine my horror when I sat there, PIN in hand, ready to punch it in, only to be presented with the successful-lift screen already...
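The verification that apparently wasn't happening is simple to state: a freeze lift should require the knowledge factor set at freeze time, checked against a salted hash, and never just data that appears in public or breached records. A hypothetical sketch (the field names and the request shape are invented for illustration):

```python
import hashlib
import hmac
import os

def hash_pin(pin: str, salt: bytes = None):
    """Store only a salted, stretched hash of the freeze PIN chosen by the
    consumer at freeze time -- never the PIN itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def may_lift_freeze(request: dict, salt: bytes, stored_digest: bytes) -> bool:
    """A lift request must prove knowledge of the PIN. Name/address/SSN/phone
    alone (all findable in public or breached records) should never suffice."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", request.get("pin", "").encode(), salt, 100_000
    )
    return hmac.compare_digest(candidate, stored_digest)
```

The point of the sketch: a request carrying only identity data (the kind Equifax itself leaked) fails, because nothing in it proves the requester set the freeze.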
> He believed Equifax suffered a cybersecurity breach because of a “sophisticated” and “highly motivated” adversary.
Given my experience above, I very much doubt there was any sophistication required. You can probably go to Equifax's admin page and just punch in the name of the CEO, telephone and address for Equifax's HQ, and be on your merry way.
The reality is Equifax are lazy stupid idiots who
* couldn't patch well known holes in infra for months (note that a 2 engineer startup could do better by eg running rails and checking bundler-audit on every deploy);
* disabled their IDP because they didn't keep a cert up-to-date;
* didn't notice their disabled IDP FOR 10 MONTHS because lazy stupid incompetent;
* strongly appear to have no internal access controls, so regular app users in the db could exfiltrate the entire DB without creating any internal alerts
In contrast to the self-serving nonsense in the Bloomberg article, there was nothing sophisticated here. These people couldn't get the most basic things right, all part of the most very basic security posture: patch known holes. Regularly verify such patching. Run an IDP. Verify the IDP is running. Have controls around odd behavior or bulk queries for the DB users running on your front-end boxes.
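The "check on every deploy" idea above amounts to comparing deployed component versions against advisory version ranges and failing the build on a hit. A minimal sketch, with an illustrative hard-coded advisory table rather than a real feed (a real pipeline would pull from something like bundler-audit's advisory database or the NVD; the ranges shown approximate the Struts versions affected by CVE-2017-5638):

```python
# Illustrative advisory data, not a real feed: component -> list of
# (first_vulnerable, last_vulnerable) version ranges, inclusive.
ADVISORIES = {
    "struts": [("2.3.5", "2.3.31"), ("2.5.0", "2.5.10")],
}

def parse(v: str) -> tuple:
    """Turn '2.3.31' into (2, 3, 31) for correct numeric comparison."""
    return tuple(int(x) for x in v.split("."))

def vulnerable(component: str, version: str) -> bool:
    """True if the deployed version falls inside any advisory range."""
    return any(parse(lo) <= parse(version) <= parse(hi)
               for lo, hi in ADVISORIES.get(component, []))

def audit(deployed: dict) -> list:
    """Run against the deploy manifest; a nonempty result fails the build."""
    return [(c, v) for c, v in deployed.items() if vulnerable(c, v)]
```

Nothing here requires sophistication, which is the point: the hard part is keeping the deploy manifest honest, not the comparison.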
2 engineers produce 2 engineers' worth of code/infrastructure. Equifax has much more than that to manage. And at corporate scale, just knowing what you're running is a significant challenge, which is a step before version management.
I'm not trying to justify bad asset management, but if anyone's comparing their situation to a 2-engineer startup, they don't understand corporate IT environments or the issues at that scale/budget.
Bloomberg has plenty of criticism, they just don't wrap it in the shrill click-bait headlines of other "news" outlets.
> note that a 2 engineer startup could do better by eg running rails and checking bundler-audit on every deploy)
On systems a fraction of the size, with a fraction of the traffic, and not under a constant withering assault from hackers.
There are really only two alternatives here: either the problem is not solvable, in which case Equifax should not be conducting its business in the way it is, or it is solvable, in which case Equifax should not be conducting its business in the way it is until it gets its security act together.
At their scale, it's also straightforward to build and maintain the list of sites you're running, what's running them, and the versions of libs in place. Particularly because you can have dedicated engineering for the above. It really comes down to whether it's a corporate priority or not.
The enormous amounts of money and time that companies spend on this problem, while still not getting it right, disprove your assertion of simplicity.
The claim that it's hard to
1 - know what assets are on what domains
2 - know the code backing them
is belied by the many competent orgs that do that. And by the fact that it's table stakes for a reasonable security posture.
Further, it's not that technically challenging, not least because it can be semi-automated by a human. Detection tools are available right here: https://www.metasploit.com for $0. You can have a human run that once a week against your web properties and do way, way better than Equifax. Because Metasploit would have alerted on the Struts vuln.
It's also not rocket science to have a member of a security team run that against a list of web properties (that can be trivially sourced from your DNS server even if they bypassed internal deployment processes).
Nothing about this is sophisticated beyond basic computer skills -- grabbing known sites from your security docs, backing that up with DNS entries, alerting on anything missing, and running metasploit from a script. Unless, of course, you're hellbent on claiming the above is super duper hard.
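The DNS-backed inventory check described above reduces to a set difference: anything resolving under your zones but absent from your security docs is a host nobody is patching or scanning. A sketch, with hard-coded stand-ins for what would really be a zone export or DNS-provider API call:

```python
def inventory_gaps(dns_zone: set, documented: set) -> dict:
    """Anything in DNS but not in the security docs is a host nobody is
    patching or scanning; anything documented but gone from DNS is stale
    paperwork. Either kind of gap should page someone."""
    return {
        "unscanned": sorted(dns_zone - documented),
        "stale_docs": sorted(documented - dns_zone),
    }

# Stand-ins for a real zone export and the security team's asset list.
dns_zone = {"www.example.com", "portal.example.com", "legacy-app.example.com"}
documented = {"www.example.com", "portal.example.com"}
```

Run weekly from cron, a nonempty `unscanned` list is exactly the "site the security team didn't know about" failure mode, caught before an attacker finds it.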
None of this is trivial, and you saying it is would indicate you really don't understand modern computer security issues. This is hard, and it's unfortunately easy to get wrong. And it's a space where even a tiny failure can lead to large consequences. Not that the Equifax failures were particularly tiny, but it was clearly more than they could handle.
If you just mean in some kind of "cold" browser, e.g. incognito mode, that wasn't my experience.
Obviously "if an SSL certificate expires, transactions are no longer protected" is just wrong, so what were they trying to say here? Maybe Equifax's internal monitoring tools were refusing to connect to the target host because of the expired cert?
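Assuming the TLS-inspection reading (a monitoring device whose own certificate expired, so it silently stopped decrypting and inspecting traffic), the failure mode is easy to alert on. A sketch using the `notAfter` string in the shape `ssl.SSLSocket.getpeercert()` returns; the warning threshold is an arbitrary illustrative choice:

```python
import ssl
import time

def days_until_expiry(not_after: str) -> float:
    """`not_after` in getpeercert() format, e.g. 'Jan 1 00:00:00 2030 GMT'."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

def check_monitoring_cert(not_after: str, warn_days: int = 30) -> str:
    """An inspection device with an expired cert stops seeing traffic,
    so expiry must page someone well before it happens."""
    days = days_until_expiry(not_after)
    if days < 0:
        return "EXPIRED: inspection is blind"
    if days < warn_days:
        return "WARN: cert expires soon"
    return "OK"
```

The reported ten months of blindness is exactly what happens when no such check exists: the device fails silently and nobody is watching the watcher.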
It's mind boggling how these folks are responsible for the credit monitoring (i.e. financial security) of millions of people.
I wouldn't trust them with my family's secret brisket recipe let alone anyone's personal financial data.
With so many articles talking about Equifax's incompetence, I guess I'm a little surprised that they even have a Countermeasures team.
> ... his office was using two different versions of Struts and that neither was among the versions listed as vulnerable in the alert. He requested confirmation that his conclusion was accurate and noted that the business impact could be quite heavy if he was incorrect.
Sadly, the business impact ultimately seems much less significant than you might imagine.
> The senior manager that the developer ultimately reported to did receive the alert but did not forward it to the developer or anyone else on the developer’s team. As a result, this developer did not receive the GTVM alert about the Apache Struts vulnerability.
I wonder if individual devs could/should subscribe to these CVEs? Is that possible? Not that this dev should have had to, but you might not want to rely on your company's bureaucracy to figure out important stuff like this.
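It is possible: NVD publishes a queryable CVE feed, so a dev can filter it for the components their team actually ships instead of waiting for an internal alert to be forwarded. A hedged sketch; the watchlist is invented, and the JSON layout below only roughly follows the NVD 2.0 API response shape:

```python
import json

# Components this hypothetical team actually ships.
WATCHLIST = {"apache struts", "openssl"}

def relevant_cves(feed_json: str) -> list:
    """Return (id, description) pairs for feed entries whose description
    mentions a watched component. Keyword matching on descriptions is crude;
    a real setup would match CPE identifiers instead."""
    hits = []
    for item in json.loads(feed_json).get("vulnerabilities", []):
        cve = item["cve"]
        description = cve["descriptions"][0]["value"]
        if any(name in description.lower() for name in WATCHLIST):
            hits.append((cve["id"], description))
    return hits
```

Polling this daily from a personal script would have surfaced the Struts advisory regardless of whether anyone up the chain forwarded the email.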
From "Recommendations (1)":
> Congress should pass legislation that establishes a national uniform standard requiring private entities that collect and store PII to take reasonable and appropriate steps to prevent cyberattacks and data breaches.
So there you have it. 'Reasonable and appropriate'. There's so much wiggle room in a statement like that it's laughable, especially with a subject this complex.
I have been thinking about why and how such systems could be mended. What I concluded is that when the majority of the system (Equifax managers, other employees) acts as immovable inertia against any positive change, either obstructing or dismissing it, there should be a mechanism that allows single actors to hit a so-called emergency button. But having that kind of mechanism in place requires at least one competent person at the top, which seems a quite unlikely requirement.
It always fascinates me that in many organizations I have been in and read about, the regular employees often knew about the problems and knew that something should have been done about them. I guess it's just the apathy that gets you when your superiors don't seem to care when you complain, causing a downward spiral in morale.
They took our most personal financial information without our permission, made money from it and completely failed to protect it. These A-#&@# should have gone to jail.
The fact that people can say something as stupid as "they did nothing illegal" and possibly be correct is exactly why we need privacy laws.
What's even more horrifying than Equifax's complete incompetence before, during, and after the data breach is the fact that the company is still making money, no one lost his or her job, and no one went to jail.
I hope this report will spur Congress to hold Equifax executives accountable.
Eh? Both the CSO and CIO lost their jobs over this. They "retired", but that's simply a nice way of saying they were fired.
> no one went to jail
I struggle to see why anyone in this situation should be in jail. They did nothing illegal. At worst, as this report shows, they were woefully incompetent. If there was a person who had maliciously caused deficiencies in their security or knowingly disobeyed relevant regulations, that might be worthy of jailtime. But the moment we start putting people in jail just because they're bad at their jobs... we'd have to imprison half of the country.
This is actually a huge issue with the security industry right now. It is very difficult to find qualified people to fill CISO roles because the general consensus is that their company will be hacked (either from a 0-day or from a long-lasting, hard-to-kill vulnerability that has been present since long before the new CISO arrived), and the CISO will be blamed. As it is, most qualified security practitioners would much rather choose a consulting position where they don't have to worry about being blamed. Now imagine how much worse that situation would be if you started threatening every CISO who got hacked not just with being fired and publicly shamed, but with jailtime, too.
Over time, perhaps more companies will learn how to avoid aggregating such data, and gravitate towards business models with smaller liability exposure.
It should be hard to fill CISO roles because companies should have high standards and should be looking for the best of the best to fill those roles. But that's not what I'm talking about. There is a decent sized pool of qualified security practitioners that could be CISOs, but none of them want to be because the current way things work is that they will inevitably get fired and shamed for something that they had little control over. That's not good for them, and it's not good for us, because you know what's worse than a company having a lot of insecure PII? A company that has a lot of insecure PII that also doesn't have a CISO.
"Retired" means leaving the company with a golden parachute and laughing their way to the bank.
A degree does not mean someone is fully qualified for a job, just as not having a degree does not mean they cannot do it. The damning part at Equifax is that the CISO did not have any prior experience in this domain (your friend likely worked as a developer before).
Yet Equifax is doing better than ever. No new laws, no reform, nobody goes to jail. All the rich people get to keep their money.
Thoughts and prayers.
Data breaches cause significant real-world harm. Unfortunately Equifax got out of this without paying fines in several states, at least. It was the regulators who failed us here, not the market: https://www.reuters.com/article/us-equifax-states-agreement/...
Customer data should be like radioactive waste. Companies should know that if they have it, they have to take appropriate measures to secure it, and if they don't they will take a significant financial hit. The financial penalties should be significant enough that if a business doesn't feel sure they can invest into proper security, it makes more sense to just not keep the user data.
On the contrary: the credit reporting industry is a perfect example of market failure thanks to regulatory capture.
The actors in the market exercise a high degree of control over the regulators. The regulators are not independent and the companies that influence them are not innocent.
All markets are constructed. The credit reporting market is just set up with rules which are grossly harmful to individuals.
Regulators should answer to the people, but because corporations are extremely motivated, fabulously wealthy, and privileged by the court system, the people have to compete for the regulators' attentions. I think it makes sense to acknowledge that Equifax is acting against the common good, even if that's just incidental to Equifax acting in its own interests.
There are no significant regulatory penalties coming at Equifax over this (the biggest change is that credit freezes are now free, which is great win but won't have a huge market impact).
Additionally, consumers can't choose not to participate in this process. I mean, if you're wealthy, you can choose not to buy anything on credit, and if you're tech savvy you can choose to freeze your credit with Equifax and force the majority of your traffic through other companies. But again, that's not the majority of the population and it's not going to have much of an effect.
Businesses aren't going to stop relying on social security numbers and background checks, because... well, for a lot of reasons, but partially because that system is really entrenched and it would take more than leaking half of the US population's data to make it change. That's going to be a very, very hard battle to win. In the meantime, that just means it's even harder for consumers to avoid businesses that rely on Equifax for credit checks.
So, the market determined that consumers were not going to change their behaviors over this breach, in part because there aren't a lot of reasonable ways for consumers to do anything. It determined that Equifax wasn't likely to be harmed in the long term by the government, and it determined businesses weren't going to change their behavior around identity verification -- and it priced itself accordingly.
If you want the market to punish bad actors, you have to give consumers and people affected by the market the agency to move their business or to choose not to participate in systems they don't like. Current credit systems just don't give ordinary consumers that kind of agency.
This also pops up in privacy debates sometimes where people say, "oh, people don't care." That's sometimes true, but it's generally really complicated -- and part of the reason it's complicated is because consumers have been taught over and over that there is no point to caring and nothing they do is going to stop companies from tracking them.
But not a cost or negative consequence to the organization that the data came from
Markets with true competition can only be sustained through continuing struggle, because companies hate competition and constantly innovate new ways to capture the regulators.
The best we can do is regulators who retain their humility and exercise a light touch.
Realistically, apart from the obvious "this shouldn't have happened in the first place", what more could they be doing?
There aren't many options out there, so they basically have a monopoly on identity protection.
They can offer a couple years of their BS identity protection, but your identity is ruined forever. Your SSN doesn't rotate or expire. There should be no time limit on any remedy they provide.
Do people actually lose large amounts of money due to ID theft? I've always considered ID theft to be the lenders being defrauded, and while it is a huge hassle to get sorted, the real person isn't legally liable for the debt fraudulently obtained in his name.
Also, if you think money is a bad idea, what else do you propose? I'd rather take money with the chance of losing it rather than no money and "credit monitoring" which doesn't actually prevent ID theft, only alerts you after it's already too late.
If anything, the money idea actually makes sense as insurance for future losses due to potential ID theft.
Surely it's typical for credit unions to have deposit insurance? If not, there should be regulations in place to ensure people understand that they are basically just shareholders.
I know for a fact that almost every large institution of even the slightest quality is currently in full panic mode regarding their cyber posture. Look at JPMC, spending nearly 1 billion dollars a year on cyber. I know most of the other big financials are right there as well in terms of % of revenue.
In the financial industry alone, there's a huge uptick in regulatory responsibility globally for asset, vulnerability, and threat management. The SWIFT (messaging system that all major banks communicate and send money on) auditors and regulators are requiring that almost all of these issues be "solved" or at least have a meaningful remediation workflow within your respective organization. Guess what happens if you don't meet it? You have a serious finding against your institution and you will struggle to do business with any of the other more mature cyber organizations that rely on SWIFT. Worse yet, large customers request the output of these audits and findings -- if you do not comply, they will move their money. I know several of the largest financials lost massive clients and revenue due to not complying with the cyber standards set forth by SWIFT.
I know for a fact within the US the OCC (governing body for financial institutions regarding cyber) is coming down very hard on the cyber posture of a lot of the banks and is making them move faster, otherwise they face a long uphill battle to expand or make significant changes within the US.
Unfortunately I don't think new laws would help anything, as evidenced in the past.
But they discourage similar behavior in the future, no?
How about prosecuting the people exfiltrating the data? Seems like they should be 100% liable for their actions.
I certainly want companies like Equifax held accountable when they break the law, but lamenting that shareholders haven’t lost money is to fundamentally misunderstand the financial system. The solution is to slowly ratchet up the unprofitability of Equifax when they do wrong so that it makes the sector less attractive over time. Simply crushing the company with a multi-billion-dollar fine would collapse the stock all at once, destroying significant percentages of pension funds, which hurts the very people for whom you ostensibly feel sympathy.
In short, naïve, knee-jerk reactions are dangerous for markets and can cause widespread harm. The right answer is to tighten regulations around this industry so that risk gets priced into the stock which makes it less profitable over time which gives institutional investors time to reallocate with minimal shock to those who can least afford such shocks.
The poor person is denied a car loan. It was already the cheapest car they could find, and they don't have the money to fix the old car. They can't take time off work to go to the bank to fight the fraud, but it doesn't matter because they've already been fired due to not having a way to get to work anymore. Before they can find a new job, they're evicted from their apartment for missing rent.
Tell me again how rich people are more affected by identity theft.
you're basically claiming that every company purchased by any pension (so, basically, every company) is Too Big To Fail. if taking down one medium-sized company has any noticeable effect on public pensions, then the public pensions need serious reform.
> Stealing the identity of people with few assets is a lot less profitable than stealing the identity of rich people.
to echo what the sibling comment said, this is a major problem with the continued misappropriation of the word "theft" to refer to things which cannot be physically taken (cf. intellectual property "theft"). sure, if you could literally take over the identity of a rich person, that would be worth much more than the identity of a poor person. but that's not how "identity theft" works. "identity theft" means, largely, opening accounts in someone else's name. rich people are far more likely to notice, and far more able to take action. even if I had Bill Gates's full name, date of birth, address, social security number, all financial account information (hell, half of these are public information already anyways)... I'm not going to be able to open an account in his name, and even if I was, I would almost certainly be caught immediately and sent to jail.
> The right answer is to tighten regulations around this industry so that risk gets priced into the stock which makes it less profitable over time which gives institutional investors time to reallocate with minimal shock to those who can least afford such shocks.
Here's how much Equifax has spent on lobbying the last 10 years:
2008 - 500K
2009 - 500K
2010 - 560K
2011 - 670K
2012 - 620K
2013 - 690K
2014 - 770K
Interesting what happens next considering this headline:
Equifax Learned of Significant Cybersecurity Deficiencies in 2015
2015 - 1.02M
2016 - 1.1M
2017 - 1.07M
2018 - 1.27M
Here's the status of three bills that were introduced:
S.1815 - Data Broker Accountability and Transparency Act of 2017
H.R.3806 - Personal Data Notification and Protection Act of 2017
S.1816 - Freedom from Equifax Exploitation Act
As much as I appreciate the idealistic approach of regulating the markets, it clearly doesn't work. Period. It only takes a company or industry with a lot of money to essentially write their own laws.