How can they not be liable? How is this not negligence?
My priority of problems is:
1. When fraud happens, banks can pass the pain and burden of proof onto consumers.
2. Banks use insecure SSNs for authorization; the same data is used for validating eligibility, authenticating the application, and authorizing the loan.
3. There are minimal regulations on storing different classes of personal information (we need a Sarbanes-Oxley for auditing/accountability of aggregated personal information).
If 1 were addressed, banks would have an incentive to fix 2, and to force credit reporting agencies to improve 3.
I think it's no coincidence that the current state of affairs sits right at the sweet spot of these intersecting interests. And the problem is that the model requires an element of risk which is quite hard to control. Every now and then it flares up and becomes much bigger than they want. So we have incidents like this where the insecurities inherent in the system get exploited on a massive scale and the risk threatens to cross from profitable to massively unprofitable.
This goes from the transaction terminal to the bank's server room.
Europe has had chip cards for over 20 years. In the US, they were implemented only very recently, within the past year.
It wasn't that the US banks were occupying some "sweet spot" of retail transaction risk/reward; they simply didn't want to shell out the extra bucks to send people cards with chips in them. Neither merchant nor bank wanted to pay for chip-reading terminals. So, nobody budged until just the past year.
I don't know whether it was legislation, or perhaps the growing cost of credit card fraud (i.e. card skimmers, etc), but for whatever reason, it certainly wasn't a "sweet spot". We've had chip technology for 20+ years, they just didn't want to pay for it.
But ultimately I think it's the people themselves who demand more security from their banks. E.g., one bank introduces chip-based cards and more people choose that bank because they want more security. Then gradually some ATMs become "chip only", banks see the chipless ones attract all the skimmers and accelerate their replacement to lower costs, which forces businesses to start getting more POS terminals with chips to meet the demand of people whose cards have the mag stripe disabled.
Having more security seems to be what everybody wants and benefits from; it's just that Europe has smaller players, which accelerates market forces in that direction, and maybe European consumers just want more security in general.
Here in the U.S. the next step is usually asking for the social security number. I called VISA/Citi to re-activate my card after traveling and they asked for the associated phone number with my account. Neither of these are especially secure, in my opinion.
Agreed. It still amazes me how prevalent credit card fraud is. Certainly that's preventable - if they want it to be. The problem is, the banks don't bear that cost, the consumer does. Even if the bank factors the loss into the cost of doing business, that still gets passed on to the consumer.
It is merchants (stores, internet sites) that bear the cost of fraud.
Merchants don't bear the cost, the consumer does. The merchant might not hand me a bill but that cost is embedded somewhere in the price.
The bottomline is the consumer pays. No matter how you cut it, the consumer always pays.
It's my meta data. More valuable than phone meta data, and perhaps (to me) more valuable than my medical records. I have a relationship (as well as ethical and legal protections) with my doc. On the other hand, I've never met Equifax. They have no relationship with me other than to exploit my personal info. I never opted into that.
Yes. It lowers risk. But who benefits more? Who carries the risk?
Me? Or them? It's the latter, yes. Yet we have no choice in the matter? That's not kosher.
It is so strange.
What gives banks right to do that?
An SSN cannot be used as authorization, because it isn't secret information. And really, the same is true for credit card numbers; they're shared with too many parties to consider them secret.
We need to bury this nebulous "Identity Theft" and call out more clearly the two specific crimes that happened: 1. negligence (the entity that gave up the info) and 2. bank fraud. When you use these terms, the companies don't get a free pass. Just change the words and they're part of the mess.
It used to be called Bank fraud and it was the Banks problem. Now it's called Identity Theft and it's your problem.
 Some landlords require credit approval
 Some jobs check credit
I was also very surprised, when buying a car with cash, to have a credit check required. It's a real thing. And according to this article it's not actually required, but dealerships are confused by the language of the law and insist on running a credit check anyway.
The dealership wanted to run a credit check if I were to pay with a personal check, but not if I paid with certified funds. I called my bank to have my debit card limit raised to $40,000 for 24 hours and paid for the car on my debit card with no credit check.
The system is rotten and (short of moving country) impossible to avoid.
There is no need to spread misinformation. While it is true that a hard inquiry will have a minor effect on your credit score (less than a 5 point hit), multiple hard credit inquiries from car dealerships or mortgage lenders within a period of 45 days only count as a single inquiry.
You're not at all disadvantaged by taking your business elsewhere, you're just making things up to furnish your dubious story.
That said, going in with a briefcase full of cash is going to cause a lot of other problems.
Or living suspiciously like a drug dealer (without money laundering).
All of this comes down to trust. We trust our banks and credit card companies. They trust Equifax. Equifax's customer is your bank or credit lending company, not us. It's actually very similar to Google, et al. We aren't the consumer. They collect our personal information, vastly more than credit agencies. And the real customer are the advertisers who pay Google. The difference is, we probably trust Google more than Equifax (even before all of this).
A month ago, my mom said she wanted to start using Uber on her phone. I explained how to install it, and when she did (as well as the Lyft app for that matter) it wanted access to her camera, photos, contacts, a list of information on her phone. And she said fuck no. And refused to give permission. So she still uses cabs and pays cash.
Don't want to be liable? Then don't store my data.
The elements of negligence are:
1. Duty
2. Breach of Duty
3. Cause in Fact
4. Proximate Cause
5. Damages
You probably haven't suffered legally cognizable damages (yet). If and when you do, they might well be liable.
The main difference is that there is no magic number that any one can use to borrow money in your name. Lenders have to verify a person's identity using ID.
Furthermore, to get a loan you don't have to build up a score first. You could get a mortgage even if you have never borrowed money in your life, as long as you have a stable income.
The article is in English:
Same in the US, although it's a bit of a pain in the ass. It's my understanding, though (correct me if I'm wrong!), that Germany is a bit less thrilled about credit than most other countries, even in Europe.
There's typically a single public entity that holds insolvency records within the legal framework. In some countries nobody can query it but yourself; you're therefore asked to submit a copy of your record in some occasions.
As for solvency, when you sign a lease or contract a mortgage, you're asked to submit proof of income.
(Someone mentioned Germany earlier, it's a terrible example in my opinion, Schufa isn't much different than the US credit agencies, albeit more accountable hopefully).
Is there a law allowing that?
Common sense suggests it should only be possible if the user explicitly accepted: "I agree that knowing my SSN is enough to prove it's me, and I agree to be liable for any debts created with just my SSN presented".
But that's probably far too sensible European thinking.
I keep hearing reports of how the US handling of money is basically medieval, with some badly-thought-out insecure bits pasted on top. And some of that gets exported! It sucks that I need to own a credit card to be able to make international purchases on the internet. Why is there no international version of iDEAL?
What you're saying is probably true in any country; but in reality imo it's way easier not to allow that to happen in the first place.
Nothing makes things change faster than disrupting the money flow.
Equifax? Not so much. This exposure hurts consumers, but it doesn't even put a dent in the economy as a whole. That is why these firms are allowed to operate with such shitty security. If they get hacked, whatever, just a few hundred million customers data exposed to identity fraud. It takes a chunk out of Equifax stock but they'll probably survive. What is the incentive for congress to enact laws regulating corporate handling of consumer data?
Put it this way. Who do you think is louder in Washington: consumer advocacy groups or corporate lobbyists?
They already did; it's called the Gramm–Leach–Bliley Act:
> In terms of compliance, the key rules under the Act include The Financial Privacy Rule which governs the collection and disclosure of customers' personal financial information by financial institutions. It also applies to companies, regardless of whether they are financial institutions, who receive such information. The Safeguards Rule requires all financial institutions to design, implement and maintain safeguards to protect customer information. The Safeguards Rule applies not only to financial institutions that collect information from their own customers, but also to financial institutions – such as credit reporting agencies, appraisers, and mortgage brokers – that receive customer information from other financial institutions.
Note the inclusion of CRAs.
I might be biased and jaded.
Equifax is a business that is not engineering-led today and probably never will be, since it is a credit agency and not consumer-focused. For a company that has been around since 1899 and through many technological changes, it should really have focused more on cybersecurity in this age. Then again, Equifax is not a consumer company; it is a credit agency and answers only to its customers, who probably don't value spending on securing personal data, since the ones accessing Equifax want that data.
The question is: should a company that doesn't care (nor do its stakeholders) about securing the data it makes its money from be in control of all that data?
Also, a lot of companies just don't have a thoughtful process for this.
Think of how much data Google and FB have, and they've never been breached. As much as I loathe them, I actually feel more 'secure' with the data that is supposed to be secure with FB than with my local bank.
FB depends on talent, banks depend on thick process, regulation and massive risk aversion.
Equifax depends on mediocre eng, product, ops etc..
The stock has been slammed while Equifax is being flogged in the court of public opinion, but I doubt this leak will have any lasting financial impact.
Look at the result of the Target and Home Depot breaches: whether you like it or not, the companies are still technically the victims here and no court is going to bankrupt them for data breaches that are more and more becoming the norm.
I am going to have to stop you there. As someone who works in the financial sector, I have quite a different view of this situation. Best case scenario (for the organization) is that it is fined directly into bankruptcy and someone like FIS acquires them for pennies on the dollar. I am still waiting for CFPB to drop a nuclear bomb over this issue. There will be new PII regulations around the corner for sure.
I also question the extent of the leak... I keep hearing it was just basic PII, but if someone got a dump of the entire credit history database, a huge range of financial products (e.g. Knowledge-Based Authentication) become entirely compromised.
Furthermore, do you have evidence that the PII was improperly stored, or that Equifax's security practices were lacking in any way? The vulnerability provided full RCE, and I know of no info-sec magic that inoculates you against that.
I'm wondering if Equifax is using Struts-provided REST for its entire architecture. If that's the case, gaining access to the web server was only the first step. From there the attacker could perform RCE on sensitive services.
You may want to think twice. Try to design an architecture that doesn't have that. If you think it through, you'll realize the best you can do is not to deny access, but to monitor access so that any statistical deviation in requests-per-hour will trigger an alarm. Yet nobody does that, so why should Equifax have been a pioneer in this method?
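The access-monitoring approach described above, alarming on statistical deviation in requests-per-hour, can be sketched in a few lines. The window size and threshold below are illustrative, not tuned values:

```python
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Flags hours whose request count deviates sharply from recent history."""

    def __init__(self, window_hours=168, z_threshold=4.0):
        # Roughly one week of hourly counts; both knobs are illustrative.
        self.history = deque(maxlen=window_hours)
        self.z_threshold = z_threshold

    def observe(self, hourly_count):
        """Record one hour's count; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 24:  # wait for a minimal baseline
            mu = mean(self.history)
            sigma = stdev(self.history)
            if sigma > 0 and (hourly_count - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(hourly_count)
        return anomalous

detector = RateAnomalyDetector()
# Two days of ordinary traffic, then one bulk-exfiltration hour.
alarms = [detector.observe(1000 + (i % 7)) for i in range(48)]
alarms.append(detector.observe(50_000))
print(alarms[-1])  # the exfiltration hour trips the alarm
```

A real deployment would track per-client rates and page someone, but even this crude version would notice a scrape of 143 million records.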
This is the uncomfortable truth that everyone is obscuring here. There wasn't a solution. Equifax got owned, and they happened to have a trove of data. Everyone now wants to see their heads roll, but you too would find yourself in the same situation if you have an RCE on your servers.
We have an architecture like that where I work. It's not that hard. Our web applications have very little direct access to databases; most of it is mediated by services downstream of the web app. That's certainly not a silver bullet, but it makes it impossible to exploit a RCE vuln in the web server in such a way that it lets you have arbitrary access to the database.
Once you've compromised a server, learning how to ask for the data you want is not hard. You have access to all the webserver's code, can make full dumps of communications occurring normally in the app, etc.
Of course, another option would be to use the web server compromise to then jump to the database service and compromise that box as well, but, again, more hurdles to jump means less of a chance of success.
Nothing is perfectly secure, but you can design systems with defense in both breadth and depth, and you can slow down or defeat many attackers that way. It's not about making Fort Knox, it's just about making breaking in more expensive than they can handle.
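The layered design described in this thread might be sketched like this; all class and method names are hypothetical, and the point is only that the web tier never holds database credentials or arbitrary query capability:

```python
class CreditReportService:
    """Internal service; the only component with database access."""

    def __init__(self, db):
        self._db = db  # credentials live here, not in the web tier

    def get_score(self, customer_id: int) -> int:
        # One record per call; no arbitrary queries, no bulk export.
        row = self._db.lookup(customer_id)
        if row is None:
            raise KeyError(customer_id)
        return row["score"]

class WebApp:
    """Public-facing tier; even a full RCE here yields only this narrow API."""

    def __init__(self, service: CreditReportService):
        self._service = service

    def handle_score_request(self, customer_id: int) -> dict:
        return {"customer_id": customer_id,
                "score": self._service.get_score(customer_id)}

class FakeDb:
    """Stand-in datastore for the sketch."""
    def __init__(self, rows):
        self._rows = rows
    def lookup(self, cid):
        return self._rows.get(cid)

app = WebApp(CreditReportService(FakeDb({42: {"score": 700}})))
print(app.handle_score_request(42))  # an attacker on the web tier can only
                                     # pull one record per request
```

Rate limits and logging on the internal service then become the choke point where bulk exfiltration is visible.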
That means, barring other exploits, you can only access the info from the users who logged in while you had control of the machine.
It occurs to me that maybe it might seem like pivoting is a big process that takes months. In reality you can map out an internal network within a few hours. Most people keep the servers at the edge of their network meticulously up to date. Once you're inside, you find way more old software. Not to mention creds just laying around the system in many cases since the devs don't expect anybody to be able to access them.
I feel like the worst offense Equifax could be accused of is not getting regular pentests. A netpen would have caught the outdated Struts issue, and they had money to get monthly tests. But very few of us get regular pentests.
We do have monthly tests, scans, and network BGP issues. And our governmental side has a different set of scans, which also include system security scans.
Whoever is on call ends up doing them during the week. It's usually pretty quick, but it can turn into a slog.
Oh, and all our machines are updated appropriately, not just the border machines. There are some services we're not able to adequately update, like FreeRADIUS, but for each of those we review the criticality and determine whether we need the resources to make it work (i.e., whether there's a remote privilege exploit).
If the vulnerability used turns out to be the Struts one that was announced at the beginning of the year, then the "magic" here would have been quite muggle-like: update the damn dependency. Not doing so is negligence, plain and simple.
(I agree with you, though, that EFX will almost certainly come out of this relatively unscathed.)
In retrospect it seems more cost effective to do so too, even if Equifax manages somehow to pay only a few hundred million dollars.
The argument would be that the very fact that PII security was breached demonstrates defendant's negligent data storage/security practices. If those practices had been adequate, the breach would not have occurred.
But I think higher expectations, helped along with civil legal machinery that's likely to mete out meaningful punishment, would move us faster to finding out whether the problems are mostly just sloppy practice (which responds to economic penalties) or whether they're more fundamental and need a different fix.
Going long may make sense but we've not hit bottom yet.
Not really accurate, because the exploit name, especially one as generic as RCE, does not tell you how it was done. RCE can be a number of things that can be fixed in numerous ways. For example, file upload functionality with path manipulation and no file type validation may lead to RCE. This can certainly be removed with properly crafted file upload handling.
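As a sketch of what "properly crafted file upload handling" might look like, assuming a hypothetical upload directory and extension whitelist:

```python
import os
from pathlib import PurePosixPath

UPLOAD_DIR = "/srv/uploads"                   # hypothetical destination
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # illustrative whitelist

def safe_upload_path(filename: str) -> str:
    """Strip path manipulation and reject disallowed file types."""
    name = PurePosixPath(filename).name       # drop any directory components
    if not name or name.startswith("."):
        raise ValueError("invalid filename")
    if PurePosixPath(name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError("disallowed file type")
    dest = os.path.normpath(os.path.join(UPLOAD_DIR, name))
    if os.path.dirname(dest) != UPLOAD_DIR:   # defense in depth
        raise ValueError("path escapes upload directory")
    return dest

print(safe_upload_path("../../etc/report.png"))  # traversal stripped to basename
```

Reducing the filename to its basename and whitelisting extensions closes both the path-manipulation and the executable-upload routes to RCE in this class of bug.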
Some RCE, particularly on lower layers of the application stack, may be more difficult to defend against, especially in the case of unknown exploits.
The webservers should be treated as if they're potentially compromised, not given arbitrary access to a database...
Specifically: if you're working with a lot of very sensitive information, you structure your application such that there are multiple layers separating that data from the outside world. In the case of Equifax, that might mean implementing the "credit score check" as an internal service exposed through the public web servers (or whatever).
* Credential vaults that allow only-once retrieval on application startup and keep credentials only in memory
* Anomaly detectors for request patterns (suspicious payload formats, processing time, CPU/memory usage, etc.)
* Honeypot records in sensitive data stores (records you know should never be accessed; if they are, you've been breached)
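The honeypot-record idea in the list above is simple to implement; here is a toy sketch (all identifiers and record values are made up):

```python
# Seed the datastore with IDs that no legitimate code path should ever
# read, and alert security the moment one is touched.
HONEYPOT_IDS = {"000-00-1337", "000-00-2600"}  # planted, never issued to real customers

alerts = []

def alert_security(record_id):
    alerts.append(f"BREACH? honeypot record {record_id} accessed")

def fetch_record(store, record_id):
    if record_id in HONEYPOT_IDS:
        alert_security(record_id)  # legitimate traffic never lands here
    return store.get(record_id)

store = {"123-45-6789": {"score": 712},
         "000-00-1337": {"score": 600}}   # the honeypot row looks real

fetch_record(store, "123-45-6789")  # normal lookup: no alert
fetch_record(store, "000-00-1337")  # a bulk scan hits the tripwire
print(alerts)
```

An attacker dumping the whole table cannot tell honeypot rows from real ones, so bulk exfiltration trips the alarm almost immediately.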
If you're storing information this sensitive you need to be paranoid, because an attacker has a $4B incentive ($30/identity on the black market * 143 million records) to crack your systems.
It seems amazing that nothing tripped up some monitoring or something.
There is zero chance of that happening. There's been no hint of this from any reputable source (i.e. not clickbait headlines). Will there be financial repercussions? Fines? Loss of stock value? Absolutely. Will Equifax go out of business or get sold? Absolutely not.
People seem to conflate the way they think things should be with the way things actually are.
Longer term, this is actually quite possibly revenue-positive for Equifax: they will trick a not inconsiderable number of people into signing up for their credit monitoring service, and everyone who wants a credit freeze gets to pay $10 (in Illinois at least). That's $1.45B in potential revenue right there! Once TransUnion and Experian see the success of this ploy, they may have strangely poorly secured border networks of their own set up invitingly for the retail hacker...
If my SSN and financial history is stolen, someone can impersonate me. They can sign up for bank accounts, loans, credit cards, etc.
Equifax's stock hasn't been this low since... oh, last November. Hm. Not really sure I'd call that slammed.
What does whether I "like it or not" have to do with Equifax's profound negligence?
I agree with the sentiment, and I’m keeping my eye out for an opportunity. But I don’t think they’ve been hammered enough. I mean, it’s not like they’ve hit a 52 week low or anything. Maybe if I can find some in-the-money calls for a reasonable price...
The vast majority of EFX's profits come from services that consumers (effectively) don't choose to participate in.
I'm not suggesting that's right, but it is how it is.
The only one I can think of would be consumers refusing to open bank accounts or credit cards with them because they run Equifax checks, which seems improbable.
Using strong cryptography, we can build pseudonymous trust graphs where nodes in the graph (cryptographic identities) publish cryptographically auditable trust relationships. Using various graph exploration techniques (e.g. unrolling the trust graph into a trust DAG with known creditors as terminal nodes and calculating path properties to those nodes, using proof-of-burn and non-distributive path combination to disincentivize Sybil attacks on the trust graph, etc.) we can estimate trustworthiness (or, more specifically, creditworthiness) of cryptographic identities rather than legal identities. In the end, you're probably still going to want to link at least one cryptographic identity with your bank account, but you would have vastly more control over the relationship between privacy and public verifiability of trustworthiness.
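A toy version of the path-based trust estimation sketched above might look like the following, with trust weights in (0, 1] attenuating multiplicatively along a path to a known-creditor terminal node; the Sybil-resistance machinery (proof-of-burn, non-distributive combination) is omitted:

```python
def path_trust(graph, source, terminals, _seen=None):
    """Best multiplicative trust from `source` to any terminal node."""
    if source in terminals:
        return 1.0
    seen = (_seen or frozenset()) | {source}
    best = 0.0
    for neighbor, weight in graph.get(source, {}).items():
        if neighbor not in seen:  # avoid cycles in the trust graph
            best = max(best, weight * path_trust(graph, neighbor, terminals, seen))
    return best

# Hypothetical graph: edge weights are published trust relationships.
graph = {
    "alice":   {"bank_a": 0.9},  # alice is directly trusted by a creditor
    "bob":     {"alice": 0.8},   # bob is vouched for only by alice
    "mallory": {},               # no path to any creditor
}
creditors = {"bank_a"}

print(path_trust(graph, "alice", creditors))    # best path: alice -> bank_a
print(path_trust(graph, "bob", creditors))      # attenuated: 0.8 * 0.9
print(path_trust(graph, "mallory", creditors))  # unreachable: zero
```

Note that `max` over paths is exactly the kind of distributive combination the parent comment warns is Sybil-attackable; a production design would combine paths differently.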
Of course, just because this is possible doesn't mean it's going to happen. The primary obstacles to an open, secure trust system are that A) it's harder to make people manage their own trust network than it is to spy on them, B) trust networks rely on the network effect, and C) there's no obvious way to make money off it. Any extant system that resembles what I've described is mostly limited to tech nerds. I'm not sure what it would take to trick/convince the general population to use such a system.
Credit agencies were about PII before the internet, but internet companies that collect all this PII toxify the internet.
Unfortunately, also, it's safe to assume that Experian and TransUnion operate just as badly as Equifax, so attempting to live an Equifax-free existence probably isn't particularly useful. If Equifax does suffer for this leak, actually, it's a relatively safe bet that they will take security more seriously so as to better protect their interests.
How do we know that Equifax fell into this category, that this was due to negligence? I see a lot of disdain towards Equifax, but the breach details haven't come out yet.
LinkedIn using unsalted SHA hashes is a lot more maddening. Here you have a vulnerability being disclosed and not enough time to patch your code.
> Zeynep Tufekci (@zeynep), an associate professor at the School of Information and Library Science at the University of North Carolina
You'd think she of all people would know better.
With that, I also saw that cybersecurity is and will be one of the biggest threats of the next decade. There are many cybersecurity companies these days, but I didn't see a single one step forward to support the Equifax team in figuring out what happened and how it can be prevented.
Cybersecurity companies should have volunteered for the cause.
Please let this be sarcasm...
> They are communicating it to their customers transparently.
They knew well in advance that there was an issue and did not communicate it well. Three of their senior managers appear to have sold their stock based on that knowledge. Some reports say they knew up to three months before their announcement.
> They launched a specific site for security scans for their user for free.
Things that are wrong with this site:
- The site screams "phishing" when you look at the URL.
- Asks for SIX digits of your SSN. If you know the state of the person filling out the form and they were issued their SSN before 2011, you only need to try a few numbers to figure out their whole SSN.
- Gives random results when you fill out the form
- You possibly forfeit being able to sue them by filling out the form.
- When you fill out the form they basically advertise their own product to you.
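To make the SSN point in the list above concrete: pre-2011 SSNs encoded the issuing state in the first three digits (the "area number"), so if a form captures the last six digits, only the area number is left to guess. The area ranges below are examples for two small states and should be treated as illustrative rather than the authoritative SSA table:

```python
# Candidate reconstruction when the last six SSN digits are known and the
# issuing state narrows the three-digit area number to a small range.
STATE_AREA_NUMBERS = {
    "NH": range(1, 4),   # area numbers 001-003 (illustrative)
    "VT": range(8, 10),  # area numbers 008-009 (illustrative)
}

def candidate_ssns(state, last_six):
    """Enumerate every SSN consistent with the state and six-digit suffix."""
    return [f"{area:03d}-{last_six[:2]}-{last_six[2:]}"
            for area in STATE_AREA_NUMBERS[state]]

print(candidate_ssns("NH", "451234"))
# A handful of guesses cover every pre-2011 SSN issued in that state
# with this six-digit suffix.
```

This is why asking for six of nine digits is barely better than asking for all nine.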
At this point, as a consumer, it feels like they are doing everything in their power to get away with not being held accountable for not storing this data properly.
8 in 10 US credit card holders have their SSN and possibly other information out there. This means that I'm at high risk to have my identity stolen in the future, not just the next twelve months that Equifax is offering me free Identity Theft Protection.
Last but not least, when you freeze your credit report, they give you a PIN to unfreeze it. But if you lose the PIN, you only need some identification to get a new one and unfreeze the report. And they've already let that identifying information leak, and it's being sold around. So no luck there.
1. They realized that Equifax uses Struts.
2. They modified Struts!
3. Equifax used the updated code on their servers.
If the $14 billion can't do that then they certainly cannot protect data.
Americans woke up to news of yet another mass breach of data about them.
It's annoying, because it distracts from the immediate issue and causes confusion.
> Today, almost every piece of software comes with a disclaimer on its user license that basically says that the product may not work as intended […] and that’s the user’s problem. It’s a wonder companies don’t insert “nyah nyah nyah nyah” into the tiny-print legalese.
> No software system can be free from bugs […]
There are 120 million lines of code in an A380, and planes don't crash due to software bugs. Why is it wrong to expect perfection in critical infrastructure?
Something went wrong somewhere in software engineering. My HP-42S calculator has about six insignificant bugs that you need to go out of your way to trigger. Your new cellphone, on the other hand: when you turn it on, it downloads a gigabyte of updates! That's an outrage.
Why is software immune? Ask yourself about the cost of the Office of Personnel Management hack from two years ago, and before that the biometrics database from USCIS.
Yes she is. I read this as completely unrealistic expectations from the author. Struts is maintained by one person.
"Most software failures and data breaches aren’t inevitable; they are a result of neglect and underinvestment in product reliability and security."
The attack happened in late July. The bug was fixed/reported in early September. It was a zero day. That's not neglect.
I see nowhere in the op-ed piece where Tufekci mentions Struts or implies that she holds Struts responsible for this. She is clearly laying this at Equifax's feet, and their responsibility in their choice of software and the industry as a whole for actively pushing against better software practices and responsibility.
The section you quoted is followed by:
> Some number of unexpected errors — bugs — are unavoidable in computer programs. It would be unreasonable to allow a consumer to sue a software company every time a program suffered a glitch.
She's laying out a much more nuanced argument than you're giving her credit for. You're right in that this seems to be a zero-day, which is more difficult to defend against, but there are practices (among them, defense in depth, and pen tests) which can limit the attack surface. Also actively looking for known exploit types (rather than specific exploit instances). For example, buffer overflows are a known attack vector in C, so people harden their code against buffer overflows. Deserialization attacks are known in Java, so people harden their code against deserialization attacks. SQL injection attacks are a known exploit type, so people learn to parameterize their SQL queries.
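For instance, the SQL parameterization mentioned above can be demonstrated with Python's stdlib sqlite3 driver; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is spliced into the SQL text itself, so the
# payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    f"SELECT ssn FROM users WHERE name = '{user_input}'").fetchall()

# Hardened: the driver binds the input strictly as data.
safe = conn.execute(
    "SELECT ssn FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # the injected query leaks the stored SSN
print(safe)        # no row matches the literal payload string
```

The hardened version costs nothing extra to write, which is exactly why failing to do it reads as neglect rather than bad luck.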
It's clear that this is something you care about and are passionate about. For topics that affect me like this, I consciously take a breath and re-read what I've reacted to, to see if my second (or third) read matches up with my first.
If she wanted to lay it on Equifax, she might go into the fact that the Chief Information Security Officer at Equifax holds a master's in music.
The people that actually "do" are Chief Peon of Cube Farms, doing whatever the boss with a music degree tells them is priority.
>You're right in that this seems to be a zero-day, which is more difficult to defend against
There's no nuance. It is under-reporting the facts to make her hit piece look stronger. It's never the leadership's fault when there's a failure in the US, but they happily take credit when there is success.
>but there are practices
Which don't help at all against a zero-day in a dependency.
I interpret her differently:
> There are technical factors that explain why cybersecurity is so weak, but the underlying reason is political, and it’s pretty simple: Big corporations have poured large amounts of money into our political system, helping to create a regulatory environment in which consumers shoulder more and more of the risk, and companies less and less.
> This is a general feature of our lopsided world, but software businesses (and the technology sides of other companies) have acquired perhaps the greatest degree of impunity. Information technology arrived on the scene only recently, so it has faced fewer of the kinds of regulations that consumers and citizens, in more progressive eras, managed to impose on other industries.
To me, that reads as taking corporate interests and business motives to task, not software practices. Software development (like any other work) is a cost, and businesses need to balance those costs against revenues. I'd argue that who's chosen for C-level positions is a business decision, not a software-practice one. If the costs of failure in production due to bugs were higher, businesses would make different decisions in hiring and how much time was dedicated to security and bug fixing. Do you disagree? Testing and quality control are expensive. If we can roll out a feature (or just continue business) spending as little as possible on testing and QA, it can be an understandable decision (whether or not you agree) to do as little QA and testing as possible: you're not providing any new features (which might increase revenues); you're just increasing cost.
> It's never the leadership's fault when there's a failure in the US, but they happily take credit when there is success.
It's not clear to me which leadership you're referring to here. The government? The corporate leadership? Someone else? If the corporate leadership, I think that's entirely the point Tufekci is making.
TIL: No warranty == impunity.
Nobody MADE Equifax use Struts. The source is open to inspection. The bug existed there for 8 years. Let's see how many audits Equifax did on the source code with no warranty.
>If the costs of failure in production due to bugs were higher, businesses would make different decisions in hiring and how much time was dedicated to security and bug fixing. Do you disagree?
If the costs were higher, the one poor guy working on Struts would do a better job? No, I think that guy would probably not write the software. He'd find a different line of work. If he did write it, he would never release it for the world to use for free. Who would do that? "Here's this thing I worked on for over a decade. You can use it for free. Please sue me if you have any issues. Thanks."
I see you equating Struts and software practices with the businesses that use software. I see those as two separate things.
> Nobody MADE Equifax use Struts.
> The source is open to inspection.
> Let's see how many audits Equifax did on the source code with no warranty.
I'm not sure why you're including this. I think they should have done source code audits in accordance with how they weighed the costs/revenues. Do you disagree? I personally tend to lean towards more tests and code analysis, but I understand others weigh this differently.
> If the costs were higher, the one poor guy working on Struts would do a better job? No, I think that guy would probably not write the software.
I place the responsibility with the company using Struts in their product, not the Struts dev. I'm not sure how you're getting the impression that I (or Tufekci, for that matter) place this on the Struts dev. I'm responsible for the results of the applications I put into production, including the libraries I choose to use in those applications. I don't generally hold the devs who wrote those libraries responsible.
Like I said, I think we're talking past each other. I still think you're reading too much into (and doing too little close reading of) Tufekci, but I'm not sure how better to express what I'm trying to say. I've now read the piece through three times fully and I really don't see her making any of the points you're arguing against.
If you've got specific questions about what I've written, please ask. Otherwise, I'll sign off. Have a good evening!
Nobody cares where you, or I, place it. You don't write for the NYTimes. You don't have that sort of sphere of influence.
>with the company using Struts in their product, not the Struts dev.
It's very easy to explain to the public. "Those software hacker people did this to you. Look, here he is. He made the faulty software. Burn him at the stake."
Tufekci is with them, blaming the developer.
Developer licensure, here we come. Illegal to write open source software. Another one of those crazy Richard Stallman predictions that comes true while you guys sleepwalk into the dystopia.
The author was not blaming the Struts guy. She was blaming Equifax, 100%. She would blame the decision to use Struts and assume the unavoidable risk associated with such a decision, not the development of Struts itself.
Literally every single point made in the article is about Equifax dodging accountability for their choices, and Struts is never mentioned. What on Earth makes you assert with such total certainty that she's blaming the Struts developer?
Let me translate that:
Software users, like people who use compilers, should need licenses. Obviously, people need to compile more carefully. Software needs the equivalent of seatbelts, airbags, and other government mandated safety standards. Software cannot JUST ship to github with no warranty or guarantees of safety. These licenses which absolve the developer of responsibility cannot continue to be allowed. Those open source developers should not just produce software for free, but they need to accept responsibility for it. They need to pass government mandated, Apple App store style, approval for all software shipped. Including regulations for safety and compliance with other laws like copyright infringement and decency standards.
She's attacking the foundation of the software freedom movement.
I get why you'd be upset if she was attacking the things you say she is, but she is emphatically not doing that. Every single paragraph in the article is about how Equifax should be liable for their software, which includes liability for the decision to use types of open-source software.
"the underlying reason is political, and it’s pretty simple: Big corporations have poured large amounts of money into our political system, helping to create a regulatory environment in which consumers shoulder more and more of the risk, and companies less and less."
The author is suggesting a political solution. Regulation. Laws that say "Your open source license can't exempt you from a, b, c, d."
You could then exempt yourself from lawsuit in your open source license, but that will be automatically void, like a non-compete clause in a California employment contract. Struts would be sued for the breach in her imagined world.
To be fair, I would say a third of the infosec professionals I know have backgrounds in the arts (myself included). While the CISO is definitely suspect, having an arts degree doesn't make one a less effective infosec practitioner. In fact, the creative nature of those drawn to the arts has proven valuable in finding creative solutions to problems within our organization.