There is a common objection that liability might stifle innovation. That risk can be mitigated, however, by storing less sensitive information on one's servers, or by ordinary insurance if such information is critical to the business.
The crucial distinction is between software providers and businesses that store customer data. If software providers were liable, the effect on the business world and society would be devastating: the only way to obtain software would be from giant corporations, and every coder would have to get licensed and buy insurance.
So let's not go there. But businesses that sell products or services to the public and store customer data should not be able to opt out of liability with fine print in their boilerplate terms. Instead, they should have to pay the full cost of repairing "identity theft" (so called; it is actually impersonation) for every customer whose data they lose control of. (Think of Target, TJX, etc. rather than Snapchat, i.e., cases where the attackers get personal details and perhaps credit card data.)
This would establish the right incentives, and it would not impair business, because with competent staff and best practices it is possible to achieve good-enough security for the purpose.
I feel the writer is a bit naive about the realities of "insecure" software in writing this article, beyond simply being unaware of why a CEO has to frame the discussion as abuser-centric.
Writing secure software is not easy. Imposing liability for failing to write secure code would open every software company in the world to the risk of lawsuits, and there would never be any real resolution. The best in the world still get security wrong, and on top of that you are architecting solutions on top of layers and layers of abstraction. An attack against a lower layer is usually how you end up getting hacked.
You might as well write a law that says everyone gets free unicorns.
It's just not that useful... you can get the same data by scraping Facebook or LinkedIn without even having to "hack" them. There are companies that expose as much through open APIs.
If we're talking about the content, then I feel it's a pointless argument. If we're saying the breach should have been avoided regardless of content, I agree, but the amount of media attention and the severity with which it's been reported are probably a bit overboard.
1. In general, negligence law works to prevent people and companies from imposing the costs of their actions on others, that is, externalizing those costs, while retaining the benefits.
For example, suppose that a pizza delivery guy drives too fast. Perhaps he wants to make more deliveries during his shift and thereby earn more money. Whatever the reason, suppose also that the pizza guy hits a parked car with no one in it (to simplify the example).
In that situation, the pizza guy (and his employer, but that's another discussion) likely will have to pay to have the other car fixed, and for a rental car for the other car's owner. The rationale is that the pizza guy and his employer shouldn't be able to retain the benefits from his fast driving while making others, i.e., the owner of the car he hit, bear the resulting costs.
(That, incidentally, is why drivers in many jurisdictions are required by law to carry liability insurance -- so that if a driver does negligently get into an accident, there will be a pre-established pool of money that can be tapped to pay for the resulting damage, even if the driver himself happens to be broke at the time.)
2. Negligence can be loosely paraphrased as a failure to use due care when there's a duty to do so. In any given case, it might be debatable whether a duty of care existed, and if so, whether the defendant complied with that duty. In assessing these questions, courts generally look at, for example:
+ the likelihood and magnitude of the potential loss from the conduct in question (i.e., the expected loss);
+ the incremental cost of additional measures to prevent the expected loss;
+ in the case of a business, whether that incremental cost can be amortized across the business's customer base by the business's buying insurance (or self-insuring) and then increasing its price accordingly;
+ which party is in a better position to take measures to prevent the expected loss, and/or to bear the loss if it comes to pass.
For the mathematically minded: A famous case studied by all U.S. law students is United States v. Carroll Towing Co., where the opinion was written by the legendary judge Learned Hand (yes, that was his name). Judge Hand put it in algebraic terms:
"Since there are occasions when every vessel will break from her moorings, and since, if she does, she becomes a menace to those about her; the owner’s duty, as in other similar situations, to provide against resulting injuries is a function of three variables:
"(1) The probability that she will break away;
"(2) the gravity of the resulting injury, if she does;
"(3) the burden of adequate precautions.
"Possibly it serves to bring this notion into relief to state it in algebraic terms: if the probability be called P; the injury, L; and the burden, B; liability depends upon whether B is less than L multiplied by P: i.e., whether B < PL."
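Hand's inequality is simple enough to express directly in code. A minimal sketch of the B < PL test (the function name and the example numbers are my own, purely illustrative, not drawn from the case):

```python
def negligent_under_hand(burden, probability, loss):
    """Learned Hand formula: liability attaches when the burden of
    adequate precautions (B) is less than the expected loss (P * L)."""
    return burden < probability * loss

# Hypothetical numbers: a $500 precaution against a 10% chance
# of $20,000 in damage. Expected loss PL = $2,000, so skipping
# the precaution would be negligent.
print(negligent_under_hand(500, 0.10, 20_000))   # True:  B (500)  < PL (2000)
print(negligent_under_hand(3000, 0.10, 20_000))  # False: B (3000) > PL (2000)
```

Of course, in a real trial nobody plugs dollar figures into a formula; the point of the algebra is just to show which way each variable pushes the duty of care.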
3. So how does this apply to criminal hacker break-ins? Suppose that: (A) a company fails to use "due care," whatever that means, by way of security precautions; and (B) as a result, third parties are damaged in ways that "reasonable people" would have foreseen. In that situation, it's not hard to imagine that the company might well be held liable for such damage.
As a practical matter, in a negligence trial, the plaintiff's lawyers will often think up additional precautions that the defendant supposedly could have taken without undue cost or burden. The defendant's lawyers are then in the position of having to convince the judge or jury that the cost or burden would indeed have been "undue." That can be an uphill battle, especially when the plaintiff is a sympathetic sort and the damage is something that judges and jurors can identify with.
The fiduciary admitted they do not suspect the victim drained the account on their own, and admitted there was no laxity on the victim's part that they are aware of (no claim that the password had too little entropy, for example). We suspect only a few accounts were hit, and that all were drained due to an information security breach of some kind (the victim has ties to sources on Wall Street who indicated they've heard about the breach).
The fiduciary proactively notified the victim of unusual account activity. That was the ONLY positive action to date (14 months and counting as of this writing). The fiduciary then proceeded to:
* Lock the victim out of their own online account. Even read-only access to review historical transactions was denied.
* Mark the account "under internal security review" and refuse any further discussion pending the results of that review.
* Make no effort to communicate with the victim (my guess is they are expecting to lawyer up and go to litigation).
* Refuse, at the local branch office, to print historical transactions (the victim had switched to paperless long ago).
List of agencies contacted and who have refused to assist due to lack of jurisdiction:
* local police
* state banking commission
* state financial regulatory commission
I am waiting for the victim to give up all hope and give me permission to go, politely, through my attorney, and if rebuffed, to take this public to any financial news reporter who wants the detailed information (account numbers, what little paper trail exists, etc.), while giving the fiduciary a fixed amount of time to resolve the matter. I might also involve the IRS after consulting a tax attorney; some basic research suggests there are interesting tax laws that might be relevant.
To add some genderism to the fire: if a woman chooses to wear clothes that some would consider "provocative," is it her fault if she gets raped, because she was inviting undue male attention? Is she simply "asking for it"? I don't think so. I think it is a human right to freely express oneself as one chooses. If I want to leave the window of my car down, that's up to me, the owner of the car. If someone steals my car, that's their fault.
As far as software security is concerned, I think there are definitely reasonable and unreasonable steps that can be taken in its development. But I don't expect the developers of the software I use to be on the edge of their seats, watching the internet for every single security exploit that pops up so they can instantly apply a patch. I do expect them to take reasonable steps to reasonably secure the software, and when something becomes obvious, to deal with it. When I entrust the storage of my personal details to a website, I have certain expectations about how that information is handled, but I don't expect it to be 100% impervious to attack.
I also think some responsibility lies with the user to choose software they trust (trust being a spectrum, not a binary distinction). For example, I wouldn't expect much security from some kind of seedy porn software, and would actually expect it to actively compromise my system. If I chose to install such software, I'd be partly liable for installing something that is obviously insecure. But I have different expectations of my banking software. I think those expectations, which are relative to particular industries and markets, and which are fuzzy and ill-defined, are partly a user's responsibility (but not wholly... I still expect my bank to reimburse me if someone breaks into their database and steals my money).
My point is that liability in software security is, to me, a shifting grey area, and that the boundaries are drawn differently for different kinds of software in ways that should be at least partially apparent (and avoidable) to the end user.
I'm not trying to defend Snapchat or say that you should leave your car unlocked, just that these kinds of issues are not black and white, nor is one party 100% wrong and another party 100% right.
I'm disturbed by the car == Snapchat analogy. They didn't get the security wrong, either. Whatever people can do to abuse the system, at least the system exists in the first place. I fear that if you try to regulate software in terms of right and wrong, or in terms of open car windows, you will end up with goofy standards that stifle innovation. I'm sure someone could name one of those standards.
The software is just doing what it does. You could call this "attack" a feature for power users.