
So sue me: are lawyers really the key to computer security? - shawndumas
http://arstechnica.com/tech-policy/news/2011/07/will-your-employer-get-sued-for-your-security-screw-ups.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss
======
pittsburgh
I hold an unpopular opinion that lawyers and lawsuits are a great way to
motivate companies to "do the right thing", where in this case the "right
thing" we're talking about is protecting customer data.

Another great motivator for doing the right thing is knowing that customers
will vote with their wallets. Unfortunately this isn't always a strong enough
motivation, because some markets don't have enough competition, or the cost and
hassle of changing the companies you do business with are too high. (Don't you
wish customers had left AT&T in droves over the NSA spying ordeal?)

That's where another force comes into play, which is government regulation. I
lean libertarian, and although I think some regulation is an absolute
necessity (especially on environmental issues) my preference is to have the
least amount of regulation necessary. That brings us back to the attorneys.
When a company like Sony screws up and exposes their customers' data, I'd
rather see them get their pants sued off than have the government step in and
regulate. Fear of being sued is a much more compelling reason to "do the right
thing" than fear of breaking a law, which might only get you a slap on the
wrist.

Do frivolous lawsuits exist? Yes, and they piss me off as much as they do the
next person. Do scumball attorneys exist? Yes, and I hate them as much as you
do. Ironically, I
think some of _this_ problem could be solved with new laws, but I haven't
really thought about it enough to be more specific. (Maybe something along the
lines of the loser having to pay the other side's legal fees, but I can
also argue against that from ten angles. I really haven't spent enough time
thinking about how to minimize frivolous lawsuits to feel like I can say
anything intelligent about it, other than to say that I bet something can be
done.)

Anyway, my point is that companies have different forces that can/should/do
motivate them to provide data security, and the threat of lawsuit is an
excellent one, right up there with fear of losing customers and fear of
government regulation. Too much of any one of these forces is bad, but we
wouldn't have a healthy mix without attorneys and their lawsuits.

~~~
fleitz
It's going to work for about a year; then, when there is a security failure,
the company will turn around, run git/svn blame, and sue the individual
employee. Hopefully the laws would be written so that when you post security
best practices in your TOS and the customer does not follow them, the
liability can be mitigated (e.g., don't reuse passwords on multiple sites).

Re: AT&T, where are they going to go? T-Mobile?

~~~
pittsburgh
_It's going to work for about a year; then, when there is a security failure,
the company will turn around, run git/svn blame, and sue the individual
employee._

You bring up an interesting topic which is an extension of the first one. If
the forces motivating a company to do the right thing are 1) desire to gain
and not lose customers 2) desire to not be penalized by the government and 3)
desire to not be sued, then what motivates an employee to also do the right
thing? (I'm generalizing the question because this applies to so many things,
but I'll switch back to talking about "building secure software" as a specific
example of "doing the right thing".)

A software developer should be motivated to write secure code by the
following: 1) Desire to build or maintain a good reputation among peers 2)
Desire to not get fired 3) Desire to protect employer from harm 4) Desire to
protect customers from harm 5) Desire to just do things the right way for the
sake of preferring good things over bad. (There's probably a more elegant way
to phrase that last one, but it's like how an architect might fight against
proposed changes to a blueprint for the sake of the building itself.)

To err is human, and companies are composed of humans. When a company hires a
software developer, they are inherently taking on the risk that this human
will make mistakes, so I don't think developers should be legally liable for
bugs or vulnerabilities in their code unless they are incredibly egregious or
intentional. It's the company's responsibility to anticipate the possibility
of bugs and vulnerabilities, and to mitigate that risk by hiring good people,
and by having good policies, procedures and training. (By having code reviews
and conducting security audits, for example.)

I'm sure we're in agreement that developers shouldn't be sued for mistakes in
their code, but whether or not they _can_ be sued for honest mistakes is
another question. I don't know what the law has to say about that, but if
employees aren't already protected against lawsuits for non-egregious mistakes
then that should be changed.

 _Hopefully the laws would be written so that when you post security best
practices in your TOS and the customer does not follow them, the liability can
be mitigated (e.g., don't reuse passwords on multiple sites)_

I totally agree. Customers have to bear some of the responsibility as well.

 _Re: AT&T, where are they going to go? T-Mobile?_

My point exactly! Going back to the three motivators I mentioned for companies to
do the right thing, AT&T knew it wouldn't lose a significant number of
customers over the NSA spying issue because there's not much competition in
their space. (That and apathy, unfortunately.) Also, they knew they wouldn't
get penalized by the government for, well, forking data over to the
government. That leaves suing AT&T as the only viable option... except that
power was taken away when the FISA Amendments Act retroactively granted AT&T
immunity. (
<http://en.wikipedia.org/wiki/Hepting_v._AT%26T> ) This is what makes the
AT&T/NSA issue so upsetting. All motivations for AT&T and other telecoms to
"do the right thing" have been taken off the table.

------
ScottBurson
_Still, Halderman warned that too much litigation could cause companies to
become excessively security-conscious. Software developers always face a
trade-off between security and other priorities like cost and time to market.
Forcing companies to devote too much effort to security can be as harmful as
devoting too little._

While I suppose there is always some risk of obscure, exotic vulnerabilities
that take substantial creativity to find, the breaches that have been making
the news lately have not been of this kind; they've all involved "kindergarten
security" as Bruce Schneier put it. Securing applications against these kinds
of exploits is not difficult!

~~~
tptacek
Yes it is. Most devastating bugs are actually trivial. English or metric
units? The security problem isn't how hard or simple any one bug is; it's how
to eradicate them across entire immense codebases while still shipping at the
pace of the market.

~~~
tetrad
The vulnerabilities I'm aware of in the Sony breaches were all SQL-injection
based. There are readily available tools that perform automated tests, bombing
a website with various SQL injection techniques, which I imagine is how the
attackers found them.

It is negligent to run a website that contains the personal information of
thousands of people and not run a tool like this or do similar analysis to
_identify_ these problems. Fixing them may be another matter (although for SQL
injection it should be a matter of sanitizing all of your input and
parameterizing all of your queries), but they have no excuse for not knowing
the problems are there in the first place.
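
To make "parameterizing all of your queries" concrete, here's a minimal sketch
in Python using the standard-library sqlite3 module (the table and the input
are made up for illustration; other database drivers work the same way with
their own placeholder syntax):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical users table, purely for illustration.
    cur.execute("CREATE TABLE users (name TEXT, email TEXT)")

    user_input = "alice'; DROP TABLE users; --"

    # Vulnerable pattern: string formatting splices attacker-controlled
    # input into the SQL statement itself:
    #   cur.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # Parameterized query: the driver passes the value separately from
    # the statement, so the input can never alter the query's structure.
    cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())  # [] -- the DROP TABLE never executes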

~~~
tptacek
The idea that every team (in-house and outsourced) in Sony that owns an
application has a security resource, or that the central resource in Sony
knows about every application, does not square with the reality of _most_ of
the companies I've gotten to know.

This is the same problem I mentioned upthread (trivial bugs sneaking into huge
codebases), just generalized out one level.

The original comment I responded to asserted that "securing applications
against these kinds of exploits is not difficult". Again: yes it is. I know
companies who spend huge amounts of money trying to defend against simple
attacks, and they are not 100% successful. It isn't just "not not difficult";
it isn't just "difficult"; it's one of _the hardest_ problems in IT.

------
dangrossman
The class action suit against Dropbox sounds frivolous... the claimant says
she wasn't even aware of the possible security lapse until days later, when
she read about it in the news. That's on top of not being notified by
Dropbox, which means her account wasn't accessed during the problem window.
What possible damages could she be claiming?

~~~
tptacek
1. That the unfair competitive practices Dropbox engaged in when they told
people untruths about their security caused her and people like her to select
suboptimal storage solutions, which is a claim that arises from a California
unfair competition law.

2. That some people in the class had their privacy invaded, the precise
number of whom might be found during discovery.

3. That the negligence involved in opening this hole in Dropbox imposed
damages on customers, for instance by requiring them to take time off to move
files off Dropbox.

4. That Dropbox breached its warranty and owes its customers a refund.

Happy to help.

------
floppydisk
Rather than drag the lawyers into this (just more paperwork), why don't we
look at it from the perspective of the tools we are using? For SQL injection,
what stops database vendors from building an input scrubber that sits between
the database and user input and scrubs that input to block SQL injection? Or,
for that matter, why don't we see languages and frameworks used for web
development touting the fact that they include robust security that's easy to
use?
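
As a toy sketch of what that kind of scrubber might look like (everything here
is hypothetical, and a pattern blacklist like this is easy to bypass, which is
part of why parameterized queries are the more robust fix):

    import re

    # Hypothetical blacklist of common SQL metacharacters and keywords.
    # Illustrative only: real attackers routinely evade filters like this.
    SUSPICIOUS = re.compile(r"--|[;'\"]|\bUNION\b|\bDROP\b", re.IGNORECASE)

    def scrub(value: str) -> str:
        """Reject input before it ever reaches the database layer."""
        if SUSPICIOUS.search(value):
            raise ValueError("input rejected by scrubber: %r" % (value,))
        return value

    scrub("alice")                      # passes through unchanged
    scrub("alice'; DROP TABLE users")   # raises ValueError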

Part of it is culture: we actually have to care about security. And part of it
is ease of use: if we build tools that make it easier to create a (more) secure
environment and enable them by default, we can at least make improving security
easier.

~~~
tptacek
Nothing stops them from doing that. But they don't. Now what? See, we're back
at the premise of the article.

~~~
floppydisk
Partly because there hasn't been user demand for it to date. Such a change
will require the programming community to start clamoring for tools that make
things secure. Right now we want speed, easy-to-use syntax, things like that.

~~~
ZoFreX
You'd be hard pushed to find a framework or language that didn't have, and
recommend, ways to access a database that guarantee injection can't happen. If
people choose not to use them, there's not a whole lot you can do.

That's just a small part of the puzzle, though, and not every security issue
would or could be fixed by solutions of that nature. There is no fix, other
than developers being informed, capable, and diligent.

------
robtoo
I have always believed that it will ultimately be the insurers (through
liability insurance conditions) who enforce server security, rather than
courts and lawyers. I don't see this as a bad thing.

~~~
tptacek
You should. If insurance is going to solve this, wait 15 years and we'll all
need certifications to commit code.

~~~
HeyLaughingBoy
Something is definitely going to change. If it's not insurance-driven, then it
will be legislated for some types of code.

In most states you need a license just to cut hair. Software can easily cause
more damage than a hairdresser.

------
Dylan16807
The lede is rather disappointing on this. These lawsuits are about _services_
being breached, not about being the author of _code_ that gets breached.

------
GaryOlson
Current computer security for companies is analogous to medieval castles:
large, crude systems with large support requirements and little concern for
the security of individual small contributors. Only once computer
infrastructure moves beyond these crude, large-scale centralized forms and
provides effective baseline security for every small contributor will the key
be available.

My home is my castle; my community infrastructure supports that
implementation. Therefore the community does not require a castle. When
personal computing equipment is equally robust, large computing systems will
not be as necessary, and neither will the legal machinery around them.

Laws and lawyers at the individual level are the key to computer security.

