

On Hacking (or Why We Need Security Ratings) - blackhole
http://blackhole0173.blogspot.com/2011/06/on-hackers.html

======
m0nastic
So what you're talking about is covered by both regulatory compliance (like
PCI, or HIPAA), and third-party services (like ICSA and those "hacker proof"
logos).

The first are an imperfect implementation of a gameable standard which some
argue is better than not having anything at all (and others argue makes things
worse).

The second are at best of limited usefulness (as the half-life of any type of
"hacker proof certificate" is incredibly short), or at worst snake oil.

While far from a solved problem, the current best practice methodology of
implementing an SDLC program, increasing developer awareness, and both
internal testing of applications as well as independent third-party security
review before letting an application go live really does solve 99% of these
issues.

Dramatically increasing the overall security posture of the internet at large
doesn't require a "cure for cancer"-type breakthrough. It just requires
adopting a programmatic approach to application development and deployment that
is presently not the standard.

[edit]: Saw that the submitter was the actual author of the blog, so changed
the "he's" to "you're"

~~~
blackhole
But there's no incentive to implement these fixes. Unless the market can
figure out the difference between good security and bad security, people will
just assume all security stinks and there will be no improvement. This is what
has been happening.

~~~
m0nastic
There are incentives; they are just not at equilibrium with the costs.

Prior to compliance legislation (like PCI, GLBA, HIPAA, SarBox, etc.) there
wasn't much downside to having insecure systems; or more accurately, the
downside only hit you after something bad had happened, so for most
organizations it didn't make sense to worry about it until that point.

In today's business climate, the costs have increased somewhat: Potentially
losing the ability to process card transactions, having to do breach
notifications and pay for credit reports, being fined by the SEC, being made
fun of on messageboards, etc.

They're still mostly only applicable after something bad happens though. It's
like trying to convince people to back up their hard drives. There's a
percentage of them who fundamentally accept that it's a good idea, and are
proactive about it. There's probably a larger percentage who only consider it
after they've had a drive failure and lost a bunch of data. I don't think
there's anything you could say to them to make them understand that though.

But the "good" news (and I use that term loosely) is that the costs of "non-
compliance" are going up, and I don't just mean from a regulatory perspective.
Sony will spend a lot of money dealing with the implications of the PSN hacks.
Citibank will as well.

Executives of other companies who weren't directly part of a breach will sit
around in conference rooms and ask "So, what happened to them couldn't happen
to us, right?" and some CSO will try to stifle a laugh and say "Oh no, we'd be
screwed if the same thing happened to us."

~~~
blackhole
So why, then, do you think that incentives that have nothing to do with the
average consumer are sufficient? The average consumer is still helpless here.
They simply cannot know if a company has good security or not, especially if
it's new - like a startup. The end result is that most of them assume that all
security is bad. We have seen this before, repeatedly, and it's even showing up
in the comments of HN: <http://news.ycombinator.com/item?id=2671441>

People keep asking if good security is even practical. Security experts _know_
it is, but nobody else does. If the incentives you describe were actually
working, we would be properly assigning blame to bad security, instead of
everyone simply asking if good security is even possible.

~~~
m0nastic
I'm sorry if my post led you to believe that I think the incentives are
presently sufficient. They most certainly are not.

My point was that they _exist_, and believe it or not, as bad as they seem
now, they are better than they were when I first started working in InfoSec
ten years ago.

I'm not a "visionary" type person, but the only incentive that I can think of
that would actually have practical implications for the consumer would be the
ability to hold a provider responsible for bad security decisions (i.e.: your
site was hacked, my information was compromised, so now I'm suing you).

I also don't think this will ever happen (on a large enough scale to be
effective), for a variety of reasons.

I know this sounds stupidly obvious, but companies on the whole will only take
security seriously when the cost from being insecure is greater than the cost
it will take them to secure things.
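That trade-off can be sketched as a simple expected-value comparison. A minimal sketch in Python; the function name and all dollar figures are invented for illustration, not taken from the thread:

```python
# Hypothetical sketch of the comparison described above: a company will only
# "take security seriously" when the expected cost of a breach exceeds the
# cost of securing things. All probabilities and figures are made up.

def should_invest(breach_probability: float,
                  breach_cost: float,
                  mitigation_cost: float) -> bool:
    """Return True when the expected breach loss exceeds the cost to secure."""
    expected_loss = breach_probability * breach_cost
    return expected_loss > mitigation_cost

# Example: a 5% annual breach chance with a $10M cleanup bill implies a
# $500k expected loss, so a $300k security program is worth it, but an
# $800k one is not (by this crude accounting).
print(should_invest(0.05, 10_000_000, 300_000))  # True
print(should_invest(0.05, 10_000_000, 800_000))  # False
```

The point of the thread is that the left-hand side of this inequality (probability and cost of a breach) has historically been underestimated or only felt after the fact, which is why the incentives exist but do not bind.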

Maybe insurance will be the thing to finally tip the scale. Companies are
already buying "hacker insurance", and I imagine if there's any industry with
an incentive to enforce best practices, it would be the insurance industry.

~~~
blackhole
I am convinced that something will have to happen, and I agree that the
insurance deal sounds promising. Companies will be forced to have good
security if they want good insurance.

