
When Security Takes a Backseat to Productivity - todsacerdoti
https://krebsonsecurity.com/2020/06/when-security-takes-a-backseat-to-productivity/
======
Veserv
I will echo a comment I made in a different thread about the CIA hack.

Why does anybody assume they would be able to protect their systems even if it
was their main priority?

The CIA works on cyber-weapons designed to surveil countries, disable
infrastructure, and destabilize governments. How capable and well-funded
should a person or country need to be to destroy an economy or destabilize a
government by stealing the CIA's weapons? $1B, $10B, $1T? A team of 1,000,
10,000, 1,000,000 specialists?

I think most people would probably agree that $1B is a lower bound for nation-
destroying capabilities. You could hire a team of 100s-1000s of offensive
security specialists full-time for 10 years with a budget of $1B. Does anyone
know of any system or organization in existence that would even be willing to
claim they can stop a team of 1,000 dedicated offensive security specialists
working full-time for 10 years with a $1B budget, let alone put it in writing
or have evidence to back up that claim? What is the highest you have heard? Is
it even in the general ballpark? I have personally never talked to an
organization willing to claim a number higher than $1M and willing to put
their money where their mouth is.

If nobody is even willing to claim that they can provide an actual defense,
let alone produce the extraordinary evidence required to back up an
extraordinary claim of $1B, why is there any reason to believe that the CIA
would be able to protect itself even if it prioritized the problem?

~~~
hnzix
Sure, any public facing system is theoretically vulnerable given infinite
monkeys. What I don't understand is why this data wasn't airgapped. I've
spoken with enough colleagues in defence to know that plenty of facilities do
this as standard, even for tooling source code, let alone a trove of juicy
intel data.

And FTA, "Segmenting one’s network": even my shoddy, underfunded local health
dept has implemented that one. The CIA didn't even do the basics here.

~~~
Veserv
That is missing the point. It is not that infinite monkeys could break in, it
is that the NECESSARY level of security cannot be met even assuming the best
known practical system. Therefore, they MUST NOT use/create such a system
since they cannot achieve the MINIMUM requirement. There is no point in
improving systems from ridiculously inadequate to very inadequate, since the
system still does not work, and you MUST NOT use systems that DO NOT WORK in
critical capacities.

Like, imagine a world where the Army made tanks out of tissue paper. You could
say, "Look at these clowns. Don't they know regular paper provides better
defense than tissue paper?" While true, it does not really matter, since if
the best armor available is paper, every strategy should probably avoid
depending on tanks.

My point is about looking at OBJECTIVE requirements and evaluating solutions
against them. At a basic level this boils down to two questions:

1. What is the NECESSARY level of security?

2. Can anybody achieve the NECESSARY level of security?

If the answer to 2 is no, then the system MUST NOT be used/created.

~~~
ForHackernews
I mean, you can still be a hard target or a soft target.

I guess this doesn't apply to unique assets like the CIA, but for your regular
old e-commerce firm, you can have defences that wouldn't stand up to a
sustained, targeted attack by state actors, but still make regular hackers go
steal somebody else's DB of customer details.

~~~
Veserv
Soft/Hard only considers one side of the equation, the level of security
provided. It ignores the other side which is what is needed or expected.
Without doing that, you cannot tell if you are dealing with soft/hard or
tissue/paper. A more meaningful distinction is profitable/unprofitable and, if
you really must rely on other people being tastier fish in the barrel, ROI.
For example, if company A costs $10K to hit for a return of $100K, but company
B costs $100K to hit for a return of $100M, the only reason someone would hit
A knowing this information is if they did not have enough capital to hit B.
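The arithmetic in the hypothetical above (Company A and B are invented for illustration) can be sketched as:

```python
# Hypothetical figures from the comment above: attack cost vs. expected
# return for two invented targets, and the resulting return on investment.
targets = {
    "A": {"cost": 10_000, "ret": 100_000},
    "B": {"cost": 100_000, "ret": 100_000_000},
}

for name, t in targets.items():
    roi = t["ret"] / t["cost"]
    print(f"Company {name}: ROI = {roi:.0f}x")
# Company A: ROI = 10x
# Company B: ROI = 1000x
```

So B is the far better target per dollar spent; an attacker picks A only if they cannot raise the $100K entry cost for B.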

I agree that not everybody needs to be able to withstand an attack by state
actors. It is up to the involved and affected parties to choose the level of
security needed. However, the highest level of actual security I have heard
from people is ~$1M and I would be hard pressed to find any appreciable system
in a moderately-sized business where the negative consequences would be as low
as ~$1M. Frankly, $1M is chump change in the commercial world. If that is all
it takes to compromise nearly any system or organization in the world, then a
sizable fraction of the people reading this comment and around 46,800,000
people worldwide have the personal resources to compromise any system in the
world. That is terrifying.

~~~
ForHackernews
Are you sure there are negative consequences of ~$1M? You're talking a lot
about profitable/unprofitable, but you haven't linked to any sources that back
up your numbers.

Maybe the marketplace just doesn't value security? Customers seem happy to
give away all their data to Google/Facebook for free. Equifax got completely
and thoroughly owned but it hasn't seemed to cost them anything. Zoom is a
security nightmare but keeps getting more popular.

Companies aren't going to value security until the lack of it starts to sting
their stock price.

------
mlthoughts2018
I think this is backwards.

If you want security to be a first class constraint, you must make security
features extremely easy to use and ergonomic above all else (even above being
secure!).

Nobody is going to willingly agree to abandon their productivity or
flexibility for your security tool or policy. If you make them choose,
security will lose 100% of the time, forever, in every walk of life.

You need to stop viewing it as if you need people to sacrifice for security
and instead design for ergonomics and usability as the obsessive, #1 priority.

This is why consumer password managers succeed (and help people to be more
secure!) but internal security teams can’t get anything done in private
companies.

Your first responsibility is to make something your users want and like to
use, period. After you solve that, then, without disrupting usability, you can
modify it to actually adhere to security constraints and achieve other
results.

If I see that a company has an internal security team my first question is,
where is the product manager?

If you don’t treat internal security tooling like you’re delivering a product,
then you’re done. Just go home and watch Netflix because you’re not solving
security problems.

~~~
tialaramex
> You must make security features extremely easy to use and ergonomic above
> all else (even above being secure!).

That's definitely not right.

The correct security design is that when things aren't secure you fail
entirely. This will sometimes be _very_ annoying but the temptation to prefer
not failing leads to disaster. Instead an organisation that prioritises
security must dedicate resources to resolving the actual security problem as a
priority _because_ it is very annoying.

For example, 'thisisunsafe' and its predecessor 'badidea' are indeed unsafe
and a bad idea. The correct design is to simply fail instead. Which
organisation do you think gets successfully phished with invalid HTTPS
certificates - the Chrome-embracing organisation that has taught people they
can just type "thisisunsafe", or the one where everybody uses Firefox and it
brick-walls when HSTS denies access?
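For context, HSTS is enforced via a response header defined in RFC 6797; a minimal sketch (the header value and the parsing below are illustrative, not a full RFC 6797 parser):

```python
# Illustrative HSTS response header and a minimal parse of its max-age
# directive; real deployments vary the value, this one is an example only.
header = "Strict-Transport-Security: max-age=31536000; includeSubDomains"

name, _, value = header.partition(": ")
directives = [d.strip() for d in value.split(";")]
max_age = next(int(d.split("=")[1]) for d in directives if d.startswith("max-age"))

# Once a browser has recorded this header, it must refuse plain-HTTP access
# to the host for max_age seconds; Firefox offers no user-visible override.
print(name, max_age)
```

The "brick wall" above is exactly this mechanism: with HSTS recorded, an invalid certificate produces a hard failure rather than a clickable-through warning.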

> Nobody is going to willingly agree to abandon their productivity or
> flexibility for your security tool or policy.

This is almost correct. Humans are very lazy. They _will_ give up their
productivity or flexibility for your security tool or policy if it's easier
than the alternative.

For example when your users are trying to give their credentials to bad guys,
you need to make this _so difficult_ they give up.

You might think you can train your users not to want to give their credentials
to bad guys, but this is unlikely to be successful enough to bother. Instead
get to a place where your users, even though they really want to help the bad
guys, just can't see an easy way to do it.

They may even file a helpdesk ticket because they genuinely don't realise what
they're trying to do would be a very bad idea. Try not to be smug when
responding to the ticket.

~~~
mlthoughts2018
> The correct security design is that when things aren't secure you fail
> entirely.

Yikes, this is extremely wrong. Security failures should be proportional to
the actual cost and consequences.

On top of this, you can’t just fail systems in a business. You’ll lose all
your customers and go bankrupt. On the other hand, you _can_ allow security
vulnerabilities to continue existing. Sometimes you might lose customers or
face legal consequences, so you might _have to_ address those security
situations, but they are in the rare minority of all security issues overall,
many of which you just need to apply expected value thinking towards and treat
like any other trade-off.

Security is a resource to be traded off against other concerns, not an
absolute necessity.

~~~
tialaramex
> Security failures should be proportional to the actual cost and
> consequences.

That's wonderful for Nostradamus, but everybody else is obliged to operate
without knowledge of the future. What will the _actual_ consequences be of bad
guys being able to send email from the VP Asia Pacific's account to the CFO's
office five minutes before close of business?

Maybe nothing, right? Or maybe an "urgent cash payment" of $48M to secure a
takeover deal vanishes into a maze of international accounts, never to be
recovered...

Security is a special problem because you have unknown sentient adversaries.
You completely lack intelligence about the adversary because you don't even
know who they are. Don't think about security decisions the way you'd think
about decisions like whether to hire a back-up venue in case the company
picnic is rained off.

------
l0b0
When does it not? Seriously, has anyone here worked somewhere security
_actually_ was front and center but people were also able to build new things?

~~~
strstr
My current job at Google working on virtualization?

~~~
tlarkworthy
Don't think it counts. That's a necessary condition for multi-tenancy cloud
tech. It's a product feature that customers require and ask for.

------
seven4
From the article, referencing the WikiLeaks task force: _"The CIA acknowledged
its security processes were so “woefully lax” that the agency probably would
never have known about the data theft had WikiLeaks not published the stolen
documents online."_

If Hollywood did one thing well, it was to inspire in me a misplaced faith in
the competency of security/government institutions.

~~~
rrmm
If you're in search of a cure, get a job in government. It's an education in
both brute force and ignorance.

------
Lind5
There are big shifts in the economics of security technology:
[https://semiengineering.com/fundamental-changes-in-economics...](https://semiengineering.com/fundamental-changes-in-economics-of-security/).
More and higher-value data, thinner chips, and a shifting customer base are
forcing long-overdue changes in semiconductor security.

------
nominated1
I’m led to believe that the CIA is run like Equifax but I can't shake the
feeling that this is all a smokescreen.

~~~
tra3
Good point. But Occam’s razor, and Hanlon’s as well. Given the evidence, I’m
going to go with Equifax.

------
bargle0
The most secure work is the work that doesn’t get done. That is to say, if it
doesn’t exist, it can’t be stolen. That’s where security above all inevitably
takes us, and the defensive guys get to pat themselves on the back and declare
victory.

~~~
unnouinceput
That's not security, that's obscurity. Don't confuse one with the other.

~~~
chaosite
No, "obscurity" does not mean "bad security", it specifically refers to the
practice of hiding the details of the mechanism hoping that it makes it more
secure.

Things like "Pet names and mother's maiden names are common security
questions, so maybe let's not store those in our employee info database" are
valid security considerations. And of course they make the employee database
slightly less useful.

------
C1sc0cat
The comment "No effective removable media controls" is shocking.

15+ years ago, QinetiQ used to solder up the USB ports on its laptops, and
that's for those working on avowed jobs, not the Secret Squirrel ones.

