When Security Takes a Backseat to Productivity (krebsonsecurity.com)
59 points by todsacerdoti 15 days ago | 38 comments



I will echo a comment I made in a different thread about the CIA hack.

Why does anybody assume they would be able to protect their systems even if it was their main priority?

The CIA works on cyber-weapons designed to surveil countries, disable infrastructure, and destabilize governments. How capable and well-funded should a person or country need to be to destroy an economy or destabilize a government by stealing the CIA's weapons? $1B, $10B, $1T? A team of 1,000, 10,000, 1,000,000 specialists?

I think most people would probably agree that $1B is a lower bound for nation-destroying capabilities. You could hire a team of 100s-1000s of offensive security specialists full-time for 10 years with a budget of $1B. Does anyone know of any system or organization in existence that would even be willing to claim they can stop a team of 1000 dedicated offensive security specialists working full-time for 10 years with a $1B budget, let alone put it in writing or have evidence to back up that claim? What is the highest you have heard? Is it even in the general ballpark? I have personally never talked to an organization willing to claim a number higher than $1M and willing to put their money where their mouth is.
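
As a rough sanity check on that budget figure (the fully-loaded cost per specialist below is just an assumption):

    # back-of-envelope: what a $1B budget buys over a decade
    budget = 1_000_000_000
    years = 10
    cost_per_specialist_year = 250_000  # assumed fully-loaded annual cost
    team_size = budget / (years * cost_per_specialist_year)
    print(f"~{team_size:.0f} specialists sustained for {years} years")  # ~400, i.e. "100s-1000s"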

If nobody is even willing to claim that they can provide an actual defense, let alone produce the extraordinary evidence required to back up an extraordinary claim of $1B, why is there any reason to believe that the CIA would be able to protect themselves even if they prioritized the problem?


Sure, any public facing system is theoretically vulnerable given infinite monkeys. What I don't understand is why this data wasn't airgapped. I've spoken with enough colleagues in defence to know that plenty of facilities do this as standard, even for tooling source code, let alone a trove of juicy intel data.

And FTA "Segmenting one’s network", even my shoddy underfunded local health dept has implemented that one. The CIA didn't even do the basics here.


That is missing the point. It is not that infinite monkeys could break in, it is that the NECESSARY level of security cannot be met even assuming the best known practical system. Therefore, they MUST NOT use/create such a system, since they cannot achieve the MINIMUM requirement. There is no point in improving systems from ridiculously inadequate to very inadequate, since the system still does not work, and you MUST NOT use systems that DO NOT WORK in critical capacities.

Like, imagine a world where the Army made tanks out of tissue paper. You could say, "Look at these clowns. Don't they know regular paper provides better defense than tissue paper?" While true, it does not really matter, since if the best armor available is paper, every strategy should probably avoid depending on tanks.

My point is about looking at OBJECTIVE requirements and evaluating solutions against them. At a basic level this boils down to two questions:

1. What is the NECESSARY level of security?

2. Can anybody achieve the NECESSARY level of security?

If the answer to 2 is no, then the system MUST NOT be used/created.


I mean, you can still be a hard target or a soft target.

I guess this doesn't apply to unique assets like the CIA, but for your regular old e-commerce firm, you can have defences that wouldn't stand up to a sustained, targeted attack by state actors, but still make regular hackers go steal somebody else's DB of customer details.


Soft/Hard only considers one side of the equation, the level of security provided. It ignores the other side, which is what is needed or expected. Without doing that you cannot tell if you are dealing with soft/hard or tissue/paper. A more meaningful distinction is profitable/unprofitable and, if you really must rely on other people being tastier fish in the barrel, ROI. For example, if company A costs $10K to hit for a return of $100K, but company B costs $100K to hit for a return of $100M, the only reason someone would hit A knowing this information is if they did not have enough capital to hit B.
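
To put the arithmetic of that example in one place (the numbers are just the ones above):

    # attacker's-eye ROI comparison using the figures above
    targets = {
        "A": {"cost": 10_000, "payoff": 100_000},
        "B": {"cost": 100_000, "payoff": 100_000_000},
    }
    for name, t in targets.items():
        print(f"{name}: {t['payoff'] / t['cost']:.0f}x return on ${t['cost']:,} spent")
    # A: 10x, B: 1000x -- a funded attacker prefers B despite the higher entry cost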

I agree that not everybody needs to be able to withstand an attack by state actors. It is up to the involved and affected parties to choose the level of security needed. However, the highest level of actual security I have heard from people is ~$1M and I would be hard pressed to find any appreciable system in a moderately-sized business where the negative consequences would be as low as ~$1M. Frankly, $1M is chump change in the commercial world. If that is all it takes to compromise nearly any system or organization in the world, then a sizable fraction of the people reading this comment and around 46,800,000 people worldwide have the personal resources to compromise any system in the world. That is terrifying.


Are you sure there are negative consequences of ~$1M? You're talking a lot about profitable/unprofitable, but you haven't linked to any sources that back up your numbers.

Maybe the marketplace just doesn't value security? Customers seem happy to give away all their data to Google/Facebook for free. Equifax got completely and thoroughly owned but it hasn't seemed to cost them anything. Zoom is a security nightmare but keeps getting more popular.

Companies aren't going to value security until the lack of it starts to sting their stock price.


It wasn't public facing, it was taken from an internal network it seems. https://www.businessinsider.com/cia-vault-7-leak-woefully-la...


I think this is backwards.

If you want security to be a first class constraint, you must make security features extremely easy to use and ergonomic above all else (even above being secure!).

Nobody is going to willingly agree to abandon their productivity or flexibility for your security tool or policy. If you make them choose, security will lose 100% of the time, forever, in every walk of life.

You need to stop viewing it as if you need people to sacrifice for security and instead design for ergonomics and usability as the obsessive, #1 priority.

This is why consumer password managers succeed (and help people to be more secure!) but internal security teams can’t get anything done in private companies.

Your first responsibility is to make something your users want and like to use, period. After you solve that, then, without disrupting usability, you can modify it to actually adhere to security constraints and achieve other results.

If I see that a company has an internal security team my first question is, where is the product manager?

If you don’t treat internal security tooling like you’re delivering a product, then you’re done. Just go home and watch Netflix because you’re not solving security problems.


I’ll pile onto this- your security process also has to be user friendly. No security tooling in the world is going to protect you if it’s so hard to get a new product out the door that business users actively start subverting the system.

I’ve had US government customers take 3 months to review a CR to connect two systems. I’ve been tossed nonsensical documentation and told to fill it out without any help, a process that can take literally weeks of person-time. And I’ve seen security teams cripple critical business functionality for, at best, marginal security gains.

In environments where things like the above are happening, users and even whole business organizations start to rebel: they begin operating outside of the IT systems (text-message-driven shadow business processes are common), and IT broadly, and security specifically, begin to be seen as the enemy.

How to fix this? Well, one: I think security teams really need to focus on making themselves aids to everyone else rather than traffic cops. One model I like is to embed security team members in any development process or acquisition effort as team members, with the expectation that these v-team members actually deliver work, including advocating for the team within the security process and creating any security artifacts. That’s expensive, so there will have to be some cross-billing set up or something, but it’s worth it IMO.

Regardless, security that reduces an organization’s ability to do its mission runs the risk of getting avoided altogether.


> You must make security features extremely easy to use and ergonomic above all else (even above being secure!).

That's definitely not right.

The correct security design is that when things aren't secure, you fail entirely. This will sometimes be very annoying, but the temptation to prefer not failing leads to disaster. Instead, an organisation that prioritises security must dedicate resources to resolving the actual security problem as a priority, precisely because the failure is very annoying.

For example 'thisisunsafe' and its predecessor 'badidea' are indeed, unsafe and a bad idea. The correct design is to simply fail instead. Which organisation do you think gets successfully phished with invalid HTTPS certificates - the Chrome embracing organisation that has taught people they can just type "thisisunsafe" or the one where everybody uses Firefox and it brick walls when HSTS denies access?
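
For what it's worth, getting that brick-wall behaviour is mostly a one-header exercise; a minimal sketch (Flask chosen purely for illustration, and the max-age value is just an example):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts(response):
        # once a browser has seen this header over HTTPS, certificate errors
        # become hard failures with no click-through bypass
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response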

> Nobody is going to willingly agree to abandon their productivity or flexibility for your security tool or policy.

This is almost correct. Humans are very lazy. They will give up their productivity or flexibility for your security tool or policy if it's easier than the alternative.

For example when your users are trying to give their credentials to bad guys, you need to make this so difficult they give up.

You might think you can train your users not to want to give their credentials to bad guys, but this is unlikely to be successful enough to bother. Instead get to a place where your users, even though they really want to help the bad guys, just can't see an easy way to do it.

They may even file a helpdesk ticket because they genuinely don't realise what they're trying to do would be a very bad idea. Try not to be smug when responding to the ticket.


When things that people need to do fail entirely, people will develop hacky, under the radar alternatives.


> The correct security design is that when things aren't secure you fail entirely.

Yikes, this is extremely wrong. Security failures should be proportional to the actual cost and consequences.

On top of this, you can’t just fail systems in a business. You’ll lose all your customers and go bankrupt. On the other hand, you _can_ allow security vulnerabilities to continue existing. Sometimes you might lose customers or face legal consequences, so you might _have to_ address those security situations, but they are in the rare minority of all security issues overall, many of which you just need to apply expected value thinking towards and treat like any other trade-off.

Security is a resource to be traded off against other concerns, not an absolute necessity.
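
To make "expected value thinking" concrete, the trade-off is just this kind of back-of-envelope comparison (every number below is hypothetical):

    # hypothetical expected-value comparison for a single vulnerability
    p_exploit_per_year = 0.05      # assumed annual chance it gets exploited
    cost_if_exploited = 2_000_000  # assumed cleanup + legal + churn cost
    cost_to_fix_now = 50_000       # assumed engineering cost to close it
    expected_annual_loss = p_exploit_per_year * cost_if_exploited
    print(f"expected annual loss ${expected_annual_loss:,.0f} vs fix cost ${cost_to_fix_now:,}")
    # here the expected loss ($100,000/yr) exceeds the fix cost, so fixing wins;
    # flip the numbers and the same arithmetic says to accept the risk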


> Security failures should be proportional to the actual cost and consequences.

That's wonderful for Nostradamus, but everybody else is obliged to operate without knowledge of the future. What will the actual consequences be of bad guys being able to send email from the VP Asia Pacific's account to the CFO's office five minutes before close of business?

Maybe nothing right? Or maybe an "urgent cash payment" of $48M to secure a take over deal vanishes into a maze of international accounts never to be recovered...

Security is a special problem because you have unknown sentient adversaries. You completely lack intelligence about the adversary because you don't even know who they are. Don't think about security decisions the way you'd think about decisions like whether to hire a back-up venue in case the company picnic is rained off.


You are right in that security is a trade-off, but a security vulnerability should be considered almost as a failed system from a business perspective imho, because sooner or later it will turn into one. If you architect your systems to fail when not secure, you should detect the problem (and fix it) even before going live in production. It’s the fail fast, fail early strategy.


1000%. Modern software security is an exercise in optimizing developer UX.


I think this is an important point.

The first priority needs to be to make it easy and obvious to do the right thing.

Trying to make rules to forbid doing something insecure isn't helpful at all. If I need to do something and the only way I know how is the unsafe way, there's a big risk I'll pick that way. Unfortunately, it seems that security work is often just about trying to stop people from doing things, not helping them to do it the right way.


This reminds me of the way ssh became so popular so quickly in the mid-1990s: it was simultaneously more secure and more pleasant to use than unencrypted telnet or rlogin. I've tried to take the general lesson when writing my own systems: if security is making anything less convenient, I'm not doing it well enough.


When does it not? Seriously, has anyone here worked somewhere security actually was front and center but people were also able to build new things?


Kind of. Normally network access is fairly restricted, as are administration rights on machines, but developers are quickly exempted from this for the sake of productive work. I remember sins like modifying SSL checks for several package managers:

    bool isValidTLSCert() { return true; }  // suuure

Today it is mostly a configuration issue. Of course you will never be praised for the security of a solution. If something bad happens, though, security quickly becomes the most important thing.
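
For illustration, the modern equivalent of that sin is usually a one-line call or config flag (the hostname here is hypothetical):

    import requests

    # the classic configuration-level sin: silencing certificate validation
    # instead of installing the internal CA or fixing the broken cert
    resp = requests.get("https://pkg.internal.example/simple/", verify=False)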


My current job at Google working on virtualization?


Don't think it counts. That's a necessary condition for multi-tenancy cloud tech. It's a product feature that customers require and ask for.


Can't be that secure if you go around telling people that's what you do.


The health technology space is kind of like this; HIPAA concerns etc. makes security a business continuity imperative.


Fair point. On the other hand, do these ever go beyond the basic checklists which provide the absolute minimum to certify the software? I doubt it.


Seems like more of a pessimist's take. I know the company I work at [1], at least, has an ingrained culture of security that permeates the org and dictates software development practices and tech stack. It probably helps to be a startup, where smaller missteps would be a bigger deal than for an established player.

[1] https://www.commure.com/


And the state of EMR software pretty much supports the point that it is, indeed, nearly impossible to build new things under those constraints....


Well, EMRs are generally part of older, established presences which have inertia. But companies like Apple and Alto are innovating in the medical space, and they're still governed by HIPAA.


- I know one company in SV that has done that for regulatory reasons, at huge, on-going financial expense. It's not normal.

- if you have engineers with previous implementation experience on HITRUST or similar, it can be done. But it's hard to retrofit if everything that's installed is stale.

- otherwise you end up with "paper certifications" without the engineering to support it. That works for a while, but eventually becomes a problem if an on-site audit is scheduled.

https://en.wikipedia.org/wiki/HITRUST


From the article, referencing the Wikileaks task force: "The CIA acknowledged its security processes were so “woefully lax” that the agency probably would never have known about the data theft had Wikileaks not published the stolen documents online."

If Hollywood did one thing well, it was to inspire in me a misplaced faith in the competency of security/government institutions.


If you're in search of a cure, get a job in government. It's an education in both brute force and ignorance.


The CIA has been active in Hollywood since its creation ensuring it gets this type of treatment.


There are big shifts in the economics of security technology: https://semiengineering.com/fundamental-changes-in-economics.... More and higher-value data, thinner chips, and a shifting customer base are forcing long-overdue changes in semiconductor security.


I’m led to believe that the CIA is run like Equifax but I can't shake the feeling that this is all a smokescreen.


Good point. But Occam’s razor, and Hanlon’s as well. Given the evidence, I’m going to go with Equifax.


The most secure work is the work that doesn’t get done. That is to say, if it doesn’t exist, it can’t be stolen. That’s where security above all inevitably takes us, and the defensive guys get to pat themselves on the back and declare victory.


That's not security, that's obscurity. Don't confuse one with the other.


No, "obscurity" does not mean "bad security", it specifically refers to the practice of hiding the details of the mechanism hoping that it makes it more secure.

Things like "Pet names and mother's maiden names are common security questions, so maybe let's not store those in our employee info database" are valid security considerations. And of course they make the employee database slightly less useful.


The comment "No effective removable media controls" is shocking.

15+ years ago QinetiQ used to solder up the USB ports on its laptops, and that was for those working on avowed jobs, not the Secret Squirrel ones.




