The military has a better model of security: it protects things at different security levels.
I've spent time in the classified world. Security can be obtained, but costs are high. At the aerospace company where I worked, we estimated for bid purposes that running a project at SECRET doubled the cost. Running at levels above that became even more expensive, and much slower. You have to partition things so that only the really critical stuff gets the most expensive protection. It's common to have a project that is mostly unclassified, where many things are SECRET and a very few things are at higher levels.
The military views security as time-limited. When and where the attack will start is highly classified until the attack is underway. After that, there is no secret. New weapons systems eventually get used or cancelled, after which they're less secret. The intelligence community wants to protect info forever, though.
The credit card services get this. The CVV is required to have a higher level of protection than credit card numbers or names and addresses. Banks understand separation of functions and mutual mistrust. Most computer security work doesn't think this way.
> Most computer security work doesn't think this way.
The computer industry is determined to relearn the well-known lessons the hard way.
For example, we still build systems where a single password breach brings down everything. It's like building a battleship that would sink if the hull got a 2-inch hole.
The way to make systems secure is not to devise a perfect password system. The way is to make the system secure in spite of password breaches. It's a completely different mindset.
To return to the battleship analogy, it isn't about preventing holes in the hull. It's about surviving holes in the hull.
This is exactly how Qubes OS approaches it with security through compartmentalization: You run everything in VMs, and any compromised VM will not lead to a compromise of the whole system.
In my time as a computer security specialist, this has been the most difficult thing to get other computer people to understand.
Especially when it comes to cryptography. No cryptographic system can, or is intended to, keep something secret indefinitely. The whole point is to keep the secret for long enough that by the time it's cracked, the information itself is out of date and therefore disclosure is no longer harmful (or, at least, is much less harmful).
All security, computer or otherwise, is like a fire door. No fire door will hold back a fire forever. They are all rated in terms of the number of hours they'll be effective in a fire. Security must be thought of in similar terms.
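The "rated in hours" framing can even be put in rough numbers. Here's a toy sketch of security as a duration rather than a yes/no property (every figure below is an illustrative assumption, not a real-world benchmark):

```python
# Toy model: a key is "secure enough" if the expected time to brute-force it
# exceeds the lifetime of the secret it protects. Numbers are illustrative.

def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case years needed to try every key in a keyspace of 2**key_bits."""
    seconds = 2 ** key_bits / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Hypothetical attacker sustaining 10^18 guesses per second.
rate = 1e18

for bits in (56, 80, 128):
    print(f"{bits}-bit keyspace: ~{years_to_exhaust(bits, rate):.1e} years to exhaust")
```

Under these assumptions a 56-bit keyspace falls in a fraction of a second, 80 bits in weeks, and 128 bits outlasts any plausible secret. The point is that the rating, like a fire door's, is a time bound, and you match it to how long the secret actually needs to live.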
But I understand. People want perfect security and wishful thinking is seductive. People also want security "for free" -- meaning something they can install or configure and then not have to worry about it anymore. But that's impossible, in large part because actual security doesn't have a technological solution. Security is a process that depends on proper human behavior. Technology can and does act as a "force multiplier" here, but a security approach that is 100% technological is a security approach that will be subverted sooner or later.
Interesting that your experience was a doubling of costs with each step up in security level. My experience was that it was an order of magnitude (10x) at each level.
E.g., say OFFICIAL is $1,000,000
PRIVATE becomes $10,000,000
SECRET becomes $100,000,000
What we did experience, however, was that the time to deliver a service doubled at each level: OFFICIAL was 3 years, PRIVATE was 6 years, and so on. I expect it varies significantly between types of system; ours was a SaaS platform that read telemetry from embedded systems on assets in the field, so I'd assume that's one of the drivers behind my experience differing from yours.
I agree with the solution, but what do you mean by "better" in the first sentence? Spelling out the complete approach wasn't suitable for the article, but the ISO 27001 framework spells out roughly what you describe. At each "security classification level" you have a different risk appetite, and will apply different controls.
It's the partitioning that's the hard part in commercial organizations.
In military and intel, different people, offices, and equipment handle different security levels and compartments. This has an enormous influence on the structure of the organization. It's the main cause of higher costs.
All this is about security, not integrity. Security is keeping info from getting out. Integrity is about keeping attackers from breaking things. Integrity used to be mostly a wartime problem, but now there's so much anonymity online that it's a problem all the time, a bigger one than security. Hence ransomware.
Here's a true story about one instantiation of "security is our top priority". You be the judge on whether this is an extreme outlier or pretty much the norm.
I was at one company where the PM blurted out (paraphrasing, but very close) "Enough of security talk. We will put out some nice wording on our website stating that security is our top priority and our product is perfectly secure. And that's that. We will not spend a single penny beyond that on making it so-called secure."
So, that's often what companies mean when they say "security is our top priority".
(The referred PM has since had a long career at a fruit-shaped FAANG, presumably making products secure. I hope they have grown up a bit.)
Safety is not a “luxury enabled by profit”. It’s one of many necessary dimensions for efficiently creating value over time.
If you are a sole trader and you fall off a ladder and hurt yourself because you took some stupid shortcut, the cost is extremely high.
If you work for someone and they make you do something stupid that hurts you, the cost to them has traditionally been relatively low, so they are motivated to push you because that’s better for them. Fortunately, in most modern countries this is no longer the case.
Safety only becomes a “luxury” when capital becomes concentrated into the hands of people for whom other people are disposable.
I bought a chainsaw recently for $10. I then bought about $100 of kevlar armor to wear while I use the saw.
No money => no safety
Take a look at history as far back as it will go. Safety expenses only happen when the money is there.
Next time you see someone using a chainsaw, take a look at their protective equipment. If they work for a company, they're likely wearing:
1. ear protection
2. face shield
3. helmet
4. kevlar gloves
5. kevlar chaps
6. kevlar boots
If they're some rando with a saw, odds are pretty good they have none of that on. They won't spend their own money on their own safety. It's only deemed essential if someone else is paying for it.
This is just rubbish. Safety arises from multiple dimensions - Kevlar-threaded armour is a single, last line of defence that helps only when everything else has failed.
Actual safety comes from training, practice, and operations: understanding how the tool works, what unexpected dangers exist, which techniques to avoid, the state of the immediate environment, prioritising safety (e.g. by not creating time pressure), and the design of the tool itself (e.g. chain guards and anti-whip mechanisms).
By the time your Kevlar armour comes into play you are well outside the safety zone and deep into the shit your pants zone.
You're right that the kevlar is the last line of defense. Nevertheless, one would be a fool not to wear it, as machinery fails and people make mistakes. There are photos on the intertoobs of shredded kevlar and no injury to the "thank god I put on my armor!" sawyer.
Increased productivity => increased profits => shareholders see number goes up => number must go more up year on year => everything (including safety) falls to the wayside in the pursuit of more profit
There is no such thing as "safe" and "unsafe". There's a continuum between them. At some point, making things more safe becomes so costly it is unworkable.
Absolutely true. This is a thing I talk about quite a bit. However, it doesn't speak to the idea that "security is a luxury", it speaks to the idea that security, like all things, is subject to a cost/benefit analysis.
Frustrating but recognizable. Having a framework in place allows you to quantify the risk and effort, and makes it less of an opinionated shouting match and more of an analysis. The challenge will then be to get everyone to take the time to understand the framework.
> In my ideal world, companies would say "We maintain a state of the art security system [...] it is one of the most important things we work on and we spend a large amount of our effort on it."
Tbh as a consumer I'd rather a company not just give self-appraisals of security in the form of overt marketing lines but let pen tests, post-mortem analyses and such speak to their robustness/lessons they've learned. Marketing is virtually always that security is top notch regardless of realities so it's hard not to be skeptical of ordinary spiels.
It's like third-party VPN services. It's all well and good to market security but when they get breached/raided/etc and it turns out they don't hold up to the claims then it just increases one's cynicism. At least those like Mullvad from everything I've seen match their statements (no affiliation, nor do I even use it, just useful for this example).
Ed Catmull of Pixar once noted in a talk, referring to clichéd lines like "Story is the most important thing" despite various productions' mediocre output in that regard, that once an important idea is encapsulated in a concise statement, the statement can be repeated without any danger of it actually changing behavior. The same could be said for a lot of marketing.
> but let pen tests, post-mortem analyses and such speak to their robustness/lessons they've learned.
It depends on which audience you want to reach when you want to advertise how secure your product/service is. Most people outside IT security have no idea what a penetration test is nor can they make sense of jargon-heavy post-mortems.
They can put such things in a company blog for those who are looking for it, it doesn't have to be on the front page. Google, Cloudflare, the previously mentioned Mullvad, etc take this approach (and they often wind up submitted to the relevant audiences). This strengthens their brand trust in terms of security among key groups.
Instead of Chesterton, I think Thomas Aquinas made the same point more simply: "If the highest aim of a captain were to preserve his ship, he would keep it in port forever."
The only thing I really disagree with in the article is that security needs are opposed to a good user experience.
It is certainly true that security needs can sometimes get in the way of better UX although there is plenty of security that users never encounter. It is also true that the UX of many security designs is awful. However, it is not true to say that security requirements are opposed to good UX.
You can absolutely create good UX with good security, but it will require more effort. Improving your UX doesn't normally hurt security. In fact, having better UX can help security. Conversely, improving your security does not inevitably hurt your UX (although it certainly can if you don't give it sufficient consideration).
I think github is an example of this. 9 out of 10 integrations just ask for blanket permissions. Effectively: "give us permission to edit and make commits to all of your repos, and to do every other possible thing on github, and in return we'll set everything up so our service works for you immediately."
Vs.: here are the 10 to 20 steps you can follow to give us the minimum permissions to use our service with a single repo. Go manually make repo-specific tokens with specific permissions (10 steps), then go paste those tokens into our service (4-5 steps), now go add these actions to your repo (4-5 steps) and set these settings in that repo (5-15 steps).
So in this case, security is at odds with UX.
IMO, this is github's fault. Instead of providing an API and UX that would let users pick specific repos and choose specific permissions per repo, they did the more obvious and technically simpler thing. But, unfortunately, the more obvious, easier thing means the path of least resistance for integrations is to ask for all permissions.
I would argue you could paint that as github has not made security their top priority. If they cared about security then the path of least resistance would lead to the most secure integrations instead of what they have now which leads to the least secure integrations.
That's a great example of where the design space gets harder and requires more effort to do well, but not that security is inevitably at odds with UX. Just that it gets harder to design well.
I think you're right about GitHub taking the most obvious route and prioritising UX over security.
Managing permissions and getting to least privilege is a huge problem in security. If you rely on users to do it, they will pretty much always assign maximum perms. It's too much hassle to figure it all out, and maybe have something break later.
Just spitballing, but maybe an alternative would be for GitHub to require integrations to declare, in a manifest, the least-privilege permissions they need to operate. Users would just approve an integration for whatever repos they like, and GitHub would set things up so it operated in that least-privilege configuration for those repos. The user gets better security and a better UX.
Maybe that wouldn't work for some reason, there are probably loads of things wrong with it. But my point is that further thought about the goals of the user (which includes having a secure system) could lead to a much better UX and better security.
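As a sketch of what a least-privilege manifest might look like (everything here is invented for illustration: the scope names, the manifest format, and the grant function are hypothetical, not GitHub's actual API):

```python
# Hypothetical scheme: an integration declares the minimum permissions it
# needs up front, and the platform grants exactly those per approved repo.

# Invented scope vocabulary for this sketch.
ALLOWED_SCOPES = {"contents:read", "contents:write", "checks:write", "issues:read"}

manifest = {
    "integration": "example-ci-bot",
    # Least privilege, declared once by the integration author.
    "permissions": ["contents:read", "checks:write"],
}

def grant(manifest: dict, approved_repos: list[str]) -> dict:
    """Validate the declared scopes and return a per-repo grant."""
    requested = set(manifest["permissions"])
    unknown = requested - ALLOWED_SCOPES
    if unknown:
        raise ValueError(f"unknown scopes: {sorted(unknown)}")
    # The user only picks repos; the scope set is fixed by the manifest.
    return {repo: sorted(requested) for repo in approved_repos}

grants = grant(manifest, ["alice/website"])
```

The design point is that the user's approval step only chooses repos; the scopes come from the manifest, so the path of least resistance is also the least-privileged one.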
It is in the interest of the integration to ask for maximum level of privilege, since it future proofs their app, and there is no cost to them (security breaches are externalities they don't pay for).
My employer uses Microsoft as a single sign-on and the poor UX is a huge risk. People just get sick of endless shoddy login prompts and authenticator app warnings. It trains users to enter credentials without enough thought. Something like passkeys could be so much better. Security needs to consider unintended consequences of user behaviour much more thoroughly.
I don't know about impact on UX, but I do think it's true that security and convenience are opposed to each other. Increased security comes at a cost of decreased convenience.
I can't prove this is a fact, but I do know that I have yet to see an exception to it.
This is obviously true in a trivial sense. You don't want people stealing from your house, so you lock the door and have to carry keys, and get locked out if you forget them.
The need for security creates a set of additional requirements that have to be fulfilled. If you didn't have to fulfil them, things would indeed be more convenient.
This is also true of requirements in general, not specific to security. If you don't need them, things can be simpler and more convenient.
If security is top priority, to me it starts from deciding what kind of data actually needs to be stored. Then the service is designed from that perspective.
The more sensitive the data I need to store, the more I need to think about security (and the more expensive it gets).
If it's obvious that the service doesn't hold much worth hacking for, that's already a better starting point than knowing it's going to be a magnet for attackers.
While this is correct, it's also not a particularly useful point.
A company can't just do profit. Profits aren't a thing that a company can choose (typically), they aren't an input, they're an output. You can't choose your outputs, you can only choose your inputs and aim them towards specific outputs.
How do you achieve profits? Well you do things that your customers want, and one of those things might be security. For some companies, the input of investment in security will result in the output of profits. For others it won't, or the two will be less correlated. If security does convert well into profits then it follows that it should be a priority input for the company, because that's how they will achieve profits.
This is still a huge generalisation, but perhaps less of one than "profit is the top priority".
There's a big problem of information asymmetry, though. "Security" is a very noisy metric, and not a quality that's easily evaluated by 99.9% of your customers. Outside of specialty domains like national defense (and even then...) customers don't generally care about "secure", they care about "not obviously insecure", and almost all products will appear to be "secure" enough right up until the moment that they're visibly or publicly compromised. If you either stay under the radar or get lucky long enough, you can turn profits for quite a while before the chickens come home to roost.
I feel skeptical, except about abuse of the worst nature, and don't care enough to comment coherently. A future lives around me. Yet what Dan means by a better model distantly brings to mind the four conveyor belts from:
Author here. I agree with what I think you mean, but stating it like that is too simplistic in my view. Money is a great way to measure effort and results, and it's the top priority in the sense that, at the end of the line, without profit there is no company. But without people, products, a mission, and not being hacked too many times, there is also no company.
Wanted to say I appreciated the GK Chesterton quote, I think it set up the article well as a balancing act between all things.
As the saying goes, we could have perfect security if we lived in the Stone Age.
I always finish that sentence in my head with something like "we care about your privacy so much that we want to collect all your personally identifiable data and sell it".
Big tech companies generally care about security because most of the time, it aligns with their self-preservation. They don't want their secret sauce leaked, they don't want internal mails and payroll posted on the internet, they don't want their users hurt. It's not "top priority", but security probably shouldn't be your top priority; if it is, there is only one winning move: don't build anything. That said, security often ranks high, especially if the company had a couple of close calls before.
In contrast, preserving privacy is often in direct conflict with business goals. You want to preserve it to some limited extent to maintain user trust and avoid offending the regulators, but there is a constant business pressure to get real close to that line.
Big tech companies care about profit. That's fine, they're companies. If security - or a pretense thereof - is needed to increase or safeguard profits, only at that point do they start 'caring' about security.
With privacy it's even worse. Think Apple cares about privacy? It's just - currently - aligned with their product strategy.
> It sounds nice, but does this in practice mean that whenever anyone has an idea to improve security, at the expense of UX, consumer prices, etc, you still implement it? Because that is what it sounds like.
Does it sound like this? If a restaurant says that "our food is our top priority", I don't think anyone would think that means they're not going to lease a premises, buy chairs/tables, hire waiters etc.
To me, the very fact that you say something is your "_top_ priority" implies that you have other priorities.
It feels like the author is responding instead to the premise "security is our only priority".
These companies do not sell security, they sell products that need to be secure. The restaurant sells food which needs to be safe.
A more apt analogy would be a restaurant saying "food safety is our top priority". To me that sounds like a restaurant that wastes a lot of plastic and throws out a lot of food that might spoil soon.
(I don't fully agree with the message though. It's not just the user of the process that is responsible for their safety. It's mostly the designers of the process.)
> The needs of security are opposed to the needs of a convenient user experience. Improving one typically hurts the other.
I must be Dr Contrary tonight but this strikes me as bullshit.
SSH is more convenient than telnet. Passkeys are more convenient than passwords. TouchID and FaceID are more convenient than passwords.
In general, security is an afterthought that is inconvenient to developers to add back. But in the digital world I haven’t seen many examples of security being less convenient than the alternative.
(I am writing this from an airport and definitely do not assert that this applies to the built environment.)
The title was edited here by mods (I liked the original, but "BS" is a maybe a bit clickbaity?). Could it be changed to: "Security is our top priority" is meaningless. ?
When Microsoft pivoted to security, they seemed to really mean it. They stopped shipping new features for something like 6 months, and spent the time hardening the OS. The results weren't perfect, but they made Windows not the total joke it had been.