"migrating to programming languages that eliminate widespread
vulnerabilities."
"Some
examples of modern memory safe languages include C#, Rust, Ruby, Java, Go, and
Swift."
"Too often, backwards-
compatible legacy features are included, and often enabled, in products despite
causing risks to product security. Prioritize security over backwards compatibility,
empowering security teams to remove insecure features even if it means causing
breaking changes."
"While customer input is
important, the authoring agencies have observed important cases where customers have
been unwilling or unable to adopt improved standards, often network protocols. It is important
for the manufacturers to create meaningful incentives for customers to stay current and not
allow them to remain vulnerable indefinitely."
---
The fundamental challenge is that by the time a "secure default" has been universally agreed on, and implemented widely in a space, the target moves again. Meanwhile each vendor decides what is "most secure" based on what they have been able to implement. Businesses are left with the integration challenge, and maintenance burden, of operating equipment that changes underneath their feet with each upgrade/update in the name of "being more secure."
Government agencies could reduce the integration and adoption window by providing implementations of the "secure defaults" that are ready to use in the recommended programming languages. To do this they would need to incentivize and recruit personnel capable of doing the work, and adopt methods and practices that could produce such modules in a timely manner. Do governments want to distribute implementations that are usable by any actor? Can they produce them in a timely manner? Would industry trust an implementation produced by governments?
When "legacy features" are being asked for it is most likely because they have been shown to work, and integrate, well across the business. A new product may be perfectly secure, but is it usable? The last quote alludes to this, customers need to run the business to generate the revenue to afford the security.
> The fundamental challenge is that by the time a "secure default" has been universally agreed on, and implemented widely in a space, the target moves again.
That is certainly true, but there is also such a thing as "definitely insecure default", which can (and I believe should) be discouraged piecewise.
The problem with Java and PHP is not that they are insecure, it's that there is barely any barrier to entry and thus much of the existing code is very low quality.
The problem with Java and PHP is mostly bad stdlib design, although in completely different ways (PHP also has some weird footguns around equality, but those are avoidable).
PHP's stdlib is of the "stickball" variety: they just started adding to it with no style guide, which has resulted in very inconsistent naming, poor rules, and lots of security issues that for ages they had to fix with workarounds, since people had coded against the security issues as if they were the correct implementation.
Java suffers from an overly interfaced stdlib; it pretty much only contains standards on how to do something rather than actual implementations. The result is that the default values in the stdlib are ancient/unsafe and are never amended, so as not to break compatibility with older Java APIs.
Basically both have too much low-quality code in their stdlib, which propagates out into bad library design (since the quality of the stdlib sets the tone for the rest of the ecosystem; see how JavaScript's absurdly barebones stdlib led to it accruing thousands of microdependencies).
PHP has a lot more of C++'s "All the defaults are wrong" disease than Java does, which can matter for security because if there's a security default, in PHP it's probably wrong unless you fixed it.
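To make the "defaults are wrong" point concrete with a well-known Java case: the stdlib XML parsers still process DOCTYPEs and external entities by default for compatibility, so avoiding XXE means opting into a secure configuration explicitly. A minimal sketch (the feature URIs are the standard Xerces ones recommended in the OWASP XXE guidance):

```java
import javax.xml.parsers.DocumentBuilderFactory;

public class SafeXml {
    // DocumentBuilderFactory's defaults date back decades: DOCTYPEs and
    // external entities are processed unless explicitly disabled, which
    // is what makes XXE attacks possible against naive parsing code.
    public static DocumentBuilderFactory hardenedFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Disallow DOCTYPE declarations entirely (the usual XXE vector).
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also turn off external general/parameter entities.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```

A secure-by-default stdlib would make the hardened factory the one you get from newInstance() and make the permissive one the opt-in.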
MFA -> this seems like a no-brainer to have out of the box (the core of it is small; see the sketch after this list).
SSO -> if we're talking SAML and "mega corp looking to keep everybody synced to their Active Directory" then it seems kind of reasonable. In part I think the pain of employee onboarding/offboarding is just much greater at larger corps.
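On the MFA point: the core of TOTP (RFC 6238) really is small enough to ship out of the box. An illustrative Java sketch using only the standard library; a real product should use a vetted library, constant-time comparison, and rate limiting:

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp {
    // Compute a 6-digit TOTP code per RFC 6238: HMAC-SHA1 over the number
    // of 30-second intervals since the Unix epoch, then RFC 4226's
    // "dynamic truncation" down to a decimal code.
    public static String code(byte[] secret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array(); // big-endian
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f;          // last nibble picks the offset
        int binary = ((hash[offset] & 0x7f) << 24)          // strip the sign bit
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(); // RFC 6238 test key
        System.out.println(code(secret, System.currentTimeMillis() / 1000));
    }
}
```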
I've commonly seen companies create secure-by-default programs around deploying their infrastructure (terraform/cloudformation templating). Maybe some companies do this with packages and vulnerability management. While it's commendable to try, until an organization is pretty large very little effort will be put into secure-by-default design.
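For what it's worth, the tooling behind such programs often reduces to policy checks run over parsed templates before deployment. A hypothetical Java sketch (the rule shape and attribute names here are invented for illustration):

```java
import java.util.List;
import java.util.Map;

public class IngressPolicy {
    // Hypothetical pre-deploy check of the kind a secure-by-default
    // infrastructure program wraps around terraform/cloudformation output.
    // A security-group rule is modeled as a parsed map of its attributes.
    static boolean violates(Map<String, Object> rule) {
        boolean openToWorld = "0.0.0.0/0".equals(rule.get("cidr"));
        int port = (int) rule.get("port");
        boolean adminPort = port == 22 || port == 3389; // SSH, RDP
        return openToWorld && adminPort;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rules = List.of(
            Map.of("cidr", "0.0.0.0/0", "port", 22),   // flagged: SSH open to the world
            Map.of("cidr", "10.0.0.0/8", "port", 22)); // internal-only: fine
        rules.forEach(r -> System.out.println(r + " -> " + (violates(r) ? "BLOCK" : "ok")));
    }
}
```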
Hopefully one day we can get away from systems that are vulnerable by design, such as Windows, Linux, iOS, AWS, etc., where security was not considered during the design process and which are thus structurally doomed to perpetual vulnerability, as decades of history have shown that security cannot be bolted on. Only a greenfield redesign incorporating security that is designed, verified, and demonstrated to thwart the best efforts of nation-state attackers such as the NSA is adequate for the future. Even now we see commercial systems regularly targeted by groups identified as nation-state or state-sponsored attackers, so anything less than stopping nation-state attackers is inadequate to protect against expected and proven threats.
Luckily, we have historically had systems and certification processes adequate for the task, such as the Orange Book Level A1, which SCOMP and GEMSOS [1] were successfully validated against, and the Common Criteria SKPP [2], which required an NSA penetration test with no identified deficiencies and which INTEGRITY-178B [3] was successfully validated against. The practical evaluations were, of course, in addition to the formal specifications and formal proofs of correctness needed to demonstrate a secure design as part of the validation process.
Only by applying the principles used there, which have demonstrated success against nation-state attackers, to greenfield redesigns of the entire computing stack is meaningful security against proven threats possible. Everything else is lipstick on a pig, attempting to bolt on security, which has never once succeeded in providing meaningful security against the threats we face today despite decades of effort, billions of dollars, and endless failed validation attempts by large software companies via these same certification processes. In fact, so many, such as Microsoft and Apple, have failed that the Common Criteria standard [4] itself points out (to paraphrase a bit) that it is likely economically infeasible to retrofit an existing product line to protect against attackers with a "Moderate attack potential", as nobody has ever succeeded.
Security was absolutely considered during the design process of Windows and iOS. I'm less familiar with AWS and Linux history, but it's hard for me to imagine AWS wasn't designed with security in mind, given that a lack of security would mean people freeloading off of Amazon's servers, and security was hardly an unknown subject at the time of AWS's design.
One of the main selling points of Windows NT when it was released was per-object ACLs. Windows NT was designed to be sold to the government and military; Dave Cutler wasn't oblivious to the fact that the OS would need to have security in order to get lucrative government contracts. So I'm not really sure how "just consider security during the design process" is going to save us, given all the wildly insecure systems we have where security was considered during the design process.
I also don't see Microsoft releasing "Securedows" and replacing Windows; I see any such potential greenfield secure system as being a specialised product with limited performance/functionality for high-security applications, complementing low-security systems.
Also, as passé as the concept is, you really can get a fair degree of security by just disconnecting servers from the internet, because few hackers are willing to travel halfway across the world and sneak into a building to infiltrate a network. We're probably never going to be able to run nuclear missile silos out of AWS even if they redesign it from the ground up.
Microsoft has continuously attempted, and failed, over multiple decades to get Windows NT of any flavor or configuration certified against anything more stringent than EAL4+, which is not even certified to protect against attackers with a "Moderate" attack potential. An older version of the standard characterized EAL4 as being limited to protecting against "casual and inadvertent threats". That does not, by any measure, qualify as a vote of confidence in the security of the product. Under the old Orange Book standard, EAL4 would approximately qualify as level C2, as Windows systems contemporaneously certified against both the Orange Book and the Common Criteria were certified at EAL4 and C2 simultaneously. A post-mortem on the Orange Book evaluation criteria [1] clarified that C2 was designed to let commercial systems with no security get used to the idea of doing a certification so that they could in the future design a proper system with actual security [2]. To add insult to injury, Microsoft has only ever achieved these levels with stripped-down, hardened versions, so it could only reach such vaunted heights as "protection against attackers with an Enhanced-Basic attack potential" after security configuration.
Also note that EAL4 is the level pointed out in the Common Criteria as the highest level to which an existing commercial product can be retrofitted. The standard explicitly points out that achieving a higher level requires a greenfield redesign with security designed in from the start (hint hint: they added that because Windows kept failing in its attempts to get its EAL4-quality system certified at EAL5).
Maybe you are correct that security was considered during the design process of Windows, but the security designed in is nothing short of a complete and utter joke. No security professional of any competence at designing systems that can actually protect against skilled adversaries would stand behind its security pedigree. I mean, for god's sake, even they would not stand behind their own security pedigree; find me one person on the Windows development team who would dare declare that Windows can survive an NSA penetration test. It is laughable just thinking about it.
As for their ability to sell to the government: the total incompetence of the entire commercial IT world is so complete that, under pressure to let COTS vendors make billion-dollar deals with the government, the procurement standards were lowered to allow EAL4 systems in high-security contexts (as long as they were never connected to the internet, thank god for small wins). It has regressed even further in recent years: the standards were lowered again to allow commercial firewalls and antivirus systems, which are generally only certified to EAL2, under the Swiss-cheese theory that if you just stack enough systems with egregious holes together then the whole will be secure. We can see how that is working out. It is now to the point where they just allow anything that is "certified" at any level.
That every commercial IT system is at best EAL4 quality is part of the reason why everything is so horribly vulnerable these days. The world runs on systems certified to absolutely not protect against "moderate" threats. To rise above that we must use systems that are designed for real security from the start as we have already seen from history that none of the existing systems can be retrofitted to do so.
You sure put a lot of stock in these bureaucratic certifications. I agree modern OS’s are woefully insecure, but focusing on the certifications the way you do is putting the cart before the horse. Certifications are just some other org’s opinion. Hard to take them seriously when they haven’t built anything so great themselves. Following their list of rules isn’t going to change anything.
I do, in fact, put stock in quality certifications. Most certifications are not worth the toilet paper they are written on; anything that gives a system as insecure as Windows top scores is obviously useless. In contrast, the best that Windows has achieved under the Common Criteria amounts to verifying that it is easily hacked by minimally skilled attackers, which has been proven out in practice. That is a good sign of a security certification with standards, though by no means an exhaustive one.
You will find that all the certifications you are aware of, which have led you to the idea that all certifications are worthless, fall into this bad category, since the commercial IT world is hopelessly incompetent with respect to security and its certifications are the corrupt supporting the corrupt. The standards and prescriptions they suggest are worthless, as you state, because they have not built anything great, have never seen a great system, and have no intention of creating acceptance criteria that their own incompetently designed systems would fail. Frankly, the number of security standards of any value is basically zero.
This does not mean that certifications created by entities that have never built the system they are trying to certify are useless. It is actually quite easy to create excellent acceptance criteria as long as you stray far enough from prescriptive standards and accept based on outcomes instead of mechanisms. For instance, I am not a mechanical engineer, but I can set the acceptance criteria for a bridge as supporting 3x the maximum weight of fully loaded vehicles laid across its length. I do not need to know how that is achieved, just that you must achieve it before I accept the bridge. Assuming such a standard is desirable (I am not a mechanical engineer, so I do not know how bridge load specifications are actually done or what they must achieve), I have created a fairly good standard with an acceptance test despite having no knowledge of how the bridge must be built. In fact, I can do this even if such a bridge cannot be built with existing technology. In that case we have deemed that an adequate bridge with the desired safety properties is just plain impossible and should not be built.
In the case of the Common Criteria SKPP, one of the acceptance tests was that the NSA red team must fail to find any deficiencies. Yes, the literal standard requires that the NSA be unable to hack it, and it is verified by the NSA having a team attempt to hack it with full access to the source code, specifications, and proofs of correctness. Now, for anybody whose gut instinct is that they would just pretend to fail (which is totally the right gut instinct to have): the NSA did the verification work for the only certified system, INTEGRITY-178B, at the behest of the DoD, to verify that the core OS of the F-22 and F-35, the top-line fighter jets of the US military, cannot be hacked and either disabled or turned against the US by enemy countries. This is also the same system used in the flight and weapons control of the B-1 and B-2 intercontinental nuclear bombers and in various other NSA and DoD systems. So pretending to fail would just be shooting themselves, the US Air Force, and part of the US nuclear arsenal in the foot.
So yeah, go find me one of these uncertified systems that has achieved a "major advance in security" that the NSA cannot hack, and then we can start talking about whether there might be some good lessons to be learned. Until then we should probably look to the systems that can actually stop the prevailing threats in practice, something the commercial IT sector thinks is literally impossible (that being protecting against nation-state attackers such as the NSA).
The entire point of an OS is to handle low level unsafe stuff so the user doesn't have to. Writing in a lower level language doesn't just make sense, it's straight up necessary. Rust is the only "modern" language that's anywhere near the stability and maturity required, but it's still quite young and rough.
Totally agree that a low level language is necessary, but my point was that memory-unsafe languages are not necessary, which it seems you agree since you mentioned Rust as a candidate.
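As an aside, the practical difference is easy to demonstrate: in a memory-safe language an out-of-bounds write is a defined, catchable error rather than silent corruption (Java standing in here for the memory-safe class generally, not as a kernel-language candidate):

```java
public class Bounds {
    public static void main(String[] args) {
        int[] buf = new int[4];
        try {
            // In C this write would silently corrupt adjacent memory;
            // the JVM bounds-checks it and raises a defined error instead.
            buf[8] = 42;
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```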
I think it's unfair to say those didn't consider security. Security was one of the big features of the Windows NT family in its early days, and Linux also obviously has a lot of security in its design. Their problem is that the threat model has completely changed since their design.
Windows and Linux were written with the assumption that a computer has multiple users, most of them without admin privileges. The threat model includes things like account security, processes from one user influencing processes of other users, (at least on Windows) users viewing files of other users, non-admins modifying the computer configuration, one user taking all resources, etc. They mostly solve these problems well, but some of these problems are now much less relevant, some of the core assumptions have become less valid over time (the admin/non-admin split), and many new considerations are now part of the threat model, such as protecting against rogue applications run by the same user, untrustworthy peripherals, etc.
This is one of the issues with security-by-design: your definition of security will change over time, in ways your design can't easily accommodate.
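To make the multi-user model described above concrete: the controls these systems expose are still per-user ones, which say nothing about code running as the same user. A small Java sketch for a POSIX system:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class OwnerOnly {
    public static void main(String[] args) throws IOException {
        // The user-vs-user threat model in practice: mode rw------- means
        // other (non-root) accounts on the machine cannot read this file.
        Set<PosixFilePermission> ownerOnly = PosixFilePermissions.fromString("rw-------");
        FileAttribute<Set<PosixFilePermission>> attr =
                PosixFilePermissions.asFileAttribute(ownerOnly);
        Path p = Files.createTempFile("secret", ".txt", attr);
        System.out.println(p + " perms: " + Files.getPosixFilePermissions(p));
        // Note: this says nothing about a rogue app running *as the same
        // user*, which is exactly the gap the comment above points out.
    }
}
```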
Windows NT was designed as a multi-user system in which everything [0] lived in one of a small handful of global namespaces. They built an unbelievably complex ACL and “privilege” system for it. And they allowed remote systems to connect over various Ethernet protocols and interact directly with those global namespaces. And they did the latter with cryptosystems that were obviously terrible even at the time. Of course it didn’t work.
Linux is modeled after Unix, and I don’t think security was a serious initial goal. Heck, Unix has setuid, of which nothing good can be said. Most of Linux’s security advantages were due to:
1. The lack of the aforementioned global namespaces in such an egregious way. The main global namespace is the filesystem, and that one stares you in the face. The object manager hierarchy in Windows was basically invisible until Sysinternals figured out how to enumerate it and stuck a UI on it.
2. A lack of all the Microsoft RPC crap. Your files and processes on Linux were not all visible from outside. You only had an SMB server if you chose to run one.
3. A much less complex syscall interface. Seccomp on Linux is fairly straightforward. Windows had to come up with a complex “integrity level” scheme. (Windows is full of nasty complex things like restricted jobs that seem unlikely to be used correctly by anyone.)
But at least Windows has a sensible process model and does not have privilege increases when CreateProcess is called.
Security systems that actually work are simple.
At least they had a SAK, which everyone seems to have given up on these days.
It is absolutely fair to say they did not consider security (at least not to a level that can be considered real security). They targeted an Orange Book Level C2 / Common Criteria EAL4 level of security (at best), a level that was not even really intended to indicate that any security exists at all, and, as a result, were permanently unable to retrofit beyond it. They attempted many times and failed every single time despite decades of effort. Multiple systems designed both before and after were designed to achieve the higher levels of security and succeeded. The security designed in was incompetent then, is incompetent now, and, as history has shown us, is unfixable.
To address your second point indirectly: I agree that there are systems, such as Windows, with "security" designed for environments where all agents are inherently trusted. These systems are not inherently worthless, as they are perfectly fine in situations where security is unnecessary, such as children's toys or disconnected devices. It is, however, unacceptable to use such systems in contexts that require a minimal level of security, such as banks and hospitals, the way they are used now. Using systems designed only to protect against casual and inadvertent attacks in critical systems is engineering malpractice.
You seem to be from a bizarro world where these certifications are the be-all/end-all. No major advance in security was ever achieved by trying to follow some standard written by a bunch of people who haven’t themselves solved any of the engineering challenges.
The only way you can do secure by design/default is if the customer of your service experiences a productivity gain.
For example, if they can deploy infrastructure more efficiently (and securely as a byproduct). Or if they're able to get reliability out of a software library (and security as a byproduct).
You have to find these win-win situations where the developers/clients are not even aware of security improvements.
As a security advocate, I hope for a future where we can move away from systems that have been vulnerable by design such as Windows, Linux, iOS, and AWS. These systems were not initially designed with security in mind and have remained structurally vulnerable over time, despite attempts to bolt on security measures. To adequately address future security concerns, we need to design and verify a new computing stack that incorporates security principles specifically aimed at thwarting nation-state attackers like the NSA.
Commercial systems are regularly targeted by nation-state or state-sponsored attackers, yet historical systems and certification processes such as the Orange Book Level A1 (against which SCOMP and GEMSOS were validated) and the Common Criteria SKPP have provided adequate validation against such attacks. Successful evaluations have included practical testing, formal specifications, and formal proofs of correctness as part of the validation process.
Meaningful security against proven threats is only possible through the application of security principles that have demonstrated success against nation-state attackers, leading to a greenfield redesign of the entire computing stack. Attempting to bolt on security measures to existing systems is unlikely to provide meaningful security against current threats, given decades of failed attempts by large software companies via certification processes.
It's important to note that while some systems, like Windows NT, were initially designed with security in mind, they still suffer from vulnerabilities. A potential solution to this issue may be to create specialized, limited-performance systems that complement lower-security systems instead of replacing them. Additionally, disconnecting servers from the internet remains a viable security measure as it is unlikely for hackers to physically infiltrate a network.
Overall, our goal should be to develop a new computing stack with security designed, verified, and demonstrated to thwart nation-state attackers, rather than attempting to retrofit security measures onto existing vulnerable systems.
I would take this document more seriously if they removed the Canadian government logo. I remember in early 2021 how easy it was to exfiltrate passport and other PII from a certain Quebec government agency, because they had web apps written in ASP.NET and didn't implement proper authorization/authentication measures, so anyone with basic knowledge of curl and Python could easily exfiltrate mass amounts of data. People are stupid: they upload sensitive documents (passport info) and other things under the assumption that the government is competent enough to secure their stuff.
One might consider that the Quebec government having such significant problems is all the more motivation for the Communications Security Establishment to drive change.
"Some examples of modern memory safe languages include C#, Rust, Ruby, Java, Go, and Swift."
"Too often, backwards- compatible legacy features are included, and often enabled, in products despite causing risks to product security. Prioritize security over backwards compatibility, empowering security teams to remove insecure features even if it means causing breaking changes."
"While customer input is important, the authoring agencies have observed important cases where customers have been unwilling or unable to adopt improved standards, often network protocols. It is important for the manufacturers to create meaningful incentives for customers to stay current and not allow them to remain vulnerable indefinitely."
---
The fundamental challenge is that by the time a "secure default" has been universally agreed on, and implemented widely in a space, the target moves again. Meanwhile each vendor decides what is "most secure" based on what they have been able to implement. Businesses are left with the integration challenge, and maintenance burden, of operating equipment that changes underneath their feet with each upgrade/update in the name of "being more secure."
Government agencies could reduce the integration and adoption window by providing implementations of the "secure defaults" that were ready-to-use in the recommended programming languages. To do this they would need to be able to incentivize and recruit personnel that were capable of doing this, and adopt methods and practices that could produce such modules in a timely manner. Do governments want to distribute implementations that are usuable by any actor? Can they produce it in a timely manner? Would industry trust the implementation if it was produced by governments?
When "legacy features" are being asked for it is most likely because they have been shown to work, and integrate, well across the business. A new product may be perfectly secure, but is it usable? The last quote alludes to this, customers need to run the business to generate the revenue to afford the security.