A good way to look at the problem is that trusted software needs to be far less vulnerable, and untrusted software needs to be kept in a cage where it can't make trouble.
On the untrusted side, all games, for example, should be caged or sandboxed. (Yes, this breaks some intrusive anti-cheat mechanisms. Tough.) Applications on phone-type platforms should have far fewer privileges. (Yes, that breaks some ad networks. Tough.)
Until somebody with enough power to make it stick takes a hard-ass position and sets standards, there's not going to be progress. It would be progress if AT&T or Comcast or Verizon deployed secure routers, for example.
Chrome-like sandboxing seems like the current state of the art, and it's complementary to all the techniques mentioned. There will be vulnerabilities, but making attackers chain vulnerabilities to get into the system will have the effect of "dramatically reducing them".
I don't see any numbers in that paper either, which seems like a big oversight.
Chrome has proven to be more difficult than other browsers to exploit, and this is with hundreds of thousands of dollars on the line. I think the Pwn2Own escape in 2016 required the attacker to chain 4 exploits. I don't think people hacking servers need 4 exploits today.
The browser is operating under a more extreme environment than a server: it contains tens of millions of lines of C++ code, and it is exposed to attackers (web sites) by billions of people continuously throughout the day.
This is also the solution that doesn't require rewriting hundreds of millions of lines of code, which is an economic impossibility in the short term.
There is some current research on how to help programmers split programs into trusted and untrusted parts, e.g.:
They use the term container. It's a subheading:
2.2.1 Operating System Containers
"...Container-based isolation can clearly reduce the impact of software vulnerabilities if the isolation is strong enough."
I think current events show the urgency around cybersecurity in the government. But it sounds more research-oriented, whereas I would expect it to be more about accelerating current practice (and also debunking technologies like Docker, which are sloppy with regard to security).
Maybe I'm misunderstanding the purpose of this paper though.
Docker and seccomp-bpf are techniques for sandboxing server applications, and they're useful, but they run into the same problem as sandboxing any complex system: the privilege to do many of the things the application ordinarily does --- writing a row in a database table, for instance, or creating a new queue entry --- can unpredictably result in privilege escalation.
The techniques you need to apply to make sure that basic application actions don't escalate privileges are largely the same as the ones you'd take to ensure the application doesn't have bugs.
Don't get me wrong: I think more and more applications will take advantage of server-side sandboxes, to good effect. But I don't think it's going to be a sea change for security, and I don't think it's valid to compare the "4-exploit chain" needed for Chrome to the situation in a complicated web application.
"Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details."
For example, Chrome's sandboxing tool (minijail I think) and systemd-nspawn are NOT DAEMONS. They just set the process state and exec().
Docker is sloppy as hell from a security perspective. It is indeed embarrassing that they mention it in this paper.
Docker has also enshrined the dubious practice of dumping an entire Linux image into a container, and then installing a bunch of packages from the network on top of that.
And now you need a "container security service" as mentioned in the article. How about you just understand what your application depends on, rather than throwing the whole OS plus an overestimation of dependencies in there?
1) You might be underestimating how far a typical server configuration is from least privilege. It's true that you need to write to database tables or a queue service for the app to work, but a typical configuration has the capability to do hundreds or thousands of other things that are not necessary for the app.
If you have a PHP app running inside an Apache process, how many system calls are needed? What about PHP as a FastCGI process under nginx? Linux has 200+ system calls, and I'm sure the number needed is a small fraction of that.
What about the privileges for connecting to local daemons? Most people are dumping huge Linux images into Docker and then installing tons of packages by default. Last time I checked, apt-get install will START daemons for you.
It takes some work to figure out what privileges a given application needs, and that's probably why people don't do it. But I would contend that it's much less hard, and much more bang for the buck, than a lot of the techniques mentioned in this paper.
2) You might be underestimating diversity of privileges needed for different application components ("microservices" if you will). For example, in distributed applications I've dealt with, the auth code doesn't need to talk to all the same back ends that the normal application logic does. If it lives in a separate process, which is a good idea for a number of other reasons, it can have different privileges than the rest of the application.
In any case I view this as a "walk before you run" kind of thing. If you can't even figure out what privileges your application needs, how are you going to get programmers to add pre- and post-conditions and apply symbolic execution and techniques like that? The paper sounds absurd and divorced from the realities of software development.
Now that I read the paper, they do devote just 2 pages to "containers and microservices" (which are weirdly buzzword-y terms for a traditional practice). It still seems unbalanced to me.
Lots of potential for improvement using tech developed since then that does way better, or older tech that does the job a little better.
edit adding references:
On the language side, there are things like SVA-OS for automating it if they don't want to do rewrites in a safer language.
Capability security used to combine usability with higher security, especially if they got rid of the Java crap underneath E. I can't recall whether this niche has made any recent improvements in effortless software security with reasonable overhead.
On hardware, we have tagged CPUs like SAFE; even better, a C-compatible one with capability security running FreeBSD, called CHERI; those designed for safe languages, like jHISC; and those that protect software and fight malicious peripherals with confidentiality and integrity at the page level within the memory subsystem.
All of these have at least prototypes, with a subset deployed commercially by interested parties. The separation kernel approach has the most suppliers, with CodeSEAL being one processor + compiler approach that saw some adoption beyond small stuff for smartcards. CodeSEAL provided confidentiality and integrity of pages plus control-flow enforcement, with access controls on execution paths. Quite a few companies are also building things using tools like Astrée and SPARK that prove the absence of critical errors in code written in a restricted style. One just commercialized a combination of QuickCheck with C programs. It always happens when a small company with decent funds is willing to do it.
Lots of shit going on. I haven't even gotten into formal methods as applied to software (down to machine instructions) or hardware (down to gates). I haven't mentioned all the advances in automated or spec-driven testing with tools backing them. Hell, there's so much to draw on now for anyone who has studied high-assurance security a long time that my current problem is where to start on tackling future problems. I have to filter, rather than find, ways to knock out classes of vulnerabilities or show correctness. I about need to stop learning most of this shit just to properly sort out all the newest findings, but improvements are coming at a phenomenal pace compared to the '90s or early 2000s. :)
Note: This is all possibly part of a larger theory I have about how knowledge gets siloed and/or forgotten in IT because different cultures pass on different teachings. My idea was a curated resource collecting as many things that got results as possible, with open discussions from people in the many groups. I can't build it right now, but I mention it periodically for feedback.
Note 2: One of the best ways I find real (aha HA) security work, versus mainstream, when looking for new papers or tech is to put this in quotes: "TCB". All high-assurance engineers or researchers care about the TCB, with its LOC usually listed. They also tend to list their assumptions (aka possible screwups). This filter, plus the words browser and security, took me to Gazelle, OP2, and IBOS last run. To be clear, Gazelle didn't have "TCB" in the document, but it described the concept, and it showed up in another paper's related work. That's all it took, given that most efforts at hardening browsers apparently don't know what a TCB is or don't care. Works well with other things, too. Sadly.
Where do you get that from? Ever heard of the macOS Seatbelt framework? Jails in BSD?
The problem is soon everything will be important. Think we can classify ourselves out of this? Good luck.
So, we can make things less vulnerable at the software level. Or we can triage our data: your personal information, for example, should be more resilient to problems.
The problem is that this is as much societal as software. If someone "robs" you by buying tons of digital non-goods, there should be cheap recourse. If they bought real goods, it should be the same.
But then we get to physical things like cars. Nothing is as good as the sandbox that currently exists of isolated devices. As soon as they are fully networked, the concept of sandboxing is laughable. Desperately needed, but laughable.
Two challenges with this least-privilege strategy for software security:
1. For single-function devices, like a customer-premises device on a cable network, there may not be meaningful privilege boundaries to exploit.
2. As privilege models get more complicated, the risk that privileges may accidentally equate to each other increases. This was an observation Bernstein made of his qmail security model in his retrospective paper.
1. Secure, automatic update process that keeps user config where possible.
2. Default configuration that's as secure as a factory setting can get, with no unnecessary services running.
3. Running on a self-healing microkernel if possible.
4. Code for key functionality written in languages like Ada, SPARK, and/or Rust for reduced 0-days.
5. Embedded firewall with rate limiting from host network in event ISP detects DDOS.
6. Optionally, a more secure way to connect to the admin interface, shipped with an installation CD or software.
Those seem like a nice baseline for a router that would increase its security and availability at relatively low cost. There even used to be a vendor selling switches and routers with the INTEGRITY-178 separation kernel enforcing POLA down to the ports. Rockwell uses the AAMP7G CPU to enforce it down to the thread or process level, with a mathematical proof of its correctness. So there's room to improve further toward high assurance at that end of the market, too, on top of what my list already gives.
EDIT to add what just showed up in another feed. Case in point about how much better baseline could be with small effort.
"we just don't have a lot of Ada, SPARK, Rust programmers in the world"
Sounds bright until one remembers that almost every significant market in this space is an oligopoly with only a handful of companies. Profitable, too. They could split all the Ada, SPARK, Rust, etc. programmers among themselves while still making plenty of money and getting results. It worked for companies using OCaml (e.g. Jane Street), Haskell, and even Prolog. Those sorts find they get better talent when they ask for uncommon, tougher stuff.
"ISPs already have the expertise and the infrastructure to deal with them, once it's going to cause too much problems for the quality of their service. I've witnessed this myself."
It didn't stop most DDoSes from doing their damage at all. It took many players working together. A recent one had a mitigation vendor straight-up dump Brian Krebs. If the problem were as easy as you describe, it wouldn't have such results. It's either hard, or they don't care that much.
What would you sandbox in a consumer router? If there's a web server in there, it needs very limited access to the rest of the router.
It's a good example of something that seems like it should be straightforward to sandbox, but isn't.
How many people dumped Uber because Uber now insists on knowing your location at all times?
Was expecting something a little more balanced on the topic.
In other words, containers are better than running normal processes for security. Not better than running a VM.
(yes, I get those are two different scopes of the word privilege)
Shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus
Firstly, they are talking about using microservices, which would be OK (I've used a few microservices for specific applications where it actually makes sense to service-ify), but I would by no means consider this a safer way of doing things. When you're talking about services run by our government, which is notorious for not getting network security right, I'm very wary of them moving to a microservice architecture.
MTD is another thing that sounds concerning. It seems like the bottom of the barrel of security ideas and looks far more complicated than the other methods mentioned. Used alone, it would probably introduce more bugs.
"Education and Training" can basically be summed up as universities being stuck in the 70s and not teaching CS but teaching the Math that CS needs.
The "Liability" section is keeping me torn.
I'd liken it to OOP encapsulation or the idea behind Linux executables.
(i) address space layout randomization: this is by no means impervious but it is worth doing
(ii) there being several "official" Ethereum clients, such that if somebody hacked just one client, they could not take over the whole network.
This is not a study, it's just a report. It's mostly a summary of a lot of different sources, approaches, and opinions.
A proper study of this question would certainly include an RFC or another form of opinion polling. But a thorough study would go far beyond opinion polling/solicitation and also attempt to measure the accuracy of the solicited opinions in an objective way.
* insider malfeasance, such as exfiltration by Edward Snowden"