If a new Secure OS were to be developed, what OS architecture would it be?
13 points by iloveappleman 3 months ago | 17 comments
If developing a new security-oriented (general-purpose) OS, what kind of OS architecture (the overall structure of the OS to begin with) would it be? And why? Are there any existing OSs that are similar?



This has been asked a few times now.

The structure would most likely need to be a hardened microkernel, similar to seL4, and it would necessarily need to mask every unique piece of information that comes from the hardware behind generic, non-deterministic interfaces that prevent or dramatically impede fingerprinting.

To do anything programmatically in a resilient way you need to uniquely identify what you are working with first.

Also, in practice, there can be no secure OS so long as there is no control over the hardware. Existing consumer equipment would need significant redesign at the hardware/signal domain level.

Firmware malware is becoming more common, almost ubiquitous, and frightening in the applications it allows, thanks in large part to existing companies creating backdoors. Since nearly all companies build this functionality in, and there is very little short- or long-term benefit to them in doing so, it's reasonable to assume it happens by some secret order. Free-market companies don't generally spend money on features that target only a small portion of their customers when it also creates liabilities that can balloon.

Application wise, here is a thought experiment for you:

How hard would it really be to silently upload software to a target, running in an SMM context (Ring -2), that scans or hooks memory looking for high-entropy strings of common fixed widths (e.g. a Rabin-Karp-style sliding search) and, when the entropy is above a threshold, saves them into the service area of the attached hard drive in a cache-like structure?

While a bit rhetorical, what would the consequences be for this? Would encryption keys stored in memory persist and float to the top for remote or physical querying? Is there anything you can do about that without access control to SMM? Is there any software instrumentation that lets you have visibility on something like that?

The obvious answers: yes, it's likely, to the first; and no to the rest.
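For intuition, here's a userland sketch of the scanning half of that thought experiment. A real implant would run below the OS in SMM, and this uses a plain sliding-window Shannon entropy measure rather than an actual Rabin-Karp rolling hash, but the fixed-width, threshold-based idea is the same (the window width and threshold are arbitrary choices, not anything from a real implant):

```python
import math

def shannon_entropy(window: bytes) -> float:
    """Bits per symbol of a window; for a w-byte window the maximum is
    min(8, log2(w)), so thresholds must be chosen relative to the width."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def find_high_entropy(buf: bytes, width: int = 64, threshold: float = 5.2):
    """Slide a fixed-width window over a memory snapshot and flag the
    offsets of key-like (near-uniformly-random) regions."""
    hits = []
    i = 0
    while i + width <= len(buf):
        if shannon_entropy(buf[i:i + width]) >= threshold:
            hits.append(i)
            i += width  # skip past the flagged region
        else:
            i += 1
    return hits
```

Zeroed or text-filled memory scores far below the threshold, while raw key material (e.g. a 64-byte random blob) scores close to the 6-bit maximum for this width, so the two separate cleanly.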


It would look a whole lot like IBM OS/400.

The hardware-dependent portion of the operating system is separated from the part that users actually interact with and run code on. Both parts would be immutable, with a hardware-anchored trusted boot chain. A/B image switching (a la ChromeOS) for OS upgrades and rollbacks.

User applications compiled to bytecode, which would be JIT compiled to native code and then appended to the executable object for future use.

Single-level store, where memory and disk are treated as a single address space and the machine takes care of storage tiering etc. for you. The platform would also take care of encrypting block storage for you.
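The single-level store idea can be loosely approximated in user space with a memory-mapped file: an ordinary in-memory write that transparently persists to storage. This is only an analogy (the file name is a placeholder, and OS/400 does this for *all* storage, not one file):

```python
import mmap
import os
import struct
import tempfile

def single_level_demo() -> int:
    """Back a 'memory' region with a file, so a plain memory write ends up
    on disk -- loosely like treating RAM and storage as one address space."""
    path = os.path.join(tempfile.mkdtemp(), "slstore.bin")
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)          # reserve the backing storage

    with open(path, "r+b") as f:
        mem = mmap.mmap(f.fileno(), 4096)
        struct.pack_into("<Q", mem, 0, 0xDEADBEEF)  # ordinary memory write...
        mem.flush()                                  # ...tiered out to disk
        mem.close()

    with open(path, "rb") as f:          # read back through the "disk" path
        value = struct.unpack("<Q", f.read(8))[0]
    os.remove(path)
    return value
```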

Mandatory access control, including multilevel security capabilities and capabilities-based restrictions on programs.


It's hard to tell. My guess is:

- Probably a microkernel. See also: seL4 and similar.

- Largely written in a safe language, like Rust (political turmoil willing).

- It would be UNIX-like in some fashion.

- Be immutable or partially-immutable (like macOS).

- Support containerization.

- Be capabilities-based and support Mandatory Access Control.

- Satisfy Common Criteria requirements. https://en.wikipedia.org/wiki/Common_Criteria


Add a rootless concept to it to represent the OEM state.

(E.g. in macOS, where /usr/local represents user changes: if it's deleted, the OS is back in its OEM state.)

Android actually came pretty close to this, if you ignore the vendor messups when it comes to partitioning.

Due to lack of alternatives, my current "secure deployment" strategy is to use containers that ship nothing but the application: no shell, and CGO_ENABLED=0 Go binaries that don't link against C libraries. Even if they segfault, it's close to impossible to guess the offsets correctly to be able to create an exploit.

The only attack surface left is what's mounted in the container, and you have full control over that.
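For reference, a minimal sketch of that kind of deployment as a multi-stage build (the module path, binary name, and Go version are placeholders):

```dockerfile
# Build stage: static Go binary with no libc linkage
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/app

# Final image: no shell, no package manager, nothing but the binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```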

But applying this to a desktop system is the hard part. Qubes OS has nice ideas, but is unusable for inexperienced users.

Something like a distro that makes it a first priority to ship AppArmor and SELinux profiles for every package might be a way to get there, but it's a huge amount of work.


Yes! I really love how ChromeOS is architected as well, even though it's not for me (power user/programmer) in its current form.


Why would it be UNIX-like? Is it meant to be a UNIX architecture?


I'm a UNIX zealot and UNIX-like would be the most compatible with existing software for backwards compatibility. I would view the UNIX-like aspect as more of a subsystem or personality as the architecture itself is outdated, IMHO.


I would add Qubes OS-like sandboxing with microkernels.


One based on a memory-safe platform/language (think Java). Distribution: bytecode, which gets compiled for the actual platform.

Encryption by default. Full separation of applications / processes. Each application sees the permitted sub-space of the file system. No more system-wide and easily corruptible registries or files.

No drive-by installation. No application can get more than permitted.
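The "permitted sub-space of the file system" part can be sketched as a path-confinement check (the `confined_open` helper is invented for illustration; a real OS would enforce this in the kernel, not in a library):

```python
import os

def confined_open(root: str, relpath: str, mode: str = "r"):
    """Resolve a path inside an app's private subtree and refuse any
    escape attempt (.., symlinks) before the open ever happens."""
    base = os.path.realpath(root)
    full = os.path.realpath(os.path.join(base, relpath))
    if full != base and not full.startswith(base + os.sep):
        raise PermissionError("outside the permitted sub-space")
    return open(full, mode)
```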


Is full application/process separation due to virtualization or sandboxing? Or by a validated kernel hypervisor? Or something like Unikernel?


Every API (a group of methods) can only be accessible via an interface object (accessor). The application requests an API, gets the accessor, and makes the call(s).

If permission for that accessor was explicitly granted, the calls succeed. Otherwise, the calls never reach the destination.

What would you call this arrangement? A supervisor?
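A minimal sketch of the accessor arrangement described above (`Clock` and `request_api` are invented names; a real system would grant per-method, not per-API):

```python
class Clock:
    """A hypothetical system API, made up for illustration."""
    def now(self) -> int:
        return 1234567890

class Accessor:
    """Interface object for one API: it carries the grant, not the caller."""
    def __init__(self, api, granted: bool):
        self._api = api
        self._granted = granted

    def call(self, method: str, *args):
        if not self._granted:
            # the call never reaches the destination
            raise PermissionError(f"no grant for {method}")
        return getattr(self._api, method)(*args)

def request_api(api, granted: bool) -> Accessor:
    """The OS hands back an accessor; permission is fixed at grant time."""
    return Accessor(api, granted)
```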


I'd do "ChromeOS minus Google". Simple for the vast majority of users, immutable, and still able to execute software from almost every platform.


If your goal is security-oriented then the result will not be general purpose.


That's a false choice. Just because we're stuck with systems based on ambient authority doesn't mean we have to be crippled by them forever.

We're at a stage analogous to just before fuses were adopted to prevent circuit overloads from causing chaos in electrical distribution.

Fuses and circuit breakers are almost trivial to operate.


Why?


Usability and security are diametrically opposed. The more a system is hardened, the less usable it becomes for most users. The opposite is also true: the more user-friendly a system becomes, the more it compromises on security conventions.


[Edit] Upon reflection... perhaps I've got an axe to grind? Sorry

Short answer:

  Microkernels reduce the attack surface to a minimum
  Capabilities instead of ambient authority mean you don't have to trust code
  MULTICS, KeyKOS, EROS are historical examples, and now Genode and HURD are rising
  If it's got a GUI, it'll have "PowerBoxes" which return capabilities
My OS bets on the future are currently riding on Genode[1], and GNU Hurd[2]. Both are based on microkernels, and capability based security.[3] Unlike Linux and Windows, they are not based on ambient authority[6] (the permission to access everything on behalf of the user).

For applications, my bet is WebAssembly[7,8], which provides capabilities to applications on top of our existing ... mess.

Unlike the "allow access to X" permission flags in your Smartphone, a Capability is something assigned at run time by the user in the process of running code. Like passing file handles to applications, instead of letting applications have default access to everything.
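The file-handle point can be made concrete: compare a function that takes a *name* (and uses the user's ambient authority to open it) with one that takes an already-open *handle*, which is exactly a capability. A toy sketch:

```python
import io

# Ambient authority: the function takes a name and opens it with the full
# power of the user -- it could just as well open any other file.
def word_count_ambient(path: str) -> int:
    with open(path) as f:
        return len(f.read().split())

# Capability style: the function receives an already-open handle.
# It can read exactly this one stream and nothing else.
def word_count(handle) -> int:
    return len(handle.read().split())
```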

We're only capable of having a national power grid because we have circuit breakers and fuses all the way down to the individual circuits in your house. You don't give a lamp the ability to draw all of the power of the entire grid, that would be silly. Why give all of your files to an app you just downloaded?

This solution to computer security came out of research spawned by security problems uncovered during the Vietnam conflict. It was seen as too complex at the time, by association with Multics. However, in a world of mobile code and persistent network connections, it's the very basis of sanity.

One of the reasons for the widespread adoption of virtualization is that it's an ersatz capability system: the application as a whole defaults to only the disk images and network connections you allow the virtual machine.

You can implement capabilities yourself in systems by giving out access tokens instead of passwords to users for various things. The great thing about capabilities is you can revoke them, or narrow the things they are allowed to do. For example, you can take a whole file system access, and filter it to be read only.
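Attenuation and revocation can be sketched with a toy in-memory "file system" (all names here are invented; a real object-capability system does this for every resource):

```python
class FileCap:
    """A capability to a set of files; can be attenuated and revoked."""
    def __init__(self, store, writable=True, alive=None):
        self._store = store                  # dict: name -> contents
        self._writable = writable
        self._alive = alive if alive is not None else [True]

    def read(self, name):
        if not self._alive[0]:
            raise PermissionError("capability revoked")
        return self._store[name]

    def write(self, name, data):
        if not self._alive[0]:
            raise PermissionError("capability revoked")
        if not self._writable:
            raise PermissionError("read-only capability")
        self._store[name] = data

    def attenuate_readonly(self):
        """Hand out a weaker capability derived from this one."""
        return FileCap(self._store, writable=False, alive=self._alive)

    def revoke(self):
        self._alive[0] = False  # kills this cap and everything derived from it
```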

Flickr allows the creation of capabilities called a "guest pass" that you can create to share what would otherwise be private photos. Here's one I created last year[5]. It can be revoked at any time.

Blaming users, OS vendors, programming languages, Three Letter Agencies, and spending Billions of dollars to provide "CyberSecurity" are all wasted effort in my somewhat radical opinion.

While we're near the idea... check out unidirectional networks[4]. These are network links that only transmit data in one direction by design. So you could, for example, monitor the flow of oil through pipelines, or the status of a server farm, remotely with no possibility of control ingress. You put a pair of servers on the network: one inside that monitors the status and continually broadcasts it to the other outside, which receives it and lets users connect with conventional UIs.

Letting the laws of physics keep hackers out is a good idea.
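The software side of such a data diode is just fire-and-forget broadcasting; the physical link is what enforces one-way flow. A toy sketch over loopback UDP (the address and port are placeholders):

```python
import socket

# Inside the diode: broadcast status, never listen. The one-way property
# is enforced by hardware (e.g. a fiber link with no return path);
# fire-and-forget UDP is the natural software pattern on top of it.
def broadcast_status(status: str, addr=("127.0.0.1", 9999)) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(status.encode(), addr)  # no ACK, no ingress path
    finally:
        s.close()
```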

[1] https://genode.org/index

[2] https://www.gnu.org/software/hurd/

[3] https://en.wikipedia.org/wiki/Capability-based_security

[4] https://en.wikipedia.org/wiki/Unidirectional_network

[5] https://www.flickr.com/photos/---mike---/shares/1M49EfNL43/

[6] https://en.wikipedia.org/wiki/Ambient_authority

[7] https://en.wikipedia.org/wiki/WebAssembly

[8] https://webassembly.org/



