Hacker News
I have come to bury the BIOS, not to open it (Bryan Cantrill at OSFC22) (slideshare.net)
40 points by chalst on Sept 20, 2022 | 3 comments



Abstract:

    Historically, proprietary systems created the need for binary machine interfaces. This extended to the deepest layers of the system, where the PC BIOS was created to make machines look sufficiently identical to one another that an opaque blob -- the operating system -- could run on them. We enshrined these sedimented layers as "firmware", but in fact there is no such thing: this is merely proprietary, opaque software by another name.

    The approach that we're taking at Oxide is radically different: instead of merely relying on marginally better implementations of dated abstractions, we are eliminating the abstractions entirely. Rather than have one operating system that boots another that boots another, we have returned the operating system to its roots as the software that abstracts the hardware: we execute a single, holistic system from first instruction to running user-level application code. This has, of course, been technically challenging, as it has required us to incorporate the lowest levels of machine initialization. But that our small team has prevailed also shows its viability: this is delicate, but it isn't impossible -- and indeed, having been to the mountaintop, we believe that not only is the holistic artifact more robust, the path was in fact faster than relying on a proprietary initialization layer.

    In this talk, we will discuss our holistic approach: why we have taken it, the challenges that we faced, why we believe that this approach is increasingly viable in an all-open world -- and what we need out of CPU vendors to support such systems.
(from https://www.osfc.io/2022/talks/i-have-come-to-bury-the-bios-... )

Video should be available in the next couple of days.


I have a hard time believing that a systemd-style violation of encapsulation and separation of concerns is a good idea. It may be true that $BIOS_VENDOR is not great and that there's too much layering, but on the other hand, how often do servers boot anyway, and is this a core competency for a small server team? But this is just a reaction to the abstract; I'm looking forward to the video.


I have always dreamed of working for a company like Oxide. I never applied, but I still very much hope for their success. I have a few comments; hopefully this won't be too lengthy, because I've thought about this for a long time.

The most important future computing feature, in my mind, is what I will call "verified computing", because I doubt any concept in what is presently called trusted computing accurately captures what I have in mind.

At some point, let's say the early 90s, we (for some subset of "we") owned hardware and ran software, which we implicitly trusted, on that hardware to get our work done. The operating system existed to keep faults in one piece of software from affecting another. Then, with the advent of the web, we began running software on our own hardware that, again, we implicitly trusted, but without the authenticity-assuring properties of purchasing boxed software through a retail channel. The software we were beginning to run became adversarial to our own security. In parallel, the de facto operating system vendor (Microsoft) and the de facto silicon vendor (Intel), under pressure from other parties (mostly media conglomerates), decided it was necessary to run software that was not auditable by us, the hardware owners, in order to protect certain interests of those other parties.

Now we own and operate computers that run software that we simply cannot trust, and our own data is processed by software that we can no longer inspect. I can discuss this more deeply, but I shall move on.

Due to this, Intel and AMD provide server-grade silicon that is infected by our consumer personal-computing past. Everything about the design of their systems reeks of this baggage.

Oxide, of course, is trying to escape this. But fundamentally, x86 is infected. ARM and RISC-V are infected too, because of influence and pressure from Microsoft to maintain security structures (rooted in the past) with specifications like UEFI and ARM ServerReady/SystemReady (and SBBR).

The security model for Microsoft and the broader industry is that the user gets to run software on hardware (that they own) that the software vendor itself can verify has not been tampered with, including remotely. Great -- but what about the owner of the hardware? How do they assure themselves that they know what is running on their own hardware? The answer to this seems to be, simply: don't buy the operating system and software you don't trust.

The problem with this is that even in purchasing the hardware and owning the product, we still remain the product. The success of Apple (through walled-garden hardware) and Google (through remotely processed user data) entrenches this viewpoint, and reinforces what silicon vendors provide to hardware designers.

Of course, Oxide's customers are different. But the very silicon that is available to Oxide is tarnished by this mindset.

So what do I mean by "verified computing"? I've blabbed on a bit, so I'll try to be brief. For me, verified computing is silicon that implements a formal specification that is software-readable. This silicon is sold by a vendor A for others to integrate into their hardware, say vendor B. As a consumer, C, I purchase hardware from vendor B, but only have to trust silicon vendor A. The only way B can screw me over is denial of service: their hardware simply refuses to run the software I want it to. As part of purchasing hardware from B, I get a key from B (via A) that allows me, C, to contact A and tie the silicon to my private key. When I sell the hardware, I use the same method B used, providing the next party with a key they can use to tie the hardware to themselves. When I purchase software from vendor D, I sign that software with my key to run on that hardware. If vendor D needs to run software hidden from my inspection, for whatever reason, they must get my permission by having me sign their software in a special way; vendor D can then trust silicon vendor A that their software is running on A's silicon in a way that preserves their security, given (going back to the beginning) the formal specification of the silicon and their trust in vendor A.
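The ownership-binding scheme described above can be sketched as a toy model. Everything here is hypothetical (the class and function names are mine, not from any real attestation API), and HMAC stands in for real asymmetric signatures purely to keep the sketch self-contained and runnable:

```python
# Toy model of the "verified computing" ownership chain:
# silicon vendor A's part is bound to the current owner's key,
# and it will only run software the owner has counter-signed.
# HMAC is a stand-in for asymmetric signatures, for brevity.
import hmac
import hashlib
import secrets

def sign(key: bytes, msg: bytes) -> bytes:
    """Stand-in for signing msg with a private key."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    """Stand-in for verifying a signature."""
    return hmac.compare_digest(sign(key, msg), sig)

class Silicon:
    """Vendor A's silicon: trusts only the key bound to it via A."""
    def __init__(self) -> None:
        self.owner_key = None  # bound at purchase time, via vendor A

    def bind_owner(self, owner_key: bytes) -> None:
        # Modeled as a simple assignment; a resale would rebind
        # the next owner's key through the same channel via A.
        self.owner_key = owner_key

    def will_run(self, software: bytes, sig: bytes) -> bool:
        # The silicon runs only software signed by the bound owner.
        return self.owner_key is not None and verify(self.owner_key, software, sig)

# Consumer C buys hardware from B, then ties the silicon to C's key.
chip = Silicon()
c_key = secrets.token_bytes(32)
chip.bind_owner(c_key)

# C purchases software from vendor D and counter-signs it to authorize it.
d_software = b"vendor D application image"
authorization = sign(c_key, d_software)

assert chip.will_run(d_software, authorization)             # owner-approved: runs
assert not chip.will_run(b"tampered image", authorization)  # anything else: refused
```

The point of the model is who holds the root of trust: the owner's key, not a vendor database baked into the firmware.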

Sorry that the above is long winded. But this is the only future that is reasonably worth fighting for.

Briefly, how would Microsoft adapt to this system? Well, B provides you with the system firmware (for Microsoft operating systems). You sign it and install it on your hardware. Then you sign the Microsoft OS. The Microsoft OS verifies that the firmware is acceptable and then runs. B could compete with other vendors by providing minimal, or modular, firmware which can be customised for Linux or other uses.

Will this future arrive? I'm not sure. But I hope so, and I'd love for Oxide to fight for it.



