
Chrome OS KVM - A component written in Rust - cyber1
https://chromium.googlesource.com/chromiumos/platform/crosvm/
======
zaxcellent
There is a somewhat expanded README that has yet to be reviewed and checked in
here:
[https://chromium.googlesource.com/chromiumos/platform/crosvm...](https://chromium.googlesource.com/chromiumos/platform/crosvm/+/837b59f2d97b005ef84ac36efa97530c1bbf2a79/README.md)

~~~
mato
Solo5/ukvm unikernel monitor co-author here: I've been thinking about
rewriting ukvm in Rust off and on for some time now, your work provides a
proof point that it can be done. I'll be following it with interest.

Aside: I really like what the ChromeOS team has done over the years to advance
the state of OS security for consumers, keep up the good work!

------
jhoechtl
Isn't stuff like that exactly counteracting Rust's raison d'être?

> // This is safe; nothing else will use or hold onto the raw sock fd.

> Ok(unsafe { net::UdpSocket::from_raw_fd(sock) })

[https://chromium.googlesource.com/chromiumos/platform/crosvm...](https://chromium.googlesource.com/chromiumos/platform/crosvm/+/6f366b54604e4012b43822d5dc2afe7d1616287d/net_util/src/lib.rs#55)

~~~
petertodd
That's low-level code implementing a wrapper around the underlying libc socket
API; there's no alternative to unsafe blocks there as it wraps a legacy API
written in C.

The "this is safe" comment is just a (very verbose) explanation as to why the
wrapper code is doing the correct thing. Notably, there's another similar
comment just above it. In fact, looking through the file if anything I'm quite
impressed at how carefully it's written.
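
For readers unfamiliar with the pattern, here is a minimal, self-contained sketch (using only std types, not crosvm's actual code) of why such a wrapper can be sound: once sole ownership of the fd is established, handing it to a safe std type lets Rust manage it from there.

```rust
use std::net::UdpSocket;
use std::os::unix::io::{FromRawFd, IntoRawFd};

// Adopt a raw fd handed to us by a C-style API. The `unsafe` block is
// justified by an ownership argument, exactly like the comment quoted
// above: the caller transfers sole ownership of `fd`, so nothing else
// will use or close it after this point.
fn adopt_socket(fd: i32) -> UdpSocket {
    unsafe { UdpSocket::from_raw_fd(fd) }
}

fn main() {
    // Simulate receiving a raw fd from a lower-level API.
    let raw = UdpSocket::bind("127.0.0.1:0").unwrap().into_raw_fd();
    let sock = adopt_socket(raw);
    // From here on, `sock` is an ordinary safe socket; it will be
    // closed automatically when dropped.
    println!("local addr: {:?}", sock.local_addr());
}
```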

~~~
zaxcellent
Author here: one of the policies we tried to stick to when writing unsafe code
was to document each case. It can be tedious, but it encourages having less
unsafe code and makes the author really think about whether the unsafe code
really meets the same guarantees as safe Rust according to
[https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html](https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html)

~~~
bluetech
The line above has this:

    
    
    // This is safe since we check the return value.
    let sock = unsafe { libc::socket(libc::AF_INET, libc::SOCK_DGRAM, 0) };
    if sock < 0 {
        return Err(Error::CreateSocket(IoError::last_os_error()));
    }

I think in this case it would be better to put the return value check within
the unsafe block; that way the unsafety does not "leak out" of the block, so
to speak, and it is easier to audit. Of course, in such a trivial case it does
not matter much.
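
A sketch of that suggested variant, with a stand-in for `libc::socket` so it is self-contained (`fake_socket` is hypothetical, not a real API):

```rust
// Hypothetical stand-in for libc::socket: returns a fd, or -1 on failure.
unsafe fn fake_socket(fail: bool) -> i32 {
    if fail { -1 } else { 3 }
}

// The return-value check lives inside the unsafe block, so the entire
// safety argument ("this is safe since we check the return value") is
// contained in one auditable span.
fn create_socket(fail: bool) -> Result<i32, String> {
    unsafe {
        let sock = fake_socket(fail);
        if sock < 0 {
            return Err("socket() failed".to_string());
        }
        Ok(sock)
    }
}

fn main() {
    assert!(create_socket(false).is_ok());
    assert!(create_socket(true).is_err());
}
```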

~~~
staticassertion
I think it's best to keep unsafe blocks as small as possible. Within an unsafe
block undefined behavior is possible, so you want to get out of there ASAP.

Just my opinion.

~~~
DSMan195276
Personally I disagree, but mostly because I think the `unsafe` model Rust has
is worth much less than people give it credit for.

For one, there is the current problem of documentation - it's not documented
what features/invariants the optimizer and language actually require to be
true, and the nitty-gritty details are very fuzzy, so switching from `unsafe`
to `safe` is error-prone and you're going to get it wrong. The more you do it,
the more likely it is your code will be broken in the future, when you find
out something you thought was OK isn't actually something the Rust devs allow
or wanted you doing. If you do more of the `unsafe` work in one big `unsafe`
block rather than jumping in and out, you're less likely to have issues in the
future, because there are fewer points where you have to ensure all the Rust
invariants are met.

But the bigger issue for me is that, even if the above problem is fixed,
`unsafe` doesn't really denote the areas we would consider the `unsafe` areas
anyway, so "getting out of there ASAP" is not always a helpful mindset and can
easily be counter-productive, resulting in you marking things `safe` when
they're not actually `safe`. For example, dereferencing a pointer is `unsafe`,
but doing pointer arithmetic is `safe`. So you can easily just wrap the
dereference in an `unsafe` block and you're technically good to go (you can
even wrap it in a pretty interface, like I've seen people do). But all the
spots where you do pointer arithmetic can easily introduce bugs into your
`unsafe` code, making it hardly any better than C code that could have the
same problem (half the point of using Rust is to avoid bugs from unchecked
pointer arithmetic!).

My point being, just because your `unsafe` blocks are small doesn't tell you
anything about the correctness of them, and it likely means they rely on
outside information to be correct. And if that is the case, then that outside
code is effectively just as dangerous as your `unsafe` code. This may be
obvious to you, and I apologize if it is, but this is an issue/misconception I
see a lot. IMO, you should mark anything `unsafe` if using it within the
bounds of `safe` Rust could potentially cause `unsafe` code to fail, even if
the code itself is completely `safe` code. Only if you have an interface that
meets all the invariants that Rust requires should you allow it to be
considered `safe`.
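
A contrived sketch of that point: the arithmetic that decides whether a pointer is valid is `safe`, while only the final dereference needs `unsafe`, so the bug surface extends well beyond the block.

```rust
fn read_second(arr: &[u8]) -> u8 {
    let p = arr.as_ptr();

    // Pointer arithmetic is "safe" Rust: no unsafe block needed, even
    // though this produces a dangling pointer past the array.
    let _past_end = p.wrapping_add(100); // fine, as long as it's never dereferenced

    // Only the dereference demands `unsafe`. Whether index 1 is in
    // bounds was decided entirely by "safe" code outside the block;
    // calling this with a slice shorter than 2 would be the kind of
    // bug the tiny unsafe block never flags.
    unsafe { *p.add(1) }
}

fn main() {
    println!("{}", read_second(&[10, 20, 30])); // prints 20
}
```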

------
indy
Is Rust an officially sanctioned language at Google?

~~~
zaxcellent
Author here: Rust is not officially sanctioned at Google, but there are
pockets of folks using it here. The trick with using Rust in this component
was convincing my coworkers that no other language was right for the job, which I
believe to be the case in this instance.

That being said, there was a ton of work getting Rust to play nice within the
Chrome OS build environment. The Rust folks have been super helpful in
answering my questions though.

~~~
ekidd
> _The trick with using Rust in this component was convincing my coworkers
> that no other language was right for the job, which I believe to be the case
> in this instance._

I ran into a similar use case in one of my own projects—a vobsub subtitle
decoder, which parses complicated binary data, and which I someday want to run
as a web service. So obviously, I want to ensure that there are no
vulnerabilities in my code.

I wrote the code in Rust, and then I used 'cargo fuzz' to try to find
vulnerabilities. After running a billion(!) fuzz iterations, I found 5 bugs
(see the 'vobsub' section of the trophy case for a list:
[https://github.com/rust-fuzz/trophy-case](https://github.com/rust-fuzz/trophy-case)).

Happily, not _one_ of those bugs could be escalated into an actual exploit.
In each case, Rust's various runtime checks successfully caught the problem
and turned it into a controlled panic. (In practice, this would restart the
web server cleanly.)
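
The failure mode being described can be sketched like so: a bounds bug becomes a catchable panic rather than an out-of-bounds read (`buggy_parse` is a hypothetical stand-in, not the actual vobsub code).

```rust
use std::panic;

// Hypothetical stand-in for a decoder bug: indexes past the slice when
// the input is shorter than expected.
fn buggy_parse(data: &[u8]) -> u8 {
    data[4] // bounds-checked: out of range becomes a panic, not a wild read
}

// In a real server, a supervisor would catch the panic and keep serving;
// catch_unwind simulates that recovery here.
fn survives_bad_input(data: &[u8]) -> bool {
    panic::catch_unwind(move || buggy_parse(data)).is_err()
}

fn main() {
    assert!(survives_bad_input(&[1, 2])); // too short: controlled failure
    assert!(!survives_bad_input(&[1, 2, 3, 4, 5])); // long enough: parses fine
    println!("bad input rejected; service continues");
}
```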

So my takeaway from this was that whenever I want a language (1) with no GC,
but (2) which I can trust in a security-critical context, Rust is an excellent
choice. The fact that I can statically link Linux binaries (like with Go) is a
nice plus.

~~~
Manishearth
> Happily, not one of those bugs could be escalated into an actual exploit.
> In each case, Rust's various runtime checks successfully caught the problem
> and turned it into a controlled panic.

This has been more or less our experience with fuzzing Rust code in Firefox
too, FWIW. Fuzzing found a lot of panics (and debug assertions / "safe"
overflow assertions). In one case it actually found a bug that had been under
the radar in the analogous Gecko code for around a decade.

------
sangnoir
How does this square with rumors from a few days ago that the Pixelbook will
be able to run virtualized Windows (or Linux)? I do not understand the
implication of "No actual hardware is emulated."

If the Pixelbook _can_ run Windows or Linux in a VM, then its price slides a
little closer towards justifiable.

~~~
mcpherrinm
At least what exists today appears to be aimed at running Linux (or other
KVM-aware guests, maybe BSD). I have only briefly read the code.

This may be used as part of running Android apps on ChromeOS in more secure
sandboxes, but the Wayland integration suggests to me this might be for
running traditional Linux desktop applications on ChromeOS.

I think there's a good chance we'll find out something at the rumored upcoming
Pixelbook launch.

------
inondle
Is this a new component for Chrome OS? Would this be a replacement for a
project like Crouton or is something like this already being leveraged by that
project?

~~~
dward
Crouton is just a chroot to some linux distro. It doesn't use any sort of
virtualization.

~~~
corybrown
Could this be a sanctioned way to run linux on a Chromebook w/o dev mode?

~~~
inondle
The README contains commands that seem to require access to the developer-mode
shell. Perhaps in the future though.

------
jacksmith21006
This is huge for Chrome OS. Hope we learn more next week from Google on the
4th.

------
O_H_E
Whether you want Rust to go, or Go to rust.

~~~
wybiral
I see a place for both of them in my toolbox.

Go still makes network code and certain models of concurrency stupidly simple.

Rust is more of a replacement for C/C++ for me.

~~~
XorNot
Go confirmed for me what I suspected: that I hate hate _hate_ the futures
style of async programming.

I'm on the lookout for channels and green-threads in Rust (so I can basically
write borrow-checked Go-style code in Rust).

~~~
tmzt
Channels are there but you have to use an API for them. Green threads were
removed a few years ago, though there are implementations of co-routines, etc.
as crates.

Rust is getting an unstable form of async and await as macros/syntax
extensions, and there are RFCs discussing adding them to the language in some
form. This would still be a wrapper for futures, but a more ergonomic way of
using them.
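
For the parent's benefit, a sketch of what Go-style channel code looks like with std's API (no extra crates):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a worker that sends over a channel, then drain it on this
// thread: the std-library analogue of `go f()` plus `range ch`.
fn fan_in_sum() -> i32 {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || {
        for i in 0..3 {
            tx.send(i * 10).unwrap();
        }
        // `tx` is dropped here, which closes the channel.
    });
    // `iter()` yields values until the sender hangs up, like ranging
    // over a closed Go channel.
    let total: i32 = rx.iter().sum();
    worker.join().unwrap();
    total
}

fn main() {
    println!("total = {}", fan_in_sum()); // 0 + 10 + 20 = 30
}
```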

~~~
infogulch
I'm not entirely convinced that async annotations can't be completely elided.
I think that any call stack that touches a sync/async API could have the
decision of which to use bubbled up to a top level function via generics.

------
mtgx
The Fuchsia OS microkernel should be rewritten in Rust, too, especially if
it's going to take another 5 years before we even see it in a commercial
product. If Google wants to make a modern new OS that will help it avoid many
of the existing security problems it needs to keep fixing with Android/Chrome
OS right now, then it should do it right and avoid collecting a lot of
"security debt" down the road because of unsafe code/poor initial security
architecture decisions.

~~~
Ygg2
They should rewrite Go in Rust! That way we can avoid all this Go vs Rust
discussions ;)

The problem with your idea is that a low-level kernel will use a lot of
unsafe Rust, which loses a lot of the benefits.

~~~
ekidd
> _The problem with your idea is that a low-level kernel will use a lot of
> unsafe Rust, which loses a lot of the benefits._

I've actually worked on a toy kernel in Rust (using the excellent tutorial at
[https://os.phil-opp.com/](https://os.phil-opp.com/)), and it turns out that,
yes, you obviously need to use unsafe code to talk to the actual hardware. But
in most cases, you can encapsulate the low-level hardware inside a safe API:

[https://github.com/emk/toyos-rs/blob/fdc5fb8cc8152a63d1b6c85...](https://github.com/emk/toyos-rs/blob/fdc5fb8cc8152a63d1b6c85cd357737e6b1aebb5/crates/cpuio/src/lib.rs#L43-L75)

In this example, only I/O port _creation_ is an unsafe API, because you need
to specify a memory address to read and write. But once the port is created
(pointed at an appropriate address!), it's perfectly safe to _use_.

So, yes, kernel-space Rust will use "unsafe" far more often than regular Rust
code. But you can still make at least 80% of your code safe, and maybe much
more. And the remaining "unsafe" APIs act as a useful warning to pay attention
to what you're doing. Plus, Rust is a really nice language to write kernel
code in, anyway.
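
A self-contained sketch of the same "unsafe to create, safe to use" pattern, using ordinary memory as a stand-in for a hardware register (the real cpuio code uses port I/O instructions instead):

```rust
// A register wrapper: constructing it from an arbitrary address is
// unsafe, but once the caller has vouched for the address, every read
// and write goes through a safe API.
struct Register {
    addr: *mut u32,
}

impl Register {
    // Unsafe: the caller must guarantee `addr` points at valid,
    // exclusively owned memory for the lifetime of the wrapper.
    unsafe fn new(addr: *mut u32) -> Register {
        Register { addr }
    }

    // Safe to call: validity was established at construction time.
    fn write(&mut self, value: u32) {
        unsafe { self.addr.write_volatile(value) }
    }

    fn read(&self) -> u32 {
        unsafe { self.addr.read_volatile() }
    }
}

fn main() {
    // Ordinary memory stands in for real hardware so the sketch runs.
    let mut backing: u32 = 0;
    let mut reg = unsafe { Register::new(&mut backing) };
    reg.write(0xDEAD_BEEF);
    println!("{:#x}", reg.read()); // prints 0xdeadbeef
}
```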

~~~
Spiritus
> _But in most cases, you can encapsulate the low-level hardware inside a
> safe API_

I'm probably missing something obvious. But isn't that true for most
languages?

~~~
Ygg2
Not really. I mean, let's say you program in C: how will you enforce that
some pointer is never null? In Rust you can say _&Object_ and that reference
is never null (modulo any unsafe shenanigans).
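
Concretely, nullability in Rust is opt-in via `Option`, and the compiler even exploits the "references are never null" guarantee so the option costs nothing extra:

```rust
use std::mem::size_of;

fn main() {
    // Because `&u32` can never be null, the compiler uses the null
    // bit-pattern to represent `None`: `Option<&u32>` is pointer-sized.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());

    let x = 42u32;
    let maybe: Option<&u32> = Some(&x);
    // Unlike a C pointer, you cannot forget the null check: the type
    // system forces a match before the reference can be used.
    if let Some(r) = maybe {
        println!("{}", r);
    }
}
```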

~~~
bjz_
Null is not really the most pertinent example in this context - at least you
get a segfault. What is more important is that in C or C++ (even C++17), it is
trivially easy to produce buffer overruns, use after frees, dangling pointers,
invalidated iterators, data races etc. That is the unsafety that we are
talking about here. Opt-in nullability via Option<T> is nice to have though.

~~~
Ygg2
Yeah, I went with familiarity/simplicity in that example.

My point was similar: C/C++ don't have a safe subset.

~~~
adrianN
Not a safe subset anyone would want to use at least. You could just not use
pointers in your code and you'd have memory safety.

~~~
steveklabnik
Your code, or any of the code it calls; iterator invalidation, for example,
wouldn't force you to use pointers directly, but can still cause memory
unsafety.

