> For system-level folks, Rust is one of the most exciting security developments of the past few decades. It elegantly solves problems which smart people were saying could not be solved. Fuchsia has a lot of code, and we made sure that much of it (millions of LoC) was in Rust.
> Our kernel, Zircon, is not in Rust. Not yet anyway. But it is in a nice, lean subset of C++ which I consider a vast improvement over C.
A few thoughts on this.
1. Why is it that Chrome hasn't had more Rust adoption? It's so obvious - Chrome constantly has to make the tradeoff of security vs performance with regards to things like parsing in sandboxed code. There's so much potential - Firefox started with the obvious place, encoding and decoding, which are too dangerous for C/C++ but too performance sensitive to run out of process.
Feels like it's gotta be organizational, and somehow Fuchsia has managed to break off from the part of Google that's still clinging to C++.
2. I'd be interested in hearing more about the design of Fuchsia. Rust in userland is cool, or kernel modules too, but a memory unsafe kernel is an unfortunate thing. Even just the teaser of "not yet anyway" is changing my view on the OS (I find the idea of another memory unsafe kernel/userland more depressing than exciting, even if there are neat things like capabilities).
3. Also curious to hear about their experience with a strict subset of C++. Again, looking at Chrome, we now see common ITW exploits - and Chrome is perhaps the single most fuzzed piece of software in the world, and leverages many modern C++ techniques (or the Google equivalent of them).
As for the rest, verified execution seems interesting - is this like what iOS does, where it's basically impossible to RWX (except with an escape hatch for JITs, but behind a permission boundary)?
TBH if Google pivots Fuchsia to do something interesting like bottom-up memory safety with novel isolation mechanisms, I'll bite. Until then, I'll wallow in this awful pit we continue to dig ourselves into.
The web platform code in Chromium uses a C++ garbage collector [1] which gets a lot of benefits of a memory safe language and integrates well with V8/JS. That fixed the vast majority of issues without having to rewrite any of the code in another language.
Oh, I totally disagree with you. I'm well aware of how Chromium's C++ code works, and I'm aware of oilpan, I don't think it's fair at all to say it solves memory safety issues.
> It requires a human to actually look at the contents of the unsafe block.
Yes, that's an incredible capability - that a human can actually look to find bugs, that's insane, that's huge.
> Most of the time those humans are even rarer than those that validate unit tests are properly written.
Maybe, maybe. Is it rarer than fuzzing with sanitizers though? Or rarer than serious sandboxing efforts? When unsafe has been used gratuitously in Rust libraries the community has pushed back, to a fault.
I don't know what it is you want to try to argue here though. My questions were really targeted to someone at Google who can speak more directly to what I've been seeing for years.
I am trying to argue that security exploits don't go away just by using a safer systems programming language.
Someone still needs to take care of the remaining 30% of issues.
Meanwhile the unsafe blocks can still account for the other 70% (the memory-safety bugs) if no one bothers to actually validate them.
Actix Web is a very good example of how the community still has to learn to deal with such issues, and of how little that safety would actually have been worth if it hadn't been validated by fellow humans.
Yes it is definitely better than C, C++ or Objective-C, just not zero effort to keep it safe.
If >95% of your codebase doesn't feature unsafe, your rare human with the review skills only has to look at the remaining 5%, maybe 10%, in order to understand what that 5% is doing.
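To make that concrete, here's a made-up sketch of the kind of thing that 5% looks like; the reviewer's job reduces to checking the documented contract around the unsafe block, while the compiler covers everything else:

```rust
/// A tiny wrapper over a foreign buffer. The only lines a safety review
/// has to scrutinize are the `unsafe` ones; the rest of the crate can be
/// checked by the compiler alone. (Hypothetical example.)
pub struct Samples {
    ptr: *const f32,
    len: usize,
}

impl Samples {
    /// Callers must guarantee `ptr` points to `len` readable f32s that
    /// outlive the returned value.
    pub unsafe fn from_raw(ptr: *const f32, len: usize) -> Self {
        Samples { ptr, len }
    }

    pub fn get(&self, i: usize) -> Option<f32> {
        if i < self.len {
            // SAFETY: `i < len`, and the constructor's contract guarantees
            // `ptr..ptr+len` is readable. This is what the human checks.
            Some(unsafe { *self.ptr.add(i) })
        } else {
            None
        }
    }
}
```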
Fuchsia's kernel (Zircon) is based on LK [0] [1] ("Little Kernel"), it's not a from scratch thing. Would be interesting to know when LK started, the first commit[2] in the git repo is a big code drop so doesn't tell the story.
Very simple, the Chrome and ChromeOS code bases mostly predate Rust becoming a thing. It's a complex code base. Replacing any of it is a big task. It took Mozilla many years to develop Rust and use it in their own browser, which is likely to be ongoing work for many years to come. Additionally, developers familiar with the Chrome/ChromeOS code base are probably very hard core C++ developers. Switching to Rust is not going to happen just like that. They'd have to want it and agree on doing it. And then they'd have to learn it and get productive with it.
Fuchsia development has been ongoing for quite long as well but most of the code base is from more recent times. When they first announced Fuchsia, they did not talk about Rust at all. But obviously they managed to attract some Rust developers since then and perhaps made a conscious choice to not use C++ for new development and try to contain what they had already as much as they could where it was already there.
It's one of those high level technology choices that kind of became an obvious one a few years ago. If your goal is to develop fast and secure software, Rust seems the preferable language. Google has the track record to prove that C & C++ are inherently risky since despite decades of trying to prevent issues, they regularly have to deal with them still. Including in new code. I imagine somebody at a high level just drew a line in the sand and put a stop to most new C++ development on Fuchsia. Do it right the first time. Or something like that.
> In conclusion. Too early, lack of experts, rapid evolution pains. It was stacking risk on top of an already risky project.
Rust was extremely immature and it was too early to use it in an OS at the time; it had just hit 1.0. By then, Fuchsia had already written its kernel, Magenta (now Zircon), in C++.
In short, it did not make sense to write the OS kernel in Rust from both a technical and business standpoint at the time.
Dude, it's a kernel. What other alternatives do you suggest? ATS and the like sound nice for academic projects, but this is an industry project (even if there's some research purpose to it), with different tradeoffs.
I'm sure they don't need my recommendation, and the whole thing is moot today if Rust has fixed the issues in the years since.
But to play with counterfactual history: Apparently Go is fairly big there. There was a Usenix paper about a Go kernel a while back, and there's also gVisor. Then there is Ada, and Kotlin native, and various safe dialects of C. MS did one in .NET - there were lots of options.
There are papers with Python kernels. Doesn't mean Python's a good language for writing a kernel. gVisor had to fight Go quite a lot, I think it was a huge mistake for them to choose it.
Ada is the only language you've mentioned that really fits IMO. One of the safe languages like P would be interesting but it's unproven in the space.
As for Ada, it's a sad story, but ultimately the language is unlikely to gain traction after too many early mistakes (particularly around licensing).
So it's from quite a prestigious institution, has at least one big-name author, and concludes that it works quite well, recommending it over C for new kernels and VMMs.
I also disagree with the "no traction" argument about Ada, since it's been used in real projects, has had a big enough developer community for a long time, and is alive and well. People can surely pick up new programming languages when they join a new project; I know I've done it many times. Tooling licensing should be fine as well, since GNAT is available under the GPL plus a linking exception (same as Google has been using with GCC since the dawn of time) and under the AdaCore license, which Google probably can afford.
So, I hold that there were many other options besides Rust, including the others in my original comment that we didn't get to in detail. Except Kotlin Native; I made a timeline mistake there, it was too early in 2016.
I read a possibly dated take from a security researcher that slowly introducing Rust into a C/C++ codebase may actually make the codebase less secure: the C parts can be hardened with mitigations against memory errors, while pure Rust code carries no such mitigations because it normally doesn't need them, so a buffer overflow that does occur can potentially cause bigger problems this way.
If someone has more knowledge of C and Rust compilers and security, I would be interested in their opinion on this.
> slowly introducing Rust into a C/C++ codebase may actually make the codebase less secure
I think to some extent this used to be true, and to some extent still is.
Rust didn't always have MIRI support which points out if you try to do some unsound thing. Maybe 4 years ago you would have been the first person to use a specific niche static analyzer that builds on LLVM with Rust, so there would have been bugs inevitably. Same goes for wanting to analyze both the C and Rust that call each other. So static analyzer support has improved I think.
Also, C is better understood, while in Rust it's less obvious how things need to look. By "better understood" I don't mean by the developers who write the code, but from a language specification point of view. E.g. views have evolved on how accessible uninitialized memory should be in unsafe Rust, so MaybeUninit has been added; it's only two years old at this point.
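As a rough illustration of the kind of pattern MaybeUninit was added for (a sketch, not from any particular codebase):

```rust
use std::mem::MaybeUninit;

/// Copy the first 16 bytes of `src` into a stack buffer without zeroing it
/// first. MaybeUninit makes the "not initialized yet" state explicit, where
/// the older `mem::uninitialized()` idiom was never well specified.
fn first_16(src: &[u8]) -> [u8; 16] {
    assert!(src.len() >= 16);
    let mut buf = MaybeUninit::<[u8; 16]>::uninit();
    // SAFETY: we initialize all 16 bytes before calling assume_init, and the
    // assert above guarantees `src` has at least 16 bytes to copy from.
    unsafe {
        std::ptr::copy_nonoverlapping(src.as_ptr(), buf.as_mut_ptr() as *mut u8, 16);
        buf.assume_init()
    }
}
```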
Also there is the issue of safe interfaces between languages. In the days before cxx, if you had a "safe" cpp smart pointer, moving it to Rust and back would have involved possibly more unsafety than operating on that smart pointer yourself.
There has also been the long standing issue of unwinding at FFI boundaries.
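Concretely, before the C-unwind ABI work, a callback handed to C had to catch panics itself, since unwinding across an extern "C" boundary was undefined behavior. A sketch of the usual workaround, with hypothetical names:

```rust
use std::panic;

/// A callback intended to be passed to C code. Letting a Rust panic unwind
/// into the C caller would be undefined behavior, so we catch it and turn it
/// into an error code instead.
extern "C" fn on_data(len: i32) -> i32 {
    let result = panic::catch_unwind(|| {
        // ... real work that might panic ...
        len.checked_mul(2).expect("overflow")
    });
    result.unwrap_or(-1) // report failure to C instead of unwinding
}
```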
If you have a codebase with 5 thousand lines of cpp and now replace 1 thousand lines with a Rust component of 500 lines, it might still be a better idea to just keep the old solution and give it a thorough review. It's different I think if you have a codebase with 5 million lines of cpp and replace one component of 50k lines that has a clean interface but is an extreme source of vulnerabilities. It's a question of how much dangerous "surface" there is between the two languages.
Does all of this mean you shouldn't invest in Rust? Absolutely not. If it's a 5k line codebase, you can rewrite it completely. If it's a 5 million line codebase, I'm sure you'll find a security-bug-ridden component that is well isolated that you can replace.
I'm knowledgeable on C and Rust and security and it sounds like a red herring to me.
It's unclear what they're trying to say, but a charitable interpretation would be that C code assumes unsafety, therefore mitigations are in place. But it's actually the opposite situation - rustc is generally quite aggressive about turning mitigations on, and adopts them very quickly.
Re parsing: there's work being done on Wuffs which is meant for decoding data formats safely and efficiently. It's a new language, but it's one that compiles into C code.
It's a Google project, and I wouldn't be surprised if it eventually makes its way into Chrome.
Wuffs is not a Google project; it's a project whose code happens to be owned by Google. Most likely it's some engineer's side project and while I wouldn't be surprised if it was incorporated into a Google project, it's definitely not inevitable. It has probably the same odds as any other well-maintained open source project.
I fully acknowledge that the README says that it's not an official Google product. But rather than debating the exact "Google status" of the project, here's the tracking bug[1] for its integration into Skia (the low level graphics library in Chromium). Here's the tracking bug[2] for its integration into Flutter.
Chromium still uses C++14, not even C++17, due to binary compatibility with PNaCl plugins. In 2022 the support for those will be retired, and then more options will be possible.
Actually, Chrome was evaluating Rust for their case. They were pretty much for it, but they had to make it interface with the existing C++ codebase. Here is the relevant discussion:
Yeah they've been evaluating it for years, and then I read that document and it frankly felt like a poison pill. Reading that document it sounds very much like it's "how do we keep writing C++ while addressing the growing interest in rust".
No, the problem here is very real - rust doesn't have a reasonable c++ interop story, which means you have to rewrite potentially very large amounts of code at once and hope you don't regress anything. At the same time you have to continue normal development work, which is necessarily still C++, because again, you can't adopt rust in small sections.
Basically to rewrite one DOM feature in rust would have a high likelihood of requiring rewriting the entire DOM implementation in rust, which would also require rearchitecting. While doing that other people would be wanting to continue working, but they don't have a rust build to work on so they have to continue in c++, thus sadness and conflicts. The other solution is to implement features in rust, and use a c++ wrapper to interface with that rust implementation, but now you've lost the memory safety advantages of rust (most of which are lifetime related, not bounds checking), and the C++ interface itself could be exciting as it could end up having to marshal other types.
> No, the problem here is very real - rust doesn't have a reasonable c++ interop story, which means you have to rewrite potentially very large amounts of code at once and hope you don't regress anything. At the same time you have to continue normal development work, which is necessarily still C++, because again, you can't adopt rust in small sections.
This was a somewhat more accurate summary a few years ago (though saying "you can't adopt rust in small sections" ignores the fact that Firefox among others did just that), but the cxx crate has made huge strides lately, and it's much more feasible now to interoperate at a deep level between Rust and C++. (In fact, dealing with this interoperability constitutes a large part of my job nowadays, and thanks to cxx it performs swimmingly.) We're even starting to talk about advanced features like seamless C++ async/await support in cxx.
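For anyone who hasn't looked at it, a cxx bridge roughly looks like this (the names and header here are invented, not taken from Chromium or from my own codebase); the macro generates the glue and checks the signatures on both sides at build time:

```rust
#[cxx::bridge]
mod ffi {
    // Rust functions the C++ side can call directly.
    extern "Rust" {
        fn parse_length(input: &[u8]) -> u32;
    }

    // C++ types and functions the Rust side can call; DomNode stays an
    // opaque C++ type, owned and mutated only on the C++ side.
    unsafe extern "C++" {
        include!("example/dom_node.h"); // hypothetical header
        type DomNode;
        fn child_count(node: &DomNode) -> usize;
    }
}

fn parse_length(input: &[u8]) -> u32 {
    // Toy implementation: fold up to four bytes into a big-endian length.
    input.iter().take(4).fold(0u32, |acc, &b| (acc << 8) | b as u32)
}
```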
I would love your opinion on whether migrating to Rust is a good idea for us, since you're experienced in this realm.
My team has a Java project that's written in a modern OO style. We defined "modern" OO as encapsulation and polymorphism, without inheritance, and it fits our use case very well.
We don't want to give up the benefits we get from the OO architecture, for good reasons that I won't get into here (because I don't want this to de-rail into a OO bash-fest, as fun as they are!)
The conundrum: In my experience, Rust doesn't support polymorphism very well. When the borrow checker meets polymorphism, a lot of state is forced into parameters (state which is inherent; we already keep state to an absolute minimum), which opens it up to mutation from all sorts of places; not very encapsulated.
We can have a bunch of private member Rc<RefCell<T>>s instead of bringing in all state via parameters, but it's unidiomatic, and in the end we'd probably have something slower and less safe than our original Java program.
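Roughly the contrast I mean, with made-up names (the trait-object style pushes state into parameters; the member style needs interior mutability and runtime borrow checks):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Document {
    title: String,
}

// Style 1: polymorphism via a trait object, but the state it operates on
// has to arrive as a parameter, so it leaks out of the component.
trait Renderer {
    fn render(&self, doc: &Document) -> String;
}

// Style 2: keep the state as a private member, which means reaching for
// Rc<RefCell<...>> and paying for runtime borrow checks.
struct TitleRenderer {
    doc: Rc<RefCell<Document>>,
}

impl TitleRenderer {
    fn render(&self) -> String {
        self.doc.borrow().title.to_uppercase()
    }
}
```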
The question: Is there any way to make Rust a good fit for a use case like ours?
(The same problems would apply to C++ codebases, though it's a little more convincing for them, since C++ isn't as safe as Java)
Not the GP, but is your problem that you have a complex object-graph without clearly defined owners of objects (as is common in GC'd languages, especially OOP ones)?
There's a pretty clear tree of ownership, actually. Looking at any given component, it's fairly easy to know when a component is a "subcomponent" (owned) or whether it's a reference to some other component elsewhere (borrowed). Some of us are C++ veterans, so we tend to think in terms of single ownership no matter what language we're in.
There are a few references in our "main" (so to speak) where its ownership would be clearer if we stored them in local variables and then handed them to the only component that used it, but that's a cosmetic concern, really.
There are very few (if any) places where we actually make use of Java's shared ownership. The only place I can think of is where we cache some files, and farm out "shared" references to any file. Still, at that point, the cache could be considered the owner, with a slight adjustment.
Note that the philosophy of cxx isn't to support all of C++'s semantics (templates make this impossible in the limit as Rust has no SFINAE rule for example), but rather to support the important use cases that arise in practice.
It's also worth noting that, these days, bindgen has support for instantiating C++ templates, though it operates at a much lower level. Compared to cxx, bindgen is oriented more toward being comprehensive than being easy.
Have you gotten a chance to use cxx in a large “production” “legacy” C++ codebase? It’s great, but there are quite a few roadblocks that I encountered even when trying something I would have considered “easy”. I could easily imagine that Chromium would have an even tougher time. Some highlights:
* if you’re not careful, you’ll end up with linkage issues if you link in multiple crates. This means one uber crate that has all your Rust dependencies. This can mess with parallelism in your build since Cargo isn’t coordinating with the broader system.
* speaking of build systems, Chromium probably wouldn’t use Cargo, meaning extra build system integration, especially since they don’t use Bazel/Blaze.
* cxx still can’t pass across certain types (Arc and Option being the big ones) and is missing a general story for extending it with custom types.
* cxx can frequently cause you to have to reach to unsafe anyway because the lifetime semantics you want to express cannot be done in pure Rust because the other half lives in c++.
Don’t get me wrong. Rust is fantastic and a joy to use. Dismissing the challenges a project like Chromium has as “they just want to stick to C++” isn’t a fair look at the situation. Lots of smart people work on these projects and are keenly aware of Rust. Additionally, managers and executives are keenly aware of the security benefits, so there’s definitely downward pressure to try to adopt these (and large efforts spent investigating how to enable this while maintaining developer velocity).
cxx doesn't necessarily help for cases like refactoring legacy C++ code. It seems to require more work than bindgen. I suspect switching to cxx in Firefox, to replace bindgen, would be quite involved.
Replacing small components like a single codec or parser wouldn't require anywhere near what Google is asking for in that document. I'm not saying to replace V8 with Rust.
> but now you've lost the memory safety advantages of rust
But replacing small isolated components is not a very interesting option. It means you're paying the costs of a multi-language codebase, in terms of build complexity, having to duplicate infrastructure, and siloing of developers.
In return you'll only be getting limited benefits from it, since the usage will be restricted to a few cases and a full migration can never happen. Every single decision on what language to use for a component becomes hard to reverse ("will this component always remain a leaf node with a really simple interface?"), slowing down decision making and ossifying the architecture.
> But replacing small isolated components is not a very interesting option.
I seriously disagree. Again, I mentioned things like JSON parsing, which deal with untrusted data but are also too performance sensitive to move out of process. Chromium's own security guidelines explicitly state that you should never mix memory-unsafe code, in-process code, and untrusted input.
I strongly believe you'd get significant returns by incrementally replacing components like that with Rust.
> Every single decision on what language to use for a component becomes hard to reverse
This is purely an organizational issue in my opinion. If the decision were made to follow the guidelines their security team has laid out, there is no question or slow-down. The guidelines are very competent and clear: memory-unsafe language, untrusted input, in-process; pick at most two.
JSON is just a straightforward example because it's easy to understand how it falls into Chromium's threat model. Feel free to replace it with font parsing, audio/video codecs, etc, and you'll find thousands of relevant vulns.
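To be concrete about the kind of leaf component I mean (serde_json here is just a stand-in, not a claim about Chromium's actual JSON code): malformed or hostile input comes back as an error value rather than memory corruption, even in-process.

```rust
use serde::Deserialize;

/// Hypothetical manifest parsed from untrusted bytes, in-process.
#[derive(Deserialize)]
struct Manifest {
    name: String,
    version: u32,
}

/// Bounds checks and the absence of unsafe code mean bad input yields
/// Err(..) rather than an exploitable memory error.
fn parse_manifest(untrusted: &[u8]) -> Result<Manifest, serde_json::Error> {
    serde_json::from_slice(untrusted)
}
```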
Being able to add new features without greatly expanding the attack surface is important for security. When new components of Firefox have been written in Rust, there have empirically been far fewer security problems to shake out than when they've been written in C++.
Is Rust of more limited benefit when it's only being used for certain components as opposed to the entire codebase? Sure. But is it still worth using for new code? Frequently, yes.
> FIDL, our IPC system, is a joy. I often smile when debating designs, because whether or not something is in-process or out-of-process can sometimes feel like a small implementation detail to people.
It also applies, partially, to COM/UWP, with the "small" difference that to this day Borland/CodeGear/Embarcadero tooling is ages ahead of the Notepad-like experience for the IDL files.
Sometimes I wish the WinDev team would spend some time with IPC implementations on other platforms to understand what they are missing, regarding their Stockholm syndrome in COM tooling.
With Fuchsia there is yet another example to learn from.
Debating whether things should be in process or out of process just seems so 80s. Ideally everything would just be in a single process. Why are new systems exposing the ability to dereference raw pointers to user programs? There was a brief period in the 00s where it seemed like systems research was moving in the direction of only running safe code, obsoleting the need for hardware isolation. Rust proves it’s possible to do efficiently. One day we’ll get there.
Processes provide better security isolation than just language based security (I think that's what you're talking about). For example with Spectre vulnerabilities they only work within a process (at least the original ones).
There are OSes that use the language for safety and have a single process but having to use memory safe languages is a pretty big limitation! Plus I would imagine it is easier to prove process security than language security.
Security was not the goal for software-isolated processes, but performance and getting rid of task-switches. Who knows, maybe this is what MS recently described as "next windows" (naive hope).
You still have processes, they are just software isolated rather than hardware isolated. You write everything in a language that can guarantee no unsafe behavior; then no hardware checks or task switching are needed, and most of the safety checks are done at compile time:
Microsoft and Oracle have removed CAS and JAAS from the CLR and JVM respectively, as they failed to provide the software-isolated model that was expected from them.
"The lowest-level x86 interrupt dispatch code is written in assembly language and C. Once this code has done its job, it invokes the kernel, which runtime system and garbage collector are written in Sing# (an extended version of Spec#, itself an extension of C#) and runs in unprotected mode. The hardware abstraction layer is written in C++ and runs in protected mode."
The point is that drivers and apps are managed code and can be software isolated, obviously some of the kernel is unsafe.
CAS was a fine-grained runtime model that added significant overhead and was never close to being verified; software isolation was coarse-grained, just like a hardware process, and had much less overhead, with the goal of verification. More here:
Software isolation can still be exploited thanks to logical bugs, this was the whole issue with security exploits in Java applets.
Hence why there is really no alternative to multi-processes for security critical code.
You refer to Singularity, yet after Singularity, Microsoft Research went down the path of exploring other alternatives.
If the security model was so great, they would have doubled down on it.
Instead, after so many iterations, the main researchers have come out with Azure Sphere, which uses C (!) with a custom Linux kernel, and all the security layers are enforced via hardware.
Hardware isolation can be exploited due to bugs too, and has been quite a bit lately, so the alternative doesn't solve that problem.
Conflating Java applet security with software isolated processes as implemented in Singularity is not doing a service to the research work done there. Much of the SIP design was in response to the lessons learned in Java and .Net.
The problem with SIPs is that everything to be isolated must be written in a managed language. That means everyone has to get on board with a managed runtime, most likely GC'd. At one point MS thought they could get everyone on board with .NET, but at some point they realized that wasn't going to happen and decided it was better to go with the status quo of C/C++ development.
Maybe we will see a resurgence of SIPs with Rust or WebAssembly. The idea of verifiably safe code, where all safety checks are done ahead of time by the compiler, has many advantages that can't be ignored forever.
Indeed. The existence of Rust as a safe language without a GC shows that a system that can only run safe code can be as efficient as systems that expose raw hardware.
In the 90s and 00s they did this with languages similar to Java because low-level safe languages hadn't been invented yet. If this idea ever resurfaces it will be done with languages similar to Rust, and will be competitive with and likely more efficient than unsafe systems. This will potentially be a nice convergence with power-efficient RISC-V chips that don't include an MMU or a supervisor mode.
I am more interested in papers proving that SIPs can't be exploited via logic-error attacks, like what happened with Java applets due to logic errors in the serialisation library.
SIPs do nothing for "We have learned that multiprocessing is the only safe path to program stability, ease of scaling across clusters and less security exploits."
Which was my whole point since the beginning.
Yeah SIPs are nice, hardly any different from other managed OSes since the 60's, with Burroughs being the first one.
Many of those were actually used in production, while Singularity never left the crib; it died at version 2.0 with nothing more than a CLI interface.
Then everyone moved on to other projects, and hardly any security assessment was done on the whole stack, so how can you assert SIPs' capability to withstand a black-hat attack?
SIPs are not new; the previous systems you mentioned are a form of them. Many shared a hardware address space across all processes and relied on the programming language being safe for isolation, but they were built in a time when running untrusted code was not really thought about; it was more about not crashing the whole system by accident.
SIPs are a form of multiprocessing designed for safety and stability, reducing exploits and potentially scaling across clusters. Singularity and Midori heavily relied on message passing for IPC, which would have allowed cluster scale-out.
These were research projects; there was plenty of security work available, some of which I linked. Much of the work revolved around verifiability of the code, to guarantee no cross-process breakout without hardware checks.
How do you assert hardware checks can withstand a black hat attack? They obviously haven't been lately.
By doing what the Azure Sphere project has done: paying bounties to researchers to actually exploit systems in production, not just in theory.
JAAS and CAS were also secure until black hats started to look into them, and since they are beyond repair, have now been thrown away, replaced by OS containers and multiprocesses.
Because Singularity is hardly any different from something like SavaJe OS as far as using a type-safe programming language as a security boundary goes.
It was abandoned in 2006 and SIP were never validated in production under real attacks.
Even Midori had more real use, having powered Asian Bing subnet for a while.
To worship it in 2021 is just theory without practical results.
I have concerns about hardware as a security boundary as well, more so lately, as I hope you do too; and it's much more difficult to patch hardware.
Again Midori used SIP's so it was validated in production, right?
Projecting worship onto me is not productive; no need to be childish about this. All the points you bring up for software isolation exist with hardware isolation; formal verification could help both, and there is a lot of work in that area.
I think software needs to become safe by default; the endless buffer overflows need to stop. If you then take the stance that userland software must be written in a safe language, then hardware isolation might not need to exist in its current form, and that's very interesting to me, that's all.
First thing that comes to mind is the approach that the Jurassic JavaScript engine [1] takes, using .NET IL generation; run that on a .NET-based OS like Singularity [2].
The point is you're JITing to an intermediate language that's verified, and a system-level JIT actually turns that into machine code; you are never allowed to write machine code directly to memory and execute it.
V8 basically does this internally with its bytecode; if your whole system were built on that bytecode, then you could do software isolation rather than hardware isolation.
I don't really agree, as someone who advocates for Rust usage.
> Ideally everything would just be in a single process.
Processes are an amazing boundary, probably the most powerful abstraction in existence.
> Why are new systems exposing the ability to dereference raw pointers to user programs?
This shouldn't be possible today - SMAP/SMEP are standard now.
> obsoleting the need for hardware isolation
I think if the last 3 years (sidechannel frenzy) have taught us anything, it's that software isolation is extremely difficult. Even hardware isolation is difficult.
> It's backed by D.J. Bernstein's excellent Salsa20 with seeding from hardware
I wonder why they stuck with Salsa20 instead of ChaCha20. The latter has got better performance characteristics and is by now more popular, having found its way e.g. into TLSv1.3.
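For the curious, here's a userland sketch of the same construction with ChaCha20 via the rand_chacha crate; Zircon's actual cprng is in-kernel C++, this just shows the stream-cipher-as-CSPRNG shape with a hardware-style seed:

```rust
use rand_chacha::ChaCha20Rng;
use rand_core::{RngCore, SeedableRng};

fn main() {
    // In a kernel this seed would come from hardware entropy (e.g. RDRAND);
    // a fixed value here only keeps the sketch self-contained.
    let seed = [0x42u8; 32];
    let mut rng = ChaCha20Rng::from_seed(seed);

    // Draw pseudorandom bytes from the ChaCha20 keystream.
    let mut buf = [0u8; 16];
    rng.fill_bytes(&mut buf);
    println!("{:02x?}", buf);
}
```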
The only thing which scares me terribly is that I'm NOT seeing any discussion of real-time problems.
Android has an absolutely shit audio stack. iOS simply destroys Android on this front.
Fixing that requires that you architect the entire kernel around making sure that you can service the audio stack in real-time with low latency.
The fact that I haven't seen even the slightest mention of this makes me worry that the Fuchsia folks haven't really learned any lessons from Android and are simply repeating the same mistakes.
Fuchsia is designed with real-time tasks in mind. The scheduler operates both a fair scheduling algorithm and a deadline scheduling algorithm. The audio subsystem has its threads scheduled using deadline constraints. The official docs seem to be missing some content on this, unfortunately. https://fuchsia.dev/reference/fidl/fuchsia.scheduler?hl=en#P... describes the API which provides the deadline scheduling capability.
As far as I know, Android’s less-than-great audio stack is not due to an OS constraint: Linux itself has both JACK and now the new PipeWire, neither of which has any problem with low latency.
> This document proposes a mechanism for running unmodified Linux programs on Fuchsia. The programs are run in a userspace process whose system interface is compatible with the Linux ABI. Rather than using the Linux kernel to implement this interface, we will implement the interface in a Fuchsia userspace program, called starnix. Largely, starnix will serve as a compatibility layer, translating requests from the Linux client program to the appropriate Fuchsia subsystem.
So they will be able to run unmodified Linux binaries which should allow for a smooth transition from Android/Linux.
I'm not even sure that they'll rebrand Android at that point, as the kernel is invisible to users. My guess is they'll just replace Linux with Fuchsia.
Hell, even their approach is to retrofit Linux binaries rather than trying to modify Android to run on Fuchsia. Android is a different beast itself, I'd rather see a better, reworked Android runtime and HAL that is idiomatic to Fuchsia rather than this one. The same could be said for ChromeOS too.
But that seems to be exactly what they are doing. They're surely porting the Android runtime to Fuchsia, or even already ported it.
Remember there are Android APKs containing native Linux binaries (made with Android's NDK) that you should be able to run. That is the whole main point of Fuchsia's starnix. I think this is even mentioned in starnix's RFC.
Both. If CastOS (which is Linux-based) for the Google Nest Hub was already replaced via an update, it is completely obvious that Fuchsia is not only here to stay, but will definitely replace Android AND ChromeOS.
I think Unix has had its day and it's quite refreshing to see a new OS that is designed for security and is actually optimised for modern hardware and CPU architectures from the start.
Downvoters: So Google did NOT just replace the Linux-based CastOS on the latest Google Nest Hub update with Fuchsia and it is still an 'experiment' that some were still hoping for? [0] Are you prepared to present evidence against my answer to the parent's question?
There is no need to be scared of change, especially when Google inevitably replaces Android and ChromeOS with Fuchsia.
I'm not scared of "change", I'm scared of Google. I'm scared of losing power over my computers. Android is already a huge step back - only its Linux underpinnings allow it to act as a "real computer" in a pinch, and Google is rapidly encroaching on that too [0]. If Google is permitted to execute, unfettered, its vision for what end-user computing should look like, I do not expect it to be terribly empowering or respectful of my ideals, however technically impressive it may be.
Keep in mind that there are many teams at Google and they will need to be convinced. The Fuchsia team may have broader ambitions, but they seem to be starting with Google’s lesser-known OSes on embedded devices. This is in itself a win at a big company, since there are fewer obscure OSes to maintain. That doesn’t necessarily mean that the Android or ChromeOS teams have decided to commit just yet, as far as we know.
To be a good kernel. One imagines that, insomuch as a good kernel is part of an OS, it could 'replace' Android and ChromeOS. But the days of platforms getting sunset are a thing of the past; the Android runtime survives and will survive for years.
Maybe. I think they’ll continue rolling it out to simple devices for a few years while they do extensive testing and fix any issues that arise, but I do believe their long-term goal is for it to be a replacement for all their various Linux-based OSs (since it is supposed to be able to run on simple devices all the way up to smartphones and PCs).
Treble already provides that stable ABI, since Android 8 Linux drivers are considered legacy on Android and all newer drivers must use Treble IPC for certification.
PS 4 uses an open source OS, how much have you seen of it?
Obviously this is a reason; otherwise Fuchsia would have the same licence. However, Google probably doesn't care that much about having an option to keep it closed source. Antagonism to the GPL comes mainly from hardware vendors.
They cannot make Fuchsia, which has been released under a permissive license, retroactively closed source. If Fuchsia replaces Linux on a large scale, they cannot take that away. What Google could do is keep a closed-source branch or release custom builds of Fuchsia which have closed-source code in them. While there are certainly some scenarios which could be worrisome, it depends on the open-source community how much Google can get away with. If only the open-source parts are used, the closed-source versions would just not have much meaning.
And of course, there would be some very legit reasons not to use the GPL, like embedding ZFS into the kernel, which would be no license issue for Fuchsia.
I'll be surprised if it's still around (as in not killed by Google) in a few years. Besides, I don't see it ever becoming popular enough to be an alternative to Linux outside of Google.
Why? It was just officially rolled out to its first devices less than two weeks ago. It isn’t like a Google service that can fail or not get enough traction; Fuchsia already has a bunch of users (anybody with a Google or Android device) that it could be rolled out to. I imagine it will not replace Android in the next five years, but for simpler devices it’s almost guaranteed to be the main OS shipped going forward.