
Fuchsia is not Linux - navigaid
https://fuchsia.googlesource.com/docs/+/master/the-book/
======
naasking
Some problems I see from skimming the docs:

> Calls which have no limitations, of which there are only a very few, for
> example zx_clock_get() and zx_nanosleep() may be called by any thread.

Having the clock be an ambient authority leaves the system open to easy timing
attacks via implicit covert channels. I'm glad these kinds of timing attacks
have gotten more attention with Spectre and Meltdown. Capability security
folks have been pointing these out for decades.

> Calls which create new Objects but do not take a Handle, such as
> zx_event_create() and zx_channel_create(). Access to these (and limitations
> upon them) is controlled by the Job in which the calling Process is
> contained.

I'm hesitant to endorse any system calls with ambient authority, even if it's
scoped by context like these. It's far too easy to introduce subtle
vulnerabilities. For instance, these calls seem to permit a Confused Deputy
attack as long as two processes are running in the same Job.

Other notes on the kernel:

* The focus on handles overall is good though. Some capability security lessons have finally seeped into common knowledge!

* I'm not sure why they went with C++. You shouldn't need dispatching or template metaprogramming in a microkernel: code reuse is minimal, since all primitives are supposed to be orthogonal to each other. That's the whole point of a microkernel. Shapiro learned this from building the early versions of EROS in C++, then switching to C. C also has modelling and formal-analysis tools, like Frama-C.

* I don't see any reification of scheduling as a handle or an object. Perhaps they haven't gotten that far.

Looks like they'll also support private namespacing à la Plan 9, which is
great. I hope that with Google's resources we can get a robust OS to replace
existing antiquated systems. This looks like a good start.

~~~
kllrnohj
C++ has far more to offer over C than just template metaprogramming.

Basic memory management and error handling, for example, are radically easier
and less error-prone in C++ than in C. Less reliance on macros and gotos
should be a pretty obvious win.

There's really very little reason to ever use C over C++ with modern
toolchains.
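
For what it's worth, the memory-management half of that claim can be shown in a few lines of generic C++ (nothing kernel-specific; `process` is an invented example):

```cpp
#include <memory>

// RAII in miniature: the unique_ptr's destructor runs on every exit path,
// so adding a new early return can never leak the buffer. The equivalent C
// needs a goto-cleanup ladder that must be kept in sync by hand.
bool process(bool fail_early) {
    auto buf = std::make_unique<char[]>(4096);  // released automatically
    if (fail_early)
        return false;   // no leak on the error path...
    buf[0] = 'x';
    return true;        // ...and none on the success path
}
```

The same function in C needs an explicit free() before each return (or a goto to shared cleanup), and forgetting one is a silent leak.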

~~~
naasking
> Basic memory management and error handling, for example, are radically
> easier and less error prone in C++ than in C.

Microkernels don't need memory management. Dynamic memory management in a
kernel is a denial-of-service attack vector. Fuchsia is built on a
microkernel, so I expect it will follow the property of every microkernel
since the mid 90s: no dynamic memory allocation in the kernel; all memory
needed is allocated at boot.
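
A rough sketch of that discipline, with invented names: a fixed pool carved out at boot, and no allocator that can be driven to exhaustion at runtime.

```cpp
#include <cstddef>

// All Thread records live in a static array sized at build time;
// "allocation" just pops a free list, and exhaustion is an explicit,
// bounded failure rather than a path into a general-purpose heap.
struct Thread { Thread* next_free; int id; };

constexpr std::size_t kMaxThreads = 64;   // hard cap fixed at build time
static Thread pool[kMaxThreads];
static Thread* free_list = nullptr;

void pool_init() {                        // called once, "at boot"
    for (std::size_t i = 0; i < kMaxThreads; ++i) {
        pool[i].next_free = free_list;
        free_list = &pool[i];
    }
}

Thread* thread_alloc() {                  // fails instead of growing
    if (free_list == nullptr) return nullptr;
    Thread* t = free_list;
    free_list = t->next_free;
    return t;
}

void thread_free(Thread* t) {
    t->next_free = free_list;
    free_list = t;
}
```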

Furthermore, you don't want exceptions in kernel code. That carries huge and
surprising runtime execution and space costs.

Simply put, there is no reason to choose C++ for a microkernel, and many, many
reasons not to.

~~~
kllrnohj
> Microkernels don't need memory management.

Of course they do. It takes memory to hold metadata about a process. It takes
memory to track resources for other services. It takes memory to pass data
between them.

Just because that memory is reserved at boot doesn't mean it suddenly has no
lifecycle of any kind.

> Furthermore, you don't want exceptions in kernel code.

Nobody said anything about C++ throw/catch exceptions.

> Simply put, there is no reason to choose C++ for a microkernel, and many,
> many reasons not to.

If you want to avoid C++ that's great, but to argue for C over it is insanity
rooted in nostalgia.

~~~
bb88
> If you want to avoid C++ that's great, but to argue for C over it is
> insanity rooted in nostalgia.

Did you know that code in C++ can run outside of main()?

I used to be a C++ believer, and advocated for C++ over our company's use of
Java.

One day, they decided they wanted to "optimize" the build, by compiling and
linking objects in alphabetical order. The compile and link worked great, the
program crashed when it ran. I was brought in to figure it out.

It turned out to be the C++ "static order initialization fiasco":

[https://yosefk.com/c++fqa/ctors.html#fqa-10.12](https://yosefk.com/c++fqa/ctors.html#fqa-10.12)

If you've ever seen it, C++ crashes before main(). Why? Because ctors run
before main(), and some run before other statics they depend on have been
constructed.

Changing the linking order of the binary objects fixed it. Remember, nothing
else failed: no compiler or linker errors or warnings, no nothing. But one was
a valid C++ program and one was not.

You might think that is inflammatory, but I considered that behavior insane,
because main() hadn't even run yet, and the program core-dumped, leaving me to
figure out what went wrong.
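
For anyone who hits this: the usual workaround is the construct-on-first-use idiom, which sidesteps link order entirely (illustrative names here):

```cpp
#include <string>

// Instead of a namespace-scope static, wrap the object in a function with a
// local static: it is constructed the first time anyone calls the function,
// no matter which translation unit's initializers the linker runs first.
std::string& config_path() {
    static std::string path = "/etc/app.conf";  // built once, on first use
    return path;
}

// A static in another file can now depend on it safely, because the call
// itself forces initialization.
struct Consumer {
    bool saw_valid_path;
    Consumer() : saw_valid_path(!config_path().empty()) {}
};
```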

>> Furthermore, you don't want exceptions in kernel code.

>Nobody said anything about C++ throw/catch exceptions.

I'd like to add that if you find yourself restricting primary language
features (e.g. templates, static ctors, operator overloading, etc.) because
the implementation of those features is bad, maybe that language is the wrong
choice for the project you're working on.

After I read the C++ FAQ lite [1] and the C++ FQA [2], I realized the
determinism that C provides is kind of a beautiful thing. And yes. For a
kernel, I'd argue C over C++ for that reason.

[1] C++ FAQ Lite: [http://www.dietmar-kuehl.de/mirror/c++-faq/](http://www.dietmar-kuehl.de/mirror/c++-faq/)

[2] C++ Frequently Questioned Answers:
[https://yosefk.com/c++fqa/](https://yosefk.com/c++fqa/)

~~~
gmueckl
Well, if your main argument against C++ is the undefined order of static
initialization and that it caught you by surprise, then I'd counter by saying
that you do not know the language very well. This is very well-known
behaviour.

I think that there are stronger arguments against C++: the continued presence
of the complete C preprocessor restricting the effectiveness of automatic
refactoring, the sometimes extremely cumbersome template syntax, SFINAE as a
feature, no modules (yet!)...

Still, C++ hits a sweet spot between allowing nasty hardware-related
programming hacks and useful abstractions in the program design.

~~~
bb88
> ...then I'd counter that by saying that you do not know the language very
> well. This is very well known behaviour.

So, parsing your sentence: I'm right, and you're blaming me for not knowing
the language as expertly as you do. I can live with that.

Edited to add:

I admit it's a little snarky perhaps, but the C++ standard is 1300 pages long.
In 2018, it took my browser a minute to open it.

[http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.pdf](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.pdf)

I really do not have time to read a document like that to figure out whether
or not that behavior is spelled out in the standard. So yes, I'll let you be
the expert on this.

~~~
gmueckl
Sorry if the statement offended you. It came from the experience that I so far
haven't encountered anyone who seriously uses C++ and does not know about the
undefined order of static initialization. Also, I haven't yet had a situation
where this was a big deal.

There are worse pitfalls than unstable order among static initializers
specifically. If you dynamically load shared libraries at runtime on Linux,
you risk static initializers being run multiple times for the same library.
This is platform-specific behavior that is AFAIK present on other UNIX systems
as well, and I'm certain you won't find it in the standard.

~~~
bb88
> Sorry if the statement offended you. It came from the experience that I so
> far haven't encountered anyone who seriously uses C++ and does not know
> about the undefined order of static initialization.

Water under the bridge.

While I did say I was brought in to fix it, what I didn't say was that the
group's management thought that Java coders could code in C++. D'oh.

------
lerax
To people who don't understand the overall decision to create another system,
let me point out at least one benefit of building a system that is not Linux:
making software simpler and more efficient. Do you really think Linux is so
great? Linux is a bloated system [1], and POSIX is not so great either (have
you really read the WHOLE POSIX spec?).

Standards are important, and compatibility is sometimes (SOMETIMES) important
too. But not all the stuff defined in POSIX is important. POSIX sucks
sometimes [2]; only GNU can be worse when it comes to bloat [3].

Only users who never touch the code can believe that Linux, POSIX and GNU are
entities following principles based on simplicity. Linux following Unix
guidelines? That can only be one of Linus's jokes.

Creating and maintaining custom software on top of things THAT YOU DON'T
UNDERSTAND has a massive cost. Likewise, the cost of understanding complex
things is even worse.

Sometimes it's simpler to re-invent the wheel than to understand why the
wheel was built with a fractal design [4].

[1] Linux LOC overtime
[https://www.linuxcounter.net/statistics/kernel](https://www.linuxcounter.net/statistics/kernel)

[2] POSIX has become outdated
[http://www.cs.columbia.edu/~vatlidak/resources/POSIXmagazine.pdf](http://www.cs.columbia.edu/~vatlidak/resources/POSIXmagazine.pdf)

[3] Code inflation about /usr/bin/true
[https://pdfs.semanticscholar.org/a417/055105f9b3486c2ae7aec22a7dbd57e1ba3c.pdf](https://pdfs.semanticscholar.org/a417/055105f9b3486c2ae7aec22a7dbd57e1ba3c.pdf)

[4] The Linux Programming Interface
[https://doc.lagout.org/programmation/unix/The%20Linux%20Programming%20Interface.pdf](https://doc.lagout.org/programmation/unix/The%20Linux%20Programming%20Interface.pdf)
(that cover of book has a reason and yes: it is what you think)

~~~
jpfr
A huge portion of Linux is drivers and support for different processor
architectures. Yes, development was chaotic in the nineties and the code
showed. But a lot of engineering effort went into making the core really nice.

[https://unix.stackexchange.com/a/223763](https://unix.stackexchange.com/a/223763)

With regards to POSIX, it is amazing how well this API is holding up. There
are quite a few implementations, from GNU, the BSDs, Microsoft (at least
partial support in MSVC) and a few others (e.g. musl). So POSIX support is a
given on most systems. Why replace it with something that breaks existing
code?

[https://www.musl-libc.org/faq.html](https://www.musl-libc.org/faq.html)

Not to say there is no bloat. But some bloat is the patina that all successful
systems take on over time. Is the bloat small enough to be managed and/or
contained? I say yes.

~~~
derefr
> So POSIX support is a given on most systems. Why replace it with something
> that breaks existing code?

You're not necessarily breaking existing code. Both macOS and Windows are
built on non-POSIX primitives that have POSIX compatibility layers.

It seems that the conclusion most of industry has reached is that, whether or
not POSIX is a _useful_ API for your average piece of software, there are
still better _base-layer semantics_ to architect your kernel, IPC mechanisms,
etc. in terms of than the POSIX ones. You can always support a POSIX "flavor"
or "branded zone" or "compatibility subsystem" or whatever you want to call
it, to run _other_ people's code, after you've written all _your_ code against
the nicer set of primitives.

A potentially enlightening analogy: POSIX is like OpenGL. Why do people want
Vulkan if OpenGL exists? Because Vulkan is a more flexible base layer with
better semantics for high-efficiency use-cases. And if you start with Vulkan,
the OpenGL APIs can still be implemented (efficiently!) in terms of it;
whereas if you start with an OpenGL-based graphics driver, you can't "get to"
(efficient) Vulkan support from there.

All that aside, though, I would expect that the real argument is: Fuchsia is
for ChromeOS. Google are happy to be the sole maintainers of ChromeOS's kernel
_and_ all of its system services, so _why not_ rewrite them all to take
advantage of better system-primitive semantics? And Google doesn't have to
worry about what apps can run on a Fuchsia-based ChromeOS, because the two
ways apps currently run on ChromeOS are "as web-apps in Chrome", or "as Linux
ELF executables inside a Linux ABI (or now also Android ABI) sandbox." There
_is_ no "ChromeOS software" that needs to be _ported_ to Fuchsia, other than
Chrome itself, and the container daemon.

~~~
emn13
Total speculation: but I seriously doubt that Fuchsia is specifically for
ChromeOS. The whole point of decent, efficient, simple, non-bug-prone APIs is
that you probably want to implement pretty much everything on top of them.
Simplicity and low overhead allow for generality and flexibility.

If all you wanted to do was support ChromeOS, you could typically add hacks
even to a messy codebase to support specific use-cases. And there are a
_bunch_ of Linux and *BSD distros demonstrating that you can adapt such a
system to even very small devices; small enough that there's not much niche
left below. Moore's Law and Dennard scaling may be comatose on the high end,
but lots of long-tail stuff is generations behind, which implies that even
really low-power IoT stuff that Linux is currently ill-suited for will likely
be able to run Linux without too many tradeoffs. I mean, the original
Raspberry Pi was a 65nm chip at 700MHz; that's clearly overkill. Even if chip
development never has a breakthrough again, there's clearly a lot of room for
those kinds of devices to catch up, and a lot of "spare silicon" even in
really tiny stuff once you get to small process nodes.

But "being able to run Linux" doesn't mean it'll be ideal or easy. And
efficiency may not be the only issue: security, cost, reliable low latency...
there are a whole bunch of areas where improvements may be possible.

I'm guessing Fuchsia is going to be _worse_ than Linux for ChromeOS, in the
sense that if ChromeOS really were all Google wants it for, they could have
gotten better results with Linux than they'll be able to get with Fuchsia in
the next few years, and at a fraction of the cost. Linux just isn't that bad;
and a whole new OS, including all the interop and user-space and re-education
pain, is a huge price to pay. But the thing is: if they take that route they
may end up with a well-tuned Linux, but that's it.

So my bet is that you'd only ever invest in something like Fuchsia if you're
in it for the long run. They're _not_ doing this "for" ChromeOS, even if that
may be the first high-profile usage. They're doing this to enable future
savings and quality increases for use cases they probably don't even know they
have yet. In essence: it's a gamble that might pay off in the long run, with
some applicability in the medium term; but the medium term alone just doesn't
warrant the investment (and risk).

~~~
derefr
I guess I left a bit too much implicit about my prediction on what Google's
going to do: I have a strong suspicion that Google sees the Linux/POSIX basis
of Android as an albatross around its neck. And ChromeOS—with its near-perfect
app isolation from the underlying OS—seems to be a way of getting free of
that.

ChromeOS has already gained the ability to run containerized Android apps; and
is expecting to begin allowing developers to publish such containerized
Android apps to the Chrome Web Store as ChromeOS apps. This means that Android
apps will continue to run on ChromeOS, without depending on any of the
architectural details _of_ ChromeOS. Android-apps-on-Android prevent Android
from getting away from legacy decisions (like being Linux-based);
Android-apps-on-ChromeOS have no such effect.

I _suspect_ that in the near term, you'll see Google introducing a Chrome Web
Store _for Android_, allowing these containerized, CWS-packaged Android apps
to be run on Android itself; and then, soon after that, deprecating the Play
Store altogether in favor of the Chrome Web Store. At that point, all Android
apps will actually "be" ChromeOS apps. Just ones that contain Android object
files.

At that point, Google can take a Fuchsia-based ChromeOS and put it on the more
powerful mobile devices as "the new Android", where the Android apps will run
through Linux ABI translation. But in this new Android (i.e. rebranded
ChromeOS), you'll now also have the rest of the Chrome Web Store of apps
available.

Google will, along with the "new Android", introduce a new "Android Native
SDK" that uses the semantics of Fuchsia. Google will also build a _Fuchsia ABI
layer for Linux_: to serve as a simulator for development, yes, but more
importantly to allow people to install these new Fuchsia-SDK-based apps on
their older Android devices. They'll run... if slowly.

Then, Google will wait a phone generation or two. Let the old Android devices
rot away. Let people get mad as the apps written for the new SDK make their
phones seem slow.

And then, after people are fed up, they'll just deprecate the old Android ABI
on the Chrome Web Store, and require that all new (native) apps published to
the CWS have to use the Fuchsia-based SDK.

And, two years after _that_, it'll begin to make sense again to run "the new
Android" on low-end mobile devices, since by then all the native apps in the
CWS will be optimized for Fuchsia, which will, presumably, have better
performance than native Android apps had on Android.

~~~
notriddle
From a branding perspective, that would be terrible. They've already invested
a bunch in the Google Play brand beyond Android apps (Play Music, Play Books,
etc.).

Seems more likely they'll allow HTML apps into the Play Store, eventually
getting rid of the Web Store entirely. They've already done the WebAPK stuff
to glue HTML apps into Android.

~~~
derefr
If, as I suspect, they'd be willing to rename ChromeOS to be "just what
Android is now" (like how Mac OS 9 was succeeded by NeXTSTEP branded as Mac
OS X), then I don't see why they wouldn't also be willing to rebrand the
Chrome Web Store as "what the Google Play Store is now." Of course, they'd
keep the music, books, etc.; those are just associated by name, not by backend
or by team.

But they _wouldn't_ keep the current content of the Play (Software) Store.
The fact that every Android store—even including Google's own—are festering
pits of malware and phishing attempts, is a sore spot for Google. And, given
their "automated analysis first; hiring human analysts never (or only when
legally mandated)" service scaling philosophy, they can't exactly fix it with
manual curation. But they _would_ dearly love to fix it.

Resetting the Android software catalogue entirely, with a new generation of
"apps" consisting of only web-apps and much-more-heavily-containerized native
apps (that can no longer do nearly the number of things to the OS that old
native apps can do!) allows Google to move toward a more iOS-App-Store-like
level of "preventing users from hurting themselves" without much effort on
their part, and without the backlash they'd receive if they did so as an end
unto itself. (Contrast: the backlash when Microsoft tried that in Windows 8
with an app store containing only Metro apps.)

I expect that the user experience would be that, on Fuchsia-based devices,
you'd have to either click into a "More..." link in the CWS-branded-as-Play-
Store, or even turn on some setting, to get access to the "legacy" Play Store,
once they deprecate it. It'd still _be_ there—goodness knows people would
still need certain abandonware things from it, and be mad if it was just gone
entirely; and it'd always need to stick around to serve the devices stuck on
"old Android"—but it'd be rather out-of-the-way, with the New apps (of which
old Chrome Apps from the CWS would likely be considered just as "new" as
newly-published Fuchsia apps upon the store's launch) made front and centre.

> Seems more likely they'll allow HTML apps into the Play Store, eventually
> getting rid of the Web Store entirely.

I would agree if this was Apple we were talking about (who is of a "native
apps uber alles" bent) but this is Google. Google _want_ everyone to be making
web-apps rather than native apps, because Google can (with enough cleverness
repurposed from Chrome's renderer) spider and analyze web-apps, in a way it
can't spider and analyze native apps. Android native apps are to Google as
those "home-screen HTML5 bookmark apps" are to Apple: something they wish they
could take back, because it really doesn't fit their modern business model.

~~~
muro
> The fact that every Android store—even including Google's own—are festering
> pits of malware and phishing attempts, is a sore spot for Google.

Lol, citation needed.

------
pm90
I see a lot of negative comments about this project here. Let me just say
this: it doesn't need to be a POSIX-compliant system, doesn't need to be
user-friendly, and doesn't even need to provide something different from what
can already be done with Linux or the other OSes we have today.

Google spends a lot of money on research. One thing about research is that a
lot of stuff you do ends up completely useless in the short term even if you
cover all your bases initially. Even if this project fails, I hope something
good can be learned from why it failed; maybe someone in the future can learn
from those mistakes and try again.

I'm certainly no fan of Google nor of the way they make money. But I am very
happy they use that money for stuff like this.

~~~
flyingcircus3
Hacker News commenters are not the ones making baseless claims about how their
product is better than the current market dominator. Hitchens Razor is working
just fine in these comments.

It reminds me of NFL fans. No one talks trash about the crappy quarterbacks of
their opponents' teams. Everyone talks trash about Tom Brady, Peyton Manning,
Cam Newton, etc.

This is also a general negotiating tactic. Start with a bold initial offer,
and then negotiate back to what you wanted in the first place.

~~~
lambda
> Hacker News commenters are not the ones making baseless claims about how
> their product is better than the current market dominator.

Where did anyone say that Fuchsia was better than Linux?

The title of this document, "Fuchsia is not Linux", is a play on the "GNU's
not Unix" backronym for the GNU project, as well as being a way of pointing
out that unlike Android, Fuchsia is an entirely different kernel.

I mean, obviously they would write it because they thought it would be better
for certain applications than Linux, or better for trying out new ideas, or
the like, but I don't see anything claiming that Fuchsia is better.

In fact, Fuchsia has been done as a pretty low-key project for a while, slowly
opening up parts but without much fanfare, just repositories being available,
and slowly posting more documentation like the link in the OP. I don't really
see very much marketing about it, just low key releases of code and more
technical information to give people a taste and show the direction they're
trying to go in.

------
Roritharr
I'm enthusiastic about Fuchsia; I really think there is a lot to gain by
breaking with the old conventions, especially when you look at what is
hindering true realtime computation approaches.

As a nice byproduct Google has a hedge against Linus dying and his replacement
being incompetent at managing the community.

~~~
jandrese
Right now it kind of reminds me of BeOS, which could do absolutely incredible
concurrent realtime low latency media processing but was absolute torture to
get a proper Web Browser working.

The problem with legacy support is that it drags in the braindamage you were
trying to avoid by rewriting the OS in the first place. But without legacy
support it's almost impossible to grow beyond the toy OS stage. It's the whole
"So, what do I do with it?" factor.

~~~
contextfree
From comments by people who worked on it (e.g.
[https://twitter.com/MCSpaceCadet/status/968666523425386497](https://twitter.com/MCSpaceCadet/status/968666523425386497)
[https://twitter.com/slava_oks/status/958908471801294850](https://twitter.com/slava_oks/status/958908471801294850)
), Microsoft seems to have reached this stage with their Midori project. That
was a ground-up OS project based on the usual suspects from the research
world, i.e. object-capability security, microkernel architecture, a new
lightweight process model, a memory-safe systems language, zero-copy IO, etc.;
the project lasted 9 years and occupied over 100 senior engineers at its peak.
They tried various strategies: run it on top of Windows, run it on top of
Linux, run Windows on top of it, run it on Hyper-V, etc., before eventually
giving up.

~~~
pjmlp
They did not give up; management pulled the plug on the team.

Joe mentioned at his Rustconf talk how hard it was to convince the Windows
team, even with the system running in front of them.

When people are religiously against something, no amount of technical
achievements is going to change their mind.

~~~
Roritharr
That is incredibly disheartening to hear. I wonder if Satya knows about this
point of view.

~~~
pjmlp
It was in the closing keynote:

RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of
Computing by Joe Duffy

[https://www.youtube.com/watch?v=EVm938gMWl0](https://www.youtube.com/watch?v=EVm938gMWl0)

You can get more glimpses of how things went when reading between the lines on
his Midori postmortem, InfoQ content or occasional twitter comments.

------
0x0
I find it interesting to note that the core Fuchsia OS comes with "magma",
"escher" and "scenic", which seem to be core OS services for composing one 3D
scene across multiple processes ("shadows can be cast on another process
without it knowing about it").

Is that a hint that Fuchsia is a VR-first operating system?

~~~
whowouldathunk
Not really, that's how Windows works too since Vista.

[https://msdn.microsoft.com/en-us/library/windows/desktop/aa969540\(v=vs.85\).aspx](https://msdn.microsoft.com/en-us/library/windows/desktop/aa969540\(v=vs.85\).aspx)

And it's what enables effects like this in Windows 10:

[https://docs.microsoft.com/en-us/windows/uwp/design/style/reveal](https://docs.microsoft.com/en-us/windows/uwp/design/style/reveal)

Disclaimer: I work for Microsoft

~~~
teraflop
I don't think what Windows provides is really comparable. By my understanding
(and the documentation you linked seems to back this up), the DWM is
responsible for taking 2D framebuffers from applications and compositing them
into a (possibly 3D) scene. Fuchsia's Scenic uses a 3D scene graph as its
input.
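
The distinction in miniature (invented types): a retained scene graph means the compositor walks a tree of nodes contributed by different clients, which is what makes cross-client effects like shadows possible, whereas a 2D compositor only ever sees finished per-window pixel buffers.

```cpp
#include <memory>
#include <vector>

// Each client contributes nodes to one shared tree; the compositor, not the
// clients, traverses the whole thing every frame.
struct SceneNode {
    int client_id;  // which process contributed this node
    std::vector<std::unique_ptr<SceneNode>> children;
};

int count_nodes(const SceneNode& n) {
    int total = 1;
    for (const auto& c : n.children) total += count_nodes(*c);
    return total;
}
```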

------
linuxftw
I have yet to see anything useful come from Fuchsia. There are tons of 'press
release' type blogs, but nothing functional. The bundled steps to run Fuchsia
inside qemu didn't work (and they even shipped their own version of qemu in
the scripts!).

I'm assuming the Fuchsia development is 100% about not having to use any GPL
software. Look how hobbled the Android and ChromiumOS communities are compared
to the Linux world at large.

As soon as someone outside of Google produces anything of any novelty around
Fuchsia, I might change my mind, but for now I'm viewing it as a going-nowhere
software project that's all hype and will never be 100% Free and Open Source.

~~~
djsumdog
> I'm assuming the Fuchsia development is 100% about not having to use any GPL
> software.

This. Everyone wants to distance themselves from the GPL. The reality is that
the ideas the 90s open source movements were founded on are far from what we
see today. We don't see OSS end user applications; at least not a lot in
mainstream use. Instead, we just see OSS middleware.

In the early 2000s, people thought one day we'd see Gimp on par with
Photoshop and StarOffice/LibreOffice taking on Word and Excel. We've come a
long way, but those ideas were never realized.

~~~
wiz21c
>>> the ideas the 90s open source movements were founded on are far from what
we see today

Are they? GPL is about protecting users' freedom. It's still a valid aim to
me, probably even more so.

What has changed is that the web is much bigger than the desktop and so the
GPL has less ground to grow on, so its effect may have been weakened. But only
the effect, not the goal.

(I 100% admit I am more an idealist than a pragmatic)

~~~
Nomentatus
By users' freedom you don't mean users who own patents, since they stand to
lose their property under GPL restrictions. (Google "implied patent grant" if
that doesn't ring a bell.)

So most important corporations, etc. Copyleft was a very clever idea; but
deciding to go to war against all intellectual property was a step too far.
Immense billions have been spent already replacing GPL software with truly
free software under a freer, more liberal license.

------
valarauca1
Zircon is very much in the legacy of Linux.

The biggest sin of the Linux API remains ioctl (and its variants). Zircon
commits the same mistake with its `object_get_prop` [1] and `object_get_info`
[2]. If you make the API type-safe (different getters for different object
types), you can in the long run replace these calls with in-userland static
calls where possible to accelerate performance (like Linux does for futex and
time).

Instead you get this "it does A if you give it B, it does C if you give it D"
design, which is pretty bad API design as it _NEEDS_ a void pointer. I'd
rather see _a lot_ of simple calls, each with its own syscall number. You have
4 million of them, FFS (if you care about 32-bit compatibility).
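
To make the contrast concrete, here's the shape of the two styles in toy form (hypothetical API, deliberately not Zircon's actual signatures):

```cpp
#include <cstdint>

// Multiplexed style: one entry point, a topic code, and a void* whose real
// type depends on the topic. The compiler cannot check the pairing.
enum Topic { TOPIC_PROCESS = 1, TOPIC_THREAD = 2 };
struct ProcessInfo { uint64_t koid; };
struct ThreadInfo  { uint64_t koid; int state; };

int object_get_info(Topic topic, void* buf) {
    switch (topic) {
    case TOPIC_PROCESS:
        static_cast<ProcessInfo*>(buf)->koid = 7;
        return 0;
    case TOPIC_THREAD: {
        auto* t = static_cast<ThreadInfo*>(buf);
        t->koid = 9;
        t->state = 1;
        return 0;
    }
    }
    return -1;  // unknown topic: a runtime failure, not a compile error
}

// Typed style: one getter per object kind. A mismatched buffer is a type
// error, and each call is free to become a userland fast path later.
int process_get_info(ProcessInfo* out) { out->koid = 7; return 0; }
int thread_get_info(ThreadInfo* out)   { out->koid = 9; out->state = 1; return 0; }
```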

It just leaves a bad taste in my mouth. The API design is extremely nice
otherwise, and these methods feel like such an afterthought.

\---

To be clear, I really don't care about POSIX compatibility; it's easy to
shoehorn in after you have a solid OS. The Windows NT kernel has done it
twice now (NT 4.0 and Windows 10).

[1]
[https://fuchsia.googlesource.com/zircon/+/master/docs/syscalls/object_get_property.md](https://fuchsia.googlesource.com/zircon/+/master/docs/syscalls/object_get_property.md)

[2]
[https://fuchsia.googlesource.com/zircon/+/master/docs/syscalls/object_get_info.md](https://fuchsia.googlesource.com/zircon/+/master/docs/syscalls/object_get_info.md)

~~~
JdeBP
Actually, three times. There were _two_ POSIX subsystems.

------
btilly
What do they mean by "capability based"?

A lot of people say "capability based" and really mean some very fine-grained
access-control system. (A confusion encouraged by POSIX "capabilities".) What
I hope they mean is the one that solves the
[https://en.wikipedia.org/wiki/Confused_deputy_problem](https://en.wikipedia.org/wiki/Confused_deputy_problem)

There are two VERY different meanings of the phrase. The one that I'm hoping
for can be summed up like this: designating a resource and being authorized to
use it are one and the same act, so a program can never be tricked into using
its own authority on an attacker's behalf.
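
A toy illustration of the difference (invented types; the point is where the authority check lives):

```cpp
#include <string>

// ACL style: a deputy service opens files by NAME using its own broad
// authority, so a caller can name a file the caller itself couldn't touch.
// Capability style: the caller must hand over an already-authorized handle,
// so designation and authority arrive together.
struct File { std::string name; bool writable; };

bool acl_style_write(const std::string& /*name*/) {
    return true;  // simulated: the deputy's own privileges always suffice
}

bool cap_style_write(const File& f) {
    return f.writable;  // only the handle's own authority can be exercised
}
```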

~~~
abecedarius
Sounds like the real deal:
[https://www.linkedin.com/pulse/capabilities-os-security-google-fuchsia-lorens-kockum](https://www.linkedin.com/pulse/capabilities-os-security-google-fuchsia-lorens-kockum)
(I think this article made a small error: an open-source KeyKOS release exists
now, though I believe only for obsolete hardware.)

OTOH the OP's table of contents gives me the impression that Fuchsia is more
complex than the older capability OSes (KeyKOS family, seL4).

------
Khaine
One thing I don't see addressed in the README is why? Why do we need Fuchsia?
What problem are we trying to solve? Why should I use/develop for it instead
of Windows/Linux/macOS?

Or is this just a research operating system designed to test new ideas out?

~~~
akavel
The main keywords are "capability-based" and "microkernel". Those ideas bring
numerous advantages over monolithic kernels (including Linux, Windows,
macOS): especially a humongous boost to protection against vulnerabilities,
plus better reliability and modularity. They are quite well researched
already, AFAIU, and apparently the time has come for them to start breaking
through to the "mainstream" (besides Fuchsia, see e.g.
[https://genode.org](https://genode.org),
[https://redox-os.org](https://redox-os.org))

Other than that, for Google this would obviously bring total control over the
codebase, allowing them to do whatever they want, and quickly, without needing
to convince Linus or anybody else.

~~~
cm2187
And does a microkernel have anything to do with the ultimate capabilities of
the machine or is it specifically targeted at embedded / smartphones /
hypervisors, and not for running a regular server or desktop OS?

~~~
akavel
I'm afraid I don't fully understand the question; would you care to try
rephrasing? Anyway, as to what I seem to understand:

\- "capabilities": ok, I think I get the misunderstanding. The "capabilities"
here are completely unrelated to "hardware capabilities" or "machine
capabilities", a.k.a "what features does my phone have". The word has a
totally different meaning in the technical jargon of OS development. It's a
security architecture concept; as a first approximation, I'd say
"capabilities" are somewhat akin to "permissions" on Android/iOS. You could
maybe call them "permission lease": as an app, if you got some permission
("capability token"), you can choose to sublease/share/extend it to another
app you run. See: [https://en.wikipedia.org/wiki/Capability-
based_security](https://en.wikipedia.org/wiki/Capability-based_security)

\- microkernels are a general architecture approach; they can perfectly well
be used for regular server/desktop OS. It's just that writing a new OS for an
embedded system is easier as a "first step", because you can start smaller. A
full-blown "general purpose" server/desktop OS is much more complex; enough to
say that it must have very wide driver support for shitloads of different
hardware existing in the world. (But there are also other challenges, like
multiple user management.) Microkernels were long believed to have worse
performance than monolithic kernels, thus their historical unpopularity.
However, this notion was challenged by the L4 kernel
([https://en.wikipedia.org/wiki/L4_kernel](https://en.wikipedia.org/wiki/L4_kernel)),
which was I believe the reason for the revived interest. L4 was apparently
published ~1993, and there was QNX long before that; I'm not actually sure why
the approach didn't become popular earlier. Maybe unfamiliarity to programmers? I
believe microkernels enforce somewhat stricter development standards than
monolithic kernels (where it's probably easier to "just hack around" and duct-
tape a new feature), thus probably raising the perceived development costs
somewhat (I believe it's similar as if we compared e.g. Rust vs. C/C++).

 _edit:_ ah, a good example could be the L4-based L4Linux
([https://en.wikipedia.org/wiki/L4Linux](https://en.wikipedia.org/wiki/L4Linux))
kernel, which is said to be a more or less "drop-in" replacement for the
"classical" monolithic Linux kernel, so you should be able to run any Linux
distro on it. Though I personally never tried it (yet).

The GenodeOS folks are also working towards building a usable "general
purpose" desktop OS based on a microkernel, see e.g. chronologically:

\-
[https://github.com/genodelabs/genode/issues/1552](https://github.com/genodelabs/genode/issues/1552)

\-
[https://github.com/genodelabs/genode/issues/2018](https://github.com/genodelabs/genode/issues/2018)

\- [https://genode.org/documentation/articles/sculpt-
ea](https://genode.org/documentation/articles/sculpt-ea)

~~~
euyyn
A capability in this sense would be analogous to a bearer token in OAuth2,
correct?

~~~
akavel
Sorry, can't say, I can never wrap my head around OAuth :( I'm not a security
guy, just interested in some stuff, by the way.

------
svag
As far as I can see, they use a microkernel architecture for the kernel[0]. I wonder
why they need to create another microkernel OS and not re-use an existing one
like MINIX 3 or QNX? What are the advantages of the Zircon Kernel compared to
the MINIX 3 or QNX?

[0] [https://fuchsia.googlesource.com/docs/+/master/the-
book/#Zir...](https://fuchsia.googlesource.com/docs/+/master/the-book/#Zircon-
Kernel)

~~~
CodeArtisan
It seems to be closer to NTOS (handle-based) than UNIX (everything is a
file).

~~~
monocasa
Eh, the NT handle table and Unix file table are very nearly the same thing
these days.

I'd say it's more like using the per process FD table for everything, rather
than having global tables like for PIDs. Which is really cool, IMO.
Containerization is all about adding indirection to the global tables, but if
there are no global tables, you get all of that for free.

~~~
CodeArtisan
Linux is still about opening, reading, and writing to files, it tries to keep
a common interface; it's all about read() and write(). Looking at fuchsia's
system calls, it seems to have a different interface for each type of
object.

~~~
monocasa
Linux calls into object specific ioctls and system calls to do anything
interesting.

~~~
AnIdiotOnTheNet
Turns out you actually can't just treat everything like a file because, oddly,
not everything is actually a file.

~~~
monocasa
Plan 9 makes a strong argument that you can; Linux just doesn't.

------
squarefoot
Soo, given that all important device drivers in the Linux kernel used in
Android are closed, I'd be curious to hear from Google if their new Fuchsia is
going to solve that problem.

This may seem trivial, but closed device drivers make it 100% impossible either
to update them to more modern versions once the Android version is declared
obsolete, or to natively install a different operating system on the device.
This practice, security concerns aside, is responsible for a huge load of old
-otherwise perfectly usable- devices being scrapped in landfills.

So, dear Google, will you keep the lowest, smallest but most important layers
of the OS open, or will you rather prevent people from doing what they want
with the devices they purchased, even at the cost of contributing to more
pollution?

~~~
stonogo
Google does not regard closed drivers as a problem. With Fuchsia, all the
important drivers will be closed, full stop. Google has no incentive to care
about your landfills. The permissive license of Fuchsia will make its
ecosystem _more_ appealing to IVI and mobile OEMs, not less.

~~~
squarefoot
E-waste is just one of the problems, probably the only one most users could
understand; but security is also a huge one. With apps requiring permissions
for essentially everything (and users granting them blindly), security on any
mobile device today is a myth. Open-source apps could mitigate the
problem by swapping an unreliable layer with a trustworthy one, but we still
have closed device drivers which could contain whatever their manufacturer (or
its government) wants without any chance of being audited.

------
lambda
How about some discussion of Fuchsia itself, instead of "why reinvent the
wheel" or "Linux is bloated"?

From my reading so far, it looks like Fuchsia takes some of the better parts
of the POSIX model, the way file descriptors can be used as capabilities, and
extends its usage more consistently over the API so that it is used in a lot
more places. In Fuchsia they are handles, which are arbitrary 32 bit integers,
but they act a lot like somewhat richer, more consistent file descriptors. You
can clone them, possibly with limited rights like read only, you can send them
over IPC channels, etc.

There are some differences, in that there's no absolute path handling directly
in the kernel; instead, you always open relative to some particular file
handle. A process may be given two handles that can be used to emulate POSIX-
style path resolution: a root directory handle and a current working
directory handle, though there may not be a guarantee that one contains the
other; it sounds like more commonly applications will just be given handles for
the files or directories they are supposed to access, rather than having
everything go through path resolution.

Signal handling is done consistently with waiting for data on handles; handles
just have various states they can be in, like ready for reading, ready for
writing (for file handles), running or stopped (for process handles), etc, and
you can wait for changes of state on any of a variety of different handles.

Memory mapping can be done by allocating a virtual memory object, which you
could either not map (treat it as an anonymous temporary file), write to, and
then pass to another process, or you could map it into your process,
manipulate it, clone the handle, and pass that to another process. Basically
seems like a cleaner design for shared memory handling than POSIX, though
something a lot like it can be done in Linux these days with anonymous shared
memory and sealing.

Jobs, processes, and threads are also all handles. Jobs contain processes and
other jobs, and processes contain threads. Jobs group together resource
limitations (things like limits on numbers of handles, limits on total memory
used, bandwidth limits, etc), processes are separate address spaces, and
threads are separate threads of execution in one address space. The fact that
jobs and processes are all handles, instead of IDs, means that you don't have
to worry about all of the weird race conditions of trying to track things by
PID when that PID may no longer exist and could be reused in the future.

An interesting part is how program loading happens. In POSIX like OSes, you
fork your process, which creates a clone of the process, and then exec, which
asks the kernel to replace the running program with one loaded from another
file. You give the kernel the path to a file, and the kernel calls the dynamic
linker on that path to link the shared libraries together and then execute the
result. In Fuchsia, you just build the new address space in the parent
process, and then ask the kernel to start a new process in that address space,
with execution starting at a particular point in it and some parameters loaded
into particular registers. This basically means that dynamic linking will
now be done by a library call in the parent process, which could be really
advantageous for those processes that fork the same executable as a subprocess
many times, as they can link the executable once into some read only pages,
and then very quickly spawn multiple processes from that same already linked
program. I'm sure that ld.so and friends on Linux and other POSIX-like OSs
have a lot of caching optimizations to make this faster, but it sounds to me
like the Fuchsia model of just having the parent process do the linking as a
library call could be a lot faster.

(edit to add: hmm, upon further reading, it looks like they expect process
creation to happen from a single central system process, rather than providing
the dynamic linker API, "launchpad", as a supported API; but for now it looks
like you can use the launchpad library)

It basically looks a lot like what you would wish the POSIX API worked like
with a lot of hindsight. A lot simpler and more consistent, and does a much
better job of "everything is a file" than the POSIX API ever did (of course,
it's "everything is a handle," but that's fine, the point is that there's one
consistent way to work with everything).

~~~
CodeArtisan
>A lot simpler and more consistent, and does a much better job of "everything
is a file" than the POSIX API ever did (of course, it's "everything is a
handle," but that's fine, the point is that there's one consistent way to work
with everything).

I am failing to see how this is more consistent. With UNIX, because everything
is like a file, you operate on them in the same manner. A file, a socket, a
pipe, shared memory, ... you open them, then you use the system calls for
operating on files: read(), write(), poll(), dup(), ... which then allow you to
use operations built on these syscalls such as fprintf, fscanf, ... but also
all the tools like cat, head, grep, ... This is what I would call consistency.

If I implement a new feature as a file in Linux, for example a virtual
filesystem like /proc/, all the cited operations would already be available
out of the box.

~~~
lambda
But this is how Fuchsia is as well; these handles are pretty much equivalent
to file descriptors, except for how they get numbered/allocated (though for C
library compatibility, there is a per-process file descriptor table to map
between file descriptors and handles).

Even on UNIX like systems, you can't read or write on every file; for
instance, you can open a directory, but you can only readdir on it, not read
from it. But they are still file descriptors like everything else, so you can
call dup(), fstat(), pass them between processes on Unix sockets, etc.

There are plenty of other operations which can only be done on certain types
of files in UNIX-like systems; for instance, you can only recv() or recvmsg()
on a socket.

The difference is that in Fuchsia, more things have handles, and so more
things can be treated consistently. For instance, jobs, processes, and threads
all have such handles; so instead of getting a signal that you have to handle
in an extremely restrictive environment in a signal handler or having to call
wait4() to learn about the status of a child process, you can just wait on
signals to be asserted on the child process using zx_object_wait(), which is
the equivalent of select() or poll(). This means no more jumping through hoops
to get signal handling to work with an event loop; it just works.

Of course, the other difference in Fuchsia is that there is not a single
namespace. Every component in Fuchsia has its own namespace, with just the
things it needs access to; there is no "root" namespace. This is good for
isolation, both for security reasons and reducing accidental dependencies,
though I do wonder how much of a pain it would make debugging and
administering a system.

~~~
CodeArtisan
My point was that with UNIX, while you have specialized operations like
recvmsg, you still have read() and write() acting as a _universal_ interface.
If you look at Fuchsia's system calls, you would see

    
    
        vmo_read - read from a vmo
        vmo_write - write to a vmo
        fifo_read - read data from a fifo
        fifo_write - write data to a fifo
        socket_read - read data from a socket
        socket_write - write data to a socket
        channel_read - receive a message from a channel
        channel_write - write a message to a channel
        log_write - write log entry to log
        log_read - read log entries from log
    

_It is better to have 100 functions operate on one data structure than 10
functions on 10 data structures._

~~~
lambda
Hmm. On UNIX read() and write() are not universal; you can't use them on
directories, for instance, nor can you use them on various other things like
unconnected UDP sockets.

Treating everything like an undifferentiated sequence of bytes can cause
impedance mismatches; each of these types of handles has very different ways
that you work with it. For instance, a VMO is just a region in memory. A FIFO
is a very small queue for short equally sized messages between processes. A
socket is an undifferentiated stream of bytes. A channel is a datagram
oriented channel with the ability to pass handles. The log is for kernel and
low level device driver logging.

In fact, it looks like the Zircon kernel has no actual knowledge of filesystem
files or directories; they are actually channels that talk to the filesystem
driver (another userspace process) over a particular protocol.

The thing about having one single universal interface like read() and write()
to a lot of fairly different things is that they each actually support
different operations; you can't actually cat or echo to a socket (not without
piping into nc, which does that for you). Or you can't just echo data into
most device files and expect it to work; some of them you can, like block
devices, but others you need to manipulate with ioctls to configure properly.

What Fuchsia is doing here is acknowledging the different nature of the
different types of IPC mechanisms, and so giving each of them an API that
better matches what it represents. A VMO can be randomly read and written to;
none
of the others can. A FIFO can only accept messages in an integral number of
equal size pieces that are smaller than the FIFO size, which is limited at a
maximum of 4096 bytes; it is used for very small signals to be used in
conjunction with other mechanisms like VMOs. A socket provides the traditional
stream abstraction, like a pipe or SOCK_STREAM on UNIX, in which you can read
or write new data but can't seek at all. A channel provides datagram based
messages along with passing handles.

One of the big things that I think the Unix model makes hard is telling when
something is going to block; because read and write assume that the file is
one big undifferentiated blob of bytes, it can be hard to tell when it's safe
to do so without blocking. On the other hand, each of these is able to have
particular guarantees about what you can do when they report that there is
space available.

I admit that the log ones seem redundant; I would think they would make more
sense as just a particular protocol over channels. I don't see any reason for
that one to exist separately.

I wonder why you would think it would be better to have one interface that
isn't an exact match for a lot of different IPC types, than separate specific
interfaces that match them? They are all tied together by being handles, so
you can dup them, send them to other processes, and select on them just the
same, but the read and write operations behave quite differently on each so
having an API that reflects that seems reasonable.

If you like to think in object oriented terms, think of them as subclasses of
handle. If you like to think in terms of traits or interfaces, think of there
being one generic handle interface, plus specific operations for each type of
handle.

The "everything is a file, and a file is an undifferentiated bag of bytes" model is
in some ways a strength of UNIX, but in other ways a weakness. You then have
to build protocols and formats on top of that, kernel buffer boundaries don't
necessarily match up with the framing of the protocol on top, and so on.

And all it takes to give you the power to manipulate things in the shell is
appropriate adapter tools. Just like nc on UNIX allows you to pipe in to
something that will send the data out on a socket, you need some adapter
programs that can translate from one of these to another (and from filesystem
files, since those don't even exist at this abstraction level); of course, in
many cases, you're probably going to need some serialization format for things
like channel datagram boundaries, and there are some things that just can't be
translated from a plain text bag of bytes (like handles).

------
CodeArtisan
Looks like they are using Dart for developing applications. This seems to be
the Dart SDK for Fuchsia:

[https://fuchsia.googlesource.com/topaz/+/master](https://fuchsia.googlesource.com/topaz/+/master)

There is no documentation at all, but there are a few application examples in
/app/. The bindings for Fuchsia and Zircon (the kernel) are in /public/dart-pkg/

~~~
gman83
It's Flutter: [https://flutter.io/](https://flutter.io/)

------
jeremy_wiebe
Somewhat tangential to the topic but one really cool project related to
Fuchsia is the Xi text editor project written in Rust.

The author (Raph Levien) has a very interesting series of posts on data
structures and algorithms used in the editor (like rope data structures)

[https://github.com/google/xi-editor/](https://github.com/google/xi-editor/)

~~~
trishume
You may be interested in the Xi CRDT documentation I wrote as part of my work
on integrating Xi into Fuchsia: [http://google.github.io/xi-editor/docs/crdt-
details.html](http://google.github.io/xi-editor/docs/crdt-details.html)

------
_bxg1
It seems reasonable to want to build a modern OS from scratch; Linux has
survived and adapted remarkably well, but it has many aspects that are rooted
in the past and just can't be shaken off. The world has changed a lot. I just
hope Fuchsia remains truly open-source and doesn't become a power-play by
Google.

------
oldandtired
It has been an interesting read of the many different points of view in
support of C or of C++. But there is an elephant in the room here.

The problem with most languages, including the ubiquitous C, C++, Java et al,
is that there are implementation defined behaviours and undefined behaviours
that are specifically placed in these languages.

A previous discussion, which I can't locate at the moment, did discuss this in
detail. Most programmers have a serious flaw in that they do not document.
They may produce documents but they do not document. Every assumption, every
trick and why it is used, every implementation defined behaviour, every
reasoning as to the use of specific algorithms should be documented and is
not.

I have seen incredibly detailed documents for programs that just miss some of
the basic essential assumptions because "everyone knows them".

In everyday communications, we use language in a dynamic way, meanings can be
changed subtly and we get around the errors. With the programming of machines,
there is no such leeway ever. Our languages should be defined completely so
that we will know that what we have written has actual meaning.

The reality, of course, is that this is a "pipe-dream" and won't happen. But
as programmers, we could start calling for such completeness of definition of
the languages we use.

------
netheril96
People here seem to be amazed that this project is in C++, rather than a
simpler language (C) or a more modern language (Rust). But you must notice
that this is a Google project, and Google writes many, many projects, internal
and external, in C++. It almost never writes in plain C, and has no penchant
for fancy new programming languages. You may disagree, but Google doesn't
care.

~~~
neolefty
C++ is of course a Swiss Army Knife of a language.

Important context is the Google C++ style guide:
[https://google.github.io/styleguide/cppguide.html](https://google.github.io/styleguide/cppguide.html)

------
markstos
Well, "GNU's Not Unix" and that worked out OK. On the page it says it is
"POSIX lite", so it will likely be recognized as rather Unix-like, and a number
of things will likely end up being able to be compiled on it with few
modifications due to the POSIX-like environment. The `brew` project on macOS
would be a related example.

------
juhanima
Microkernel, huh? At least now we will have a chance to get some empirical
evidence to resolve the famous Torvalds-Tanenbaum debate. It's going to be
interesting to see how Fuchsia pans out and in what kind of environments it
can be used.

~~~
tathougies
You have to be kidding, right? Microkernels have been used in production for
decades in situations where safety, accuracy, and robustness are key. QNX is a
microkernel used in real-time systems to great effect and with great stability.
Unlike Linux, it's the sort of kernel you could really trust to run important
infrastructure, like automobiles, unmanned aircraft, high speed trains, and
robotic surgery.

Just because you don't consciously interact with them on a daily basis, does
not mean they do not exist. My guess is that QNX and kernels like it are the
reason you can take for granted such obvious things as your car braking
correctly, your train not derailing, and your robot surgeon not crashing.

In today's world, we're so used to software breaking that if it doesn't break,
we oftentimes think it must not exist. But, there is a whole world of reliable
software out there that can actually be trusted. You don't hear about it
often, because it works so well.

~~~
BurningCycles
Well, one of the things I hope we get to see with projects like Fuchsia and
Redox is what the performance difference is against a monolithic kernel like
Linux, if/when they have been well optimized.

~~~
Karunamon
I don't think this will be a fair comparison, since most of Android's
performance woes are at the feet of running on a JVM.

~~~
pjmlp
ART is a thing since Android 5.0 and it has become quite good.

Even the parts written in straight C and C++ have performance issues on
Android, as everyone who has wanted to do real-time audio on Android painfully
knows.

The whole architecture has been a mess.

~~~
bitmapbrother
Speaking of performance issues and architecture messes, have you ever tried
real time audio on Windows Phone? The absence of real time audio apps speaks
volumes.

~~~
pjmlp
I was missing your blind Google advocacy and Microsoft hate.

Did they fire you?

If you had any Android developer experience, you would surely know that Google
had a few failed attempts at real-time audio, needed help from Samsung to
implement it, and the final API was C only, with devs asking for a C++ one,
which was later dumped on GitHub as a side project and isn't part of the
official NDK APIs. But alas, you don't.

~~~
bitmapbrother
>I was missing your blind Google advocacy and Microsoft hate. Did they fire
you?

I'm just trying to correct all of the misinformation you like to post about
Google and Android.

>If you had any Android developer experience, you would surely know that
Google had a few failed attempts at real time audio, needed help from Samsung
to implement them.

Unfortunately, your lack of Android development experience and your lack of
exposure to development on Samsung devices has caused you to be disingenuous
once again. Samsung didn't help Google nor did they contribute any of their
code to AOSP. They implemented their own proprietary audio solution called
SAPA. Unfortunately, it was limited to their platform and the audio latency
wasn't very good in comparison to the iPhone.

>and the final API was C only with devs asking for a C++ one, which was later
dumped on GitHub as a side project and isn't part of the official NDK APIs, but
alas you don't.

AAudio is indeed coded in C, but at least you can use the Oboe C++ wrapper.
What were the low latency audio solutions for Windows phone again? Oh that's
right, there weren't any. No wonder there were no low latency audio apps on
that platform.

~~~
pjmlp
Dear Google Developer Advocate, my Android experience goes all the way back to
Froyo.

~~~
bitmapbrother
Can you link me to the Play store apps you've developed?

------
fooker
There seem to be system calls to create and manage virtual CPUs.

What is this for? Is there a precedent?

~~~
noselasd
For managing a hypervisor, e.g. creating a vcpu for a guest VM

~~~
fooker
How does Linux do this without having system calls like this?

~~~
tmzt
There's an interface that looks something like this (including vcpu) as part
of KVM, accessed through the /dev/kvm device with ioctls.

You can see part of it here:
[https://github.com/torvalds/linux/blob/master/arch/x86/inclu...](https://github.com/torvalds/linux/blob/master/arch/x86/include/uapi/asm/kvm.h)

------
akavel
Does anybody know how it compares to GenodeOS
([https://genode.org](https://genode.org))?

------
ProAm
Why use a new OS from a company with historically bad customer support, that
will likely report everything you do back to Google HQ for analytics, and that
frequently abandons projects once developers get tired of them? Sounds like a
computing nightmare; I'd be very hesitant to voluntarily use it.

~~~
thethimble
And yet, Android is the single most used operating system in the world and
Chrome is the single most used browser...

~~~
ProAm
That's why I said voluntarily. Realistically there are two cell phone OSes to
choose from, Android and iOS. Android is broken a lot of the time and people
are forced to either find workarounds or do without functionality. I don't use
Chrome, but Firefox. But the masses will use it if it's forced on them.

~~~
colemickens
>Android is broken a lot of the time and people are forced to either find
workarounds or do without functionality

Continuing to see this parroted is about as silly as the "Android OS includes
ads" claim, another comment that lets me know the author hasn't actually used
the platform themselves.

~~~
ProAm
I'm speaking from personal experience: from not being able to write to the SD
card after updates, to the Gmail Outlook/Exchange sync being broken in Oreo, to
Gmail messages being delivered hours or days late due to Doze and GCM
messaging. It's not parroting when it's true.

------
ryanlol
Tangentially related, has anyone tried using the netstack
([https://github.com/google/netstack](https://github.com/google/netstack)) in
their projects?

Currently looking to see if this would be suitable for our high performance
networking needs, haven't seen anyone else actually using it though.

~~~
0xFFC
I am really curious about this too. Anybody?

------
Quequau
Has anyone found a how-to for installing this on something like an RPi or
maybe a VM or whatever?

------
AndriyKunitsyn
So, this is a microkernel.

Could anybody please explain to me why microkernels are so great, when in
practice they do nothing more than push the overhead of thread switching to the
extreme? Basically, everything a program does requires waiting for the
scheduler to execute whatever service we send messages to. Disk, sockets,
devices - everything. All of which is done in the name of memory safety.

On the other hand, unikernels that execute nothing but managed code (i.e. not
native CPU code, but code for some virtual machine such as the JVM or .NET,
which is forbidden at the language level from reading other processes' memory)
solve the same problem of protecting system memory, while carrying much less
overhead. I guess this approach would be preferable for creating a new mobile-
oriented OS that requires good performance and low power consumption, no?

------
Boulth
It's interesting that Google even registered fuchsia.com, used in examples.

------
tyingq
Assuming you should build things yourself only when they're a differentiator,
I'm not sure I get Fuchsia.

Any "bloat" or slowness on Android seems more likely to be Java, or something
other than the base OS.

Maybe I'm missing something?

~~~
kiriakasis
Well, one of the most serious problems with Android has nothing to do with
bloat, but with old, non-updated devices.

------
lucasnichele
Obviously, this is like comparing a banana with a pineapple. As long as Linus
lives, Linux will never be a microkernel.

------
nailer
UX folks: Raph Levien works on Fuchsia. Here's some video; alas, I couldn't
find anything from this year.

[https://www.youtube.com/watch?v=HpBbbd8y2kM](https://www.youtube.com/watch?v=HpBbbd8y2kM)

~~~
berg01
Raph has a great and well-deserved reputation for excellence.

.. but where do you find any reference to him in that video?

~~~
gman83
Probably referring to this:

[https://www.recurse.com/events/localhost-raph-
levien](https://www.recurse.com/events/localhost-raph-levien)

~~~
berg01
Thanks!

------
peter_retief
It has taken decades to make Linux stable and relatively bug-free (discounting
systemd). As much as it would be great to have a new OS, I wonder what it is
based on, and why?

~~~
naasking
> It has taken decades to make linux stable and relatively bug free

This is due to the development practices and tools employed, not intrinsic to
the construction of new operating systems. There are much better tools now.
For instance, Google could have built this on seL4, a verified microkernel,
instead of building their own from scratch, and they would have hit the ground
running instead of facing the slower build-up they now will.

~~~
peter_retief
It will be interesting to see how it works out; as you say, better tools and
knowledge. I am looking forward to using it.

------
mortdeus
What's wrong with Linux?

------
aidenn0
Did they write a completely new microkernel for this? If so, I'm curious why.

------
mankash666
Google clearly has a LOT of cash to play with. They could have refactored/re-
used the permissively licensed BSD core, but ended up re-inventing the wheel,
breaking a TON of POSIX-based software.

For what benefit exactly?

~~~
mkozlows
The "wheel" is 40 years old. It's a great wheel, and it's done a hell of a
job. But with 40 years of perspective, maybe it actually can be re-invented
better, without carrying around a pile of legacy backward-compatibility needs.

~~~
imtringued
If it's so terrible, why did Microsoft develop the Windows Subsystem for
Linux? Shouldn't they instead try to avoid the "pile of legacy" as much as
possible?

~~~
pjmlp
Because they saw a market opening: UNIX devs no longer happy with the hardware
selection for using macOS as a pretty UNIX.

Also, their goal is not to run 100% of POSIX or Linux-specific software, but
rather to achieve good-enough compatibility to run the majority of well-known
projects and utilities.

------
jacksmith21006
It is hard to imagine this kernel being anywhere near as efficient as Linux.
What makes Chromebooks so great is peppy performance on cheap hardware that
you just could never achieve with Windows.

~~~
tracker1
Linux isn't particularly efficient... That it's often better than Windows
doesn't make it the most efficient. And does that matter? ChromeOS's UI is
driven by Chrome (a web browser).

~~~
Valmar
Well, if you're comparing Linux and Windows' kernel specifically, they seem
about the same, if everything we know about Windows' kernel is true.

------
s2g
Fuchsia would be pretty cool, if it wasn't being developed by Google.

Not gonna use an OS from that company.

------
newnewpdro
More importantly, Fuchsia is not GPL.

~~~
mankash666
WHY should everything be GPL-ed? Capitalism is an ethical, meritorious system
rewarding hard work and skill, and the GPL runs counter to everything
capitalism stands for.

Not to mention that the fat profits capitalism bestows on Google FUND Fuchsia

~~~
msla
> GPL runs counter to everything capitalism stands for.

As a point of fact, it doesn't. Stallman himself sold GNU Emacs in the early
days, and it was GPL'd from the start.

~~~
quadrangle
Your statement is entirely true but is non-sequitur.

Capitalism is not "the economic system in which things are bought and sold
with money", it's a much more particular _subset_ of all those types of
economies (namely, capitalism is the type where the primary resources and
means of production are _privately_ owned).

It's incidentally TRUE that GPL is not anti-capitalist. But it's _also_ true
that you can consistently both sell things and _be_ anti-capitalist. Selling
things happens in _most_ economic systems, capitalist or not.

~~~
gnulinux
> Capitalism is not "the economic system in which things are bought and sold
> with money"

Well, at least according to Marx (cf. _Capital_), capitalism is the system
where things are bought and sold with money. If you read the first few
chapters of _Capital_, you'll see that Marx explains this in various ways,
such as characterizing capitalism by the division of labour: instead of
producing a bunch of commodities, workers produce one commodity and sell it
for money, with which they can buy other commodities. And, again according to
his definition, communism is the society after this type of commodity
production, i.e. the mechanism that produces value (money) is stopped, in the
sense that people stop exchanging things for money.

You may not care about Marx's definition, but I just wanted to note it for
completeness.

~~~
quadrangle
Indeed, I don't defer to Marx. But that said, it's been a while since I read
him much. I suspect you're conflating Marx _describing_ Capitalism with
_defining_ it.

We can describe the idea of buying and selling with money as certainly being a
characteristic of Capitalism without asserting that it's a _defining_
characteristic that is absent in other systems.

To be blunt, buying and selling with money is FAR older than Capitalism.

[https://en.wikipedia.org/wiki/Capitalism](https://en.wikipedia.org/wiki/Capitalism)
is actually pretty good and neutral

~~~
gnulinux
I'm really sorry that I will not be able to provide any sources at the moment
because I'm very busy, but for what it's worth, I'll write what I remember
from my readings. (I studied Marxism extensively for a time out of interest,
but I never had a formal education in sociology, so I'm not an expert; I'm a
regular software engineer.) Also, disclaimer: I tend to agree with Marx on a
lot of issues, so my ideas might be biased. My terminology is also a bit
rusty.

Capital starts by explaining commodities. This is because: (1) Marx tries to
explain some of his period's economic terminology, so he needs to do some
groundwork; (2) commodity production is an important aspect of capitalism that
he refers to throughout his works. My main point is that the force that made
capitalism possible and the force that sustains capitalism are one and the
same: the accumulation of value. As Marx explains in later chapters, a thing
will have different types of value. For Marx, nothing has any intrinsic
value; its money-value is determined _at the moment of trade_. That is, the
force that generates value inside the economy is the act of selling
commodities. The same force _causes_ the distinction between bourgeoisie and
proletariat (cf. Marx's definitions of social classes), and the same force
_caused_ the transformation from the earlier economic system to capitalism
(which answers your complaint).

Now this brings us to the end of capitalism, which Marx very insistently
argues is the cessation of value generation, which is equivalent to saying
society becomes moneyless. E.g., one misconception people have is that Marx
was also against labor-vouchers, but this is not true: as explained in the
Critique of the Gotha Programme, labor-vouchers do not generate or accumulate
value, since their value is not determined at the moment of trade.

Anyway, this also relates to the Marxist criticism of anarchism. For
anarchism, capitalism --> communism is the seizure of the capitalist _state_.
But Marxism thinks this is fundamentally wrong, because the capitalist state
is _generated_ by the capitalist mode of production. So you will want to
eliminate the capitalist mode of production instead of the state itself,
because as long as the c.m.p. exists there is no way to kill capitalism, so
the state will revive. For Marxism, you first need to eliminate the economic
system that makes capitalism possible, i.e. the accumulation of value, and
then ultimately kill the State and class society, as they are _caused by_
capitalism.

------
cmollis
So it’s....Mac OS X?

------
alokitr
I can't wait for this to be a huge bloated system that nobody comprehends in
its entirety. Oh, wait

------
mihaela
I don't have a problem with Google's resources to build something of that
magnitude successfully. My problem is with their execution. All their products
are in perpetual beta. And the users are forever testers. That's their
business model, and it doesn't call for great UI/UX.

~~~
jacksmith21006
Search is beta? Google WiFi? Gmail? Android? YouTube? Chromecast? Project
zero? Maps? Photos?

~~~
mihaela
GMail was in beta for a long time. Search is not a product but a service. Maps
was in beta for a long time as well.

~~~
dragonwriter
> GMail was in beta for a long time.

Which doesn't justify the universal, present-tense claim upthread.

------
flyingcircus3
"Senator, while I agree in the general sense that Fuchsia is not Linux, it
appears that in this specific case, it's just Yet Another Linux."

How is this not the latest iteration of not-invented-here syndrome? Any system
like Linux or Python that has a "Benevolent Dictator For Life" holding the
reins is inherently saying that it favors quality over quantity. It's almost
like the US Senate, in that the very goal is to go only as fast as prudent.

------
geijoenr
I really don't get this coming-up-with-a-new-OS thing every now and then.

As I see it, it is all about driver support, simply because that is the
biggest effort. That is why vendors (and the community) focus on only one or
two options (Windows/Linux).

Anybody can come up with fancy new OSes; as a matter of fact, many people do.
The problem is, there is no incentive for vendors to produce specific drivers
for those, and the communities are just too small to cope with the huge amount
of hardware support needed to make them useful.

I just don't see the point of coming up with new OSes as long as Windows/Linux
just work as intended.

~~~
tracker1
A network host/container OS that is lighter than Linux/Docker, Solaris and
FreeBSD.

~~~
pm90
One can dream ;).

I do like the minimization that a lot of OSes and systems are undergoing,
though. I still remember when VMware VMs came along and my mind was blown. I
had a similar feeling on seeing Docker, though it was tempered somewhat until
Kubernetes came out. Very excited to see what comes next.

