Fuchsia OS Introduction (bzdww.com)
580 points by Symmetry 8 months ago | 394 comments



I would not be surprised if Google decides to never launch this. They currently have two consumer-facing operating systems and Linux (presumably) on their servers. On the server side they use containerized services, are involved with Kubernetes, etc. None of that is probably even close to working on Fuchsia, and I don't think the server side is a priority for it.

On Chrome OS, the most significant recent feature is the newly added ability to run Linux applications via Crostini. Containerization of apps is a big topic in the Linux world in general, e.g. with things like Snap.

On Android, they have an extremely large installed base, enormously slow upgrade cycles, and a huge legacy development ecosystem. This ecosystem overlaps with ChromeOS as well, since it can run Android apps. And Android can run Linux apps too (though that mostly makes less sense).

Fuchsia would need to play nice in that ecosystem or it will face an empty-room problem: the key apps haven't been ported yet, it has a tiny market share initially (thus removing the incentive to fix that), and OEMs will likely vote with their feet by sticking to the old platforms. Part of that is supporting Linux-based stuff, which especially on Chrome OS seems increasingly important.

So, I'm kind of pessimistic that Google has a strategy for making this work. It makes sense as a research project but not as a product strategy currently. Yes, Android has issues. Maybe fixing those would be easier than replacing it entirely?


"are involved with Kubernetes, etc. None of that stuff probably is even close to working on Fuchsia and I don't think server side is a priority with that."

Actually, the networking stack in Fuchsia (or more particularly in Zircon, the kernel it is optimized around) is written in Go.

Kubernetes is written in Go, and is essentially a system for implicitly defining scaled Linux boxes on a giant network; the most recent release also allows for declarative architectures, much like how userspace services for graphics drivers in Zircon will enable more declarative and optimized use of expensive resources (in this case, modular hardware architecture for mobile operating systems, which is what Fuchsia was originally envisioned for).

I'm also pretty sure Google has experience with mobile operating systems (Android). Fuchsia is literally a response to over a decade of trying to interface Android with POSIX in conjunction with multiple chains of monopolized and closed-source hardware architectures, being intimately familiar with the evolution of trends in mobile hardware architecture, and trying to rewrite a kernel to optimize for that.

"Fuschia will need to play nice...Part of that is supporting linux based stuff; which especially on Chrome..." Chrome had to have an entire team dedicated to resandboxing their tabs to mitigate for spectre and meltdown, which are both results from the unquestioned but growing obscurity due to unquestioned implementations between hardware and Linux integration, which is something that Fuschia attempts to take a step back from, and by doing so simultaneously make it easier for open source development on hardware architectures while optimizing for them.

Still, full site isolation is an advanced option in Chrome that comes with roughly a 10% memory overhead, so people don't turn it on, if they even know to check for it or what it means (most people don't).


Crucially, Kubernetes uses Docker, which uses a lot of low-level Linux and POSIX stuff. Kind of an issue if that stuff isn't there. Kernel virtualization on Fuchsia could of course become a thing (like it is on Windows and macOS), but so far I've not seen much on that topic. WASM on Chromium might become a way out as well, of course.

But the bottom line is: empty room. A beautiful OS with an empty app store is kind of a non-starter in the current market. Windows Phone found that out the hard way. Also, several other mobile operating systems that did not quite make it or continue to struggle: Sailfish, WebOS, Firefox Mobile, Ubuntu Mobile, etc. Even ChromeOS struggled until they added Android support and Crostini.


ART will run on top of Fuchsia; it is already being ported. You can check the merge commits.

As we always need to remind Linux fans, the fact that Android uses a Linux kernel is irrelevant to userspace.


I have no idea what ART is but I doubt it addresses all concerns I listed above. On Android, there are plenty of native libraries and apps these days as well. I'm pretty sure these don't work as is without a compatibility layer that essentially replicates a lot of linux/posix stuff, which Fuchsia does not implement.

In any case, it wouldn't be the first time that Google walks away after putting lots of development in something. In general, I think they are closer to merging Android and Chrome OS than they are to replacing either with Fuchsia (not to mention convincing OEMs like Samsung to actually use it).


ART is the Android RunTime, basically the large majority of Android's userspace.

Those native libraries are forbidden from using Linux-specific features; only these APIs are sanctioned.

https://developer.android.com/ndk/guides/stable_apis

Since Android 7, Google has been clamping down on NDK users that try to use anything that isn't part of that list.

Since Android 8, APKs are only allowed to reach into their own internal filesystem and must use SAF for anything else, something that will be further enforced on Android Q, so no luck trying to peek into /dev, /share and similar.


As for virtualization, Fuchsia already has its own KVM equivalent called Machina, which so far can run Debian on top of Zircon. With several compatibility changes for supporting the ART runtime already merged into Zircon, it should also be possible to run Android apps this way.

But perhaps the reason Fuchsia won't struggle like the other OSes you mention is that it may be able to run all the Android apps in the Play Store from day one, thanks to Machina, allowing a smoother transition. It would be a process similar to what Apple did with the PowerPC to x86 switch, except in Google's case it's for a completely different OS.


There is also rkt, and anyway, while Docker containers are great, they are just an abstraction over cgroups and namespaces. You forget that cgroups are a relatively recent concept in Linux and Docker containers didn't even have namespaces in their first, second, or third iteration, yet you act like Docker relies on the immutable principles of POSIX.
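To make the "abstraction over cgroups and namespaces" point concrete, here's a minimal sketch of the underlying Linux primitive (this is not how Docker itself is implemented, just the kind of syscall it builds on): the child detaches into its own UTS namespace, so renaming the "host" there doesn't leak back out. Needs root or CAP_SYS_ADMIN; error handling kept short.

    // Minimal namespace demo: the isolation primitive container runtimes build on.
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: move into a fresh UTS namespace and rename the "host".
            if (unshare(CLONE_NEWUTS) != 0) { perror("unshare"); exit(1); }
            sethostname("container-demo", strlen("container-demo"));
            execlp("hostname", "hostname", (char *)NULL);  // prints: container-demo
            perror("execlp");
            exit(1);
        }
        waitpid(pid, NULL, 0);
        // Parent: still in the original namespace, prints the real hostname.
        execlp("hostname", "hostname", (char *)NULL);
        return 0;
    }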

Anyway, Docker is a good example of how current Linux systems are not optimized for modular sandboxing and containerization. Still, people even in tech are so uneducated about how important it is to work with only the bare bones (I started in C, so I allocated bytes as I needed them and always considered how not to use them first; that's a far cry from npm-ing an express server and watching the endless train of dependencies that get pulled in) that they still do not secure their containers. The number of stock ubuntu18.04 base images I see running a Docker container that simply contains a Python app or something equally trivial, live in production at some of the top tech companies, with no Linux hardening whatsoever, which you can just google and download a rootkit for, is the terrifying norm at centralized web application companies today. I really am not going to buy into the idea that Docker containers carrying full replicas of the operating systems they sit on top of are a justification for POSIX.

If you want increased modularity for security, sandboxing, and running different applications, look at QubesOS, which is already far along and has its own bare-metal hypervisor; it works much like Docker does in userspace, but optimized all the way down to bare-metal hardware. Fuchsia takes a similar approach when looking at optimizing modularity in mobile computing hardware architecture.

" Also, several other mobile operating systems that did not quite make it or continue to struggle."

This is true, but this is coming from the same company that has experience designing both software and mobile hardware architecture. Just because something is not already popular and widely adopted isn't a reason not to do it. I'm always an anti-monopoly person myself, especially in the world of technology.

You can read my other comments and see the justifications around the need for this. As someone coming from the hardware architecture design space, from Qualcomm Snapdragons all the way to 14nm iPhone architecture, there is a need to re-modularize the kernel for advanced execution and increased competition in this space. POSIX is not sustainable looking 10, 20, 40 years into the future of hardware computing, particularly the next ten, and Android game developers who make a living off of Candy Crush do not really seem to care about this impending doom, only that they will have to traverse yet another learning curve if the platform gains adoption or becomes competitive in the space. That sucks, but it's not as bad as you'd think.

Besides, forcing people to continue to traverse learning curves keeps the market competitive and keeps people from becoming so religiously entrenched that Google's current Android API is treated as an uncontested god. Yeah, it's hard to make money, sure, it's competitive, sure, but we so easily forget Android was a first stab at an open source response to the iPhone. The first Motorola Android phone came out my sophomore year in college; now people can't imagine how their 6-year-olds would survive going to school without an iPhone 7. We often assume we need things to be the way they already are and are not accustomed to change, but I can tell you Fuchsia is needed in this space, operating system competition is needed in this space, and in the next decade we will evolve to think differently about excessive use of memory, dependencies, and latency and will be looking for something like this. Luckily it will be about a decade into development at that point, about where Android is now, and yet people say it's unreasonable to consider anything else.


Google is well known for killing projects :P https://killedbygoogle.com/. But who knows. It's still too early to judge in this case.


I don't know if it's just me, but lately the amount of interesting stuff (articles, software) I come across that sadly is only available in Chinese keeps increasing.

If it wasn't for those groups of volunteers that translate them to English (and the other way around, also useful), these projects would be completely inaccessible.


Guess we Westerners now get to experience what other parts of the world experienced for decades.


English is much simpler than Chinese.


Reading and writing the alphabet, yes. The grammar and pronunciation rules for speakers (not the pronunciation itself)? English is quite a bit more difficult than most people realize.

You can learn Pinyin in a few days, which is sufficient for you to pronounce any Mandarin Chinese romanized into Pinyin. Chinese grammar is fairly simple compared to Latin languages, whereas English is a rat's nest of pronunciation rules that simply have to be memorized.

Most of the real difficulty people experience with Chinese comes from the shock of learning the characters.


Coming from Dutch, I found English easier than German. Some people say German is so close, it's mutually intelligible with Dutch: not only is it not, I even find it much harder than English.

At least from my perspective, someone who knows a language very close to another and still found English easier than this other language, English is easy.

Pronunciation is indeed quite a lot of learning by heart, but it's not hard to get used to if you got the rest down. Of course, this might be much harder for Chinese, I wouldn't know.


Two things: 1. As a native English speaker who also learned German, I find that I can mostly understand spoken and written Dutch through my German. I think part of your difficulty there is that German is closer to your native language, where you have very tight internal rules about how the language should be, so you get caught up in those differences. On my side, my internal rules on German are relatively loose, so Dutch mostly fits in there.

2. One good-sized chunk of the messiness of English pronunciation (vs. written words) is actually the fault of the Dutch. Prior to the printing press spelling was more an art than a science, and the same transcribed book could be found with different spellings (especially true in different regions). Since the Dutch were the first ones with the printing press, many English books of the time were printed there, and the Dutch printers made some pretty arbitrary decisions on how they thought English should be spelled (e.g. the 'h' in 'ghost'). There was a similar effect from French scribes when they invaded England.

Now by no means were they the only ones to mess up English (read up on "The Great Vowel Shift" if you want a headache), but they did their part.


What an amazing illustration of how humor can be lost across language boundaries. You have to have experience in both English and Chinese to know how hilarious that statement is. It's genius to write it in English and deliver it so deadpan to readers who won't know to laugh out loud when they see it.


I definitely didn't intend it to be humorous.

I'm not a native English speaker and have made an attempt to learn Chinese.

To me, memorizing how the hieroglyphs look was harder than any part of English. One needs a good visual memory to find that easy.


To take a few examples from your comment:

"Definitely" has 19 "strokes"

"Speaker" has 14 strokes

"Visual" has 10 strokes

"Made" has 9 strokes

9 strokes is a lot in Chinese, but relatively few in English. And despite what anyone says, you can't learn English using phonics or any other method except bulk memorization. In English, every word and every one of its derivatives has to be memorized. Past tense, possessive, plural, everything has to be memorized. Nobody bats an eye at this. Learning 8 strokes for "tree" is easy, but the 4 strokes for the Chinese equivalent is hard. 12 strokes for "water" is easy. 5 strokes in Chinese is hard. Oh, and throw on 4 more strokes in English if you turn it into a verb and use the past tense, "watered". Unless you write "did water" instead and split them up with a noun in between: "did you water". What a disaster.

I am a native English speaker. I like to think I am decent at it. Better than most of the people with whom I speak in person. But did I just use "whom" correctly? And did I just start a sentence with "but"? And "and"? And leave out the subject? Is this even proper English?


It's an incentive for improving machine translation, no?


Yeah, this touches on something I've believed for a long time but every time I mention it I get (at best) pushback: we make software way too localization-friendly, when we shouldn't, because what we end up doing is enabling people to create intellectual silos; we enable the division inherent in using different languages.

If you had to use software in English regardless of your native language (and mine is not English; it does not even use a Latin alphabet), you'd need at least some proficiency in reading it, which sooner or later makes you able to write it.

I've seen that first hand: I am 99% self-taught when it comes to English, and the only reason for that is that when I started using computers and got interested in programming, everything was in English. I had to learn it, otherwise I wouldn't have been able to use my computers.

(And FWIW, I wouldn't be surprised if a major reason that Japanese programmers are both bad at English and tend to use their own and/or different software than the rest of the world does today is that they got translated documentation, software and computers from the earlier days of computing, essentially creating an unnecessarily big barrier between themselves and the rest of the world.)


What you are saying goes beyond software, should all books be in English? Should all content on the Internet be written in English?

In my opinion it would create a really sad mono-culture... Each language carries part of its history and culture that can't be translated at all.

It would also prevent a big part of the population from accessing any content. Even if you apply it to software only, it means that a big part of the world wouldn't be able to use any of it.

Now, if the software that you need to use to be able to learn English is in English, that becomes even more of an issue...

Also, why English in particular? For a huge amount of the world population it requires to learn a new alphabet. It becomes a much more complex requirement.


Language division prevents a big part of the population from accessing content even more.

Is everyone supposed to learn Chinese to be able to read interesting articles? What about Korean? What about Wolof?

Sharing a common language brings a lot to mankind as a whole, and is more effective at creating bridges between people than most other means. Local languages are of course welcome and should be kept around for culture, but I do think that refusing to use a common language is a loss.

As for why English, well, it's a bit late to ask the question imho: English was picked as a common language quite a few years ago. There was French in Europe before, Latin before that, but now it's English. I'm not especially happy about it either, since I'm not a native speaker, but here we are, and it's kind of pointless to fight against it when a big and diverse part of the world population already speaks English (diverse is important: having 2 billion people speaking a language in only one country is not really helpful for a common language).


> having 2 billion of people speaking a language in only one country is not really helpful for a common language

Apparently it is, otherwise we wouldn't be in a thread discussing more and more interesting content only being available in Chinese. Users will learn whatever language provides the content they want to consume. For me that was English. My parents learned Russian. I'd not be surprised if the next generation learns Chinese, though I'd prefer not having to do so (my parents' English isn't great).

Your arguments for English keeping its spot are kind of weak. It was surprising to read how few people speak English: the Wikipedia article "English-speaking world" states just 500m-1b globally, and first-language speakers number less than 360m, 2/3 of those in the US. Class differences might ensure there is still more content produced in English, but China might catch up quickly, and having just half as many speakers doesn't give a good head start. English being close to other European languages might help, as it doesn't require learning a totally new alphabet. I would have expected English to get a nice boost due to geopolitics, but am not sure that is the case past 2016. The US is still highly influential, due to media exports and probably most importantly dominance on the internet. China's firewall/separated Internet might save English. Though it might also protect their own language long enough to start producing enough content to export...

Actually, I just realized how few recent popular Chinese films I've heard of. Guess I'll choose some to download, now.


> I'm not especially happy about it either, since I'm not a native speaker, but here we are

Many of us native speakers aren't happy about it either; English is a terrible language. I have a 6-year-old daughter, and across the years we have spent teaching her language skills I have learned just how messy English actually is. Watching DVDs and reading books about pronunciation rules that quite frankly even I never quite grasped until teaching my own child... it's enough to make your head spin.


> pronunciation rules

Rules? Hah!

https://en.wikipedia.org/wiki/The_Chaos


> What you are saying goes beyond software, should all books be in English? Should all content on the Internet be written in English?

Yes.

> In my opinion it would create a really sad mono-culture... Each language carry part of its history and culture that can't be translated at all.

That history and culture is only relevant to those who are interested in history and cultures, not to everyone. I am not against this, nor against other languages. People can still use English as a second language; I am talking about having a common language so we can communicate with each other, not a single language.

> It would also prevent a big part of the population to access any content. Even if you apply it to software only, it means that a big part of the world wouldn't be able to use any of those.

My point is that that population should learn to at least read English so that they will not only be able to use the software, but also read texts from people outside the silo that their own native language creates.

> Now, if the software that you need to use to be able to learn English are in English, that becomes even more of an issue...

No, the software to learn English can be in any language. In fact I'd rather see all the effort that goes into internationalization and localization be placed instead into creating better software, tutorials, documentation and other materials for software that teaches people English.

> Also, why English in particular? For a huge amount of the world population it requires to learn a new alphabet. It becomes a much more complex requirement.

Because, as I mentioned, it is already used by a very large number of people, especially around computers.

The alphabet isn't a big issue; as I wrote above, I had to learn it myself, and English has one of the most compact and easy-to-learn alphabets.


In the future perhaps humans will move towards a more common language but if you answer "yes" to the questions "Should all books be in English?" and "Should all content on the Internet be written in English?" then it isn't any different than variations of cultural chauvinism throughout documented human history.

Nobody is obligated to go through the hassle of learning a new language just because someone else wanted that knowledge (volunteering to share, however, is perfectly fine). Otherwise it is a blatant infringement of personal freedom. Multilingualism should also be multi-directional instead of converging towards a single language, but that is obviously something unnoticed by a lot of Anglos who for one reason or another have decided their language and culture should trump over all else.


I'm not interested in whatever cultural issues some might have, especially since people put too much irrational emotion into their language.

Languages are primarily a tool for communication, and I am interested in the practicality of improving that communication: if two people speak different languages without any common one, they simply cannot communicate.

I do not care if it would be English, Chinese, French or whatever else; that choice is irrelevant. I only said yes to English because of its popularity, its widespread use among many different countries and its position when it comes to computers (which was my original point above).


Sure, but we'll bring software to more people if we localize it in Chinese and have westerners learn Chinese. Less effort.

English isn't even in the top 4 of the world's spoken languages, so I have no idea why you chose it as the OnlyRightLanguageToBeUsed(tm)?


English has 982 million speakers, 2nd most right behind Chinese. Turns out, a lot of people in Europe and India speak it fluently as a second language.


300 million Chinese have trouble speaking Chinese, I think you know.


There are people who speak Chinese outside of China to make up for it, for 910 million native speakers and an additional 200 million L2 fluent speakers. See https://en.wikipedia.org/wiki/Mandarin_Chinese


> English isn't even in top 4 of world spoken languages

By what measurement? Wikipedia gives Mandarin > Spanish > English > Hindi for native speakers[0] and English > Mandarin > Hindi > Spanish for all speakers.[1]

[0] https://en.wikipedia.org/wiki/List_of_languages_by_number_of...

[1] https://en.wikipedia.org/wiki/List_of_languages_by_total_num...


>> Sure, but we'll bring software to more people if we localize it in Chinese and have westerners learn Chinese.

How do those two go together?


Almost 1/5 of all humans are Chinese. Maybe it's time to learn the language. Of course there are tonnes of smart people there doing interesting things.


About 1/5 of all economic activity is US. Add the other English-speaking countries, and it is probably more than 1/4.


> About 1/5 of all economic activity is US

Perhaps, but how much is in China? And how much 30 years from now?


Oh, yes, this is a very carefully thought out project and I love it. I have a hard time trusting Google, but this is clearly the result of some very straightforward, competent, well-directed thinking, and I hope it succeeds.


What type of device would you use this in?


This architecture sounds like one I would want to use for all of my personal computing devices. I believe this because I did a bunch of experimental hacking three years ago with the objective of designing an operating system with usability and security characteristics that would suit me better than Linux or Android currently do. I didn't make a huge amount of progress toward a working system (you can see what I did here: https://github.com/marssaxman/fleet), but I spent a lot of time reading and thinking as I tinkered. The people working on Fuchsia have clearly thought deeply in similar directions, gone a lot further in the work, and resolved a lot of the uncertainties I encountered.

I won't let Google manage my data for me, but if they can keep their fingers out of the pie I would gladly use a Fuchsia-based system as described here for my laptop, phone, and home desktop.


Medical devices. There aren't a lot of FOSS RTOSes/microkernels in the field.


Wonder if the permissions model will make it harder to circumvent DRM that takes away functionality like screenshots. If a superuser can't imbue child processes with their capabilities, then control won't ever be fully in the owner's hands.


Security coprocessors are doing most of that enforcement and are not reliant on the host microcontroller to enforce anything.


Would the sole "solution" to that be recompiling the kernel yourself with that functionality enabled/coded in?


We have Android devices that can't be unlocked today. They can carry over the same mechanisms.


That is a different beast that even the Linux kernel doesn't touch, since the kernel is GPLv2. So even the exact same license as the kernel would not have changed the status quo; only something like GPLv3 would have, which obviously a company like Google would not touch with a 10-foot pole.


"The Unix successor, Plan 9, released the last version in 2002, and its waste heat has been integrated with Go."

I'm not sure if this is a translation gaffe or not, but I'm using it as a new sick burn for sputtering projects.


I'm not familiar enough with Go or Plan 9 to know if there is any connection or not, but I am curious.


It's my understanding that the Plan 9 compilers were ported to Linux and used to compile early versions of Go. Past that, the only connection is that one of the Go project leads used to work for Bell Labs.


Plan 9 was a network operating system, and with a focus on concurrency a new language, Alef, was designed with coroutines based on the CSP formalism (Hoare's communicating sequential processes). That was redesigned as the Limbo language on the garbage-collected Dis virtual machine in Plan 9's replacement, Inferno OS. Go's goroutines and other choices are the next incarnation of these ideas, designed by many of the same Bell Labs people, not just one incidental lead. Shopping for a light compiler to bootstrap from is peripheral to the design story here.

Fuchsia doesn't share as much of the story, but it does pick up where, in Dante's Inferno, the original Unix people tried to abandon the root user to redeem themselves almost 30 years ago. Combined with the capability-based model last seen deployed in the wild with OS/400, and Burroughs machines before that, it would be the first truly new OS in decades.


Not only one developer. Rob Pike, Ken Thompson and Russ Cox were fundamental to Plan 9. The early versions of Go's standard library were heavily based on ideas taken from Plan 9 and Inferno, and even the compilers were very similar.

Although it would be wrong to consider Go as a derived product of Plan 9, I think it is right to call it its intellectual successor (although not the only one).


Can someone with knowledge on the matter explain how Fuchsia compares against other microkernels such as RedoxOS and QNX?


A lot of performance problems will magically go away with enough money backing it. A comparison with NT (hybrid but shares a lot of similarities) is more apt.


I meant structurally, not in terms of performance


NT is pretty similar to Fuchsia in a lot ways


The big difference is that it's capability based, whereas those two aren't.

Perf isn't the best comparison right now, as I've heard that kernel perf is a non-goal at the moment. I think they want to build out enough of the OS to know where the best bang for their buck is when optimizing.


Speaking English as a second language, which was a requirement to be a software engineer, I assumed I had another 5 years to learn Chinese before being left behind. Guess I am late already.


It's becoming more and more important, especially in the embedded world, because a lot of hardware vendors are based in China.


Thankfully it's not that hard.

Just requires that willpower to spend some time memorizing a whole bunch of words.


>File and file system becomes a partial concept (limited to each file system process), so there is no file in the process kernel data structure

>"/" -> root vfs service handle, "/dev" -> dev fs service handle, "/net/dns" -> DNS service handle

Just resurrect Plan 9 already?


This strikes me as an approach quite similar to that of Redox:

> "Everything is a URL" is a generalization of "Everything is a file", allowing broader use of this unified interface for schemes.

It makes sense to me - not every resource can reasonably be shoehorned into the file-folder abstraction.

[1] https://doc.redox-os.org/book/design/urls_schemes_resources....


When you have everything as a file, adding a URL that just points to that file is a simple feature.


I suggest investigating Genode.


I'm not sure of this need to remove, or not support, OpenGL these days.

It's not like OpenGL is the greatest graphics API, it's just that it's been used for decades - I personally learned 3D graphics rendering in GL and I still think in GL when I'm compositing a scene in my head.

If I'm going to prototype something, I have historically loved that no matter what platform I'm on, I can pretty much just hit 'compile' and I'll have the same result.

Which leads me to my second complaint: Metal. When Vulkan emerged as the clear choice for next-gen multi-platform graphics technology, Apple, once again, decided to completely invent a proprietary system instead of using the low-hanging fruit that was already there.


With Android Q, they updated ANGLE to run on top of Vulkan. OpenGL's rapidly becoming a library, and that's a good thing.


What, you expected anything less from Apple? They are a vertical company, not going to use anything outside of their ecosystem unless it costs too much. And with their cash they can afford to.


VirtualBox bootable .iso would be nice.

Especially since I thought of trying it out and followed Fuchsia instructions[0] only to run into exactly same problem building it that others have reported >3 months back[1].

Also, instructions to download and install jiri with curl under Fuchsia docs don't work either[2] and curl returns '404 not found' instead. Got around this using instructions here [3] only to fail soon after as per above.

[0] https://fuchsia.googlesource.com/docs/+/a40928d45b43dbf72d5e...

[1] https://www.reddit.com/r/Fuchsia/comments/a56rg7/issues_when...

[2] https://fuchsia.googlesource.com/docs/+/a40928d45b43dbf72d5e...

[3] https://fuchsia.googlesource.com/jiri/#Bootstrapping


> The Unix successor, Plan 9, released the last version in 2002, and its waste heat has been integrated with Go.

I feel attacked. Glad they're picking up seriously on the idea of namespaces.


This is some great detail, thanks for sharing.

> A channel has 2 handles, h1, h2, write messages from one end, and read messages from the other.

This seems like it's bound to be a critical section for many operations. So for one/some of the supported target architectures, what's an example of how channel write/reads look in detail? Do I need to trap or can I write "directly" into pages that the recipient can read? Can messages span cache lines?
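For concreteness, at the userspace C level the pair looks roughly like this (a rough sketch, assuming the zx_channel_* calls from zircon/syscalls.h behave as documented, error checks omitted; both ends are kept in one process purely for illustration). My question is what the kernel actually does underneath these calls:

    #include <zircon/syscalls.h>

    // Illustrative only: create a channel and push a small message through it.
    // Real users hand the two endpoints to different processes/components.
    void channel_demo(void) {
        zx_handle_t h1, h2;
        zx_channel_create(0, &h1, &h2);

        const char msg[] = "hello";
        zx_channel_write(h1, 0, msg, sizeof(msg), NULL, 0);   // write on one end...

        char buf[64];
        uint32_t actual_bytes = 0, actual_handles = 0;
        zx_channel_read(h2, 0, buf, NULL, sizeof(buf), 0,     // ...read on the other
                        &actual_bytes, &actual_handles);

        zx_handle_close(h1);
        zx_handle_close(h2);
    }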

I believe that Fuchsia sounds ambitious, and it's only through the dedication of mega-tech corps like Google that ambitious efforts like this can succeed. But the world in general, even the tech world, is loath to accept change. A backwards-compatible layer to emulate the Linux syscall interface would probably be a critical transition feature. That allows consumers to phase in the required work as they can stomach it, rather than suffer it all-at-once-or-nothing.

How hard is it to port Fuchsia to a new target? Can anyone point me to a series of commits for a target that does this?


The kernel switch for channel communication seems like a big performance hit just to stop "sharing memory". Why not allow shared memory with negotiation on the communication protocol? The OS software could share memory, but only use protocols that are strict, simple, secure, etc...


If it's anything like seL4 in this regard, you can share memory just fine; it's just that the first pass should use IPC through the kernel (which is very optimized).
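In Zircon the equivalent negotiation would be to create a VMO, send its handle over a channel, and have both sides map it: the channel IPC only sets the sharing up, and the bulk data then moves through shared memory with no further kernel round trips. A hedged sketch, assuming the zx_vmo_*/zx_vmar_* calls behave as in the current Zircon docs (error handling omitted):

    #include <zircon/process.h>
    #include <zircon/syscalls.h>

    // Sketch: establish a shared-memory region, then hand it to a peer over a channel.
    void share_vmo(zx_handle_t channel_to_peer) {
        zx_handle_t vmo;
        zx_vmo_create(4096, 0, &vmo);                      // one page of shared memory

        zx_vaddr_t addr;
        zx_vmar_map(zx_vmar_root_self(),
                    ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
                    0, vmo, 0, 4096, &addr);               // map it locally

        zx_handle_t dup;
        zx_handle_duplicate(vmo, ZX_RIGHT_SAME_RIGHTS, &dup);
        // The only kernel round trip: transfer the duplicate handle to the peer,
        // which maps it too; subsequent reads/writes never enter the kernel.
        zx_channel_write(channel_to_peer, 0, NULL, 0, &dup, 1);
    }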


I really like Fuchsia, Redox and the other microkernels. However, I am highly concerned about the license change.

About the only upside of the Android ecosystem was the kernel's license, forcing kernel drivers to be open-sourced. This allows mainlining devices back into the Linux tree, which enables a variety of user projects, such as postmarketOS.

On the other hand, some of these drivers already acted as a shim to a userspace blob, and having an IPC would at least provide isolation from the blobs themselves, and allow tapping a clear IPC interface for reverse-engineering those. So all might not be lost.

On the sustainability of open source, since this seems to be debated much in this thread, I personally think (and it is a debatable viewpoint) that this ought to be incentivized at a government level (by taxing software/technology sales, and sponsoring open source projects with the proceeds). This would provide a much more robust base on which to provide (proprietary if you want) services upon.


Ah, ok, "introduction" as in "Introduction to Fuchsia OS". Which is interesting too, but I initially thought that Google is actually introducing Fuchsia OS now, i.e. presenting plans to replace Android with it. Wishful thinking ;)


Any ideas on why they use:

1. MessagePack (instead of, say, FlatBuffers – am I wrong to think that FlatBuffers are more efficient? (and the format was created by Google))

2. ELF binaries (instead of, say, taking a clean-sheet approach to executable formats)

Also, on ELF they say: "The loading position of the ELF is random and does not follow the v_addr specified in the ELF program header."

(They're talking about ASLR.) This just highlights for me that a more efficient format is possible, one that doesn't have a virtual address in its headers. Perhaps there are even bigger wins possible with a clean-sheet format.
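For context, the v_addr being referred to is the p_vaddr field of each PT_LOAD program header; for a position-independent (ET_DYN) binary the loader treats those values only as offsets relative to a randomized base. A small sketch that dumps them, using only the standard elf.h structures (64-bit ELF assumed, no error handling):

    #include <elf.h>
    #include <stdio.h>

    // Print the p_vaddr of every loadable segment in a 64-bit ELF file.
    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        Elf64_Ehdr eh;
        fread(&eh, sizeof(eh), 1, f);

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
            fread(&ph, sizeof(ph), 1, f);
            if (ph.p_type == PT_LOAD)
                printf("PT_LOAD p_vaddr=0x%llx size=0x%llx\n",
                       (unsigned long long)ph.p_vaddr,
                       (unsigned long long)ph.p_memsz);
        }
        fclose(f);
        return 0;
    }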


For ELF, I would say it's because it's easier to reuse a lot of tools that already deal with the format: debuggers, linkers, compilers, etc.

My guess is it's also because it's easier to have a bridge with Linux executables, so you can run the same ELF and emulate the Linux syscalls, like FreeBSD and now Windows do.

I bet they were thinking about running Android stuff more easily, and I don't think there would be much more to gain from designing a whole new executable format when you compare it to how many free tools, how much code and how much compatibility with Linux you get in the end.


I had a question about:

"Global file system

In Unix, there is a global root file system"

Why might this be seen as a shortcoming, especially with mount namespaces now? Can anyone say?


See the top couple comments on this thread (from within the past month) by q3k:

https://news.ycombinator.com/item?id=19254828

That should be impossible in Fuchsia.


Also with mount namespaces ;)


I think the point is that things like namespaces are so good the OS should be built from the ground up with that in mind, not as a bolted-on feature where a global root remains but is hidden.


This power management code could use some work: https://fuchsia.googlesource.com/zircon/+/refs/heads/master/...


It seems like the main point of this redesign is modularity for security's sake. Curious to hear who is asking for this OS, where Google will use it, and what type of system any of the HN crowd would consider putting Fuchsia OS into. Can't shake the feeling that it's somewhat...uncalled for by the market?


On the server platform..... On the desktop platform.... On the mobile platform.....

Pick one. Have we not learned from the past that there really is no one-size-fits-all operating system for all use cases? I am probably alone in thinking this, but bad decisions can be made when trying to get a server OS to run on a phone.


Hence the microkernel with pluggable modules. That's the whole point.


This basically just came across as a GNU/Linux hit piece. In my admittedly really limited knowledge, wouldn't multiple vectors into the kernel introduce more surface area for attackers? The other problem with this is: when is Google going to get bored with it and move on to something else? If they are willing to jettison Android, they are willing to jettison anything. Also, the line about GNU/Linux only caring about servers seems to ignore the amazing amount of work that's gone into KDE, Gnome, Ubuntu, Kali Linux, Mint, etc., all of which are desktop-focused and in the past few years have really started to catch up feature-wise. Yes, obviously there are still issues, and realistically there always will be. Windows and macOS/iOS have issues with thousands of paid programmers; why wouldn't a distro built by hundreds of part-time workers, with only a portion of them paid to work just on the OS?


Microkernels are often used in high-security/high-criticality use cases because the kernel's facilities are very limited and the surface area is very small (thus easier to reason about and assure). Most complex functionality is performed in isolated processes/tasks with known IPC boundaries (i.e. a random thing can't clobber or inspect the memory of everything else).

We build on top of seL4 for these reasons (among others).
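To illustrate the "known IPC boundary" idea: in seL4, the only way a client task can reach a service at all is by invoking an endpoint capability it has been explicitly granted. A rough sketch, assuming libsel4's seL4_Call/seL4_SetMR API; SERVICE_EP is a hypothetical capability slot that the root task would have installed for this client:

    #include <sel4/sel4.h>

    // Hypothetical capability slot handed to this task by its parent/root task.
    #define SERVICE_EP ((seL4_CPtr)0x10)

    // Client side: one round-trip request over an endpoint capability.
    // If the task was never granted SERVICE_EP, there is no way to reach the service,
    // let alone clobber or inspect its memory.
    seL4_Word ask_service(seL4_Word request) {
        seL4_MessageInfo_t info = seL4_MessageInfo_new(0 /* label */, 0, 0, 1 /* one MR */);
        seL4_SetMR(0, request);
        info = seL4_Call(SERVICE_EP, info);   // blocks until the server replies
        return seL4_GetMR(0);                 // server's reply payload
    }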


Sounds like a fun project!

Isn't seL4's multicore support either unverified or limited (i.e. shared memory is forbidden)? Is your platform single-threaded then?


> We build on top of seL4 for these reasons (among others).

Who's we? I too am down with OTP, especially OTP on seL4. ;- )


I believe they're PolySync [0], and I believe they might be referencing "fel4", a Rust-atop-seL4 project [1] that they've developed, which is very cool stuff.

[0] https://polysync.io/

[1] https://github.com/PolySync/libsel4-sys


Formerly of, yes. Now some folks are doing all new things.


Can you elaborate?



Site is just a contact form.


Indeed. We haven't invested in expanding our public-facing presence yet.

I was just answering the "Who's we?" question.

Probably the easiest frame to think about what we're doing is building engineering automation products for systems & robotics engineers.

We ultimately want to do for those classes of engineers what things like CAD, FEA, EDA, etc. did previously for mechanical & electrical engineers in terms of managing complexity and amplifying their capabilities.


Well thanks, I don't know what I was expecting when I clicked the link anyways.


But they haven't shown themselves to be willing to jettison Android. In fact, they're bringing it to more platforms (ChromeOS), and almost certainly they would bring it to Fuchsia.


They are bringing ART to Fuchsia.


It didn't come across as a hit piece but rather as a comparison with overly colorful language, like that stuff about being in the boiler room.


Countdown until someone adds JavaScript support (~ Atwood's law)


You may be a little late with that countdown.

https://9to5google.com/2019/03/20/google-hiring-node-js-fuch...


It already has Dart support, so it does have JS support in a roundabout way.


Good to know. I had actually forgotten about Dart. Feels like Google should probably throw in the towel on it, but I haven't been following it.


Dart is the basis of the primary UI framework, Flutter. Flutter is also cross platform, and can be compiled for Android and iOS.

https://flutter.dev


Flutter is being downgraded to a UI framework among many, after Scenic's introduction.


I wonder how Fuchsia will manage in the regulated worlds (medical, aviation).

A microkernel is a big sell for regulators/QA. So is being an RTOS.

This would disrupt many big players in the field, such as QNX.


I wonder if a big part of Linux's success story comes from its code being relatively simple to hack on.


Does anyone have any good links to comparisons between Fuchsia and Linux (or other OSs?)


Anyone know what programming language the kernel is written in?


C++. If you want to read through a huge argument on HN regarding the merits of that decision, check out this thread: https://news.ycombinator.com/item?id=16813796


C++


How do you guys feel about kernel written in C++? Seems strange to me.


Seems like a wasted opportunity to write a new kernel and OS that might last another 30+ years in the mainstream in C++. Who knows how many million-man hours will be spent fixing memory bugs in this new OS because of this decision.


all part of the master plan to destroy unix and therefore steal control of computing from the people


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=19490633.


Then you should improve your reading skills.

The point isn't having to deal with Java, which is one of my favourite languages.

The point is the whole experience: how the NDK lacks debugging features, how after 10 years one still has to write JNI boilerplate by hand without any kind of binding generator integrated into Android Studio (or better, given Dalvik/ART, why not something like gcj's CNI), how Gradle still isn't able to properly handle NDK dependencies in AAR files, how the ISO C/C++ libs still aren't fully ISO compliant.

And let's not forget some Android APIs, like Vulkan, real-time audio and ML, are NDK-only, so eventually one needs to deal with these issues anyway.

It is also not possible to use C++ to fully write iOS or UWP applications, yet there is a much more productive and developer friendly integration between the managed layer and the bottom parts.


> It is also not possible to use C++ to fully write iOS or UWP applications...

UWP has native C++ APIs. So does Win32, right?

https://docs.microsoft.com/en-us/cpp/windows/universal-windo...


Kind of. XAML Controls aren't the best experience from a C++/CX point of view vs C#/VB.NET/JS, and C++/WinRT is still catching up with what C++/CX offers.

https://www.google.com/amp/s/kennykerr.ca/2019/01/25/the-sta...

On the other hand, you do require C++ for DirectX.


And after all those years, you should kinda understand that this is not going to change. JNI is how Java communicates with native code, and neither Oracle nor Google is going to change that. What's the point of complaining about this for YEARS? Why are you wasting energy on it? It makes no sense.


Again, improve your reading skills.

As a Java fork, Android J++ could pretty much use something other than JNI; after all, they are using another VM to start with.

Second, even with JNI, there is a big difference between forcing millions of developers to write boilerplate by hand and offering an Android Studio-based tool that would automate the grunt work.

Finally, update yourself on the Java world: Oracle is indeed bringing forward a JNI replacement.

http://openjdk.java.net/projects/panama/

Which, naturally will never arrive to Android J++, but that is a discussion for another day.


Complaining is a good way to get companies to improve things. You may think that it's a lost cause, but others have the freedom to think otherwise.


nice example of an ad hominem


[flagged]


Even if that's true, I don't think it's the kind of comment I want to see here on HN. Silence is golden.


I'm confused, is this an OS for China? That alone doesn't inspire too much confidence, if it was "designed for China", because we all know what that could mean, especially with Google being so eager to please dictator Xi lately for a chance of a few search engine market points in China.


You couldn't have searched for Fuchsia OS using a search engine before writing a comment shoehorning in completely irrelevant politics?


> Zircon: microkernel, underlying service processes (device manager, core device drivers, libc, interprocess communication interface library fidl)

> Garnet: system-level system services: software installation, communication, media, graphics, package management, update systems, etc.

> Peridot: the infrastructure layer of the user experience: modules, users, storage services, etc.

> Topaz: the basic applications of the system: Web, Dart, Flutter

> These names are from Steven Universe.

Nice!



