On Chrome OS, the most significant feature is the newly added ability to run Linux applications via Crostini. Containerization of apps is a big topic in the Linux world in general, with things like Snap.
On Android, they have an extremely large installed base, enormously slow upgrade cycles, and a huge legacy development ecosystem. This ecosystem overlaps with ChromeOS as well, since it can run Android apps. And Android can run Linux apps too (though that mostly makes less sense).
Fuchsia would need to play nice in that ecosystem or it will face an empty-room problem: all of the key apps haven't been ported yet, it has a tiny marketshare initially (thus removing the incentives for fixing that), and OEMs will likely vote with their feet by sticking to the old platforms. Part of that is supporting Linux-based stuff, which seems increasingly important, especially on ChromeOS.
So, I'm kind of pessimistic that Google has a strategy for making this work. Fuchsia makes sense as a research project but not as a product strategy currently. Yes, Android has issues, but maybe fixing those would be easier than replacing it entirely?
Actually, the networking stack in Fuchsia (more particularly, the stack that sits on Zircon, the kernel it's built around) is written in Go.
Kubernetes is written in Go, and is essentially a system for implicitly defining scaled Linux boxes on a giant network; the most recent release also allows for declarative architectures. That's much like how userspace services for graphics drivers in Zircon will enable more declarative and optimized use of expensive resources (in this case, modular hardware architecture for mobile operating systems, which is what Fuchsia was originally envisioned for).
I'm also pretty sure Google has experience with mobile operating systems (Android). Fuchsia is literally the response to now over a decade of trying to interface Android with POSIX in conjunction with multiple chains of monopolized, closed-source hardware architectures: being intimately familiar with how trends in the mobile hardware architectural space have evolved, and trying to rewrite a kernel to optimize for that.
"Fuchsia will need to play nice...Part of that is supporting linux based stuff; which especially on Chrome..." Chrome had to have an entire team dedicated to re-sandboxing its tabs to mitigate Spectre and Meltdown, both of which result from the growing, unquestioned obscurity in how hardware and Linux are integrated. Fuchsia attempts to take a step back from that, and by doing so simultaneously makes it easier to do open source development on hardware architectures while optimizing for them.
Still, full site isolation remains an advanced option in Chrome that costs roughly 10% in memory overhead, so people don't turn it on, if they even know to check for it or what it means (most people don't).
But the bottom line is: empty room. A beautiful OS with an empty app store is kind of a non-starter in the current market. Windows Phone found that out the hard way, as did several other mobile operating systems that did not quite make it or continue to struggle: Sailfish, WebOS, Firefox OS, Ubuntu Touch, etc. Even ChromeOS struggled until they added Android support and Crostini.
As we Linux fans always need to remember, the fact that Android uses a Linux kernel is irrelevant to userspace.
In any case, it wouldn't be the first time that Google walks away after putting lots of development in something. In general, I think they are closer to merging Android and Chrome OS than they are to replacing either with Fuchsia (not to mention convincing OEMs like Samsung to actually use it).
Those native libraries are forbidden from using Linux-specific features; only these APIs are sanctioned.
Since Android 7, Google has been clamping down on NDK users that try to use anything that isn't part of that list.
Since Android 8, APKs are only allowed to reach into their own internal filesystem and must use SAF (the Storage Access Framework) for anything else. This will be further enforced in Android Q, so no luck trying to peek into /dev, /share and similar.
But perhaps the reason Fuchsia won't struggle like the other OSes you mention is that it may be able to run all the Android apps in the Play Store from day one, thanks to Machina, allowing a smoother transition. That would be a process similar to what Apple did with the PowerPC-to-x86 switch, except in Google's case it's for a completely different OS.
Anyway, Docker is a good example of how current Linux systems are not optimized for modular sandboxing and containerization. Still, people, even in tech, are so uneducated about how important it is to work with only the bare bones (I started in C, so I allocated bytes as I needed them and always considered how not to use them first; that's a far cry from npm-ing an Express server and watching the endless train of dependencies it pulls in) that they still don't secure their containers. The number of stock ubuntu:18.04 base images I see running a Docker container with just a Python app or something equally trivial, live in production at some of the top tech companies, with no Linux hardening whatsoever and a rootkit you can Google and download, is the terrifying norm of centralized web application companies today. I really am not going to buy the idea that Docker containers bearing full replicas of the operating systems they sit on top of are a justification for POSIX.
If you want increased modularity for security, sandboxing and running different applications, look at QubesOS, which is already far along and has its own bare-metal hypervisor; it's much like how Docker works in userspace, but optimized all the way down to bare-metal hardware. Fuchsia takes a similar approach when it comes to optimizing modularity in mobile computing hardware architecture.
" Also, several other mobile operating systems that did not quite make it or continue to struggle."
This is true, but this is coming from the same company that has experience designing both software and mobile hardware architecture. Just because something is not already popular and widely adopted isn't a reason not to do it. I'm an anti-monopoly person myself, especially in the world of technology.
You can read my other comments for the justifications around the need for this. As someone coming from the hardware architecture design space, from Qualcomm Snapdragons all the way to 14nm iPhone architecture, there is a need to re-modularize the kernel for advanced execution and increased competition in this space. POSIX is not sustainable looking 10, 20, 40 years into the future of hardware computing, particularly in the next ten. Android game developers who make a living off of Candy Crush don't really seem to care about this impending doom, only that they will have to traverse yet another learning curve if the platform gains adoption or becomes competitive in the space. Which sucks, but it's not as bad as you'd think.
Besides, forcing people to keep traversing learning curves keeps the market competitive and keeps people from becoming so religiously entrenched that Google's current Android API is treated as an uncontested god. Yeah, it's hard to make money, sure; it's competitive, sure. But we so easily forget Android was a first stab, an open source response to the iPhone. The first Motorola Android phone came out my sophomore year in college; now people can't imagine their 6-year-olds going to school without an iPhone 7. We often assume we need things to be the way they already are, and are not accustomed to change. But I can tell you Fuchsia is needed in the space, operating system competition is needed in the space, and in the next decade we will evolve to think differently about excessive use of memory, dependencies and latency, and we'll be looking for something like this. Luckily it will be about a decade into development at that point, about where Android is now, and yet people say it's unreasonable to consider anything else.
If it wasn't for these groups of volunteers that translate them to English (and the other way around, which is also useful), these projects would be completely inaccessible.
You can learn Pinyin in a few days, which is sufficient for you to pronounce any Mandarin Chinese romanized into Pinyin. Chinese grammar is fairly simple compared to Latin languages, whereas English is a rat's nest of pronunciation rules you just have to memorize.
Most of the real difficulty people experience with Chinese comes from the shock of learning the characters.
At least from my perspective, as someone who knows a language very close to another and still found English easier than that other language, English is easy.
Pronunciation is indeed quite a lot of learning by heart, but it's not hard to get used to if you got the rest down. Of course, this might be much harder for Chinese, I wouldn't know.
2. One good-sized chunk of the messiness of English pronunciation (vs. written words) is actually the fault of the Dutch. Prior to the printing press spelling was more an art than a science, and the same transcribed book could be found with different spellings (especially true in different regions). Since the Dutch were the first ones with the printing press, many English books of the time were printed there, and the Dutch printers made some pretty arbitrary decisions on how they thought English should be spelled (e.g. the 'h' in 'ghost'). There was a similar effect from French scribes when they invaded England.
Now by no means were they the only ones to mess up English (read up on "The Great Vowel Shift" if you want a headache), but they did their part.
I'm not a native English speaker and have made an attempt to learn Chinese.
To me, memorizing how the hieroglyphs look was harder than any part of English. One needs to have a good visual memory to find that easy.
"Definitely" has 19 "strokes"
"Speaker" has 14 strokes
"Visual" has 10 strokes
"Made" has 9 strokes
9 strokes is a lot in Chinese, but relatively few in English. And despite what anyone says, you can't learn English using phonics or any other method except bulk memorization. In English, every word and every one of its derivatives has to be memorized. Past tense, possessive, plural, everything has to be memorized. Nobody bats an eye at this. Learning 8 strokes for "tree" is easy, but the 4 strokes for the Chinese equivalent is hard. 12 strokes for "water" is easy. 5 strokes in Chinese is hard. Oh, and throw on 4 more strokes in English if you turn it into a verb and use the past tense, "watered". Unless you write "did water" instead and split them up with a noun in between: "did you water". What a disaster.
I am a native English speaker. I like to think I am decent at it. Better than most of the people with whom I speak in person. But did I just use "whom" correctly? And did I just start a sentence with "but"? And "and"? And leave out the subject? Is this even proper English?
If you had to use software in English, regardless of your native language (mine is not English; it doesn't even use a Latin alphabet), you'd need at least some proficiency in reading it, which sooner or later makes you able to write it.
I've seen that first hand: I am 99% self-taught when it comes to English, and the only reason is that when I started using computers and got interested in programming, everything was in English. I had to learn it, otherwise I wouldn't be able to use my computers.
(And FWIW, I wouldn't be surprised if a major reason that Japanese programmers are both bad at English and tend to use their own and/or different software than the rest of the world is that they got translated documentation, software and computers from the earlier days of computing, essentially creating an unnecessarily big barrier between themselves and the rest of the world.)
In my opinion it would create a really sad mono-culture...
Each language carries part of its history and culture that can't be translated at all.
It would also prevent a big part of the population from accessing any content. Even if you apply it to software only, it means that a big part of the world wouldn't be able to use any of it.
Now, if the software that you need to use to be able to learn English is itself in English, that becomes even more of an issue...
Also, why English in particular? For a huge share of the world population it requires learning a new alphabet, which makes it a much more complex requirement.
Is everyone supposed to learn Chinese to be able to read interesting articles? What about Korean? What about Wolof?
Sharing a common language brings a lot to mankind as a whole, and is more effective at creating bridges between people than most other means. Local languages are of course welcome and should be kept around for culture, but I do think that refusing to use a common language is a loss.
As for why English, well, it's a bit late to ask the question imho: English was picked as a common language quite a few years ago. There was French in Europe before, and Latin before that, but now it's English. I'm not especially happy about it either, since I'm not a native speaker, but here we are, and it's kind of pointless to fight against it when a big and diverse part of the world population already speaks English (diverse is important; having 2 billion people speaking a language in only one country is not really helpful for a common language).
Apparently it is, otherwise we wouldn't be in a thread discussing more and more interesting content only being available in Chinese. Users will learn whatever language provides content they want to consume. For me that was English; my parents learned Russian. I'd not be surprised if the next generation learns Chinese, though I'd prefer not having to (my parents' English isn't great).
Your arguments for English keeping its spot are kind of weak. It was surprising to read how few people speak English: the Wikipedia article "English-speaking world" states just 500m-1b globally, and fewer than 360m first-language speakers, 2/3 of those in the US. Class differences might ensure there is still more content produced in English, but China might catch up quickly, and having just half as many speakers doesn't give a good head start. English being close to European languages might help, as it doesn't require learning a totally new alphabet. I would have expected English to get a nice boost from geopolitics, but I'm not sure that is the case past 2016. The US is still highly influential due to media exports and, probably most importantly, dominance on the internet. China's firewalled, separated internet might save English, though it might also protect their own language long enough for them to start producing enough content to export...
Actually, I just realized how few recent popular Chinese films I've heard of. Guess I'll choose some to download, now.
Many of us native speakers aren't happy about it either, English is a terrible language. I have a 6-year old daughter, and across the years we have spent teaching her language skills I have learned just how messy English actually is. Watching DVDs and reading books about pronunciation rules that quite frankly even I never quite grasped until teaching my own child, it's enough to make your head spin.
> In my opinion it would create a really sad mono-culture... Each language carry part of its history and culture that can't be translated at all.
That history and culture is only relevant to those who are interested in history and cultures, not to everyone. I am not against this, nor against other languages. People can still use English as a second language; I am talking about having a common language so we can communicate with each other, not a single language.
> It would also prevent a big part of the population to access any content. Even if you apply it to software only, it means that a big part of the world wouldn't be able to use any of those.
My point is that that population should learn to at least read English, so that not only will they be able to use the software, they'll also be able to read texts from people outside the silo that their own native language creates.
> Now, if the software that you need to use to be able to learn English are in English, that becomes even more of an issue...
No, the software to learn English can be in any language. In fact I'd rather see all the effort that goes into internationalization and localization be placed instead into creating better software, tutorials, documentation and other materials for teaching people English.
> Also, why English in particular? For a huge amount of the world population it requires to learn a new alphabet. It becomes a much more complex requirement.
Because it is already used, as I mentioned, by a very large number of people, especially around computers.
The alphabet isn't a big issue; as I wrote above, I had to learn it myself, and English has one of the most compact and easy-to-learn alphabets.
Nobody is obligated to go through the hassle of learning a new language just because someone else wanted that knowledge (volunteering to share, however, is perfectly fine). Otherwise it is a blatant infringement of personal freedom. Multilingualism should also be multi-directional instead of converging towards a single language, but that is obviously something unnoticed by a lot of Anglos who for one reason or another have decided their language and culture should trump over all else.
Languages are primarily a tool for communication, and I am interested in the practicality of improving that communication: if two people speak different languages without any common one, they simply cannot communicate.
I do not care if it would be English, Chinese, French or whatever else, that choice is irrelevant. I only said Yes to English because of its popularity, widespread use among many different countries and its position when it comes to computers (which was my original point above).
English isn't even in the top 4 of the world's spoken languages, so I have no idea why you chose it as the OnlyRightLanguageToBeUsed(tm)?
By what measurement? Wikipedia gives Mandarin > Spanish > English > Hindi for native speakers and English > Mandarin > Hindi > Spanish for all speakers.
How do those two go together?
Perhaps, but how much is in China? And how much 30 years from now?
I won't let Google manage my data for me, but if they can keep their fingers out of the pie I would gladly use a Fuchsia-based system as described here for my laptop, phone, and home desktop.
I'm not sure if this is a translation gaffe or not, but I'm using it as a new sick burn for sputtering projects.
Fuchsia doesn't share as much of the story but does pick up where in Dante's Inferno the original Unix people tried to abandon the root user to redeem themselves almost 30 years ago. Combined with capability based model last seen deployed in the wild with OS/400 and Burroughs machines before that, it would be the first truly new OS in decades.
Although it would be wrong to consider Go as a derived product of Plan 9, I think it is right to call it its intellectual successor (although not the only one).
Perf isn't the best comparison right now, as I've heard that kernel perf is a non-goal at the moment. I think they want to build out enough of the OS to know where the best bang for their buck is when optimizing.
Just requires that willpower to spend some time memorizing a whole bunch of words.
>"/" -> root vfs service handle, "/dev" -> dev fs service handle, "/net/dns" -> DNS service handle
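The idea in that quote is that each process gets a namespace that maps path prefixes to service handles, and a path lookup is a longest-prefix match followed by handing the remainder of the path to that service. Here's a toy Python model of that lookup (the `Namespace` class and handle strings are purely illustrative, not the real Fuchsia API):

```python
# Toy model of a per-process namespace that maps path prefixes to
# service "handles". Resolution picks the longest matching prefix
# and returns the handle plus the path remainder to send onward.
class Namespace:
    def __init__(self, mounts):
        # mounts: dict of path prefix -> handle (any object stands in)
        self.mounts = mounts

    def resolve(self, path):
        """Return (handle, remaining_path) via longest-prefix match."""
        best = None
        for prefix, handle in self.mounts.items():
            if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
                if best is None or len(prefix) > len(best[0]):
                    best = (prefix, handle)
        if best is None:
            raise FileNotFoundError(path)
        prefix, handle = best
        return handle, path[len(prefix):].lstrip("/")

ns = Namespace({
    "/": "root-vfs-handle",
    "/dev": "devfs-handle",
    "/net/dns": "dns-handle",
})
print(ns.resolve("/net/dns/lookup"))  # ('dns-handle', 'lookup')
print(ns.resolve("/etc/passwd"))      # ('root-vfs-handle', 'etc/passwd')
```

The point being: there's no single global root; "/" is just another mapping, and two processes can be handed entirely different tables.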
Just resurrect Plan 9 already?
> "Everything is a URL" is a generalization of "Everything is a file", allowing broader use of this unified interface for schemes.
It makes sense to me - not every resource can reasonably be shoehorned into the file-folder abstraction.
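To make that concrete, the generalization just means dispatching on the scheme instead of assuming every resource is a file. A quick Python sketch (the handlers are stand-ins I made up, not real Fuchsia interfaces):

```python
# Sketch of "everything is a URL": route a resource request to a
# handler chosen by its scheme, rather than forcing everything
# through the file/folder abstraction.
from urllib.parse import urlparse

handlers = {
    "file": lambda u: f"open file {u.path}",
    "http": lambda u: f"fetch {u.netloc}{u.path} over HTTP",
    "mailto": lambda u: f"compose mail to {u.path}",
}

def open_resource(url):
    u = urlparse(url)
    try:
        return handlers[u.scheme](u)
    except KeyError:
        raise ValueError(f"no handler for scheme {u.scheme!r}")

print(open_resource("file:///etc/hosts"))     # open file /etc/hosts
print(open_resource("http://example.com/x"))  # fetch example.com/x over HTTP
```

A mail address or a network endpoint never fit the file model well; with scheme dispatch they get first-class handlers instead of awkward mount points.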
It's not like OpenGL is the greatest graphics API, it's just that it's been used for decades - I personally learned 3D graphics rendering in GL and I still think in GL when I'm compositing a scene in my head.
If I'm going to prototype something, I have historically loved that no matter what platform I'm on, I can pretty much just hit 'compile' and I'll have the same result.
Which leads me to my second complaint: Metal. Just when Vulkan had emerged as the clear choice for next-gen multi-platform graphics technology, Apple once again decided to invent a completely proprietary system instead of taking the low-hanging fruit that was already there.
Especially since I thought of trying it out and followed the Fuchsia instructions, only to run into exactly the same problem building it that others reported >3 months back.
Also, the instructions in the Fuchsia docs to download and install jiri with curl don't work either; curl returns '404 Not Found' instead. I got around this using instructions here, only to fail soon after, as per the above.
I feel attacked. Glad they're picking up seriously on the idea of namespaces.
> A channel has 2 handles, h1, h2, write messages from one end, and read messages from the other.
This seems like it's bound to be a critical section for many operations. So for one/some of the supported target architectures, what's an example of how channel write/reads look in detail? Do I need to trap or can I write "directly" into pages that the recipient can read? Can messages span cache lines?
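For the semantics alone (not the syscall mechanics asked about above), a channel is one object with two endpoint handles and a message queue in each direction: datagram-style, ordered, bidirectional. A toy Python model, purely illustrative since real Zircon channels live in the kernel and carry handles as well as bytes:

```python
# Toy model of a Zircon-like channel: two endpoints, each with its
# own inbound queue. Writes on h1 are read, in order, from h2 and
# vice versa. Messages are discrete (datagram-like), not a stream.
from collections import deque

class Endpoint:
    def __init__(self, out_q, in_q):
        self._out, self._in = out_q, in_q

    def write(self, msg: bytes):
        self._out.append(msg)

    def read(self) -> bytes:
        if not self._in:
            # real code would wait on the handle instead
            raise BlockingIOError("ZX_ERR_SHOULD_WAIT")
        return self._in.popleft()

def channel_create():
    a_to_b, b_to_a = deque(), deque()
    return Endpoint(a_to_b, b_to_a), Endpoint(b_to_a, a_to_b)

h1, h2 = channel_create()
h1.write(b"ping")
print(h2.read())  # b'ping'
h2.write(b"pong")
print(h1.read())  # b'pong'
```

Whether the kernel copies message bytes or maps pages, and how that interacts with cache lines, is exactly the per-architecture detail the parent is asking about; the model above only pins down the ordering and two-handle structure.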
I believe that Fuchsia sounds ambitious and it's only through the dedication of mega-tech corps like Google/etc that ambitious efforts like this can succeed. But the world in general, even the tech world, is loathe to accept change. Backwards-compatible layers to emulate the linux syscall layer would probably be a critical transition feature. This allows consumers to phase in the work required as they can stomach it, rather than suffer it all-at-once-or-nothing.
How hard is it to port Fuchsia to a new target? Can anyone point me to a series of commits for a target that does this?
About the only upside of the Android ecosystem was the kernel's license, forcing kernel drivers to be open-sourced. This allows mainlining back devices into the Linux tree, which enables a variety of user projects, such as postmarketos.
On the other hand, some of these drivers already acted as a shim to a userspace blob, and having an IPC would at least provide isolation from the blobs themselves, and allow tapping a clear IPC interface for reverse-engineering those. So all might not be lost.
On the sustainability of open source, since this seems to be debated much in this thread, I personally think (and it is a debatable viewpoint) that this ought to be incentivized at a government level (by taxing software/technology sales, and sponsoring open source projects with the proceeds). This would provide a much more robust base on which to provide (proprietary if you want) services upon.
1. MessagePack (instead of, say, FlatBuffers – am I wrong to think that FlatBuffers are more efficient? (and the format was created by Google))
2. ELF binaries (instead of, say, taking a clean-sheet approach to executable formats)
Also, on ELF they say: "The loading position of the ELF is random and does not follow the v_addr specified in the ELF program header."
(They're talking about ASLR.) This just highlights for me that a more efficient format is possible, one that doesn't have a virtual address in its headers. Perhaps there are even bigger wins possible with a clean-sheet format.
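For reference, the relevant bit already exists in ELF: the `e_type` field distinguishes fixed-address executables (ET_EXEC) from position-independent ones (ET_DYN), which loaders can place anywhere regardless of the segment `p_vaddr` values. A small Python sketch that decodes that field from a fabricated header (the byte layout follows the ELF spec; the header bytes here are made up for illustration):

```python
import struct

# In a 64-bit little-endian ELF header, e_type sits at offset 0x10:
#   2 = ET_EXEC (fixed load address), 3 = ET_DYN (position-independent)
def elf_type(header: bytes) -> str:
    assert header[:4] == b"\x7fELF", "not an ELF file"
    (e_type,) = struct.unpack_from("<H", header, 0x10)
    return {
        2: "ET_EXEC (fixed load address)",
        3: "ET_DYN (position-independent)",
    }.get(e_type, f"other ({e_type})")

# Fabricated 24-byte header prefix marking a PIE binary:
fake_pie = b"\x7fELF" + bytes(12) + struct.pack("<H", 3) + bytes(6)
print(elf_type(fake_pie))  # ET_DYN (position-independent)
```

So a clean-sheet format could drop the unused address fields entirely, but PIE already makes them dead weight rather than an obstacle.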
My guess is also that it's easier to have a bridge with Linux executables: you can run the same ELF and emulate the Linux syscalls, as FreeBSD and now Windows do.
I bet they were thinking about running Android stuff more easily, and I don't think there is much more to gain in designing a whole new executable format when you compare it against how many free tools, how much code, and how much compatibility with Linux you get in the end.
> Global file system: In Unix, there is a global root file system
Why might this be seen as a shortcoming, especially with mount namespaces now? Can anyone say?
That should be impossible in Fuchsia.
Pick one. Have we not learned from the past that there really is not a one size fits all operating system for all use cases. I am probably alone in thinking this, but bad decisions can be made when trying to get a server OS to run on a phone.
We build on top of seL4 for these reasons (among others).
Isn't seL4's multicore support either unverified or limited (i.e. shared memory is forbidden)? Is your platform single-threaded then?
Who's we? I too am down with OTP, especially OTP on seL4. ;- )
I was just answering the "Who's we?" question.
Probably the easiest frame to think about what we're doing is building engineering automation products for systems & robotics engineers.
We ultimately want to do for those classes of engineers what things like CAD, FEA, EDA, etc. did previously for mechanical & electrical engineers in terms of managing complexity and amplifying their capabilities.
A micro kernel is a big sell for regulators/QA. Also, being a RTOS.
This would disrupt many big players in the field, such as QNX.
The point isn't having to deal with Java; it's one of my favourite languages.
The point is the whole experience of how the NDK lacks debugging features, how after 10 years one still has to manually write JNI boilerplate by hand without any kind of binding generator integrated in Android Studio or better given Dalvik/ART why not something like gcj's CNI, how Gradle still isn't able to properly handle NDK dependencies in AAR files, how the ISO C/C++ libs still aren't fully ISO compliant.
And let's not forget some Android APIs, like Vulkan, real-time audio and ML, are NDK-only, so eventually one needs to deal with these issues anyway.
It is also not possible to use C++ to fully write iOS or UWP applications, yet there is a much more productive and developer friendly integration between the managed layer and the bottom parts.
UWP has native C++ APIs. So does Win32, right?
On the other hand, you do require C++ for DirectX.
Android J++ as Java fork could pretty much use something else other than JNI, after all they are using another VM to start with.
Second, even with JNI, there is a big difference between forcing millions of developers to write boilerplate by hand, and offering an Android Studio based tool that would automate the grunt part of the work.
Finally, to update you on the Java world: Oracle is indeed bringing forward a JNI replacement.
Which, naturally will never arrive to Android J++, but that is a discussion for another day.