Fuchsia OS Introduction (bzdww.com)
580 points by Symmetry 30 days ago | 394 comments



My impression as game developer:

- Vulkan-first graphics interface. I mean, OK. Not that I'm in love with this overly complex API, but fine.

- No OpenGL support. I guess this is where the world is moving

- No POSIX support. Quite a few game engines rely on it. Oh well, remember when Google cared about developers?

- Nothing about sound (my personal thing)

As a game developer, working with Android was unpleasant to say the least. For example, sound: Android has something like 4 sound systems, and all of them (except the simplest one, available from the Java side) are not fully implemented and are swarming with compatibility bugs across manufacturers and Android versions. Not to mention that they introduce a new one once in a while, with new-version adoption slower than a sloth.

I get that Google engineers enjoy rewriting things they don't like, but come on - fix the existing stuff first. Don't change APIs on us - not everyone has an extra million $$$ to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is broken too, but in different ways. OK, I admit I exaggerate, but it seems like tough times for game engines (besides the super-hyped ones like Unity or Unreal, which also have no problem throwing around tens of millions).

Note about OpenGL support: I'm pretty sure they'll drag it in by porting the ANGLE library, which is currently being actively worked on. Compatibility layers for Vulkan are gaining momentum, but I hope they'll join forces with MoltenGL/VK instead of making their own, worse analog.


You may want to take a second impression.

- OpenGL should be a library. This is just going to make OpenGL development easier in the long run. Right now there are too many OpenGL implementations and the differences matter. Running one OpenGL library on top of N different Vulkan implementations is miles better than running on top of N different OpenGL implementations.

- POSIX can be a library too. It doesn’t have to be provided by the kernel. People have been strapping POSIX layers on top of things for ages. This was originally how Mach worked. You can still do weird things on iOS and macOS “below” the POSIX layer (although most POSIX syscalls are just provided by the kernel).

You talk about how hard it is to change APIs… and how much you hate to refactor things every time Google decides they want the shiny new thing. But POSIX is rooted in the 1970s. It sucks. It's about time to try something new. The entire POSIX model is based around the idea that you have different people using the same computer and you don't want them to accidentally delete each other's files. There are a ton of APIs in POSIX which are straight-up shit, like wait(). Sandboxes are unportable and strapped-on, using arcane combinations of things like chroot and cgroups.
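For a concrete taste of the wait() problem (a minimal sketch of my own, not from any real codebase): child PIDs are one per-process pool, so any bare wait() in the process can reap a child that a library spawned for its own purposes, and the library's own waitpid() then fails with ECHILD.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Imagine this helper lives inside a third-party library. */
    static pid_t spawn_helper(void) {
        pid_t pid = fork();
        if (pid == 0) {          /* child: pretend to do some work */
            sleep(1);
            _exit(0);
        }
        return pid;
    }

    int main(void) {
        pid_t helper = spawn_helper();
        int status;

        /* Application code that "reaps its children"... */
        pid_t reaped = wait(&status);   /* ...steals the library's child. */
        printf("reaped %d (the library expected to reap %d)\n",
               (int)reaped, (int)helper);

        /* The library's own waitpid() now fails with ECHILD. */
        if (waitpid(helper, &status, 0) < 0)
            perror("library waitpid");
        return 0;
    }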

Let’s give people a chance to try out something new. Unix is somewhere around 50 years old now. OS research is dying out. Make it easier to run untrusted programs.


I agree with you, but I do not believe in Google. Google provided the worst commercial platform I have ever had the experience of developing for. I have worked with a lot of different platforms in my past - pretty much all consoles, desktop, and fringe ones like smart TVs or the various smartphones released since '99.

Only Google changes the rules of the game on developers. They do not realize that they are making •a platform•, not a product. A stable, comfortable platform that software runs on. I honestly don't give two cents about all their struggles to make "a better OS". An OS is there to launch apps, and making those apps is a miserable experience, where I have full-time people just to keep up with Google 'improvements'. Somehow every other platform is fine. I hardly had to change code written 10 years ago for iOS, for consoles or Windows - not at all! But Google keeps making their own special PhD driven darlings.


My experience with android has been very similar. For example, last week I ran into a bug that seems like a simple omission that was reported back in 2010 (https://issuetracker.google.com/issues/36918490). It took until Feb 12, 2019 to get an official response:

> We're planning to implement a new more powerful network request interception API in a future version, which will be available to L+ devices with a recent enough WebView via AndroidX. Being able to read POST bodies is one of the features we're intending to include. Unfortunately we can't share schedule information about when this may be available, but we are aware of this use case and intend to support it.

Instead of fixing bugs and making incremental improvements, they seem to have a strong propensity for grand rewrites.

I got tired of the constant parade of new Google APIs, libraries, and technologies years ago and have been choosing alternative products whenever possible.


This sounds suspiciously like CADT, as described by JWZ: https://www.jwz.org/doc/cadt.html


Considering how that site handles links from HN, you might want to consider linking it through https://nullrefer.com or the like.

http://nullrefer.com/?https://www.jwz.org/doc/cadt.html


Oh yeah, I forgot about that. Thanks for the link!


Haha, so this is what happened? I had a big WTF moment.


Apparently the author of jwz.org doesn't like HN. This is him, by the way:

https://web.archive.org/web/20170125134508/http://sleepmode....


I had no idea he (JWZ) hadn't gone to College. Very impressive.


> But Google keeps making their own special PhD driven darlings.

I'm really sorry that you have this feeling, but this has nothing to do with PhDs. It's impossible to get promoted by bug fixes inside Google, and people are promotion driven.

There have been many people who tried hard to fix a lot of bugs, but they usually burn out due to the lack of recognition inside Google.


It really, really shows. That and documentation. And some periphery work related to larger efforts, especially on any platform that's not the Web. It's the only explanation for how an organization like that can produce so much broken and half-assed software—bad incentives, and probably a serious middle management problem, in that they don't have leverage or motivation to make ICs do "boring" and follow-through work.

Maybe it's working for them from a bottom-line perspective, but it's made their brand as an engineering company clownish.


The key question is the bottom line, as you wrote. Management is really hard to scale, but right now innovation is a better predictor of growth than bug fixes, so I think it's not even clear how to improve the promotion process without hurting the company's growth.

I think Google is too big at this point already, but that's an orthogonal question.


A non-"rockstar" hiring process and parallel job track (with potential for cross-over) so they can get in people who are happy to do "boring" work on an interesting product might do it. Probably can't be in the Valley or anywhere else ultra-high COL. I think part of the problem is they (seem to?) only staff folks who both can and are inclined to leave quickly if they don't get to do the fun stuff.

Again, though, may not make sense for them from a $$$ perspective. Jank and rough edges galore may be something they're willing to live with.

[EDIT]

How this works in my head:

Manager: Could you take a look at the Android developer documentation? Some of it's badly outdated and have you actually tried using it? A lot of the advice is... kinda bad. Also maybe look at a few of these UI components we had the Summer interns make, they've got bizarre implementations and are difficult and inconsistent to customize. Oh and god have you looked at our issue tracker? Has anyone, actually? Like, ever?

Developer: Hey so have you heard of this place called Amazon?

Manager: Uh I mean how about we start a fourth greenfield instant messaging product instead?

Developer: That's better. Amazon? Who's Amazon?


Actually there is little thinking about "Amazon" or other competitor: internal mobility is easy. So replace "Amazon" by "this other team is doing a cool new project", and you can get transferred in a matter of weeks, most of the time without going through a heavy interview process.

And all of this isn't only motivated by "not doing boring tasks", but also because this is what drives promotions / career advancement. So "launch a feature/project, get rewarded, and switch teams" seems like a trend, and it does not encourage thinking about long-term maintenance, etc. (It isn't universal either, but it is a bit of a trend.)


Note that while it is quite known and admitted at Google, is there a company where it is different? People are complaining about Radar tickets not being fixed while new (half-baked) features keep shipping on iOS as well, and it is somehow for the same reason inside Apple.

Working on improving the quality of the software is not always easy to measure in terms of impact, and it is hard to get recognition for it in the same way as shipping a new feature.

Where Google differs, maybe, is that its promotion system is well codified and rewards the "difficulty" of the deliverable. This is skewing the balance to reward adding more complexity in the system for the sake of solving it (you create difficult problems artificially, they aren't intrinsic to the nature of the user needs you should solve).


Yes, the typical boring enterprise job.

There are teams whose only role is to work down those tickets, one after the other.


> It's impossible to get promoted by bug fixes inside Google

This is why people job-hop for meaningful career advancement. Much easier than dealing w/ the broken promotions processes these companies employ.


The goal of job hopping is to get your salary to match your market value (or go even higher, as job hoppers are great at salary optimization); promotion processes have problems everywhere.


Exactly. An anecdote (not my own) https://mtlynch.io/why-i-quit-google/


My thoughts exactly. I have two apps out on Google Play. They are full featured, stable, and people still like them. I just want them to continue to work. But they don't. I need to constantly take time away from developing new apps to fix my newly broken old apps. How many times, Google, are you going to try to "fix" the problem of apps draining battery in the background? It never seems to end.

On the flip side, my first programs that I wrote for myself in the 90's still run. Yeah, I know that DOS is emulated now, but MS made them still work, and I didn't even have to re-compile.

I have this conspiracy theory that Google is purposefully trying to make things difficult to get rid of developers that don't have a lot of resources.


I have the same problem.

I think Hanlon’s Razor applies -- this isn’t intentional, it’s just unfortunate mistakes.

I think the main driving factors are:

- Many Android APIs were badly designed from the start;

- Android was not designed for easy OS updates, so many users have old OSes;

- Google likes Apple's approach of aggressively deprecating old APIs, and tries to do the same. But unlike Apple, they still have to deal with old OS versions, and their fundamental OS design is not as sound. The end result is what we see, messy and buggy APIs.


>> I have this conspiracy theory that Google is purposefully trying to make things difficult to get rid of developers that don't have a lot of resources.

I agree and Apple does this on iOS as well. It's a way of clearing out old apps.


Same


> I agree with you, but I do not believe in Google. Google provided the worst commercial platform I have ever had the experience of developing for. I have worked with a lot of different platforms in my past - pretty much all consoles, desktop, and fringe ones like smart TVs or the various smartphones released since '99.

I really find it hard to believe that Android is worse than the proprietary, undocumented, Windows-only toolchains of Symbian, Tizen and a bunch of other embedded crap, not to mention the horrors of the PS3 toolchain at the beginning.

I think you might be overdramatizing this a bit.


Please give me back Symbian C++ versus the experience of using the Android NDK.

And if you prefer a more modern example, Android is indeed worse to use than UWP or iOS.

When they ship stable releases, they are actually betas, and updated documentation is scattered around Medium and G+ posts, alongside Google IO and DevBytes videos; why bother updating the official documentation?

After 10 years, the NDK still feels like a 20% job from a team that is forced to accept that Android should provide a bit more than just 100% Java.


> Please give me back Symbian C++ versus the experience of using the Android NDK.

Please no, Symbian C++ was the worst development environment I had the displeasure to touch. I'd rather hand-write every Java binding for every single API call manually in the NDK than even consider looking at Symbian C++ again.

(Relevant experience from Nokia's Series 60 around the time of the 6600. I don't know if Symbian improved after that - I lost all will to bother with the platform - but considering you still had to implement exceptions via macros by hand because of technical decisions made two decades earlier, despite the platform not being backwards compatible and thus able to fix said decisions, I do not expect that things improved.)


It did improve towards the end; the last iteration with Carbide (the 2nd Eclipse attempt), Qt and PIPS was much better than using the NDK.


And now they are shutting down G+, so all those posts and pieces of missing documentation will simply disappear! (Unless Archive.org or somebody else saved a copy.)

Oh, the irony!


I started my first Android app recently, and decided to go with their "Jetpack" stack recommendations. I wasted hours trying to get Dagger 2 injection working, only to discover I had to write tons of glue code which obviated the purpose of the DI framework in the first place! I don't want to worry about transitive dependencies over an object graph and writing ViewModelProviderFactories (actual name) and persisting my data with Room.... I just want widgets, logic and layout.

Next I tried GKE and it was an absolute joy. I had a simple web app serving traffic an hour in. Probably because unlike Android, they have to make people like GKE to sell it.


Why are you bringing in Dagger 2, a non-platform separate dependency injection library here? :)

Do you think a proprietary Windows C++ compiler for an embedded platform will make your DI uses easier? :)


The Android Jetpack guidelines strongly recommend Dagger 2, and it's a Google product so I thought I should comply.


That's very surprising that you mention iOS as being stable. To me it's the worst offender. Most of the apps I worked on needed to be updated for every single iOS release because of deprecated APIs.


> That's very surprising that you mention iOS as being stable. To me it's the worst offender. Most of the apps I worked on needed to be updated for every single iOS release because of deprecated APIs.

As a user, I concur with this. I can install Android apps created in 2012 that still work in Android 9.

In iOS, I remember that each major update made half of my apps broken (in some cases, completely broken).


That will change with the new target requirements for Play Store.


People say this a lot, but I think their behavior makes more sense when you view the end user as the product. Who cares about API stability, the advertisers are getting exactly what they want.


It will increase stability in a very important way because they apparently want stability of drivers, which Linux does not offer.


> they apparently want stability of drivers

Strangely, the only Android drivers that can be trivially ported between Android versions are those "unstable" bits in the Linux kernel.

Take any postmarket smartphone firmware — if something does not work after porting to a new Android version, you can bet that it is a proprietary userspace blob. Who develops the HAL for those blobs? — Google. Who controls its API? — Google.

What "stability of drivers" are we talking about here? Certainly not the kind, helpful to users.


> Strangely, the only Android drivers that can be trivially ported between Android versions are those "unstable" bits in the Linux kernel.

That's only because the Linux kernel is actively hostile to the idea of maintaining a stable driver interface. Windows 7 drivers from 10 years ago still work in Windows 10.


ARM and the SoC Vendors don't like open sourcing their drivers. They prefer to just dump binary blobs and stop support after 2 years. The users couldn't care less because they buy a new phone anyway. (except me)


If the OS and APIs were stable, nobody would ever need to update a driver unless they wanted a bug fix.


Even Windows has a hard time keeping its OS APIs stable. Drivers written for Vista didn't necessarily work on Windows 7 and so on. This is one of the reasons why Linux highly encourages open sourcing drivers, so that the driver code can be updated when an API needs to change.


> Drivers written for Vista didn't necessarily work on Windows 7

This is patently, provably FALSE[1]

MS goes out of their way NOT to break APIs. My ancient ATI netbook can run Windows Vista graphics drivers on Win 10. MS only breaks the driver API when massive kernel/underlying API changes demand it, e.g. 98->NT, XP->Vista.

Now contrast that with Linux. Linux kernel devs are openly hostile to binary blob drivers, so they make no attempt to preserve ABI stability. I've seen this happen multiple times with ATI binary drivers on GNU/Linux and when I was running CyanogenMod on my phone.

[1] https://www.techadvisor.co.uk/how-to/windows/how-get-drivers...


My understanding is that this is something that a microkernel design ought to be able to improve upon...


Forget about believing in Google, try at least for consistency within a single paragraph, as in:
> They do not realize that they are making •a platform•, not a product
and then:
> An OS is there to launch apps

And I still can't stop laughing after reading:
> Somehow every other platform is fine

To paraphrase JWZ's CADT article linked below: "writing thoughtful critique" is not fun, "making useful suggestions to improve something" is not fun, but "writing a snarky, resentful rant on HN" is fun.

PS: most developers on Android are not game developers; maybe it's time to think about seeing the world from other people's point of view?


Android wasn't created by Google. They acquired it, and then had to deal with the legacy cruft that was already a part of it by then. The "platform-level" changes they've made since the acquisition—e.g. replacing Dalvik with ART—have mostly been sound engineering choices.

> They do not realize that they are making •a platform•, not a product.

Consider: maybe Fuchsia isn't a platform?

ChromeOS certainly isn't a platform: developers don't develop "for ChromeOS." App developers target the WebExtension ABI (or, more recently, the Android ABI), and ChromeOS just runs their apps using mysterious virtualization magic that doesn't matter to the developer. You don't target the OS; you just target a stable ABI. (Other examples of this: the Linux kernel ABI used by Docker on {Linux, macOS, Windows}; the Linux kernel ABI used by Illumos branded zones; the Linux userland ABI used by the Steam Runtime.) Essentially, you can think of ChromeOS not as an OS in the traditional sense, but as a hypervisor. The libs your apps depend on aren't part of the OS; they're part of your ABI's zonal environment, which is stabilized separately from the OS.

I would expect that Fuchsia is doing the same: being an OS but not a platform. The only people who will have to directly target Fuchsia are Google engineers.


Just to summarize my comment: I agree with you. I realize that eventually* having POSIX/Graphics/... APIs as a library would lead to a better, more stable, less driver-dependent platform. But I don't believe Google is interested in making a better platform for developers, judging by my almost decade of experience with ever-changing Android.

*Eventually carries a lot of weight here. If I don't have my APIs available at launch of FucOS, or at least a clear roadmap to them within half a year, I would have to start my own projects to work around that "eventually it will work" promise. Because "eventually" is not good enough.


I've had the same experience, but I guess it's not cool to make comments based on your own experience on hackernews (and hence the downvotes).


>> I've had the same experience, but I guess it's not cool to make comments based on your own experience on hackernews (and hence the downvotes).

Some people here think your own experience is just an anecdote. They're wrong, of course. One's own experience is a data point (or even a collection of them).


As the saying goes, "the plural of anecdote is not data". Unless you are collecting the anecdotes in a methodical way, they don't tell you anything, since you can't know what biases your data is affected by.


I cured my X by doing Y.

Is that an anecdote or a data point? What if 150 people tell you the same thing? Having done exactly that, I don't really care how anyone else classifies it, it is my reality.


It's an anecdote. It doesn't matter if you hear it 150 times, obviously. If you hear 150 people tell you that vaccines gave their child autism, is that your reality?


> Running one OpenGL library on top of N different Vulkan implementations is miles better than running on top of N different OpenGL implementations.

The issue with running OpenGL on top of Vulkan is that Vulkan's API exposes a very rigid and static view of the GPU and OpenGL is the exact opposite, allowing arbitrary state changes at any time. While actual GPUs are not as dynamic as OpenGL, they are also not as static as Vulkan so by implementing it on top of Vulkan you are forcing state changes that would not be necessary for the underlying GPU.

Also, OpenGL being a higher-level API provides more opportunities for optimization than Vulkan (a very common example would be display lists, which thanks to their immutability and opaqueness can have a large number of optimizations applied to them - something that both Nvidia and AMD take advantage of, especially Nvidia, which performs very aggressive optimizations on them).
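To make the mismatch concrete, here is a rough fragment (my own illustration; `cmd`, `drawMesh`, the pipeline handles and `vertexCount` are placeholders, not real API): in GL, blending is a couple of calls you can make between any two draws, while in Vulkan the blend state is baked into an immutable VkPipeline, so an OpenGL-on-Vulkan layer has to synthesize and swap whole pipelines behind your back.

    /* OpenGL: blend state can change freely between draws. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawMesh();              /* placeholder helper */
    glDisable(GL_BLEND);
    drawMesh();

    /* Vulkan: blend state is part of an immutable pipeline object. */
    VkPipelineColorBlendAttachmentState blend = {
        .blendEnable         = VK_TRUE,
        .srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA,
        .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
        .colorBlendOp        = VK_BLEND_OP_ADD,
        .colorWriteMask      = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                               VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
    };
    /* `blend` feeds into VkGraphicsPipelineCreateInfo at pipeline-creation time;
       toggling blending later means binding a *different*, pre-built pipeline. */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, opaquePipeline);
    vkCmdDraw(cmd, vertexCount, 1, 0, 0);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, blendedPipeline);
    vkCmdDraw(cmd, vertexCount, 1, 0, 0);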


Google will be doing it for Android Q, most likely because of all the devs that still can't be bothered to wrestle with Vulkan.


I'm not saying it isn't possible, ANGLE is a thing after all, but possible doesn't mean optimal.

Of course, given enough time, faster hardware will solve this.


A well-written OpenGL "emulation" on top of a Vulkan driver is most likely faster than a badly maintained native GL driver, and you'll only have to worry about the bugs present in the one GL implementation you're linking against, not a variety of bugs across different drivers and driver versions.


Yes, the best scenario of the first case is better than the worst scenario of the second case, but personally I'm more interested in the best scenario of both cases - especially since we already have working OpenGL implementations that take advantage of how OpenGL is specified. I'd rather see a push to improve subpar implementations so they reach parity with the good implementations than throw all implementations out of the window because of the bad ones.


>Sandboxes are unportable and strapped-on

I contribute to an open source project called Torsocks, which is part of the Tor Project, and this comment really resonated with me. Creating a syscall sandbox that works across even a few, generally similar POSIX-compliant OS's is ridiculous.

FreeBSD and MacOS for example have a very similar system interface. But sandboxing on FreeBSD is via pledge, and MacOS uses the App Sandbox. Linux uses seccomp.
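Just to show how far apart these really are, here's a rough Linux-only sketch (mine, not Torsocks code; the architecture check a real filter needs is omitted for brevity) of a seccomp-bpf filter that makes execve() fatal - nothing remotely like this exists on the BSDs or macOS, which have their own, completely different mechanisms:

    #include <stdio.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void) {
        struct sock_filter filter[] = {
            /* Load the syscall number from the seccomp data. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
            /* If it's execve, fall through to KILL; otherwise skip to ALLOW. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len    = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);             /* required first */
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);  /* load the filter */

        printf("sandboxed: execve() is now fatal in this process\n");
        return 0;
    }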

It's a mess.


FreeBSD uses Capsicum, OpenBSD uses pledge.


> Right now there are too many OpenGL implementations and the differences matter.

Vulkan is already going down the same path in spite of its youth.

https://vulkan.gpuinfo.org/listextensions.php


The extensions were always going to be in Vulkan. They embraced them even more than OpenGL, because you're not going to get around it. Hardware is just plain different from other hardware and PMs want a "value add". What Vulkan does differently is that you have to explicitly enable extensions, so you can't unknowingly be relying on an extension when trying to write cross platform code.
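In practice that looks something like this rough fragment (my own sketch; assumes a VkPhysicalDevice `phys` was picked earlier, and uses VK_EXT_memory_budget purely as an example of an optional extension):

    #include <stdlib.h>
    #include <string.h>
    #include <vulkan/vulkan.h>

    /* Ask the driver what it actually offers... */
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(phys, NULL, &count, NULL);
    VkExtensionProperties *props = malloc(count * sizeof *props);
    vkEnumerateDeviceExtensionProperties(phys, NULL, &count, props);

    int has_mem_budget = 0;
    for (uint32_t i = 0; i < count; i++)
        if (strcmp(props[i].extensionName, "VK_EXT_memory_budget") == 0)
            has_mem_budget = 1;

    /* ...and opt in explicitly, only to what is present. */
    const char *enabled[2];
    uint32_t enabled_count = 0;
    enabled[enabled_count++] = VK_KHR_SWAPCHAIN_EXTENSION_NAME;
    if (has_mem_budget)
        enabled[enabled_count++] = "VK_EXT_memory_budget";

    VkDeviceCreateInfo info = {
        .sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .enabledExtensionCount   = enabled_count,
        .ppEnabledExtensionNames = enabled,
        /* queue create info etc. omitted */
    };
    /* vkCreateDevice(phys, &info, NULL, &device); */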

This is all a good thing.


Yep, polluting the code with multiple execution paths is a good thing.


You're... not required to use the extensions right? How else are you supposed to provide the _option_ to use _optional_ features? Feels like there's gonna be a branch in there somewhere. Lowest common denominator APIs are a non-starter for high performance graphics work.


Yep, hence, given the size of a game engine, it is hardly any different to deal with multiple flavours of OpenGL/Vulkan or to just use the best API on each platform.

Ergo middleware is the new cross-platform API.


Which code?


Game engine code, where testing for each extension and reacting accordingly leads to several if () { } else { } blocks, or a vendor-agnostic interface layer, making the total development cost hardly different from supporting multiple 3D API flavours.


The other option is waiting until all of the vendors have the ability that the extension provides and it has made its way into the standard.

That option hasn't been taken away from you. And unlike OpenGL, you can't unknowingly be relying on an extension since they're opt-in. So what's the problem again?


Nothing, just false advertising about the complexity improvement over OpenGL.

Anyway, middleware has won the battle of 3D graphics; what goes on at the bottom layer is largely irrelevant to most devs.


It's not false advertising. Swapping the extension model to explicit opt-in is one of many pieces designed to help you manage complexity in a non trivial project for the reasons I've stated.


So far the amount of boilerplate to handle extension management and code paths in Vulkan samples shows otherwise.


Or, if you care that much, you can not have any of that and just not enable any extensions. Easy peasy


Or just use a middleware engine and profit from best 3D API provided by each platform owner, much better.


> POSIX can be a library too

I have the feeling that POSIX is an API that makes many assumptions about how things are implemented internally in the kernel. So POSIX as a library is often limited or inefficient.


I doubt anybody really writes 'POSIX' anymore, if that helps.

What they do, more typically, is 'write Linux software'. If they care about POSIX they will try to ignore features they don't think were mentioned in some POSIX manual from 20 years ago. It's all very hand-wavy.

This is why you see the major operating systems advertise Linux software compatibility rather than paying for POSIX certifications. AIX, Solaris, Windows, etc. Sure, it's not an official standard, but it's going to be pretty well defined because you can just model your compatibility on WWLD.

If push comes to shove then Fuchsia could just add some variation of 'usermode linux' as one of those userspace kernel services.


Actually, most system software today is POSIX, with the occasional #ifdef to leverage Linux-specific syscalls.
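A minimal sketch of that pattern (my own illustration; wait_readable is a hypothetical helper): plain POSIX poll() everywhere, with one #ifdef to use the Linux-only epoll API where it exists.

    #include <poll.h>
    #include <unistd.h>
    #ifdef __linux__
    #include <sys/epoll.h>
    #endif

    /* Block until fd is readable (or timeout), using the platform's best interface. */
    int wait_readable(int fd, int timeout_ms) {
    #ifdef __linux__
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
        int n = epoll_wait(ep, &ev, 1, timeout_ms);
        close(ep);
        return n;
    #else
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        return poll(&pfd, 1, timeout_ms);
    #endif
    }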


"OS research is dying out" - this

I was learning about kernel dev around the same time I was trying to understand how conditional and speculative execution work, as a result of really trying to understand every step that happens when a system call hands something to the kernel, the kernel does something with it, and hands it back to userspace.

I kept asking, but a lot of supposed Linux nerds I spoke with couldn't tell me how the kernel and user space truly hand off data or negotiate memory with each other, leaving me drawing out trap-handling routines on a posterboard and penciling in gdb disassemblies of memory for system call source code, feeling dumb for not knowing. Meanwhile we all found out about Spectre and Meltdown, and that really there is no secure handoff without significant performance degradation and/or increased sandboxing for things like the browser, etc.
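For anyone else going down that rabbit hole, the handoff point itself is small enough to poke at directly (a minimal sketch of my own): the same write expressed three ways, each with a little less libc wrapping around the eventual trap into the kernel.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        printf("via stdio\n");                             /* buffered libc */
        write(STDOUT_FILENO, "via write()\n", 12);         /* thin libc wrapper */
        syscall(SYS_write, STDOUT_FILENO, "via syscall()\n", 14);  /* raw entry */
        return 0;
    }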

And of course, what is the root of the issue here? The root of the issue is that Linux is too deeply integrated into monopolized hardware architectures, which is perhaps why AMD's stock price skyrocketed the day Spectre and Meltdown came out, when we found out that the only near-term mitigation for this legendary security vuln would cost roughly a 30% reduction in performance across the board on Intel, as opposed to much less on AMD, given that AMD's architecture was less prone to the vulnerabilities around speculative execution.

The more I learned about these things, plus issues with other basic functions like wait() or strcpy(), or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems. The networking stack in Fuchsia is written in Go, for example. While I don't know much about Go, can it be worse than C, which leaves it up to almost every developer to manage their own memory, with all the performance and security implications that has?

Magenta (now Zircon) is designed to be modular enough to withstand the coming waves of hardware architectural evolution. We are approaching 5nm development (the theoretical limit of how small a transistor gate can be before we can no longer control switching due to quantum interactions), and this is not far off - Intel already has 10nm in production, and probably others do now as well (it's been a bit since I checked) - and then on to quantum computing:

Because quantum computing (this is debatable and I know the least about this) is not ready for mass production, particularly at the mobile scale, my conjecture is that once we reach the theoretical limit of how small a transistor can be, designs will turn to optimizing for performance in every other way we can, without relying on powerful processors to accommodate memory bloat or endless dependencies (yes, I also pray this requires JavaScript modules to get better or die out, but that's a long-range dream).

Meanwhile AMD gains ground post Spectre and Meltdown. So, in summary, there are a lot of other options to consider than just optimizing for POSIX forever.

Therefore, I am glad there is a push to explore alternatives. I feel as though anyone who thinks it's not potentially beneficial to explore POSIX alternatives does not work with Unix-based systems in any depth on a daily basis. But if someone does, and you think Linux, for example, is the best operating system in the world and can't be improved upon outside of its defining protocols, then I would love to hear from you on this thread. I am not nearly as experienced as most people who work with Linux, but I can say that most I have interacted with view it as a love-hate relationship for many of these very reasons.

You can also see this trend of unhappiness with Linux OS defaults out in the wild, outside of Google.

More and more serious applications are looking to bypass standard userspace application development, either to be more secure or to customize - most often, if not for security, then to optimize performance for the things we used to consider the standard Linux kernel somewhat good at.

Here are a few varied examples I can think of off the top of my head, anecdotally, from trying to solve everyday problems for users with Linux, but I am sure there are many more:

1. Dropbox's Bandaid is a band-aid attempt to customize network scheduling usually handled in kernel space, due to performance issues: https://blogs.dropbox.com/tech/2018/03/meet-bandaid-the-drop...

2. WireGuard is an example of a VPN where communication negotiation is handled more and more in the kernel, because traditional VPN designs have left TLS handoffs in userspace (what is the point of userspace anymore for serious application development when this is the trending security default): https://www.wireguard.com/

3. Sysdig implements eBPF functionality to allow sysadmins and devops engineers to customize and/or secure things in ways we no longer trust or expect the default Linux userspace/kernel-space design to do: https://dig.sysdig.com/c/pf-blog-introducing-sysdig-ebpf


> The more I learned about these things, plus issues with other basic functions like wait() or strcpy(), or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems.

Dig into the worlds of Burroughs B5500 (now Unisys ClearPath), IBM OS/360 (now IBM z), IBM OS/400 (now IBM i), and the now gone Mesa/Cedar, Oberon, Active Oberon, SPIN OS, Topaz OS, Mac OS/Lisa, Singularity, Midori, Inferno, ...


And yet you are still alive and not starving to death. But the banter I see on here is Android video game developers complaining that a move away from Android will be the end of them.

Google is not stupid; they are not going to deprecate Android overnight and replace it with Fuchsia. This operating system has been in the works in the open - you can see the commits on GitHub going back at least two years, I think more - and there will clearly be many iterations of its development to come, with increasing adoption each time as people make money on the platform. Just like Android, which took years before it reached the threshold of 50% use compared to iPhones, and no iPhone video game developers that I know of starved to death trying to adapt to that change. The drama on this thread about API changes is significant, for sure, and I understand that Google removes APIs or suddenly starts charging for them in ways that make small companies close up shop overnight (like Google Maps, for example), but that is not a justification to ignore that objective limitations around Moore's Law and the need for competition in computer hardware are forcing companies with experience in both spaces to reconsider kernel development at a more fundamental level.


Android is being ported to run on top of Fuchsia.


Which is why I'm confused about all of the top-ranking comments complaining that Android will change their APIs for this. Will this require a change for Android app developers if that is the case? Regardless, this seems like a more fundamental layer of improvement.


They are mostly by folks that never did Android development and think they are free to use Linux code as is on the NDK.

Still, it will be a scenario similar to ChromeOS. How many people are buying ChromeOS devices to run Android apps?


Wow that's an interesting list. Might you be able to add some specific points of interest on some of these OS's to start with? Cheers.


Sure,

Burroughs B5500, the first OS written in a high-level systems language (ESPOL, later NEWP), in 1961, 8 years before C came into existence. It already used compiler intrinsics instead of Assembly, and had the concept of unsafe code blocks.

IBM OS/360 famously introduced the concept of containers and, alongside IBM OS/400, also has language environments - think a common VM for multiple languages.

IBM OS/400, originally written in a mix of Assembly and PL/S, uses the concept of a managed runtime with a kernel JIT invoked at installation time, and uses a database as the filesystem.

Mesa/Cedar, a systems language developed at Xerox PARC, offering an IDE-like experience similar to their Smalltalk and Interlisp-D workstations. It uses reference counting with a cycle collector.

Oberon and its descendants, Niklaus Wirth and his team's approach to systems programming at ETHZ, after his 2nd sabbatical year at Xerox PARC.

Mac OS/Lisa, these first versions of Apple OSes were written in Object Pascal, designed in collaboration with Niklaus Wirth, whose extensions were later adopted by Borland for Turbo Pascal 5.5.

Singularity/Midori, the research OSes designed at MSR, largely based on .NET technologies.

Inferno, the actual end of Plan 9, using a managed language for userspace, Limbo.

SPIN OS/Topaz OS - Graphical workstation OSes for distributed computing developed in Modula-3


Thanks this is great! I'm looking forward to digging into the specifics of some these. Cheers.


>> OpenGL should be a library.

But it's not. Where is this OpenGL implementation that runs on Vulkan? I would argue that it should come as fully open source from Khronos group since they are the ones providing both standards. It's fine to create a new thing with a long term vision of what a better world looks like. But people won't follow if the pieces they need today are just a wish.


Besides Zink, there is GL ES support on top of Vulkan through GLOVE[1] as well as Google's own ANGLE[2].

[1]: https://github.com/Think-Silicon/GLOVE [2]: https://github.com/google/angle


Zink is an effort to write an OpenGL layer on top of Vulkan. Previous discussion:

https://news.ycombinator.com/item?id=18356179


>"There are a ton of APIs in POSIX which are straight-up shit, like wait()"

Could you or someone else elaborate on what is so loathsome about the wait() system call?


> Don't change APIs on us - not everyone has an extra million $$$ to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is broken too, but in different ways.

This was what drove me out of my (brief) stint at Android development. Did some hobby development to learn the ropes, spent a lot of time trying to "do it right". It was a bit clunky but alright I guess. Left my project alone for a few months while my day job was busy. Came back to it to find that, in a few months, there had been not one but two generations of deprecated APIs between what I'd written and current 'best practices' and that a bunch of pretty fundamental stuff had been deprecated.

I'm not wasting my life chasing that particular Red Queen.


Yep, Android's best practices tend to last one Google IO.


Fuchsia relies on the Zircon kernel, which last time I checked used Magma as a framework to provide compositing and buffer sharing across the logical split between the application and the system driver, which exist as user-space services.

The fact that graphics drivers exist as user-space services should reduce latency by minimizing the need for capability invocations - the equivalent of a system call, which requires an expensive trap-handling routine in standard Linux (for example).

This is presumably to support an architecture with direct access to the GPU, where the main CPU scheduler doesn't have to schedule a round trip of data from the main CPU over the bus to the GPU and back (compared to standard Linux on standard hardware, for example); the overall design should decrease latency and advance open-source graphics development in a userspace setting.

- No OpenGL support. I guess this is where the world is moving

Vulkan is still built off of OpenCL, which is kernel code designed explicitly for the GPU. While not technically OpenGL, it is architecturally a more direct interface to the hardware, and there are plenty of engines working on Vulkan support. Consider that there are other advantages to using a graphics driver than just OpenGL (like parallel computing and abstracting large data sets into matrices that map nicely onto GPUs), optimize based on that assumption, and it is not as unreasonable as it sounds.

- No POSIX support

"Full POSIX compatibility is not a goal for the Fuchsia project; enough POSIX compatibility is provided via the C library, which is a port of the musl project to Fuchsia. This helps when porting Linux programs over to Fuchsia, but complex programs that assume they are running on Linux will naturally require more effort." - https://lwn.net/Articles/718267/


That sounds like you want to continue using APIs and OS from 1970s, while Fuchsia is deliberately trying to break those old conventions.

OpenGL was never a serious contender for modern games, and there's a good reason why all new 3D APIs are moving to more low-level representations (Vulkan, DX12, Metal) with libraries on top. OpenGL is a horrible implicitly stateful machine which is terrible to multithread and still holds a model of a hard-wired 3D accelerator as its base, which isn't how new graphics cards work. Everything else is bolted on top of that out-of-date idea, which makes it hugely unwieldy for new software.


> That sounds like you want to continue using APIs and OS from 1970s

If possible, I'd prefer to keep using APIs from 1960 instead.


Then perhaps you shouldn't be using an OS from 2020? Demands for POSIX compatibility really seem like cargo culting these days, considering none of the popular platforms will have apps use only POSIX APIs. It's a crutch that holds back API design while still demanding that you use OS-specific syscalls in pretty much every piece of software out there.


>> It's a crutch that holds back API design

When I read that I caught a hint at a problem. API design should not be an ongoing activity. People should design a new API and then we should all use it for a long time. If it's frustrating that POSIX is still used, you might want to consider that stability is the feature that keeps it around.

Perhaps it's time to reread Joel on "fire and motion":

https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

I'm all for this Fuchsia thing, but they claim to have the experience to design something better so do that and let it stand. Regular software updates are actually a sign that you don't know what you're doing.


New APIs make sense but keeping the old ones for backwards compatibility does too.


Sure, but there's no reason you have to do that in the kernel.


Old doesn't have to mean bad. Folks are switching from MongoDB to PostgreSQL, or wish they could if it wasn't too late and expensive. Evaluate on merits, not age.


> No OpenGL support. I guess this is where the world is moving
> No POSIX support. Quite a few game engines rely on it. Oh well, remember when Google cared about developers?

Well, tell that to game developers working on consoles. They don't have access to the same APIs and they made it work just fine. Most of the time you don't make your own engine; you rely on some other engine to support your platform, so it just works.

I've done enough Wii, PSP, PS3 or PS4 development and having different APIs was never a problem.


Console APIs are nice and clean compared to Vulkan though, since they are tailored to the underlying hardware. Vulkan is a weird compromise between a low-level API and covering fairly different GPU architectures.


The point was more about general APIs like threading or file access, anything covered by POSIX.

It is already expected that you'll have to rewrite your GFX backend using the specialized API for each platform, but people don't expect the same for general-use APIs too.


When the goal is a clean slate OS design, we can't get up in arms about compatibility out of the box, can we?


I think the first and only goal should be user experience.

An OS is a tool to start apps, and if they can't make it comfortable to make and support apps, they have failed before they started.

Google's problem is their academic goal to make a nice OS, IMO.


Why do you want to force your idea of what the goal should be upon Google?

If they want to test relatively unexplored waters, let them do so.


>- No POSIX support. Quite a few game engines rely on it. Oh well, remember when Google cared about developers?

This is caring about developers, just that you don't understand it.


It's not, and eventually they will have to find ways to provide POSIX compatibility; it can happen through a separate compatibility layer or a library. You don't want to break compatibility with millions of lines of already-written code if you want widespread adoption. No one is going to rewrite everything from scratch just because Google says so.


Which "millions of lines of code" are compatible with POSIX without having any Linux or macOS specific code? Or being compatible with Windows for that matter, which is POSIX compliant just in the name?

I think you're hugely overstating the importance of POSIX, not to mention downplaying the fact that POSIX really isn't a sufficient condition for not having to do any code porting.


> Which "millions of lines of code" are compatible with POSIX without having any Linux or macOS specific code?

Those layers would have to be reimplemented to retain compatibility.

> I think you're hugely overstating the importance of POSIX.

People generally understate the importance of POSIX just because it's old. It's impossible to get APIs perfect, and you are throwing away decades of work in the name of getting APIs "right". No one is going to rewrite everything from scratch just because the new APIs look shiny. For reference, read about the Unix wars[1].

[1] https://en.wikipedia.org/wiki/Unix_wars


If Fuchsia is wrong on this point, they can add more POSIX compatibility later.


Someone correct me if I'm wrong, but this is probably lower level than what you're speaking of. I doubt they'll be revamping all those "Android sound systems" you speak of; rather, they'll rewire them onto this new kernel.

It wouldn't make sense for them to completely throw away all Android apps ever and rewrite all the APIs from scratch.


Google is making Vulkan a required API in Android Q, and they are updating ANGLE to also run on top of Vulkan.

As for Fuchsia, given that the team comes from a different background, and given the OS APIs, I am not expecting the straitjacket experience of the NDK.


I tried to start Android development a couple of times and it was impossible for me.

The last time, I created a new project from the Android Studio built-in template and it was broken from the start - the UI editor was throwing an error, googling didn't help, and I didn't have enough time to debug it, so I gave up again. I expected the built-in template to always work.

I hate Java, and I always hated that tree of subdirectories. What's worse, there are some Maven repos and some proprietary Gradle binary which is randomly downloaded to my PC like malware. Why is that? It is very difficult to configure the project: so many XMLs, the documentation is not very good, and the build process takes a long time. Android Studio is slow and takes as much RAM as any Java-based IDE.

Rename a project? Near impossible. You have to be a guru to know all the places where to change it, rename directories, etc.

In comparison, it is very straightforward to develop for iOS. It is possible to integrate C/C++ easily. Normal gdb/lldb. The Cocoa UI is nice, with nice frameworks and reasonable documentation. The only problems are the closed platform, the requirement to pay for the developer program, and having to sign all binaries.


It's strange how my perception of Google and its software has changed over the years.

When Android was first released I put it in the same category as Open Firmware & Debian. Now when I see Fuchsia, I reflexively categorize it as pseudo-open-source software, where you can see the code but cannot do anything useful in the long run, because it's designed to be downloadable but unusable for casual/research purposes.

Maybe "we put out the source for you to see and start to limit its free use when it matures enough" model gets tiring for me.


It's interesting to see that when Android was released, it was fully Open Source.

But over the years they moved all the apps to closed-source, one after the other, even some core elements (play services).

Pretty soon it will be no more open source than iOS (the kernel is open source, yay!)


All of the framework is still open source.


It was more than clear that FOSS for all wasn't sustainable long term, which is why we are slowly getting back to the freeware days.

Google just used good marketing to convince MS-haters that they were different.


> It was more than clear that FOSS for all wasn't sustainable long term, which is why we are slowly getting back to the freeware days.

Someone should tell RedHat. And Debian. And, y'know, Linux. I'm sure Linus will be devastated to hear that the last couple of decades have just been a fluke.


Indeed, that is why Red Hat gave up on selling distributions for desktop GNU/Linux, and is now part of IBM, selling support contracts to enterprises.

How many businesses are making money selling Debian-based software?

And how many business critical software stacks are still based on GPL, without any kind of dual license of some sort?


>> How many businesses are making money selling Debian-based software?

That is to miss the point. Nobody gives a shit how much money can be made selling Debian. The users care about how much they can get done by using Debian. Software has essentially zero replication cost, so those who try to make a lot of money from it are rent seekers. The challenge then is how to create software in the first place. Those that charge for software and provide continued high-quality development will be OK, while those that buy Red Hat just to make money will not (long term).


The people that contribute and then decide to pursue other endeavours that better serve them and their families do give a shit.


Red Hat always sold support for their enterprise level software. A RH representative told me that they "get community software, get it to enterprise level, and sell its support. The code is always open". They also sold their technologies as services. They've never sold code or distributions per se.

Their business model allowed them to develop KVM and OpenShift.

Debian gave birth to the biggest number of derivative distros. They're the biggest and most mature base. Even Ubuntu is a Debian derivative.

While there is no enterprise software that I know of (my domain doesn't overlap with "enterprise software"), I'm aware that some of the heavy hitters in the software world have their killer features based on open source software or algorithms (Photoshop's content-aware fill comes to mind, which is based on GIMP's Resynthesizer).


> They've never sold code or distributions per se.

I guess those Red Hat boxes in my parents' basement are a figment of my imagination.

> Debian gave birth to the biggest number of derivative distros. They're the biggest and most mature base. Even Ubuntu is a Debian derivative.

Great, and how much money do they earn from it?


> I guess those Red Hat boxes in my parents' basement are a figment of my imagination.

No, they are not, but Linux distros were never small (except Slackware & Gentoo), so many of us were unable to download them (with 56kbps modems). We bought the complete CD sets, which were essentially repositories on CDs. I had SUSE 6.0 box set.

A generation abused university networks to download ISO files and write them to discs. That's why most universities had local repositories for a considerable amount of time. Many still do. The place I work at hosts country-wide official repositories of most Linux distributions.

> Great, and how much money do they earn from it?

Why not ask the non-profits which govern the money and assets of Debian and the other projects? The page is here [0]. It looks like Google is also a donor to SPI, which hosts Debian, PostgreSQL, Arduino/ArduPilot, Arch, etc.

BTW, most Debian and open source developers don't do it for the money. They build the software for fun and use the skills from their day jobs to do the work. Also, there are many developers paid to work on open source code and implement the required features. Glibc's maintainer is (was?) an RH employee, for example. Intel pays developers to implement its open source network and graphics drivers. Both of AMD's open and closed source driver teams (yes, there are two isolated teams) are on AMD's payroll.

Open Source software is sustainable. You pay for knowledge and the effort, not the code.

[0]: https://www.debian.org/donations


The growth in dual-licensing adoption, and the FOSDEM and Linux Summit talks about how to keep FOSS safe from EEE, seem to prove otherwise regarding the GPL's long-term survival.


Maybe, on a smaller scale, we should call the OpenVPN, PuTTY, FileZilla, Zip and Tar users too.

They will be upset. We need to be prepared. /s


The desktop/UI that end users use seems, for reasons I don't understand, to work better if it's maintained by a large commercial enterprise. That's why Windows works, Android works, Ubuntu worked a bit, and obviously macOS and iOS.

I see Fuchsia as a good compromise here; it's going to have more free and open layers than macOS has on top of Darwin. It's going to avoid a lot of the security issues Windows had, and even the Debian/Ubuntu OSes, and can probably be closer to macOS/iOS in that way. It's certainly going to be a lot more open than Windows.

However, in the end, it will also allow Google, hardware makers and graphics library makers to have more proprietary parts where they control the vision around the end-user experience. It can certainly be the Windows alternative we need, and hopefully Microsoft will realise that open sourcing Windows and allowing forks will be the best way for the OS to survive.


How is it unusable for research purposes? It seems it can be compiled and run by anyone.


For how long?

Fuchsia is in its infancy now. A little prototype. With its stable API and other features, it smells like a "hardware-vendor-friendly" (or vendor-first, or vendor-oriented) platform.

It may replace Android, maybe Linux in Google's own data centers. Then we'll have another platform like OpenSolaris. Open, but not. The primary reasons I like Linux are its transparency, openness, the freedoms it provides, and the resulting flexibility, and I'd hate to see these freedoms eradicated by a so-called open platform controlled by another mega-corporation called Google.


The biggest problem with OpenSolaris was proprietary blobs required to build it. Do you know if Fuchsia has any of those?

If it doesn't, then just fork it.


What I'm trying to say is, it'll probably end up like AOSP. Downloadable, buildable, but useless without vendor or closed source add-ons/drivers.

Forking is always easy, maintaining is hard. Also, we're not sure whether things will be made harder by Google when things get mature.


I think useless is going a bit far; isn't Fire OS a fully maintained fork? I believe there are some maintained forks in China as well, but I know less about those.


Yeah, useless is a bit too far to generalize, you're right. But, in the end, it won't be usable by the general public or software hackers (in the original sense of the word).

I don't have any Android devices, and I don't follow its forks, derivatives or general state too closely. If you have the money and the means, you can always make a fork of an open source software and make it work for you.

To be more accurate, maybe I can say it won't be very useful for the general public, but big companies can do whatever they want since they have the money and the man-hours.


Fire OS is a fully maintained fork, but it still requires proprietary blobs to be functional. As far as I know, there are no commercially-supported Android forks that have open-source drivers outside of the kernel.


A clean cut between a free core and the adtech layer should be a welcome property. Think Chrome/Chromium, not Android/AOSP. For AOSP, our perception is dominated by all the trouble that derives from hardware vendors forcing binaries into a kernel that actively rejects them.


I don't understand. Having a stable API is good for researchers/amateurs as well, isn't it?


API and ABI stability look nice on paper, and are really useful in some circumstances; however, they create two problems, so Linus opposes them (and I take Linus' side).

First of all, it allows hardware vendors to be lazy. If a driver compiles, unless there's a grave bug in the driver, the vendor won't do anything, because it doesn't hurt their bottom line much (Nvidia has some serious bugs in their closed source driver, and they play deaf, but that's another issue). When the API is not stable, something will inevitably change/improve, and the driver will break. A non-compiling driver is guaranteed to wake the vendor up, and maybe they'll fix some of the smaller bugs while they are at it.

Second, it hampers kernel development. You have a killer feature you want to implement, and it needs new API calls to be a first-class implementation? No, you can't do that because the API is stable, you can't change it. You need to work around it, tuck it behind more generic calls, add overheads, etc.

Fuchsia's API stability means that a vendor can write the driver once and send it to integrators once. Unless someone complains enough, they won't need to touch the code. When the ABI is stable, you can just distribute the .o files and forget all about it. The driver is working somehow, the integrator is happy, the vendor is making money. This is the dream of commercial software. They can implement once, and deprecate once and for all. Hooray!


> First of all, it allows hardware vendors to be lazy.

This completely ignores the fact that most hardware vendors have no financial incentive to make driver updates for hardware that you cannot replace because it's a chip embedded in your phone, and they get to choose between forcing you to pay them more money for a new one _or_ spending money for no reason to update the drivers for old hardware. Breaking compatibility very demonstrably does not get mobile device vendors to ship updates, but it _does_ fuck the device owners. Let the driver that works just fine keep working just fine! Don't make a billion people suffer on old kernels because you think that a hardware vendor should do extra work to support a chip that they already made their money from, just because you decided to break something that was working just fine before you made your change.


Since the lines I want to quote are too long, I'll just snip the core parts.

> ...no choice to replace because it's a chip embedded in your phone...

My comment wasn't targeted only at embedded usage scenarios. There are professional and consumer devices (hand-vein readers, printers from certain manufacturers, high-end and high-speed scanners) which advertise Linux support on their boxes and in their documentation, but only provide binaries and modules for some archaic Linux distributions. This situation hurts both sides of the equation. Some of these devices are very nice, expensive devices I'd (and many would) buy, but their non-existent Linux support keeps users away from them. Even a change in their attitude probably wouldn't change users' stance in the long run, due to trust issues.

The problems are not only at kernel level either. Some of these devices run translation layer binaries in userspace, and these binaries are written without following the standard coding practices in Linux. As a result, an ordinary library update breaks the binary for good. Again, the manufacturer does nothing because it has no incentive to do so, and my expensive device becomes a paperweight.

> Let the driver that works just fine keep working just fine!

From this perspective, shall we stop all development on the Linux kernel and userspace so that everything keeps working? Then why is Microsoft constantly changing driver models ever so slightly, and why is every manufacturer updating the drivers for the cards and chips they've already developed and embedded on my motherboard, NIC, sound card, etc.?

> ...just because you decided to break something that was working just fine before you made your change.

While the Linux kernel API is officially unstable, there's a rule in its development: if userspace breaks after a kernel patch, the problem is in the patch. Kernel changes shall not break or change userspace behavior. Hence, nothing is broken on purpose in the kernel. Evolution makes things change, and we shouldn't stop this change if we want Linux to be a cutting-edge operating system and platform.


> My comment wasn't targeted at the embedded usage scenarios only.

I know, but I think that the Linux kernel developers should update their views on the impact of ignoring the embedded scenario.

> Coming from this perspective shall we stop all development on Linux kernel and userspace, so everything works?

No. Preserve the old methods so that things that used to work keep working. Microsoft does this. Windows userspace apps from 20 years ago still work in Windows 10 today despite the system having advanced significantly since then. Windows 7 hardware drivers from 10 years ago still work in Windows 10 too. New functionality doesn't need to break old functionality.

> Why Microsoft is constantly changing driver models ever so slightly

They mostly don't. See above.

> every manufacturer is updating the drivers about the cards and chips they've already developed and embedded on my motherboard, NIC, sound card, etc.?

Bug fixes. Almost never interface changes.

> While Linux kernel API is officially unstable, there's a rule in its development: if userspace breaks after a kernel patch, the problem is in the patch.

You just said userspace. What is userspace? If I can't update my kernel because it means that my camera will stop working because I only have the one binary blob for it, then the kernel has broken _my_ userspace. But that scenario is explicitly ignored because the userspace rule doesn't apply to drivers.

The Linux kernel pushes GPL virality further than it should go by effectively mandating that one of three things must happen:

1) you give them your driver code

or

2) you keep doing work to re-certify and re-release your driver (but you could just mainline your driver code, hint hint!)

or

3) billions of people go fuck themselves

Mobile device vendors choose 3 because it makes them more money by obsoleting the devices more quickly. It's time to recognize that fact.


I'll again snip the core parts that I want to reply.

> Preserve the old methods so that things that used to work keep working...

I have many old applications which compile without modification, or work without re-compilation given all the libraries are available. In Linux, most of the libraries are backwards compatible. Newer versions of the required libraries work with the application unless it checks for and requires a strict version. For those libraries, the packages are always available (e.g. Spotify required libssl-1.0.0 for some time, but they updated it so it works with 1.0.{0,1,2}). Most of the low-level libraries are API/ABI stable in the Linux world.

> Microsoft does this. Windows userspace apps from 20 years ago still work in Windows 10 today despite the system having advanced significantly since then.

With the help of Windows-on-Windows (WoW32, WoW16), a complete embedded Windows XP image, a great hack called the Compatibility Layer, and other tidbits which make things much more complicated and convoluted. A modern Windows 10 contains at least three complete Windows subsystems (or abstract installations) inside. This is hardly a good solution. You can install a complete 32-bit Linux subsystem to achieve the same thing, which is neatly called multi-arch support, and it's much simpler: a set of libraries, nothing more.

> Windows 7 hardware drivers from 10 years ago still work in Windows 10 too.

From my experience, 7-to-10 compatibility is not universal. Despite having the same "foundation", 2K drivers didn't work on XP. XP drivers didn't work on Vista, albeit advertised as compatible. Same for Vista to 7. 7 to 10 mostly works, and it's better than ever, but it's hardly bulletproof. Hardware vendors used these incompatibilities as an excuse to be lazy and obsolete lots of hardware. Been there, experienced that.

> Bug fixes. Almost never interface changes.

Sound card & TV tuner drivers are notorious because of the timing and direct access they required. Interface and access-capability changes killed Creative's hardware EAX in the XP-to-Vista migration, IIRC. The cards got buried behind expensive API calls at the expense of latency. Most of the effects vanished overnight (since the direct card access vanished). I had many of the Creative cards (Live, Audigy, Audigy2), and felt the hurt.

> If I can't update my kernel because it means that my camera will stop working...

Actually, I have a lot of hardware which was abandoned by its manufacturers, who stopped updating their drivers, with Microsoft refusing to provide even a basic driver. Nearly all of these devices work under Linux, with better performance compared to their official drivers.

I had a printer which had a translation layer binary, but no kernel space driver. This translation layer binary is used as an intermediate CUPS filter to convert the PDF/PCL to printer's own language. The binary didn't conform to POSIX/Linux standard coding practices, and if you didn't provide the exact (old) library it wanted, it forked continuously and didn't work. So your print button was rigged as a slow death button for your PC. I can accept a binary blob, but code it right.

Similarly we have ~1000 servers at work. Some of them are ~10 years old, but they work with most modern distros without complaining. Everything is available, and stable. Even closed source drivers. I don't think the problem is the unstable API/ABI.

BTW, an interface change doesn't affect all the modules. Your module is affected only if the interface you talk to changes. You just need to recompile the .o or the source, since the ABI isn't stable as a consequence of the unstable API. This is not a real-world problem in my experience with said servers.

> The Linux kernel pushes GPL virality further than it should...

I think we need that virality, but it's another discussion topic, so I'll leave it here.

> 1) you give them your driver code

No, you can provide a .o file if you want.

> you keep doing work to re-certify and re-release your driver

Again, you can just update the .o file if the interface you talk to changes. Not a big deal if you're serious about Linux support.

> (but you could just mainline your driver code, hint hint!)

Unless you implement your magic in the driver (like a WinModem) or you've licensed closed IP from someone, you can open source your drivers. Even Broadcom open sourced their drivers. So it's not something scary if you plan it right.

> 3) billions of people go fuck themselves

The mobile vendors' choice is business related. Even if the code is open, they will obsolete it nevertheless. So, open source driver availability is not a factor here.

I also want to add some more information about this.

- AMD has re-designed its silicon to allow fully open drivers. This is a big gesture towards Linux and Free Software.

- Many high end hardware manufacturers (like Mellanox) open source their software stack (OFED, Subnet Manager) because the code is worthless as an IP without their silicon and hardware.

- Open source drivers help prevent obsolescence, but a driver cannot manufacture the chip itself, so it has no role in the product lifecycle. Upcycling embedded hardware is not trivial for the everyday user.


I am sure the kernel has extended ad support with every 5th memory access. Exactly the same feelings toward the new OS. Not that there are that many alternatives.

But if I could choose to get Debian on my phone, bundled with a crappy phone app that looks like it was developed in the last century, I would immediately pick it.


I really wonder about this kernel IPC hype. I watched a video by Facebook engineers on React+Redux about a week ago, covering the reasoning behind creating these things in the first place. Feelings about React+Redux and how they are overused aside:

Redux came first and it happened because the "interaction graph" between various components on the website became virtually impossible to resolve cognitively. The engineers (especially on-boarding engineers) faced troubles understanding how everything worked in concert and, especially, the side-effects of a single change. The nail in the coffin for the engineer was Facebook chat and she was forced to use numbers, instead of lines, to depict interactions because the graph was too dense. Redux forced interaction in one direction, with "back interaction" happening one frame/tick later. Web developers have praised and embraced this architecture.
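
If it helps, here is a tiny, purely illustrative Rust sketch of that unidirectional shape (the types are made up for this comment, not Facebook's actual code): views only dispatch actions, and state changes happen in exactly one place.

    // Views never mutate state directly; they only enqueue actions.
    enum Action { Increment, Decrement }

    // The store is the single owner of state and the only place it changes.
    struct Store { count: i64 }

    impl Store {
        fn reduce(&mut self, action: &Action) {
            match action {
                Action::Increment => self.count += 1,
                Action::Decrement => self.count -= 1,
            }
        }
    }

    fn main() {
        let mut store = Store { count: 0 };
        // "Back interaction" happens a tick later: actions queued now are
        // only applied when the dispatcher loop runs them.
        let queued = vec![Action::Increment, Action::Increment, Action::Decrement];
        for action in &queued {
            store.reduce(action);
        }
        println!("count = {}", store.count); // count = 1
    }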

If you look at the very first image in the article, it moves from a very straightforward application -> VFS -> ext2 -> driver -> disk graph to a [potential] rat's nest. It runs counter to the hard lessons that Facebook has learned with complex distributed systems.

Is this why nothing has really come from Mach/Hurd? Up until putting 1 and 1 together a few minutes ago, micro-kernels sounded so clean and elegant. They now seem unwieldy. Perhaps there exists another layer of architecture that can tame the dependency/interaction graph; isolation really does sound like a good idea, but we need a better way of doing it.


> Is this why nothing has really come from Mach/Hurd?

Mach 3.0 was the kernel used in NeXT and the original Mac OS X release. It was replaced quite a bit later with a rewritten kernel, XNU, that's basically a hybrid. The reason, as I understand it, is that message passing in the kernel is expensive when crossing user space so often.

I wouldn’t call that nothing coming of Mach, it did really well!

Also, I’d say that micro-kernels are closer to the paradigm of react/redux (redux the message bus, react the user space servers) than to a monolith design like Linux.


You're completely right, I shouldn't have mentioned Mach.


They all used hybrid kernels. Mach 2.5 in NEXTSTEP and OS X Server 1.0, Mk (3.0) in macOS.


I think instead of comparing this to a front-end web framework, you should consider how Fuchsia is designed using capabilities (similar to the concept of a system call, and how a system call handler works) around a fundamental restructuring of a kernel, originally named Magenta and now called Zircon, to fundamentally optimize for the evolving modular architecture of advanced mobile computing. Understanding how the OS design flows from the original kernel design is key to understanding Fuchsia OS as a whole. This article from two years ago does a pretty good job of justifying its existence (in this article, Zircon is referred to as Magenta as it had not been renamed yet): https://lwn.net/Articles/718267/ but I would actually love to hear feedback/constructive criticism of the kernel architecture itself.

I remember where I was and what I was supposed to be doing when I was instead reading the HN article that posted this lwn article (above) about Magenta 2 yrs ago, after finding Eric Love's Guide to Linux Kernel Development, and found this a fascinating read to compare to, given my learnings in kernel dev at the time.
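
To make the capability idea above a bit more concrete, here is a toy Rust sketch of the general pattern (everything here is invented for illustration; it is not Zircon's or Fuchsia's actual API): authority travels as explicit handles that are handed to you, rather than being looked up through ambient global names.

    use std::fs;
    use std::io;

    // Possession of this value is the permission: a component that was never
    // handed a ReadCap has no way to name, let alone read, the file.
    struct ReadCap {
        path: String,
    }

    fn read_config(cap: &ReadCap) -> io::Result<String> {
        fs::read_to_string(&cap.path)
    }

    fn main() -> io::Result<()> {
        // A parent grants a child only the capabilities it actually needs.
        let cap = ReadCap { path: "/etc/hostname".to_string() };
        println!("{}", read_config(&cap)?.trim());
        Ok(())
    }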


Redux's initial release was 2 years after React...

https://en.wikipedia.org/wiki/Redux_(JavaScript_library)

https://en.wikipedia.org/wiki/React_(JavaScript_library)

Perhaps you're talking about the Flux architecture? Not the same thing. Care to share the video you mentioned?


You're right, it was Flux. I'll have to dig around for it. Edit: https://www.youtube.com/watch?v=nYkdrAPrdcw


Oh wow, I remember watching that, it's crazy to revisit that talk after so many years.

Redux and Flux are somehow branded as alternatives to MVC, when they are precisely MVC (store=model, dispatcher/reducer=controller, etc). It seems that Facebook (and many others) implemented MVC very poorly and pretended to abandon it instead of admitting they just did it better on the next attempt.

If anything, the success of redux simply reinforces the fact that MVC is a very useful pattern for UI applications. But, it bothers me that newer developers might be persuaded by the anti-MVC branding.


Flux dispatchers are not really like controllers. They don't allow for any business logic. The stores are like combined controllers/models, though.

A key design feature of Flux is that many unrelated (or related) stores can respond to an event, and that there is a defined lifecycle for handling events. I don't believe those really map to MVC.


So glad I am not the only one!

What I found interesting was that the original developer(s) of react.js seems a lot more up-front about this:

https://omny.fm/shows/future-of-coding/1-1-how-reactjs-was-c...

Of course, the whole idea of, in-theory, re-creating the UI at every refresh cycle is not part of MVC, so that is new and interesting. Considering all the work that has to be done to undo that abstraction (as it doesn't really match reality), I am not sure how useful it actually is.

https://blog.metaobject.com/2018/12/uis-are-not-pure-functio...


Thanks for the follow up!


I guess Google engineers had some experience with those hard-earned lessons before Facebook even existed. The diagram you mentioned is very typical of microkernels and has been a subject of research for decades.


>Is this why nothing has really come from Mach/Hurd?

Because Mach. First generation microkernel. Slow. The microkernel world has moved way past that, whereas Hurd has stayed behind still using Mach. [0]

[0] https://blog.darknedgy.net/technology/2016/01/01/0/


Could you please name the most significant differences?


Mach is first generation, which should be seen as proof of concept (microkernel being the concept), and was slow.

The second generation was spawned by Liedtke's L4, which basically attacks the performance problem of the first generation, making it negligible.

The third generation makes capabilities a first class citizen, core to the design. Its main representative is seL4.

As an added note, multiserver is a better fit for SMP than locks and the lock contention they imply.


Thank you! How did they tackle the performance problems?


The paper "From L3 to seL4: What Have We Learnt in 20 Years of L4 Microkernels?"[0] covers most of this from a 20 years after perspective.

[0] http://sigops.org/s/conferences/sosp/2013/papers/p133-elphin...


Mach wasn't the first generation; there were the Aleph and Accent kernels that were predecessors to Mach.


Mach is first generation. It isn't THE first generation. Just one microkernel (Mach v3 that is) which is still part of that generation.


S


Can you please elaborate?


Ask the Symbian guys and gals :)


I've heard the following opinion on the internet: "The sole reason Fuchsia OS exists is so Google can release a GPL-free Android sometime in the future."

How do you feel, is there anything behind this claim?


Google has been a Linux user since before it even existed [1], and has been almost exclusively a Linux shop since: Linux servers, Linux desktops (Chromebooks, Goobuntu, Whatever debian testing derived distro they are using now is called), Linux based phone OS.

So I'd say for most of its business activities, Google is actually okay with the GPL, otherwise they'd have rewritten it a long time ago. Given the size of Google and how important Android is to them, I think they'd have prioritized Fuchsia OS far more if their bottom line were really affected.

There is a point with drivers, as Fuchsia has better support for proprietary drivers than Linux and IIRC one of the Android modifications was to add HALs to the kernel to make proprietary drivers easier. But it's not a killer argument, after all they figured it out for Android.

Their business intent with Fuchsia is definitely not clear, so it's a honeypot for speculation. I don't know. Maybe they wanted to diversify their approaches and have something ready if the Android kernel fork gets more and more patches, backporting eventually becomes impossible, and they need to maintain it on their own. Maybe they just want to be upstream, as being upstream involves a great deal of power. Maybe it's part of a deal with a hardware vendor who wants their driver to become proprietary in the future.

[1]: https://web.archive.org/web/19971210065425/http://backrub.st...


They have been removing GPL parts from Android with each release, the only thing left is the kernel, which after Project Treble looks more like a pseudo-microkernel than traditional Linux.


None of this is even slightly true.

The kernel is still on the "vendor" side of Project Treble, and still controlled by the OEM as a result (not that it should be but it is). This is not against "traditional linux" at all anyway. Upstream linux doesn't exactly like having a bunch of stuff rammed in as kernel modules either, that's why for example FUSE exists.

As for "removing GPL parts" - Android started with minimal GPL parts in the first place but they haven't been actively removing GPL, either. Just look through AOSP for MODULE_LICENSE_GPL - you'll find a ton of components are still (L)GPL.


Ah so I am lying that GCC has been removed from the NDK.

Where is the upstream support for HIDL IPC with kernel?


HIDL ipc is built on top of binder, which is a kernel driver in mainline Linux.


My question was about the whole deal.


Agreed, Android drivers/blobs were never much use in the Linux space; hence, despite the whole premise of Android == Linux, backporting features from Android to Linux was never an option. Anyone who has used an RPi, or even cheap Chinese Android tablets without Linux support, can relate.

Yes, Google has nothing against the GPL, but GPL/AGPL/LGPL are still licenses that place specific limitations on a company modifying the original work. There are alternative licenses that do not.


> Their business intent with Fuchsia is definitely not clear so it's a honey pot for speculation.

From what I've heard Fuchsia will feature a deeper integration of Google services than Android. Demo: https://www.youtube.com/watch?v=FhX6cANaJ6o


There's other reasons why fuchsia is arguably better, such as using a micro-kernel and being "capability-based." The license is liberal enough to allow relicensing it as GPL, if anyone cared to fork it.


Yes, I'll try to sum up the article as I understand it:

* better process isolation

* stable hw/driver api (this is a huge mistake in linux, agreed)

* less historical craziness (who likes fork()?)

* more micro-kernel

some of the things I personally cannot buy:

* vulkan native. android is not vulkan-native, it provides both opengl es and vulkan, for your choice

* flutter. Why does bundling and bringing opinions into the system become a benefit? It was a major problem with Android that you had to interact with Java if you wanted any native UI => I would prefer more ways/languages/bindings to interop with the UI

* fidl ipc. Cool, but again, why bundle something with the OS?

* vDSO. Why is vDSO even on the slides? vDSO is important, but it has already been with us for a decade.

This article is great because it's one of the first that breaks down Fuchsia on a tech level. But at the same time, it's not an article that explains all the reasons Fuchsia is better than what Google has now.


> * vulkan native. android is not vulkan-native, it provides both opengl es and vulkan, for your choice

IMO vulkan-native is the right choice because OpenGL (or OpenGL ES, or anything else) can always be reimplemented on top of Vulkan anyway. Layering OpenGL on top of Vulkan means you're no longer at the mercy of hardware vendors to provide a bug-free OpenGL implementation. And if you're calling a userland OpenGL library running on top of Vulkan, you can actually debug into the graphics system and much more easily tune performance. (Debugging into OpenGL calls is almost impossible when the OpenGL implementation is hardware-specific and part of the graphics driver itself.)


Case in point: Google's ANGLE library has an in-progress Vulkan back-end (used for WebGL in Chrome and Firefox). ANGLE only provides OpenGL ES, but then so does Android. I wouldn't be surprised if an upcoming Android version required Vulkan support and shipped with ANGLE for OpenGL.

https://github.com/google/angle


In reality this is entirely moot and comes down purely to driver development cost. Nothing in the OS is designed around the idea of being "vulkan native" or not. That's just not a system architecture thing at all.

But what is a thing is whether or not the GPU driver bothers to have an OpenGL ES path or not. It's obviously simpler if the driver doesn't, but a quality driver implementation will also obviously always beat something like ANGLE in every way.

So take a hypothetical Nvidia Shield 5 running Fuchsia in 5 years or whatever. If games are still dominantly in OpenGL ES, then you can bet your ass Nvidia will be shipping a first-class GLES driver and not use ANGLE-on-Vulkan.


FIDL is used extensively through the OS, and looks like it will be the primary means of communication between processes, not too different from macOS's XPC (which is used for a lot of services as well as very low-level stuff like talking to the T2 processor, apparently). Fuchsia being a microkernel, it makes sense to make this part of the OS. The alternative of inventing ad-hoc RPC mechanisms for each area of the system would be worse.
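
To give a rough feel for why one uniform IPC layer beats N ad-hoc RPC schemes, here is a hypothetical Rust sketch of the kind of interface an IDL compiler generates bindings for (the names are invented for this comment, this is not the real FIDL Rust API, and the "remote" call is faked as a local one so the sketch stays self-contained):

    // The interface would normally be written once in an IDL file; client
    // and server stubs for every supported language are generated from it.
    trait Echo {
        fn echo(&self, msg: &str) -> String;
    }

    // Server side, which would live in its own process in a real
    // microkernel system.
    struct EchoServer;

    impl Echo for EchoServer {
        fn echo(&self, msg: &str) -> String {
            msg.to_string()
        }
    }

    fn main() {
        // In the real thing this call would be marshalled over a kernel
        // channel to another process; here it is just a local call.
        let server = EchoServer;
        println!("{}", server.echo("hello over 'IPC'"));
    }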


If I remember correctly Android also has IPC in the kernel, using it to enforce permissions and the security model between apps.


More so after Project Treble; now Android's Linux behaves a bit like a micro-kernel.


> stable hw/driver api (this is a huge mistake in linux, agreed)

Not to get into a flamewar over this, but there are real benefits to Linux's rolling driver ABI (the "API", however, is mostly stable, contrary to what you say). It encourages mainline collaboration, and greatly improves the quality and compactness of the drivers.

On Windows (and to some extent macOS), you'll find whole classes of driver where each driver has its own proprietary internal implementation of something that should have been in the API and ABI, but is not (or just was not when the driver was written). Linux has great, up to date drivers for the vast majority of devices you'd want to use†, and all of the best ones are built (compactly!) right in to the kernel, so it is more or less moot at this point whether a stable ABI is a good thing (at worst, it didn't matter in the long run; at best, it made the system better).

The level of integration and standardization in the Linux kernel is spectacular, and absolutely would not have been achieved if vendors were empowered to link ABI-compatible binary blobs into end user kernels. There is tremendous value in the fact that I can manage, consistently, every LED, every DVFS controller, every fan controller, every GPIO pin, etc. on each supported system in a consistent fashion. Left to their own devices, each vendor would use their pet interface (as seen with NVIDIA putting their foot down and demanding EGL surfaces be used with their driver on Wayland, despite the fact that no standard software supports it); NVIDIA insists that this is the way it's meant to be played, and I'm glad that the answer is a resounding no.
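
As a small concrete example of what that consistency buys you, the same few lines can enumerate LEDs on wildly different machines through the standard sysfs class interface (a sketch; which LEDs show up obviously depends on the hardware):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Every LED the kernel knows about, regardless of vendor, is exposed
        // under the same uniform /sys/class/leds layout.
        for entry in fs::read_dir("/sys/class/leds")? {
            let led = entry?.path();
            let brightness = fs::read_to_string(led.join("brightness"))?;
            println!("{}: {}", led.display(), brightness.trim());
        }
        Ok(())
    }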

† Outside of the Android ecosystem, which, through what could maybe be called poor stewardship on the part of Google, inexplicably refuses to develop against the mainline kernel, so is constantly fighting to maintain literally thousands of vendor-, OEM-, model-, SKU-, country-, and carrier-specific kernel trees for absolutely no recognizable benefit, and against the clearest and most consistent advice of the highest experts on the subject.


> It encourages mainline collaboration

I feel like this is demonstrably a false promise by looking at the billions of mobile devices never getting any updates because of vendor neglect.

> and greatly improves the quality

Compared to what?

> and compactness of the drivers

Which no user has ever cared about.

> up to date drivers for the vast majority of devices you'd want to use†

I notice that you excepted billions of devices in order to make this claim.

> inexplicably refuses to develop against the mainline kernel

It's not inexplicable. It's easier and cheaper.


The Linux model definitely encourages mainline collaboration. It just happens that the Android vendors are fighting against this encouragement, and Google has caved in to their demands.

> Compared to what?

Compared to what happens on Windows and Android, where individual drivers have to reimplement common functionality, often in inconsistent ways.

> Which no user has ever cared about.

Users care that their drivers work and that they continue to be supported in the future. Less compact code is larger, and therefore buggier. Drivers which are provided as proprietary userspace blobs are invariably unmaintained and stop working with future kernel versions.


> feel like this is demonstrably a false promise by looking at the billions of mobile devices never getting any updates

Third-party firmware like CyanogenMod exists only because in-kernel GPL parts can be ported to new OS versions.

With Fuchsia there won't be anything like CyanogenMod anymore.


> Third-party firmware like CyanogenMod exists only because in-kernel GPL parts can be ported to new OS versions. With Fuchsia there won't be anything like CyanogenMod anymore.

I think you're confused. If the interface for the drivers stays stable, then you can just carry over the same exact binary blob and it will keep working perfectly, which is exactly what Microsoft does. Windows 7 drivers still work just fine in Windows 10.


If third parties can update the OS despite an unstable driver API then it all boils down to the laziness of the vendors.


> Which no user has ever cared about.

The user cares because it results in fewer bugs.


Which paper proves such correlation?



Thanks.


Android devices not running mainline kernels is a direct result of the Linux development process.

If an Android device launches with a new CPU or device, the OEM has to go through the process of upstreaming the changes to mainline before they can launch their phone? Not going to happen.


> Android devices not running mainline kernels is a direct result of the Linux development process

That's wishful thinking. The majority of Android drivers are userspace blobs. The Linux kernel development process is completely irrelevant for them.

Device vendors don't update already sold devices because they don't care (and because average consumer also does not care).


> Device vendors don't update already sold devices because they don't care (and because average consumer also does not care).

You should consider why the hardware vendors are in the critical path for OS updates in the first place. Dell doesn't get to decide whether you can apply Windows updates.


Turning that notion on its head, Pinephone could be one to watch.

They more or less intend to supply the hardware only. This avoids the pretence of vendor updates entirely, by both the company (Pine64) and the ARM licensee (Allwinner).

The software stack is up to you, with mainline Linux support courtesy of the A64 [0] powering a number of 'RPi-killer' boards. Device updates will be supplied by the likes of Ubuntu Touch, LineageOS, postMarketOS et alia mobile distributions.

[0] http://linux-sunxi.org/A64


> Turning that notion on its head, Pinephone could be one to watch.

Maybe. Except that for me (and I know I'm not alone) the second most important part of my pocket computer after web searching is the camera, and they're unlikely to have anything great in there.

> Allwinner

Hah. The repeat GPL violator?


i think in the end, this whole divide boils down to a divide in philosophy between free software and corporations that support “open source” but don’t want it free (as in gpl)

thereby we get the constant tension between google and hardware makers that want to keep their jewels secret, and the linux/foss community that wants to keep their freedom to install their (open) software on any device... by keeping the drivers in with the kernel, linux can foster that end, but it causes no end of frustration to google et al...

i think that’s probably the reason we get fuchsia: google can be open source, and with the hw makers, keep the proprietary parts proprietary... the result being, we have open kernel and os bits but no guarantee (or much hope at least) we could ever run it on our devices...

i think that’s the biggest cultural divide: freedom to change and replace the software vs freedom to look at the code (and maybe run it in an emulator...


>NVIDIA insists that this is the way it's meant to be played, and I'm glad that the answer is a resounding no.

I have some bad news for you. They got their way.


This is so untrue.

There's a whole class of drivers that will never be in the mainline Linux kernel: experimental filesystems (remember those years when FUSE was a patch?), proprietary hardware (embedded devices), kernel-bypass hardware (onload), security hardening mechanisms (remember PaX?), commercial software that basically sells the driver (Intel Studio profiler).

All of this breaks every time Linux releases a major rewrite. If the vendor is out of business or no longer interested in a product, you are left with drivers that do not work.

Moreover, in the case of a major API breakage, it's the kernel contributors who are left rewriting every driver for the new API, delaying kernel releases.


Very well put.


Windows doesn't even have a standard driver-ABI anyway - if it did, we wouldn't have hardware losing driver support when switching the system to a different Windows version (XP/Vista/7/8+/10).


> Windows doesn't even have a standard driver-ABI anyway

It does to a very significant degree. Windows 7 drivers from 10 years ago still work today with Windows 10. Windows 95 user software from 23 years ago will still work today in Windows 10. Microsoft cares strongly about maintaining compatibility and only removes functionality when necessary.


I don't understand why they didn't want to base it on the formally-verified, capability-based micro-kernel called seL4.

Formal verification is the difficult but necessary next step for security-intensive applications.

Also, any new kernel should only be written in a memory-safe language. Rust would be the best, since it is also safe from data races, outside of unsafe blocks.


Well... it's non-trivial to use seL4 correctly. It has all of the facilities to build something great and very few rails to ensure that you actually do. CAmkES, the component framework to help you do that, is... challenging in all the wrong ways.

The proof evidence is very hard to extend meaningfully for new platforms, and you're limited to environments which have a sufficient MMU.

We're starting with seL4 and building a whole Rust-based userland around it, and creating a whole pile of compile-time assurances along the way to make sure we're not inadvertently doing something dangerous or stupid with the rope seL4 gives you.

It's worth it for us. However, it is a colossal pain in the ass. That said, in the coming months we're likely to open source some of our progress in making it easier and more reliable to work with, integrate with, and otherwise leverage seL4 for certain use cases.


I'm super interested in the work you're doing. I spent a couple of weekends building bindings and trampolines to get Rust binaries linked into Chibi/OS and FreeRTOS and it worked really well. I've always wanted to work with seL4, so I downloaded the source and started working through the tutorials and... oh dear. There ended my experiments trying to add Rust into the mix. So if there's any way I can subscribe or otherwise keep abreast of the work you're doing, I'd love to do that.

As an aside, I wonder just how much async/await in Rust will obviate the need for small kernels in deeply embedded applications. If I can do blocking IO with my peripherals and use the async state machines to do "task" switching, well... who needs tasks and capabilities? It'd land somewhere on the "language OS" side of an embedded OS almost.


At the moment we're light on marketing materials & outreach. We've been heads down bootstrapping and working with early customers.

When we open source things in the coming weeks & months they'll all land on GitHub, https://github.com/auxoncorp, so that's probably the lowest touch way of observing what we're up to as things mature.


async/await in embedded is actually an interesting thing that I also evaluated. I think it can't be an alternative to preemptive RTOS kernels, since the cooperative nature doesn't allow processing things within guaranteed deadlines. Therefore I think async/await might end up more as an add-on to an RTOS: all time-critical things run within independent RTOS tasks, and all the remaining, less critical stuff could potentially run inside a single-threaded async/await scheduler.

While Rust seems to be promising here, I'm not sure whether the current async/await implementation will really allow for this. I experimented a bit with it, and found lots of issues where async/await led to severe memory bloat (composing 3 async functions that don't do a lot of things required a 20kB task allocation). The compatibility with Futures, and the requirement to move async fns in order to compose them before using them (which requires extra stack, which again ends up as extra heap/task memory), plays a bit against it. But these things are also being worked on, so let's see how it works out.
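
For what it's worth, the size of the generated state machine is easy to inspect; a tiny sketch of the kind of measurement I mean (the functions are toy placeholders, and the number printed will vary by compiler version and target):

    // Each .await point adds state that the compiler folds into one
    // generated state-machine type.
    async fn read_sensor() -> u32 {
        42
    }

    async fn filter(v: u32) -> u32 {
        v / 2
    }

    async fn task() -> u32 {
        let raw = read_sensor().await;
        filter(raw).await
    }

    fn main() {
        let fut = task();
        // On an embedded target this future would live in a static or a
        // task arena, so its size is exactly the bloat discussed above.
        println!("future size: {} bytes", std::mem::size_of_val(&fut));
    }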


>If I can do blocking IO with my peripherals and use the async state machines to do "task" switching, well... who needs tasks and capabilities? It'd land somewhere on the "language OS" side of an embedded OS almost.

Waiting directly on the resource like that would require exclusive access quite often though?


Deets would be appreciated. Sounds interesting.


Rust isn't safe from race conditions. It is not a guarantee. It is free from memory errors, that is true, though.


Rust is safe from race conditions, because race conditions are memory errors.

You should read [1] to understand how Rust achieves this.

[1]: https://doc.rust-lang.org/nomicon/send-and-sync.html
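
A minimal illustration of how that plays out in ordinary std code (nothing Fuchsia-specific): types that aren't thread-safe simply can't cross a thread boundary, so the data-racy version never compiles.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // A non-thread-safe type like std::rc::Rc is not Send, so handing
        // it to another thread is rejected at compile time:
        //
        //     let local = std::rc::Rc::new(0);
        //     thread::spawn(move || println!("{}", local)); // does not compile
        //
        // Arc is Send + Sync, so the equivalent code is accepted.
        let shared = Arc::new(String::from("hello"));
        let worker = Arc::clone(&shared);
        let handle = thread::spawn(move || println!("worker sees {}", worker));
        handle.join().unwrap();
        println!("main still sees {}", shared);
    }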


Quoting some more relevant stuff from the nomicon. The linked article is short and sweet.

A data race has Undefined Behavior, and is therefore impossible to perform in Safe Rust.

...

However Rust does not prevent general race conditions.

This is pretty fundamentally impossible, and probably honestly undesirable. Your hardware is racy, your OS is racy, the other programs on your computer are racy, and the world this all runs in is racy. Any system that could genuinely claim to prevent all race conditions would be pretty awful to use, if not just incorrect.

Source: https://doc.rust-lang.org/nomicon/races.html


not true - you can have a program that is free of data races but still contains race conditions. Rust prevents data races.
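
For example, a check-then-act bug: the program below contains no data race (every access goes through a Mutex) and compiles in safe Rust, yet two threads can both pass the check and overdraw the toy account.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let balance = Arc::new(Mutex::new(100i64));
        let mut handles = Vec::new();

        for _ in 0..2 {
            let balance = Arc::clone(&balance);
            handles.push(thread::spawn(move || {
                // Race condition, but not a data race: the lock is released
                // between the check and the withdrawal.
                let enough = *balance.lock().unwrap() >= 100;
                if enough {
                    *balance.lock().unwrap() -= 100;
                }
            }));
        }

        for h in handles {
            h.join().unwrap();
        }
        // Prints 0 or -100 depending on scheduling.
        println!("balance = {}", *balance.lock().unwrap());
    }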


No, Rust isn't.

Maybe you should read a bit more on its docs [1]. I quote:

However Rust does not prevent general race conditions.

[1] https://doc.rust-lang.org/nomicon/races.html


Regarding Fuchsia only having an option for Vulkan and not OpenGL: it doesn't seem like that would necessarily stop OpenGL software from working on Fuchsia, right? There are OpenGL implementations on top of Vulkan, like MoltenVK. I don't have a great understanding of this stuff, but assuming that this is true, I imagine that it has upsides like eliminating potential driver bugs, which have plagued OpenGL in the past.


MoltenVK implements vulkan itself, on top of metal, and its counterpart, moltengl is closed-source. There do exist projects to implement opengl on top of vulkan, but I don't think any of them are mature yet.


Google does not mix well with stable APIs. They love shiny new stuff and break things often.


How tied to the internet will it be? How many callbacks to google will it have? Will it rely on google DNS?


> Will it rely on google DNS?

You bet!


As default, that makes a lot of sense. I guarantee there will be a lot of pushback from enterprises if they can't change the DNS servers to their own.


not


> capability-based

For anyone wondering what that means, here's a good place to start:

http://erights.org/elib/capability/overview.html

> Modern capability security theory comes mostly from the work of Norm Hardy, Charlie Landau, Bill Frantz and others...

I worked with Norm and Bill at Tymshare in the '70s. Alas, Norm passed on last year, but I got to see Bill at his memorial. If you're in the Bay Area and into ham radio, you might run into Bill on the N6NFI repeater.


IBM's OS/400 of 1988 was, I think, the first widespread capability-based OS. Certainly it was my introduction to the concept.

It's baffling that it has taken 30 years to trickle down.


I wonder how long it will take for the safe systems programming and TIMI parts, though.


The copyright owner can always relicense no matter what license they use initially. Unless you mean others, in which case: what license do they use that allows relicensing?


Linux has no singular copyright owner, so nobody can relicense it.

Fuchsia has a highly permissive license that would allow anyone to fork it and release the fork under a completely different license.


But you cannot change the copyright or license of the existing code. You can add GPL code that cannot be merged back, but nothing gives you permission to change the copyright of the original.


The MIT, BSD, and Apache licenses, which Fuchsia uses, have almost no restrictions aside from attribution. Anyone would be able to fork Fuchsia and re-license as GPL.


How exactly does that work, what stops anyone from maintaining a 1:1 branch, the "fork", with upstream and claiming it's GPL?


You can relicense MIT source to GPL. Nothing is stopping anyone from maintaining a relicensed fork like you described. The real question is, would anyone even use or contribute to that fork? (probably not)


That's not a correct statement.

Nothing in the MIT license says that you can claim credit, ownership, or restrict licensing terms on someone else's code.

What you can do, of course, is include the original MIT licensed work in a larger project with other GPL'd pieces. That does not "relicense" anything, and I'm only constrained by the GPL if I use the GPL'd part of the codebase.

I can download X11 from Redhat and still use it under the original terms from MIT.


What everybody needs to learn is the concept of 'derivative works'.

When you combine MIT code with GPL code you are creating a derivative work involving at least 3 sets of copyrights: your copyright (since I am assuming you did more than just copy-paste), the MIT-licensed copyright, and the GPL copyright.

Anybody using that code is subject to those 3 sets of copyrights combined. It's a derivative work of all 3 code bases, so it carries all 3 copyrights and depends on the original licenses for anybody else to legally copy it.

Since the GPL is the most restrictive and disallows any additional restrictions, the code base is _effectively_ GPL licensed.


You can view the GPL as a bunch of restrictions on top, as I don't think the GPL grants any freedoms the MIT doesn't grant. So why shouldn't you be able to say "you can use my version, but only if you also adhere to the GPL terms"? After all, the MIT gives me the right to distribute without any stipulations about the terms under which the distribution has to take place.


> Nothing in the MIT license says that you can [...] restrict licensing terms

The MIT license explicitly grants the right to sublicense.


That’s different from relicensing the code https://writing.kemitchell.com/2016/09/21/MIT-License-Line-b...


Seems people are talking past each other.

The word "relicensing" doesn't exist in copyright law, nor in the linked article. Instead, there are several similar situations being bunched together and incorrectly used interchangeably.

A copyright author can redistribute a work they own under new terms. This would be the case when they have offered a work under, say, MIT and change the license for any future offer to the GPL. It is similar to a merchant changing the price: in the past the product cost X, but now it costs Y. Naturally, past customers don't suddenly have to pay more for something they already paid for, but buying more (or getting updates) happens at the new price.

The copyright author can also dual license it. In this case they would offer the work but under two different licenses at the same time. This would be like a merchant offering a product under two payment plans, one where you pay up front and an other more expensive method where you pay a small amount each month for a bigger total. In both cases the product is the same but the condition of sale is different.

The MIT license itself also allows for something similar to resellers, i.e. people who take a copy and apply their own additional set of conditions when they distribute it. This doesn't change the original offer, but those getting a copy from the reseller have to honor the reseller's terms and conditions in addition to the original conditions of the author. This is similar to going to a store where you pay both the product's original price and the reseller's margin. You can always go to the original manufacturer and pay only the original price, but you may miss out on features that the reseller includes in their package.


It kind of defeats the purpose of the GPL too. You can't keep future changes to the codebase public and open source, since anyone who doesn't want that will simply not use your fork, so why bother?


Once somebody contributes to the fork the fork contains code that can't be backported to the MIT licenced version. The hope would presumably be that the GPL version grows faster since the GPL version can merge any change to the MIT version but not the other way around.

In practise it wouldn't work all that well unless you pay a development team to push the GPL version significantly ahead, but it's a possibility.


Nothing stops you. These licenses are philosophically very different from the GPL. They tend to be designed to provide literal freedom to do what you want with the code, basically as long as it doesn't interfere with the original use and provides acknowledgement. This includes relicensing it and including it in proprietary products.


As AOSP has shown us, having an open source license is one thing, but having control over the course of development, control over what features get implemented and merged, is something very different.

Any project can be opensource but still under direct control.


Almost all big open source projects are under direct control of a person or entity.


Anyone can fork. But unlike with Android today, vendors will no longer have to release their kernels for their OS versions. That requirement is one of the main reasons projects like Lineage can exist, as you can build and modify a kernel for almost every device on the market.


Regardless of what you think of Google, the reality is Fuchsia and Android have little to nothing in common. While they would certainly rather have a permissive license from the bottom up, they certainly wouldn't have written a whole new OS with that as the primary goal.

Android's platform design would arguably have been considered modern in 2007 or 2008, but it is 2019, and the cracks in Android's capabilities have been widening for a while. Sometimes people still think of Android as "newer" in the operating system space, but it's a decade old!


> it's a decade old

Windows NT came out over 20 years ago, in 1993. The first public release of Linux was in 1991. And the BSDs trace back even longer, you could say back to the original Unix from 1969. So a decade is recent I'd say.


Windows NT development started in 1989. It was led by Dave Cutler, who also led VMS, which started in 1975.


A decade isn't much for an operating system...


It is when you start thinking about our approach to security in 2019 versus our approach in 2009. Sometimes the baseline assumptions about how software is written make it exceedingly hard to rebuild, in flight, for our new understanding of what is needed for good security.

Attempts to do so often entail more or less, tacking on a whole different environment, as Microsoft has attempted with UWP. To a certain degree, it'd be easier to just start over from scratch, as Google is attempting to do.


We mainly changed our focus from protecting multiple users from each other to protecting multiple applications from each other. But Android always had the latter model. Apart from that I don't think a lot has changed from 2009 to 2019?


given that capabilities were considered by many the future of security in the 1970s, you could say that the community understood full well how to provide a fine-grained policy machine with good enforcement guarantees.

either they or the organizations paying the bills just didn't care yet.


Microsoft is still at it, just from the other side, by merging Win32 and UWP containers.


It took me some time to form the argument. I don't really see how Fuchsia targets Linux on the server or Windows on the desktop; it is never mentioned to be on the backend, and no features are ground-breaking for the backend. All the use cases are for mobile platforms, i.e. Android. OK, it may be the case that Android and Fuchsia have no common code, no common architecture and no common people.

But that does not mean that they don't target the same devices, the same market and the same use cases.


No arguing with that - yes, SELinux/UID permissions were somewhat wonky. But Google did the job and paid the price of isolating the programs, and it works right now. Can we consider the problem of Android app isolation already solved?


If you're an Android dev, you know Google basically rewrites the whole SDK for every major OS release. Android 1.0 is unrecognizable compared to Android 28.0, to the point where supporting devices running 1.0 from a 28.0 codebase is a tedious and time-expensive operation.


There aren't any API level 1 devices out there though. Supporting, say, Android 4 (level 14, from 2011 so almost eight years ago!) is very straightforward. The fragmentation argument doesn't hold water for most apps.


Maybe if you have a simple text app. Try implementing background sync or modern UIs; even db stuff gets really convoluted really quickly. Forget about camera/audio support.

My current company supports down to only API level 19, and even then it's like having to maintain multiple apps to support APIs <23.


Did you mean SDK 28? There was no Android 28.0 as this was posted.


>Stable drive interface, hardware manufacturers can independently maintain hardware drivers (hardware)

For me this reason is enough to justify a Linux replacement. I think it's one of the main reasons for Google too; they want to deal with vendors' blobs in a more stable and easier manner.


The point of being GPL free is you don't need to release source code.

If you combine this with the other point of Fuchsia, hardcore security, the final combined result is that nobody can audit what Google is actually doing on "your" devices. And that's why.


Assuming Google doesn't need to hide anything, being GPL free could be so that others (like device drivers) don't need to release source code.


I think it's a valid concern; look at how many originally open-source Android APIs and frameworks Google has pushed out in favor of their proprietary ones. A similar concern I've seen voiced is an inseparable integration with Google services, like the Gapps dependency Google has pushed onto Android, but now baked in at the OS level.


>How do you feel, is there anything behind this claim?

A glance at the design will tell you that no, there's nothing to that claim.

Fuchsia's only competitor is Genode. Linux is basically obsoleted by its design.


It’s not that important to them I think.

It’s a great senior engineer retention project though.


Embrace-extend-extinguish.

Are people really dumb enough to fall for this trick over and over again?

(Don't answer, that was a rhetorical question.)


What exactly are they embracing by creating a new OS?


SoC vendors who write the device drivers in the end probably couldn't care less. Why should they open source their drivers for a new OS?


There are plenty of kernels they could've started with, so I'm willing to say that the motivations are probably at least somewhat technical.


I'd say it's more likely someone had a hobby of making microkernels, and when they joined Google they managed to twist the arm of someone senior to assign a few people to their hobby full time.


A few? There are more than a hundred contributors in https://fuchsia.googlesource.com


The only GPL left on Android is the Linux kernel, everything else GPL related has been replaced.


That's very much my feeling. Linux has a history behind and some design decisions made for a wide range of devices a long time ago. Google has the kind of money necessary to build an OS from scratch in order to power their devices (Android, Chromebook, Google Home), whilst other mortal beings have to put up with one of the existing choices.


It is irrelevant. Running Android apps is no longer tied to AOSP. Chromebooks run Android apps.

There is an app ecosystem composed of Google Mobile Services and the Play store, today it runs on Android and on ChromeOS, and it seems that it will run on fuchsia too.

