
My impression as a game developer:

- Vulkan-first graphics interfaces. I mean, OK. Not that I'm in love with this overly complex API, but fine.

- No OpenGL support. I guess this is where the world is moving

- No POSIX support. Quite a few game engines rely on it; oh well, remember when Google cared about developers?

- Nothing about sound (my personal thing)

As a game developer, working with Android was unpleasant to say the least. Take sound, for example: Android has something like 4 sound systems, and all of them (except the simplest one, available from the Java side) are not fully implemented and are swarming with compatibility bugs across manufacturers and Android versions. Not to mention that they introduce a new one every once in a while, with new-version adoption slower than a sloth.

I get that Google engineers enjoy rewriting things they don't like, but come on. Fix the existing stuff first. Don't change APIs on us - not everyone has an extra million dollars to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is also broken, just in different ways. OK, I admit I exaggerate, but it seems like tough times for game engines (besides the super-hyped ones like Unity or Unreal, which have no problem throwing tens of millions at the problem).

A note about OpenGL support: I'm pretty sure they'll drag it in by porting the ANGLE library, which is currently being actively worked on. Compatibility layers on top of Vulkan are gaining momentum, but I hope they'll join forces with MoltenGL/MoltenVK instead of making their own, worse analog.



You may want to take a second impression.

- OpenGL should be a library. This is just going to make OpenGL development easier in the long run. Right now there are too many OpenGL implementations and the differences matter. Running one OpenGL library on top of N different Vulkan implementations is miles better than running on top of N different OpenGL implementations.

- POSIX can be a library too. It doesn’t have to be provided by the kernel. People have been strapping POSIX layers on top of things for ages. This was originally how Mach worked. You can still do weird things on iOS and macOS “below” the POSIX layer (although most POSIX syscalls are just provided by the kernel).
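
To make the layering point concrete, here is a small runnable illustration (mine, and only an analogy): the stdio FILE* API is already a pure userspace library sitting on top of the lower-level descriptor interface, and a POSIX layer over a non-POSIX kernel works the same way one level further down - which matches the LWN description quoted later in this thread of Fuchsia providing POSIX compatibility via its C library.

    /* Familiar example of the same layering idea: buffered stdio lives
       entirely in a userspace library; only the descriptor it wraps is a
       kernel object. A POSIX-as-a-library layer would sit one level lower,
       wrapping whatever primitives the non-POSIX kernel actually exposes. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/tmp/layering-demo.txt", "w");  /* library-level call */
        if (f == NULL)
            return 1;
        fprintf(f, "buffered in userspace, flushed to fd %d\n", fileno(f));
        fclose(f);   /* flushes the library buffer, then closes the fd */
        return 0;
    }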

You talk about how hard it is to change APIs… and how much you hate to refactor things every time Google decides they want the shiny new thing. But POSIX is rooted in the 1970s. It sucks. It's about time to try something new. The entire POSIX model is based around the idea that you have different people using the same computer and you don't want them to accidentally delete each other's files. There are a ton of APIs in POSIX which are straight-up shit, like wait(). Sandboxes are unportable and strapped-on, using arcane combinations of things like chroot and cgroups.
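
One concrete example of why wait() gets this reputation (my own minimal illustration): wait() reaps whichever child exits first, so a library that quietly forks a helper process can have its child's exit status stolen by unrelated wait() calls elsewhere in the same process, and vice versa.

    /* Minimal illustration of one classic wait() wart: wait() reaps *any*
       child, so two independent pieces of code in the same process can
       steal each other's exit statuses. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t helper = fork();          /* imagine a library spawning a helper */
        if (helper == 0) { _exit(7); }

        pid_t worker = fork();          /* unrelated application child */
        if (worker == 0) { _exit(0); }

        int status = 0;
        pid_t reaped = wait(&status);   /* which child did we just reap? there is
                                           no way to say "only my worker" here */
        printf("reaped pid %d (helper=%d, worker=%d), status %d\n",
               (int)reaped, (int)helper, (int)worker, WEXITSTATUS(status));

        waitpid(-1, NULL, 0);           /* reap the other one to avoid a zombie */
        return 0;
    }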

Let’s give people a chance to try out something new. Unix is somewhere around 50 years old now. OS research is dying out. Make it easier to run untrusted programs.


I agree with you, but I do not believe in Google. Google provided the worst commercial platform I have ever had the experience of developing for. I have worked on a lot of different platforms in my past - pretty much all the consoles, desktop, and fringe ones like smart TVs or the various smartphones released since '99.

Only Google changes the rules of the game on developers. They do not realize that they are making •a platform•, not a product - a stable, comfortable platform that software runs on. I honestly don't give two cents about all their struggles to make "a better OS". An OS exists to launch apps, and making those apps is a miserable experience, where I have full-time people just to keep up with Google 'improvements'. Somehow every other platform is fine. I hardly had to change code written 10 years ago for iOS, for consoles or Windows - not at all! But Google keeps making their own special PhD-driven darlings.


My experience with android has been very similar. For example, last week I ran into a bug that seems like a simple omission that was reported back in 2010 (https://issuetracker.google.com/issues/36918490). It took until Feb 12, 2019 to get an official response:

> We're planning to implement a new more powerful network request interception API in a future version, which will be available to L+ devices with a recent enough WebView via AndroidX. Being able to read POST bodies is one of the features we're intending to include. Unfortunately we can't share schedule information about when this may be available, but we are aware of this use case and intend to support it.

Instead of fixing bugs and making incremental improvements, they seem to have a strong propensity for grand rewrites.

I got tired of the constant parade of new Google APIs, libraries, and technologies years ago and have been choosing alternative products whenever possible.


This sounds suspiciously like CADT, as described by JWZ: https://www.jwz.org/doc/cadt.html


Considering how that site handles links from HN, you might want to link it through https://nullrefer.com or the like.

http://nullrefer.com/?https://www.jwz.org/doc/cadt.html


Oh yeah, I forgot about that. Thanks for the link!


Haha, so this is what happened? I had a big WTF moment.


Apparently the author of jwz.org doesn't like HN. This is him, by the way:

https://web.archive.org/web/20170125134508/http://sleepmode....


I had no idea he (JWZ) hadn't gone to College. Very impressive.


> But Google keeps making their own special PhD-driven darlings.

I'm really sorry that you have this feeling, but this has nothing to do with PhDs. It's impossible to get promoted by fixing bugs inside Google, and people are promotion-driven.

There were many people who tried hard to fix a lot of bugs, but they usually burned out due to the lack of recognition inside Google.


It really, really shows. That and documentation. And some peripheral work related to larger efforts, especially on any platform that's not the Web. It's the only explanation for how an organization like that can produce so much broken and half-assed software: bad incentives, and probably a serious middle-management problem, in that they don't have the leverage or motivation to make ICs do "boring" follow-through work.

Maybe it's working for them from a bottom-line perspective, but it's made their brand as an engineering company clownish.


The key question is the bottom line, as you wrote. Management is really hard to scale, but right now innovation is a better predictor of growth than bug fixes, so I think it's not even clear how to improve the promotion process without hurting the company's growth.

I think Google is too big at this point already, but that's an orthogonal question.


A non-"rockstar" hiring process and parallel job track (with potential for cross-over) so they can get in people who are happy to do "boring" work on an interesting product might do it. Probably can't be in the Valley or anywhere else ultra-high COL. I think part of the problem is they (seem to?) only staff folks who both can and are inclined to leave quickly if they don't get to do the fun stuff.

Again, though, it may not make sense for them from a $$$ perspective. Jank and rough edges galore may be something they're willing to live with.

[EDIT]

How this works in my head:

Manager: Could you take a look at the Android developer documentation? Some of it's badly outdated and have you actually tried using it? A lot of the advice is... kinda bad. Also maybe look at a few of these UI components we had the Summer interns make, they've got bizarre implementations and are difficult and inconsistent to customize. Oh and god have you looked at our issue tracker? Has anyone, actually? Like, ever?

Developer: Hey so have you heard of this place called Amazon?

Manager: Uh I mean how about we start a fourth greenfield instant messaging product instead?

Developer: That's better. Amazon? Who's Amazon?


Actually, there is little thinking about "Amazon" or other competitors: internal mobility is easy. So replace "Amazon" with "this other team is doing a cool new project", and you can get transferred in a matter of weeks, most of the time without going through a heavy interview process.

And all of this isn't only motivated by "not doing boring tasks"; it's also that this is what drives promotions / career advancement. So "launch a feature/project, get rewarded, and switch teams" seems like a trend, and it does not encourage thinking about long-term maintenance, etc. (It isn't universal either, but it is a bit of a trend.)


Note that while it is quite well known and admitted at Google, is there a company where it is different? People complain about Radar tickets not being fixed while new (half-baked) features keep shipping on iOS as well, and it is somewhat for the same reason inside Apple.

Working on improving the quality of the software is not always easy to measure in terms of impact, and it is hard to get recognition for it in the same way as shipping a new feature.

Where Google differs, maybe, is that its promotion system is well codified and rewards the "difficulty" of the deliverable. This skews the balance toward rewarding people for adding more complexity to the system for the sake of solving it (you create difficult problems artificially; they aren't intrinsic to the user needs you should be solving).


Yes, the typical boring enterprise job.

There are teams whose only role is to work down those tickets, one after the other.


> It's impossible to get promoted by bug fixes inside Google

This is why people job-hop for meaningful career advancement. It's much easier than dealing with the broken promotion processes these companies employ.


The goal of job hopping is to get your salary to match your market value (or even exceed it, as job hoppers are great at salary optimization); promotion processes have problems everywhere.


Exactly. An anecdote (not my own): https://mtlynch.io/why-i-quit-google/


My thoughts exactly. I have two apps out on Google Play. They are full-featured, stable, and people still like them. I just want them to continue to work. But they don't. I constantly need to take time away from developing new apps to fix my newly broken old apps. How many times, Google, are you going to try to "fix" the problem of apps draining battery in the background? It never seems to end.

On the flip side, the first programs that I wrote for myself in the '90s still run. Yeah, I know that DOS is emulated now, but MS kept them working, and I didn't even have to recompile.

I have this conspiracy theory that Google is purposefully trying to make things difficult to get rid of developers that don't have a lot of resources.


I have the same problem.

I think Hanlon’s Razor applies -- this isn’t intentional, it’s just unfortunate mistakes.

I think the main driving factors are:

- Many Android APIs were badly designed from the start;

- Android was not designed for easy OS updates, so many users have old OSes;

- Google likes Apple's approach of aggressively deprecating old APIs and tries to do the same. But unlike Apple, they still have to deal with old OS versions, and their fundamental OS design is not as sound. The end result is what we see: messy and buggy APIs.


>> I have this conspiracy theory that Google is purposefully trying to make things difficult to get rid of developers that don't have a lot of resources.

I agree and Apple does this on iOS as well. It's a way of clearing out old apps.


Same


> I agree with you, but I do not believe in Google. Google provided the worst commercial platform I have ever had the experience of developing for. I have worked on a lot of different platforms in my past - pretty much all the consoles, desktop, and fringe ones like smart TVs or the various smartphones released since '99.

I really find it hard to believe that Android is worse than the proprietary, undocumented, Windows-only toolchains of Symbian, Tizen, and a bunch of other embedded crap, not to mention the horrors of the PS3 toolchain at the beginning.

I think you might be overdramatizing this a bit.


Please give me back Symbian C++ versus the experience of using the Android NDK.

And if you prefer a more modern example, Android is indeed worse to use than UWP or iOS.

When they do stable releases, they are actually betas, and updated documentation is scattered across Medium and G+ posts, alongside Google I/O and DevBytes videos - why bother updating the official documentation.

After 10 years, the NDK still feels like a 20% job from a team that is forced to accept that Android should provide a bit more than just 100% Java.


> Please give me back Symbian C++ versus the experience of using the Android NDK.

Please no, Symbian C++ was the worst development environment I had the displeasure of touching. I'd rather hand-write every Java binding for every single API call manually in the NDK than even consider looking at Symbian C++ again.

(Relevant experience: Nokia's Series 60 around the time of the 6600. I don't know if Symbian improved after that - I lost all will to bother with the platform - but considering you still had to implement exceptions via macros by hand because of technical decisions made two decades earlier, despite the platform not being backwards compatible and thus able to fix said decisions, I do not expect that things improved.)


It did improve at the end; the last iteration with Carbide (the 2nd Eclipse attempt), Qt, and PIPS was much better than using the NDK.


And now they are shutting down G+, so all those posts and pieces of missing documentation will simply disappear! (Unless Archive.org or somebody else saved a copy.)

Oh, the irony!


I started my first Android app recently, and decided to go with their "Jetpack" stack recommendations. I wasted hours trying to get Dagger 2 injection working, only to discover I had to write tons of glue code which obviated the purpose of the DI framework in the first place! I don't want to worry about transitive dependencies over an object graph and writing ViewModelProviderFactories (actual name) and persisting my data with Room.... I just want widgets, logic and layout.

Next I tried GKE and it was an absolute joy. I had a simple web app serving traffic an hour in. Probably because unlike Android, they have to make people like GKE to sell it.


Why are you bringing in Dagger 2, a non-platform separate dependency injection library here? :)

Do you think a proprietary Windows C++ compiler for an embedded platform will make your DI uses easier? :)


The Android Jetpack guidelines strongly recommend Dagger 2, and it's a Google product so I thought I should comply.


It's very surprising that you mention iOS as being stable. To me it's the worst offender. Most of the apps I worked on needed to be updated for every single iOS release because of deprecated APIs.


> It's very surprising that you mention iOS as being stable. To me it's the worst offender. Most of the apps I worked on needed to be updated for every single iOS release because of deprecated APIs.

As a user, I concur with this. I can install Android apps created in 2012 that still work in Android 9.

On iOS, I remember each major update breaking half of my apps (in some cases, completely).


That will change with the new target requirements for Play Store.


People say this a lot, but I think their behavior makes more sense when you view the end user as the product. Who cares about API stability, the advertisers are getting exactly what they want.


It will increase stability in a very important way because they apparently want stability of drivers, which Linux does not offer.


> they apparently want stability of drivers

Strangely, the only Android drivers that can be trivially ported between Android versions are those "unstable" bits in the Linux kernel.

Take any aftermarket smartphone firmware: if something does not work after porting to a new Android version, you can bet that it is a proprietary userspace blob. Who develops the HAL for those blobs? Google. Who controls its API? Google.

What "stability of drivers" are we talking about here? Certainly not the kind, helpful to users.


> Strangely, the only Android drivers that can be trivially ported between Android versions are those "unstable" bits in the Linux kernel.

That's only because the Linux kernel is actively hostile to the idea of maintaining a stable driver interface. Windows 7 drivers from 10 years ago still work in Windows 10.


ARM and the SoC vendors don't like open-sourcing their drivers. They prefer to just dump binary blobs and stop support after 2 years. The users couldn't care less because they buy a new phone anyway. (Except me.)


If the OS and APIs were stable, nobody would ever need to update a driver unless they wanted a bug fix.


Even Windows has a hard time keeping its OS APIs stable. Drivers written for Vista didn't necessarily work on Windows 7, and so on. This is one of the reasons why Linux highly encourages open-sourcing drivers, so that the driver code can be updated when an API needs to change.


> Drivers written for Vista didn't necessarily work on Windows 7

This is patently, provably FALSE[1]

MS goes out of their way NOT to break APIs. My ancient ATI netbook can run Windows Vista graphics drivers on Win 10. MS only breaks the driver API when massive kernel/underlying API changes demand it, e.g. 98->NT, XP->Vista.

Now contrast that with Linux. Linux kernel devs are openly hostile to binary blob drivers, so they make no attempt to preserve ABI stability. I've seen this happen multiple times with ATI binary drivers on GNU/Linux and when I was running CyanogenMod on my phone.

[1] https://www.techadvisor.co.uk/how-to/windows/how-get-drivers...


My understanding is that this is something that a microkernel design ought to be able to improve upon...


Forget about believing in Google; try at least for consistency within a single paragraph, as in:

> They do not realize that they are making •a platform•, not a product

and then:

> An OS exists to launch apps

And I still can't stop laughing after reading:

> Somehow every other platform is fine

To paraphrase JWZ's CADT article linked below: "writing a thoughtful critique" is not fun, "making useful suggestions to improve something" is not fun, but "writing a snarky, resentful rant on HN" is fun.

PS: most developers on Android are not game developers; maybe it's time to think about seeing the world from other people's point of view?


Android wasn't created by Google. They acquired it, and then had to deal with the legacy cruft that was already a part of it by then. The "platform-level" changes they've made since the acquisition—e.g. replacing Dalvik with ART—have mostly been sound engineering choices.

> They do not realize that they are making •a platform•, not a product.

Consider: maybe Fuchsia isn't a platform?

ChromeOS certainly isn't a platform: developers don't develop "for ChromeOS." App developers target the WebExtension ABI (or, more recently, the Android ABI), and ChromeOS just runs their apps using mysterious virtualization magic that doesn't matter to the developer. You don't target the OS; you just target a stable ABI. (Other examples of this: the Linux kernel ABI used by Docker on {Linux, macOS, Windows}; the Linux kernel ABI used by Illumos branded zones; the Linux userland ABI used by the Steam Runtime.) Essentially, you can think of ChromeOS not as an OS in the traditional sense, but as a hypervisor. The libs your apps depend on aren't part of the OS; they're part of your ABI's zonal environment, which is stabilized separately from the OS.

I would expect that Fuchsia is doing the same: being an OS but not a platform. The only people who will have to directly target Fuchsia are Google engineers.


Just to summarize my comment: I agree with you. I realize that eventually* having POSIX/graphics/... APIs as libraries would lead to a better, more stable, less driver-dependent platform. But I don't believe Google is interested in making a better platform for developers, judging by my almost a decade of experience with the ever-changing Android.

*"eventually" carries a lot of significance here. If I didn't have my APIs available at the launch of FucOS, or at least a clear roadmap to them within half a year, I would have to start my own projects to work around that "eventually it will work" promise. Because "eventually" is not good enough.


I've had the same experience, but I guess it's not cool to make comments based on your own experience on hackernews (and hence the downvotes).


>> I've had the same experience, but I guess it's not cool to make comments based on your own experience on hackernews (and hence the downvotes).

Some people here think your own experience is just an anecdote. They're wrong, of course. One's own experience is a data point (or even a collection of them).


As the saying goes, "the plural of anecdote is not data". Unless you are collecting the anecdotes in a methodical way, they don't tell you anything, since you can't know what biases your data is affected by.


I cured my X by doing Y.

Is that an anecdote or a data point? What if 150 people tell you the same thing? Having done exactly that, I don't really care how anyone else classifies it; it is my reality.


It's an anecdote. It doesn't matter if you hear it 150 times, obviously. If you hear 150 people tell you that vaccines gave their child autism, is that your reality?


> Running one OpenGL library on top of N different Vulkan implementations is miles better than running on top of N different OpenGL implementations.

The issue with running OpenGL on top of Vulkan is that Vulkan's API exposes a very rigid and static view of the GPU, while OpenGL is the exact opposite, allowing arbitrary state changes at any time. Actual GPUs are not as dynamic as OpenGL suggests, but they are also not as static as Vulkan, so by implementing OpenGL on top of Vulkan you are forcing state rebuilds that would not be necessary for the underlying GPU.
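
A rough fragment to make that contrast concrete (my own sketch, not complete code - a real Vulkan pipeline also needs shaders, a layout, and a render pass): in OpenGL a blend toggle is a call you can make at any time, while in Vulkan the same decision is baked into an immutable pipeline object up front.

    /* OpenGL: flip state whenever you like; the driver revalidates behind
       the scenes. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* ... draw ... */
    glDisable(GL_BLEND);

    /* Vulkan: the equivalent decision is fixed when the pipeline is built. */
    VkPipelineColorBlendAttachmentState blend = {
        .blendEnable         = VK_TRUE,
        .srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA,
        .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
        .colorBlendOp        = VK_BLEND_OP_ADD,
        .colorWriteMask      = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                               VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
    };
    /* ...this goes into VkGraphicsPipelineCreateInfo; drawing *without*
       blending later means binding a different, pre-built pipeline object. */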

Also, OpenGL being a higher-level API provides more opportunities for optimization than Vulkan (a very common example would be display lists, which thanks to their immutability and opaqueness can have a large number of optimizations applied to them - something both Nvidia and AMD take advantage of, especially Nvidia, which performs very aggressive optimizations on them).


Google will be doing it for Android Q, most likely due to all the devs that still can't be bothered to wrestle with Vulkan.


I'm not saying it isn't possible, ANGLE is a thing after all, but possible doesn't mean optimal.

Of course, given enough time, faster hardware will solve this.


A well-written OpenGL "emulation" on top of a Vulkan driver is most likely faster than a badly maintained native GL driver, and you'll only have to worry about the bugs present in the one GL implementation you're linking against, not a variety of bugs across different drivers and driver versions.


Yes, the best scenario of the first case is better than the worst scenario of the second case, but personally I'm more interested in the best scenario of both cases - especially since we already have working OpenGL implementations that take advantage of how OpenGL is specified. I'd rather see a push to improve the subpar implementations so they reach parity with the good ones than throw all implementations out the window because of the bad ones.


>Sandboxes are unportable and strapped-on

I contribute to an open-source project called Torsocks, which is part of the Tor Project, and this comment really resonated with me. Creating a syscall sandbox that works across even a few generally similar POSIX-compliant OSes is ridiculous.

FreeBSD and MacOS for example have a very similar system interface. But sandboxing on FreeBSD is via pledge, and MacOS uses the App Sandbox. Linux uses seccomp.

It's a mess.


FreeBSD uses Capsicum, OpenBSD uses pledge.


> Right now there are too many OpenGL implementations and the differences matter.

Vulkan is already going down the same path in spite of its youth.

https://vulkan.gpuinfo.org/listextensions.php


The extensions were always going to be in Vulkan. They embraced them even more than OpenGL, because you're not going to get around it. Hardware is just plain different from other hardware and PMs want a "value add". What Vulkan does differently is that you have to explicitly enable extensions, so you can't unknowingly be relying on an extension when trying to write cross platform code.
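
For illustration, a fragment of what the explicit opt-in looks like (my sketch; physical_device and queue_info are assumed to have been set up earlier):

    /* Extensions have to be requested by name when the device is created;
       if you don't list one, its entry points simply aren't available, so
       you can't depend on it by accident. Availability can be checked first
       with vkEnumerateDeviceExtensionProperties(). */
    const char *wanted[] = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };

    VkDeviceCreateInfo info = {
        .sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount    = 1,
        .pQueueCreateInfos       = &queue_info,   /* assumed set up earlier */
        .enabledExtensionCount   = 1,
        .ppEnabledExtensionNames = wanted,
    };
    VkDevice device;
    if (vkCreateDevice(physical_device, &info, NULL, &device) != VK_SUCCESS) {
        /* fall back to a code path that doesn't need the extension */
    }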

This is all a good thing.


Yep, polluting the code with multiple execution paths is a good thing.


You're... not required to use the extensions, right? How else are you supposed to provide the _option_ to use _optional_ features? Feels like there's gonna be a branch in there somewhere. Lowest-common-denominator APIs are a non-starter for high-performance graphics work.


Yep; given the size of a game engine, it is hardly any different to deal with multiple flavours of OpenGL/Vulkan than to just use the best API on each platform.

Ergo middleware is the new cross-platform API.


Which code?


Game engine code, where testing for each extension and reacting accordingly leads to several if () { } else { } blocks, or a vendor-agnostic interface layer, making the total development cost hardly any different from supporting multiple 3D API flavours.


The other option is waiting until all of the vendors have the ability that the extension provides and it has made its way into the standard.

That option hasn't been taken away from you. And unlike OpenGL, you can't unknowingly be relying on an extension since they're opt-in. So what's the problem again?


Nothing, just false advertising on complexity improvement over OpenGL.

Anyway, middleware has won the battle of 3D graphics; what goes on the bottom layer is largely irrelevant to most devs.


It's not false advertising. Swapping the extension model to explicit opt-in is one of many pieces designed to help you manage complexity in a non-trivial project, for the reasons I've stated.


So far the amount of boilerplate to handle extension management and code paths in Vulkan samples shows otherwise.


Or, if you care that much, you can not have any of that and just not enable any extensions. Easy peasy


Or just use a middleware engine and profit from the best 3D API provided by each platform owner - much better.


> POSIX can be a library too

I have the feeling that POSIX is an API that makes many assumptions about how things are implemented internally in the kernel, so POSIX as a library is often limited or inefficient.


I doubt anybody really writes 'POSIX' anymore, if that helps.

What they do, more typically, is 'write Linux software'. If they care about POSIX, they will try to ignore features they don't think were mentioned in some POSIX manual from 20 years ago. It's all very hand-wavy.

This is why you see the major operating systems advertise Linux software compatibility rather than paying for POSIX certification. AIX, Solaris, Windows, etc. Sure, it's not an official standard, but it's going to be pretty well defined, because you can just model your compatibility on WWLD (What Would Linux Do).

If push comes to shove then Fuchsia could just add some variation of 'usermode linux' as one of those userspace kernel services.


Actually, most system software today is POSIX, with some very rare #ifdefs to leverage Linux-specific syscalls.
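
A sketch of what that usually looks like in practice (a rough illustration of mine, not from any particular codebase): the bulk is plain POSIX read()/write(), with one optional Linux-only fast path behind an #ifdef.

    #include <sys/types.h>
    #include <unistd.h>
    #ifdef __linux__
    #include <sys/sendfile.h>   /* Linux-specific */
    #endif

    /* Copy up to 'count' bytes from in_fd to out_fd; returns bytes copied or -1. */
    ssize_t copy_bytes(int out_fd, int in_fd, size_t count) {
    #ifdef __linux__
        /* Linux-only fast path: a single syscall, no userspace copy. */
        ssize_t n = sendfile(out_fd, in_fd, NULL, count);
        if (n >= 0)
            return n;
        /* On failure (e.g. unsupported fd types), fall back to plain POSIX. */
    #endif
        char buf[4096];
        size_t total = 0;
        while (total < count) {
            size_t want = count - total;
            if (want > sizeof buf)
                want = sizeof buf;
            ssize_t r = read(in_fd, buf, want);
            if (r == 0)
                break;              /* EOF */
            if (r < 0)
                return -1;
            for (ssize_t off = 0; off < r; ) {
                ssize_t w = write(out_fd, buf + off, (size_t)(r - off));
                if (w < 0)
                    return -1;
                off += w;
            }
            total += (size_t)r;
        }
        return (ssize_t)total;
    }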


"OS research is dying out" - this

I was learning about kernel dev around the same time I was trying to understand how conditional execution and speculative execution work, as a result of really trying to understand every step that happens when a system call hands something to the kernel, the kernel does something with it, and it hands the result back to userspace.

I kept asking, but a lot of supposed Linux nerds I spoke with couldn't tell me how the kernel and user space truly hand off data or negotiate memory with each other, leaving me drawing out trap-handling routines on a posterboard and penciling in gdb disassemblies of system call code, feeling dumb for not knowing. Meanwhile, we all found out about Spectre and Meltdown, and that there really is no secure handoff without significant performance degradation and/or increased sandboxing for things like the browser, etc.
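
For anyone else poking at that boundary, the userspace side of the handoff is at least easy to see (a tiny illustration of mine): the syscall() wrapper is the last stop before the trap into the kernel, and the kernel's raw return value comes back in a register.

    /* Minimal look at the user/kernel handoff from the user side:
       syscall() places the syscall number and arguments into registers,
       executes the trap instruction, and hands back the kernel's return
       value. Everything past the trap is the kernel's entry/exit code. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        long pid     = syscall(SYS_getpid);              /* same trap getpid() uses */
        long written = syscall(SYS_write, 1, "hi\n", (size_t)3);
        printf("pid=%ld, wrote=%ld bytes\n", pid, written);
        return 0;
    }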

And of course, what is the root of the issue here? The root of the issue is that Linux is too deeply integrated into monopolized hardware architectures, which is perhaps why AMD's stock price skyrocketed the day Spectre and Meltdown came out, when we found out that the only near-term mitigation for this legendary security vuln would cost around a 30% reduction in performance across the board on Intel, as opposed to much less on AMD, given that the AMD architecture was less prone to the speculative-execution vulnerabilities.

The more I learned about these things, plus issues with other basic functions like wait() or strcpy(), or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems. The networking stack in Fuchsia is written in Go, for example. While I don't know much about Go, can it be worse than C, which leaves it up to almost every developer to manage their own memory, with all the performance and security implications of that?

Zircon (formerly Magenta) is designed to be modular enough to withstand the waves of hardware architectural evolution that are coming. We are approaching 5nm development (the theoretical limit of how small a transistor gate can be before we can no longer control switching due to quantum interactions), and this is not far off - Intel already has 10nm in production, and probably others now as well (it's been a bit since I checked) - and then on to quantum computing:

Because quantum computing (this is debatable, and I know the least about this) is not ready for mass production, particularly at mobile scale, my conjecture is that once we reach the theoretical limit of how small a transistor can be, designs will turn to optimizing for performance in every other way we can, without relying on powerful processors to accommodate memory bloat or endless dependencies (yes, I also pray this forces JavaScript modules to get better or die out, but that's a long-range dream).

Meanwhile, AMD gains ground post Spectre and Meltdown. So, in summary, there are a lot of other options to consider than just optimizing for POSIX forever.

Therefore, I am glad there is a push to explore alternatives. I feel as though anyone who thinks it's not potentially beneficial to explore alternatives to POSIX-based systems does not work with Unix-based systems in any kind of depth on a daily basis. But if someone does, and you think Linux, for example, is the best operating system in the world and can't be improved upon outside of its defining protocols, then I would love to hear from you on this thread. I am not nearly as experienced as most people who work with Linux, but I can say that most I have interacted with view it as a love-hate relationship for many of these very reasons.

You can also see this trend of unhappiness with Linux OS defaults out in the wild, outside of Google.

More and more serious applications are looking to bypass standard userspace application development, either to be more secure or to customize - most often, if not for security, then to optimize performance for the things we used to consider the standard Linux kernel somewhat good at.

Here are a few varied examples I can think of off the top of my head, anecdotally, from trying to solve everyday problems for users with Linux, but I am sure there are many more:

1. Dropbox's Bandaid attempts to customize network scheduling, usually handled in kernel space, due to performance issues: https://blogs.dropbox.com/tech/2018/03/meet-bandaid-the-drop...

2. WireGuard is an example of a VPN where connection negotiation is handled more and more in the kernel, because traditional VPN designs have left TLS handoffs in userspace (what is the point of userspace anymore for serious application development when this is the trending security default): https://www.wireguard.com/

3. Sysdig implements eBPF functionality to allow sysadmins and devops engineers to customize and/or secure things in ways we no longer trust the default Linux userspace/kernel-space design to do: https://dig.sysdig.com/c/pf-blog-introducing-sysdig-ebpf


> The more I learned about these things, plus issues with other basic functions like wait() or strcpy(), or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems.

Dig into the worlds of Burroughs B5500 (now Unisys ClearPath), IBM OS/360 (now IBM z), IBM OS/400 (now IBM i), and the now gone Mesa/Cedar, Oberon, Active Oberon, SPIN OS, Topaz OS, Mac OS/Lisa, Singularity, Midori, Inferno, ...


And yet you are still alive and not starving to death. But the banter I see on here is Android video game developers complaining that a move away from Android will be the end of them.

Google is not stupid; they are not going to deprecate Android overnight and replace it with Fuchsia. This operating system has been in the works in the open - you can see the commits on GitHub going back at least two years, I think more - and there will clearly be many iterations of its development to come, with increasing adoption each time as people make money on the platform. Just like with Android, which took years before it reached the threshold of 50% use compared to iPhones, and no iPhone video game developers that I know of starved to death trying to adapt to that change. The drama on this thread about API changes is significant, for sure, and I understand that Google retires APIs or suddenly starts charging for them in a way that makes small companies close up shop overnight (like Google Maps, for example), but that is not a justification to ignore that the objective limitations around Moore's Law and the need for competition in computer hardware are forcing companies with experience in both spaces to reconsider kernel development at a more fundamental level.


Android is being ported to run on top of Fuchsia.


Which is why I'm confused about all of the top-ranking comments complaining that Android will change their APIs for this. Will this require a change for Android app developers if that is the case? Regardless, this seems like a more fundamental layer of improvement.


They are mostly by folks that never did Android development and think they are free to use Linux code as is on the NDK.

Still, it will be a scenario similar to ChromeOS. How many people are buying ChromeOS devices to run Android apps?


Wow that's an interesting list. Might you be able to add some specific points of interest on some of these OS's to start with? Cheers.


Sure,

Burroughs B5500, the first OS written in a high-level systems language (ESPOL, later NEWP), in 1961, 8 years before C came into existence. It already used compiler intrinsics instead of Assembly, and the concept of unsafe code blocks.

IBM OS/360, which famously introduced the concept of containers and, alongside IBM OS/400, also has language environments - think a common VM for multiple languages.

IBM OS/400, originally written in a mix of Assembly and PL/S, which uses the concept of a managed runtime with a kernel JIT invoked at installation time, and uses a database as the filesystem.

Mesa/Cedar, a systems language developed at Xerox PARC, with an IDE-like experience similar to their Smalltalk and Interlisp-D workstations. It uses reference counting with a cycle collector.

Oberon and its descendants, Niklaus Wirth and his team's approach to systems programming at ETHZ after his 2nd sabbatical year at Xerox PARC.

Mac OS/Lisa - these first versions of Apple's OSes were written in Object Pascal, designed in collaboration with Niklaus Wirth, whose extensions were later adopted by Borland for Turbo Pascal 5.5.

Singularity/Midori, the research OSes designed at MSR, largely based on .NET technologies.

Inferno, the actual end of Plan 9, using a managed language for userspace, Limbo.

SPIN OS/Topaz OS - Graphical workstation OSes for distributed computing developed in Modula-3


Thanks, this is great! I'm looking forward to digging into the specifics of some of these. Cheers.


>> OpenGL should be a library.

But it's not. Where is this OpenGL implementation that runs on Vulkan? I would argue that it should come as fully open source from the Khronos Group, since they are the ones providing both standards. It's fine to create a new thing with a long-term vision of what a better world looks like, but people won't follow if the pieces they need today are just a wish.


Besides Zink, there is GL ES support on top of Vulkan through GLOVE[1] as well as Google's own ANGLE[2].

[1]: https://github.com/Think-Silicon/GLOVE

[2]: https://github.com/google/angle


Zink is an effort to write an OpenGL layer on top of Vulkan. Previous discussion:

https://news.ycombinator.com/item?id=18356179


>"There are a ton of APIs in POSIX which are straight-up shit, like wait()"

Could you or someone else elaborate on what is so loathsome about the wait() system call?


> Don't change APIs on us - not everyone has an extra million dollars to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is also broken, just in different ways.

This was what drove me out of my (brief) stint at Android development. Did some hobby development to learn the ropes, spent a lot of time trying to "do it right". It was a bit clunky but alright I guess. Left my project alone for a few months while my day job was busy. Came back to it to find that, in a few months, there had been not one but two generations of deprecated APIs between what I'd written and current 'best practices' and that a bunch of pretty fundamental stuff had been deprecated.

I'm not wasting my life chasing that particular Red Queen.


Yep, Android's best practices tend to last one Google IO.


Fuchsia relies on the Zircon kernel, which last time I checked used Magma as a framework to provide compositing and buffer sharing across the logical split between the application driver and the system driver, both of which exist as userspace services.

The fact that graphics drivers exist as userspace services should reduce latency by minimizing the need for capability calls into the kernel, the equivalent of a system call, which on standard Linux requires an expensive syscall/trap-handling routine (for example).

This is presumably to support an architecture with more direct access to the GPU, where the main CPU scheduler doesn't have to schedule a round trip of data from the main CPU over the data bus to the GPU and back, as it would on standard Linux on standard hardware, for example. The overall design should decrease latency and advance open-source graphics development in a userspace setting.

> - No OpenGL support. I guess this is where the world is moving

Vulkan shares its SPIR-V intermediate representation with OpenCL - compiled kernel/shader code designed explicitly for the GPU. While not technically OpenGL, it is architecturally a more direct interface to the hardware, and there are plenty of engines working on Vulkan support. Consider that there are other advantages to a modern graphics API than just OpenGL-style rendering (like parallel compute and abstracting large data sets into matrices that map nicely onto GPUs), and optimizing based on that assumption is not as unreasonable as it sounds.

> - No POSIX support

"Full POSIX compatibility is not a goal for the Fuchsia project; enough POSIX compatibility is provided via the C library, which is a port of the musl project to Fuchsia. This helps when porting Linux programs over to Fuchsia, but complex programs that assume they are running on Linux will naturally require more effort." - https://lwn.net/Articles/718267/


That sounds like you want to continue using APIs and an OS from the 1970s, while Fuchsia is deliberately trying to break with those old conventions.

OpenGL was never a serious contender for modern games, and there's a good reason why all new 3D APIs are moving to lower-level representations (Vulkan, DX12, Metal) with libraries on top. OpenGL is a horrible implicit state machine which is terrible to multithread and still has a hard-wired 3D accelerator as its base model, which isn't how new graphics cards work. Everything else is bolted on top of that out-of-date idea, which makes it hugely unwieldy for new software.


> That sounds like you want to continue using APIs and an OS from the 1970s

If possible, I'd prefer to keep using APIs from 1960 instead.


Then perhaps you shouldn't be using an OS from 2020? Demands for POSIX compatibility really seem like cargo-culting these days, considering none of the popular platforms have apps that use only POSIX APIs. It's a crutch that holds back API design while still demanding that you use OS-specific syscalls in pretty much every piece of software out there.


>> It's a crutch that holds back API design

When I read that I caught a hint at a problem. API design should not be an ongoing activity. People should design a new API and then we should all use it for a long time. If it's frustrating that POSIX is still used, you might want to consider that stability is the feature that keeps it around.

Perhaps it's time to reread Joel on "fire and motion":

https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

I'm all for this Fuchsia thing, but they claim to have the experience to design something better so do that and let it stand. Regular software updates are actually a sign that you don't know what you're doing.


New APIs make sense but keeping the old ones for backwards compatibility does too.


Sure, but there's no reason you have to do that in the kernel.


Old doesn't have to mean bad. Folks are switching from MongoDB to PostgreSQL, or wish they could if it wasn't too late and expensive. Evaluate on merits, not age.


> No OpenGL support. I guess this is where the world is moving

> No POSIX support. Quite a few game engines rely on it; oh well, remember when Google cared about developers?

Well, tell that to game developers working on consoles. They don't have access to the same APIs and they made it work just fine. Most of the time, you don't make your own engine; you rely on some other engine to support your platform, so it just works.

I've done enough Wii, PSP, PS3, and PS4 development, and having different APIs was never a problem.


Console APIs are nice and clean compared to Vulkan though, since they are tailored to the underlying hardware. Vulkan is a weird compromise between a low-level API and covering fairly different GPU architectures.


The point was more about general APIs like threading or file access, anything covered by POSIX.

It is already expected that you'll have to rewrite your GFX backend using the platform's specialized API anyway, but people don't expect the same for general-use APIs too.


When the goal is a clean slate OS design, we can't get up in arms about compatibility out of the box, can we?


I think the first and only goal should be user experience.

An OS is a tool to start apps, and if they can't make it comfortable to make and support apps, they have failed before they started.

Google's problem is their academic goal to make a nice OS, IMO.


Why do you want to force your idea of what the goal should be upon Google?

If they want to test relatively unexplored waters, let them do so.


> - No POSIX support. Quite a few game engines rely on it; oh well, remember when Google cared about developers?

This is caring about developers; you just don't understand it.


It's not, and eventually they will have to find ways to provide POSIX compatibility; it can happen through a separate compatibility layer or a library. You don't want to break compatibility with millions of lines of already-written code if you want widespread adoption. No one is going to rewrite everything from scratch just because Google says so.


Which "millions of lines of code" are compatible with POSIX without having any Linux or macOS specific code? Or being compatible with Windows for that matter, which is POSIX compliant just in the name?

I think you're hugely overstating the importance of POSIX, not to mention downplaying the fact that POSIX really isn't a sufficient requirement to avoid having to do any code porting.


> Which "millions of lines of code" are compatible with POSIX without having any Linux or macOS specific code?

Those layers would have to be reimplemented to retain compatibility.

> I think you're hugely overstating the importance of POSIX.

People generally understate the importance of POSIX just because it's old. It's impossible to get APIs perfect, and you are throwing away decades of work in the name of getting APIs "right". No one is going to rewrite everything from scratch just because the new APIs look shiny. For reference, read about the Unix wars [1].

[1] https://en.wikipedia.org/wiki/Unix_wars


If Fuchsia is wrong on this point, they can add more POSIX compatibility later.


Someone correct me if I'm wrong, but this is probably lower level than what you're speaking of. I doubt they'll be revamping all those "Android sound systems" you speak of, but rather rewiring them to this new kernel.

It wouldn't make sense for them to completely throw away all Android apps ever and rewrite all the APIs from scratch.


Google is making Vulkan a required API in Android Q, and they are updating ANGLE to also run on top of Vulkan.

As for Fuchsia, given the team's different background and the OS APIs, I am not expecting the straitjacket experience of the NDK.


I tried to start Android development a couple of times, and it was impossible for me.

The last time, I created a new project from the Android Studio built-in template and it was broken from the start - the UI editor was throwing an error, googling didn't help, and I didn't have enough time to debug it, so I gave up again. I expected the built-in template to always work.

I hate Java; I always hated that tree of subdirectories. What's worse, there are some Maven repos and some proprietary Gradle binary which is randomly downloaded to my PC like malware. Why is that? It is very difficult to configure the project: so many XML files, the documentation is not very good, and the build process takes a long time. Android Studio is slow and takes as much RAM as any Java-based IDE.

Rename a project? Near impossible. You have to be a guru to know all the places where to change it, rename directories, etc.

In comparison, it is very straightforward to develop for iOS. It is possible to integrate C/C++ easily. Normal gdb/lldb. The Cocoa UI is nice, with nice frameworks and reasonable documentation. The only problems are the closed platform, the requirement to pay for the developer program, and signing all binaries.



