"a post-API programming model"
But pressing on, he somehow manages to blame the lack of updates to Android phones on the modularity of the Linux kernel. The joke of course being that Linux is monolithic and Google's new OS is a microkernel, ergo more modular.
The quote is "...however. I also have to imagine the Android update problem (a symptom of Linux’s modularity) will at last be solved by Andromeda"
It's hilarious that he can, somehow defying all sanity, ascribe Android's update issue to an imagined defect in Linux. Android phones don't get updated because, for the manufacturers, ensuring their pile of hacks works with a newer version of Android would represent a non-trivial amount of work for an OEM who already has your money. The only way they can get more of your money is to sell you a new phone, which they hope to do one to two years from now.
In short, offering an update for your current hardware would simultaneously annoy some users who fear change, add little for those who plan to upgrade to a new model anyway, decrease the chance that a minority would upgrade, and cost money to implement.
It's not merely not a flaw in the underlying Linux kernel; it's not a technical issue at all.
At the moment, phones include all sorts of custom drivers for very specific versions of the hardware. The OEMs ought to send these upstream, but don't want to. You can't build your own kernel and upgrade without breaking all the binary-only drivers.
Android falls between two stools. Google owns the userland, but the OEMs are responsible for updates, and the SoC manufacturers (mostly Qualcomm and MediaTek) are responsible for closed-source drivers. Arguably the cleanest and least achievable way out of this is trying to have an OS-only phone.
Red Hat Enterprise Linux actually solved this with their kABI. This allows vendors to ship binary driver RPMs for kernels within the same RHEL major version (e.g., RHEL 4, RHEL 5, etc.). However, this entails a major effort on Red Hat's part, as it forces them to backport improvements from upstream in a slow and careful way, so as to avoid breaking binary compatibility.
The RHEL kABI model was quite nice to deal with as a 3rd-party NIC vendor (and you had to do it, even if you upstreamed your drivers, as they were frequently not backported into RHEL/CentOS at a rapid clip, so RHEL customers would be using ancient buggy drivers unless you put together a driver RPM for them).
However, the source changes to support all the backports were something else entirely (which you needed if you wanted support for newly backported features). For a 10GbE driver that was roughly 2000 lines of C, I had a roughly 1000-line hand-made configure script, roughly half of which was checks made necessary by RHEL's backports. Checks that could have been a simple test of the Linux kernel version became complex spaghetti trying to detect how many arguments some function took, because EVERY kernel was 2.6.18 on RHEL5, even if it had backports from 10 or more versions higher.
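The same failure mode exists anywhere a platform backports features without bumping its version string: you end up probing for the capability itself instead of checking a version. A rough sketch of the principle in TypeScript terms (browser feature detection - a hypothetical analogy, not the kernel mechanism itself):

    // Version sniffing fails when features are backported: the platform
    // reports an old version yet ships newer capabilities. So probe the
    // capability directly instead.
    function supportsIntersectionObserver(): boolean {
      return typeof IntersectionObserver !== "undefined";
    }

    if (supportsIntersectionObserver()) {
      // Safe to use the newer API.
    } else {
      // Fall back to the older code path.
    }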
I'm guessing that Google felt that it was better to invest in building an OS they could control from the ground up, rather than hiring a building full of people to backport upstream patches in a binary compatible way.
Obviously, you can't just take .28, apply a fix, and call it .29... but you have to do something - ideally something that indicates binary ABI compatibility with upstream .28 (so software vendors can just decide: you're .28, you get .28 features).
Maybe something like SemVer needs a model for versioning forks? (I'm not a SemVer fan or hater; it's just the best-known effort in this area.)
And you're right: if you always build on the base version, then you have greater compatibility. However, you also lose out on the backported features, many of which were important for performance.
This is just an excuse. The reality is the manufacturers still have the mentality that once something is sold, their responsibility ends. We see the exact same results on certain OSes that do have stable ABIs. You've probably made a transaction recently on a device that uses an old and unpatched version of Windows CE.
I'm still shipping software for WinCE 4.2 devices. They're supposed to be on their own little LAN segment, not bridged to the internet, with a border PC managing them. The upgrade path would almost certainly be both an expensive nightmare and possibly infeasible - they have exactly enough flash for the OS they shipped with.
It's really a cost-benefit tradeoff; all these software updates cost money. Who pays for that, under what circumstances, and why?
Not my experience. I have three hardware devices that worked on Windows, Linux and FreeBSD 5-10 years ago, and now only work on Windows and FreeBSD. (Amusingly enough, one of them is a Windows CE device.)
Thus, with what you describe, the argument can be made that the Linux driver model doesn't appropriately take into consideration the incentives and needs of hardware manufacturers.
Isn't this a rampant GPL violation? Why do we put up with this?
We are about to lose the war for general purpose computing due to insufficient GPL enforcement.
> Isn't this a rampant GPL violation?
For example, an OEM can bash out low quality code which is hard to maintain, secure in the knowledge that they'll only have to maintain it for the year or two that phone is getting updates, and it won't have to survive any big kernel updates or be compatible with anything that isn't on the phone.
Upstream, on the other hand, wants to maintain device support forever so they want good quality code that they can get through kernel updates with a minimum of fuss, and they can't accept anything that breaks existing code.
So upstream won't accept the OEM's shitty patch, and the OEM won't pay for the engineering time to make patches that upstream will accept.
(Of course, some OEMs absolutely do violate the GPL, and then there's also the issue of binary drivers and blobs that violate the spirit but not the letter.)
Interesting; wonder where this puts Android's Dalvik VM (is the VM called this too? I don't remember). Edit: Not sure what the JVM that Dalvik is based on is licensed as, though.
They do not want to piss off their corporate masters...
Who is "we"? As in, who will pay the lawyers?
looks like it came directly from the source repo.
Modular is the application platform of Fuchsia.
It provides a post-API programming model that allows applications to cooperate in a shared context without the need to call each other's APIs directly.
The thing is, this only works in countries similar to the US, where most people are on contracts.
In the rest of the world, where people are on pre-paid, we use our phones until they either die or get stolen, which is way more than just 1-2 years.
Between that and fragility of smartphones (mechanical damage, water damage), most people are bound to replace theirs every few years anyway.
My S3 is 4 years old now, and it works perfectly fine.
When it dies, I will most likely adopt one of my Lumia devices as my main one, or will buy a 2nd-hand Android device, instead of giving money to support bad OEMs.
Just got it a new battery, very easily, for €7 - good for the years to come.
However do be aware that your perception of what is slow changes as you use newer devices which perform better, making older ones feel slower than they did before - much like a shiny new work PC makes your home PC feel slow (or vice-versa) - so that may partly explain it.
Finally, I've just breathed new life into my ageing Nexus 7 (2013) tablet by installing the custom ROM "AICP" on it, which has made a huge difference to its performance & battery life. I recommend giving it a whirl on your SO's S3 - download from http://dwnld.aicp-rom.com/?device=i9300 (though you'll need to read up elsewhere about installing custom ROMs if you've never done it before)
Point is, I won't change my S3 until it crumbles to dust, if I can help it. I don't understand the obsession with buying the latest mobile every 1-2 years.
Also, the GC on older Android devices is a mess. It does not take many apps hanging out in the background before things slow to a crawl.
Is it still getting security patches? If not then it's not running perfectly fine.
Two years of updates plus one of security patches, for mobiles that cost more than a computer, is not something I am willing to pay a premium for.
Do you have a study available that I can refer to?
My smartphone's getting on for 9 years old; I don't think I'm a key demographic ;)
If so, then there might be greater pressure from phone rental agencies on the manufacturers to stabilise and fix bugs so that they can extract maximum value from the hardware.
To be honest, subjected to a "mildly aggressive & negligent" usage pattern like mine, even an iPhone will barely last more than 3 years, and it will be in "far from good" condition after 1 year of usage!
Modern smartphones are simply not built to last unless you take exceptionally good care of them. Or maybe it's just that me and the people I know tend to be "exceedingly violent" with our smartphones, dunno...
I think most people are more like me, but there are indeed a non-negligible segment of the population like you (and my sister) who just destroy their phones. Based on everything I've seen, it's definitely more of a you thing than a modern phone thing.
We are very far away from phones that you could use without screen protection and not see damage after 1-2 years. For a device that is supposed to be carried around all the time, modern phones show little resistance against scratches. But maybe that's just not possible to achieve.
Unless you want to take a risk and get someone to replace just the broken glass for you, a process which involves some manual fumbling with heat guns and UV-hardened glues. Or unless you live in Shenzhen, where they'll replace the whole phone front (with electronics and all) for cheap, if you give them your old phone front (which they presumably fix up later and resell to the next person).
I dropped my Xperia Z1 Compact a bunch of times from that height, the screen is still fine...
> You accidentally sit on it, and you may have a screen to replace.
Solution: Don't ever put your phone in your back pocket.
I used the occasion to ask for a front for an S3. She sold me one for somewhere around $50 - because I didn't have the broken one for exchange - which I later used to fix my SO's phone myself. It's not that hard after you've seen how the Chinese do it, though a little stressful - I had to use a needle to punch through a speaker canal, which for some reason wasn't hollowed out properly :).
I guess the reason it works in Shenzhen is because that's the place where the phones are made and recycled - they have tons and tons of parts for every model imaginable, both from factories and from broken phones. Given how many different phone models are out there, I doubt any city except a major metropolis could sustain this type of market.
That's a direct quote from the linked page. Unless you are suggesting that the authors of fuchsia have "insufficient tech chops".
If someone important enough says "decouple" and "monoliths are bad" often enough, those arguments will be used to support more or less all changes, as they already have leverage in the organisation.
The actual reasons could be something else, possibly as simple as they would like to control the OS, and possibly also that it's cool to write OSes.
So either Google fully controls Andromeda, as it does with Chrome OS, so it can update it (Google could probably still release an open source "Andromedium" version later on, like it does with Chromium OS), or it somehow forces all OEMs to update on time right from the beginning.
But the latter sounds like a real pain from Google's perspective, as it would probably have to one day completely retract Samsung's license, for instance, or sue it, if Samsung doesn't comply, and this could create all sorts of PR problems for Google. Or Google would have to compromise and allow the OEMs to delay updates from, say, 1 month to 3 months or more. And then we'd be right back to square one.
I don't think there's a real practical solution to the update problem other than Google fully controlling the codebase.
> The joke of course being that Linux is monolithic and Google's new OS is a microkernel, ergo more modular.
If you're suggesting that because the microkernel is modular that means "it should solve the update problem", I disagree. I don't think it would be much better than what we have now. Sure, it may be easier for Google, or say Samsung, to update their modules faster. But what about the other modules in the market? Will they be updated just as fast by noname OEMs? Probably not.
Right, but they will never get my money if they don't deliver timely OS updates for the old model.
You're right of course that this issue has absolutely zero to do with Linux.
Which is why I would absolutely love to see a hardware vendor adopt paid updates, with a mission statement along these lines:
"we cannot believably promise a reliable progression of software updates, because too many before is have tried and failed. What we can do, however, is rewrite the rules so that we will have a much stronger incentive to follow through than or predecessors".
No matter how updates happen, if they do the effort will be paid for with money coming from the customer, one way or the other. Any payment scheme other than pay on delivery requires trust and that specific form of trust has evaporated a long time ago.
I believe he's talking about having a driver API. Fuchsia also runs its drivers in userspace, which is the proper design in a post-Liedtke world (the principle of minimality).
The update problem on Android is indeed a policy issue, not a technical one.
The fact that you need the Android device OEM's support is precisely the problem here, and that's mostly (mostly - admittedly not entirely) to blame on the way that manufacturers need to update their Linux kernels. Linux's monolithic nature makes this a major pain, especially since component manufacturers (for the SoC, touchscreen, fingerprint reader, camera sensor) implicitly require maintaining some non-mainline Linux. The manufacturer now serves as the central hub coordinating this 'mess' to update their devices. Mostly (again, not entirely) due to Linux, Android updates depend on manufacturers compiling update packages which are essentially full system upgrades. Each time they make an upgrade for a phone, they need to build an entirely new device image.
The manufacturers have clearly shown themselves incapable of handling this responsibility, both in terms of their abilities and their motivations. A more modular OS would go a long way towards taking away these responsibilities, since it provides the technical means by which responsibilities can be given to a better-motivated third party. Clearly we should be looking towards a model more akin to Microsoft Windows, where a more stable driver API/ABI allows the OS's original manufacturer to update critical components and introduce new features without depending on an incapable or unwilling device manufacturer. Make system updates not full system upgrades; make the system able to update some components with reasonable confidence that things will keep working. With such a system, Google or the "Open Handset Alliance" - or, depending on the openness of this new Andromium, others like the LineageOS community - could take care of updating phones instead of the device OEM.
In such a scenario you might still be stuck on graphics drivers from several years ago with serious rendering bugs, but at least security issues like Stagefright and Dirty COW could be effectively dealt with in a matter of days. That's a huge improvement over the current situation, where the majority of Android devices are still affected by both issues, months or years after their publication.
Imagine how unlucky we would be if we were fully dependent on Asus or LG for software support on our laptops. But that's exactly the situation on our phones.
This is what we need with phones.
This is it; unless crucial apps (or some other necessity) only work with a newer Android version (which could push OEMs to update), they never will. Much better for OEMs to hope that you'll "update Android" through the purchase of a new phone.
Google has already done 90% of the necessary work by adding Android apps to ChromeOS. Two and a half years ago it created "App Runtime for Chrome", which demonstrated that Android apps could run on Windows and Mac in a limited, buggy way. If Google had put meaningful effort into developing such a strategy, we would by now have a relatively simple way to develop software which runs on 99% of laptops and 85% of smartphones and tablets. Developers would now be targeting 'Android first' instead of 'web app first, then iOS, then maybe Android'.
Sundar, if you're reading this - do it!
Applets became slow to start many years later, when bloated "enterprise" applications were produced in abusive ways.
The security of applets started to deteriorate slightly before the death of Sun. It became a security hell only once it was in the hands of Oracle.
I did my fair share of applets back in the day, starting with the very first public versions and have very vivid memories of the loading screen :)
At least that was the case for me...
Although yeah, even with our newfangled fibre connections applets are still pretty bloated.
* Java itself was slow for a long time
* The Browser would hang while loading an Applet
The second was, as far as I can tell, an API issue. Applets would block everything by default until they were loaded - a really bad idea in a single-threaded environment, when you had to send several MB over low bandwidth and the JVM itself took long to start. Just making the load async with a completion callback could have solved this issue, and I remember a few applets that actually used an async download to reduce the hang.
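In modern terms the fix is exactly that: a non-blocking load plus a completion callback. A minimal sketch of the pattern (TypeScript; the payload URL is made up):

    // Download a large payload without blocking the UI thread, then hand
    // it to a completion callback -- the pattern blocking loaders lacked.
    async function loadAsync(url: string, onReady: (data: ArrayBuffer) => void): Promise<void> {
      const response = await fetch(url);     // non-blocking download
      onReady(await response.arrayBuffer()); // completion callback
    }

    // The page stays responsive while the download runs in the background.
    loadAsync("/big-payload.bin", (data) => {
      console.log(`loaded ${data.byteLength} bytes`);
    });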
In any case, Google doesn't need to be as strict as Sun was. It is free to implement "write 90% of your code once and 10% customised for each platform".
Actually they suffer from most of the same problems; only computers have gotten faster (masking performance issues) and our expectations have lowered. How many of these web apps obey the native OS theming, for instance?
Although I think there's more going on with regard to 2. I was never bothered so much by the UI of a Java applet looking different. What bothered me was that even very fundamental stuff like input fields and scrolling felt both alien and shittier than native. And while it's certainly possible to make a web app just as shitty, if you rely on 'stock' HTML elements, a lot of the subtle native behavior carries over.
Just a few weeks ago, for example, I built a web app for mobile devices. It felt off immediately because the scrolling didn't feel right. All I had to do was turn on momentum scrolling (with a line of iOS-specific CSS - see the sketch below), and the scrolling suddenly felt native. Had I used a hypothetical Java applet equivalent, I might've had to either go for a non-native-feeling scroll or build it myself.
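The line in question is presumably -webkit-overflow-scrolling: touch. A sketch of applying it from script (the selector is made up):

    // Opt a scrollable container into native iOS momentum scrolling.
    // ".scroll-container" is a hypothetical selector for this sketch.
    const container = document.querySelector<HTMLElement>(".scroll-container");
    if (container) {
      container.style.overflowY = "scroll";
      // The iOS-specific property; harmlessly ignored by other browsers.
      container.style.setProperty("-webkit-overflow-scrolling", "touch");
    }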
While I of course can't prove any of this, I think what people care about is that things feel native, not the 'skin' used to display it.
I think people finally called the bluff that users have any expectations. And even if they had, what they don't have is choice. The current market is that everyone is building a walled garden around their selling proposition, so if a company decides to make a web app instead of a native one, then that's all you have. Nobody will make a better one and risk getting sued. If a service doesn't want third party applications, then they won't happen.
It's a really interesting question, actually, because it's so hard to compare the two. On any objective measure, today's web apps are much better than applets in terms of responsiveness, etc. But then again, an applet could run on machines with 16MB of RAM total. I think you'd be hard pressed to get a plain HTML page in a modern browser to run on a machine like that. Either way, in both cases we had a much better solution in native apps.
> 2. I was never bothered so much by the UI of a Java applet looking different. What bothered me was that even very fundamental stuff like input fields and scrolling felt both alien and shittier than native.
Modern web apps can score better here, but quite often they don't. The more complex they become, the less native they get. Scrolling, text input, etc. are generally OK (unless you're an arsehole that overrides scroll behaviour), but HTML still doesn't have an equivalent for native table views and the goodies (navigation, resizing, performance) that come with them.
For me the skinning does matter though, I have a beautiful, consistent desktop that browsers (not even electron apps) shit all over. When something doesn't look quite right from the second you open it, it magnifies all the other differences.
Oh yeah, complex UI stuff is definitely a good reason to avoid web apps.
But for many, probably even most apps it's precisely scrolling, text input, and other 'basic' stuff that matters, and in those cases a web app's 'default' will be more native.
> For me the skinning does matter though, I have a beautiful, consistent desktop that browsers (not even electron apps) shit all over. When something doesn't look quite right from the second you open it, it magnifies all the other differences.
I agree on a personal level, but I suspect we're outliers. Can't substantiate that at the moment though, so I might be wrong.
This applies to web apps pretending to be mobile apps, too. You can quickly tell one from another; the web app is the one with mediocre UI that behaves "wrong" in more or less subtle ways.
If an issue no longer affects anyone in any way, is it still an "issue"? Odds are that all the code you've ever written would have been considered criminally bloated at some era of computing history, but it hardly matters now.
I said it was masked, not gone. It still causes a lot of issues for people on resource constrained machines.
> Odds are that all the code you've ever written would have been considered criminally bloated at some era of computing history, but it hardly matters now.
For much of computing history we were making clear gains with newer hardware. Up to the 90's, software was getting more bloated, but it was doing more. Most apps today really aren't doing much (or any) more than we were doing in the 90's, but they require vastly more powerful machines.
Modern web apps are non-standard but pretty.
It seems to me that the 'new' Microsoft (since Satya Nadella took leadership) is changing their closed/proprietary stance on many topics. Not everything, of course - they still have to sell stuff - but as far as "genuine cross-platform" is concerned, they are certainly giving developers all the tools to both make and target all major operating systems.
Giving anyone not on Windows a second-rate experience? Core, as the name says, provides only a subset of the Windows .NET Framework, and most .NET code in the wild is written with the implicit assumption that it runs on the Windows framework.
> VS code universal electron app
Instead of making their main IDE a proof of concept for .NET Core, they wrote a Web3.50/NodeJS IDE. I am very sensitive to high-latency IDEs, so that is something I won't ever touch.
> the whole Office 365 paradigm
Trying to keep up with the competition, Google Docs ring a bell?
> since Satya Nadella took leadership
Nadella 2014. Linux on Azure 2012. Office 365 2011. Mono based on Microsoft’s promise not to sue 2004. Open sourcing parts of .Net is really the only thing you can assign to Nadella, everything else was still done by the good old triple E leadership.
Now, consider the orders of magnitude improvement in computing power over the years, and notice that webapps aren't really doing anything more complicated than old applets did...
Android and Material Design will not be "write once, mediocre everywhere". It may become "write once, great on Android (majority of phones) and ChromeOS, mediocre elsewhere." But writing for Android does not exclude creating native versions for other platforms. Using Java did exclude creating native versions because that was the reason to use Java, to not have to write native versions.
If Chrome manages to provide better versions of most applications on most platforms, it may win. Otherwise, people who use those applications will hate it with the heat of a thousand Suns, and it will go the way of the Java Applet.
Uh, Java AWT was the native toolkit. Swing was the non native UI with the ugly METAL default Look and Feel. There are some nice custom Look and Feel implementations that don't try to emulate a platform, I think Matlab uses one for its UI.
One thing that strikes me as weird is that almost any widget set that does its own drawing, or even just its own automatic layout, and tries to match the look and feel of a native UI invariably does not match even the basic look, because various UI components use the wrong size, are placed slightly differently, and so on. For example, everything I've ever seen that tried to match how Windows 3.1 Ctl3D looked draws window decorations one pixel narrower than the original, which is plainly visible and ugly. Similarly, things that attempt "looking like Motif" usually use different thicknesses for various lines and borders, and also often mix up the meaning of the focus rectangle (which should move with Tab) and the bevel around the default button (which should stay in the same place irrespective of which control has focus). I see no technical reason why either of these things cannot be done right; is there some legal reason for introducing such differences, which are small but big enough to be annoying?
A quick check of Wikipedia backs my memory: the Java classes were just a thin wrapper around the native components. AWT was mostly bad because it was limited; it did not even have a table.
> even the basic look, because various UI components use the wrong size, are placed slightly differently, and so on.
The Windows API does not come with a layout manager AFAIK. I vaguely remember setting every bit of relevant size/position data by hand the last time I used it directly. The same could be done with AWT, so this is most likely caused by programmer laziness.
For example, look at Windows 10 and the mishmash of controls (are they flat? or do they have a bevel?) available from the control panel, settings app, old COM dialogs, MMC etc. etc.
What choice do we have? Everyone's doing their own walled garden, so it's not like I can go and find an alternative SaaS / operating system with same features but better UI...
One of the last straws for me was the built-in mail app that had this awful background image. Too many flashbacks of shitty Access apps.
I remember issue no. 3:
United States Court of Appeals, Ninth Circuit.
SUN MICROSYSTEMS, INC., a Delaware Corporation, Plaintiff-Appellee, v. MICROSOFT CORPORATION, a Washington corporation, Defendant-Appellant.
Decided: August 23, 1999
Usually culprits are things like downloading multiple data files in a single threaded block, or insanely deep object graphs with thousands of memory fetches per real operation.
That's not really anything to do with the technique (code in browsers) or the runtime (JVM) or even the programming language (Java) -- and everything to do with poor development.
It's not fast on a $200 mobile phone though. It's still pathetically slow.
One problem I see with Google's ecosystem is that they've bet on the wrong horse - Java is a pain in the ass, and Android's Java foundations are its second-largest weakness. (The first one being the Google-vendor relationship that leaves all but the very latest Android devices unpatched and 100% insecure.)
This is basically how Dalvik/Zygote worked on actual Android.
From what I know, Chrome too uses always-running background processes even if you don't have a browser window open (for maintenance work and to improve startup times).
So I'd assume the project would do exactly that.
I still think touch screens are the future - also on desktops (there, either in tablet form or "drafting table" form).
I think editors more like acme and less like vim might rise up, along with new input types like the power bar and Surface wheel.
Acme, and anything like it, would be completely horrendous on a touch interface. I use it regularly (although I find that I prefer Sam).
Edit: Then again, I have touchscreeens on many of my laptops. I don't find them useful. With the exception of drawing art, I wouldn't miss them if they disappeared.
Unlike emacs or vim, acme leverages the mouse/GUI for powerful editing - and I believe (multi)touch screens have the potential to be better GUIs than screen+mouse. For one thing, mice generally utilise at most three fingers and one hand - and IMNHO, while e.g. Blender/Photoshop combine mouse and keyboard, the combination is awkward and not very intuitive.
I think (but am far from certain) that acme is a more promising approach.
Most popular apps are designed for both tablet and phone. The ones that haven't specifically been designed for tablet are usually basic things like Guitar Tuner or Flashlight which don't suffer much from having a phone layout on a tablet.
Sure, you will still sometimes encounter a crappy app which has a horrible user experience on your device - similar to web pages and webapps which assume you have a 24-inch display - but not often enough for users to avoid the platform.
We already do. It's called a webpage.
I doubt that most developers would make any effort to have their apps "responsive".
The other issue I have is that I don't see Android apps as an efficient way of getting my work done; applications that don't need to worry about the mobile form factor will most of the time offer a superior user experience.
You already have a 99% cross platform way to ship an app, you can create a web app.
This would have been the way to build Linux into the next great desktop platform. I don't think people mind a 1GB Chrome runtime if it opens up a billion apps for them.
I think apps can be handled well by both browsers and mobile phones. Considering that the ART runtime also JIT-compiles the Java code to native, performance should not be a worry either.
I've never tried it or seen anyone do it though.
'Running' can be pretty good, even when it's not a native app - there are plenty of successful web apps, including a few written by Google.
A fresh OS for devices that had Chrome as an app and its Android hornet behind it would make a bunch more sense.
Pushing the JVM kart is similar to something like Bowser vs Yoshi.
So, divide up by services, and download only the services needed by installed apps, with the first app that needs them - as sketched below. That adds basically nothing to the browser install.
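Mechanically that's just lazy loading: fetch a service bundle the first time an installed app declares it needs that service, and cache it. A sketch of the idea (TypeScript; all names here are hypothetical):

    // On-demand service loader: each service bundle is fetched once,
    // the first time an installed app declares a dependency on it.
    const loadedServices = new Map<string, Promise<unknown>>();

    function ensureService(name: string): Promise<unknown> {
      let service = loadedServices.get(name);
      if (!service) {
        // The dynamic import downloads the bundle only on first use.
        service = import(`./services/${name}.js`);
        loadedServices.set(name, service);
      }
      return service;
    }

    // Installing an app pulls in only the services it declares.
    Promise.all(["payments", "location"].map(ensureService));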
By that logic nobody should even create native Android apps today. You're saying that if Android apps could run on even more platforms it would somehow stop being worth it to create them because iOS is excluded? Makes absolutely zero sense.
Unfortunately, the hard part of an operating system isn't in a cool API and a rendering demo. It's in integrating the fickle whims of myriad hardware devices with amazingly high expectations of reliability and performance consistency under diverse workloads. People don't like dropped frames when they plug in USB :) Writing device drivers for demanding hardware is much harder than saving registers and switching process context. The Linux kernel has an incredible agglomeration of years of effort and experience behind it - and the social ability to scale to support diverse contributors with different agendas.
Microsoft, with its dominant position on the desktop, famously changed the 'preferred' APIs for UI development on a regular cadence. Only Microsoft applications kept up and looked up to date. Now Google has such a commanding share of the phone market - Android is over 80% and growing http://www.idc.com/promo/smartphone-market-share/os - they have a huge temptation to follow suit. Each time that Microsoft introduced a new technology (e.g. https://en.wikipedia.org/wiki/Windows_Presentation_Foundatio... WPF) they had to skirt a fine line between making it simple and making sure that it would be hard for competitors to produce emulation layers for. Otherwise, you could run those apps on your Mac :)
There are many things to improve (and simplify) in the Android APIs. It would be delightful to add first class support for C++ and Python, etc. A project this large will be a monster to ship so hopefully we'll soon (a few years) see the main bits integrated into more mainstream platforms like Android/Linux - hopefully without too much ecosystem churn
So much this; Linux Plumbers conference years ago was bitching about how every gorram vendor wanted to be a special snowflake, so even though the architecture was ARM, you basically had to port the kernel all over again to every new phone. I haven't kept up with it, but I can't imagine it's gotten better. The problems they're listing as reasons to move to a new kernel aren't caused by Linux and they won't go away until you slap the vendors and slap them hard for the bullshit they pull, both on developers and users.
As for kernel ABI, this has been rehashed to death: just release your fucking driver as open source code, and it will be integrated and updated in mainline forever: http://www.kroah.com/log/linux/free_drivers.html
I've been doing new WPF application development for the biotech industry for the last three years.
It is also the official API for classical desktop applications and shares a lot with UWP.
As the other reply probably indicated, it's still not apparent to .net developers. MS really dropped the ball with providing a clear path for desktop development.
And on mobile devices, many hardware component vendors provide custom drivers (with binary blobs) anyway. It will not be hard to convince them to support Fuchsia for new hardware releases, if they lose access to Android otherwise...
"Linux is a free set of buggy device drivers." https://news.ycombinator.com/item?id=8470638 .
Just observe how systemd is overruling and countermanding Linux behavior any chance it gets.
It's too bad, given how much nicer their app approval process, etc is than Apple's that the Android dev experience has been so much worse all these years.
Also, considering the way that the ARC runtime for Chromebooks was a failure and had to be replaced by a system that apparently essentially runs Android in a container, will it really be possible for a completely different OS to provide reasonable backward compatibility?
So, even if they were presently 100% focused on merging the two OSes, Hiroshi's job would be to convince you they aren't, so as not to risk impacting the bottom line of their sales and their partnerships with OEMs that are continuing to print money for them.
But bear in mind, the job of marketing voices like Hiroshi's is to promote and sell people on the existing product... right up until the day they decide to officially announce something else.
Aside from Google+ (which was pushed directly by Larry and grudgingly integrated-with by the rest of the company), Google hasn't really had plans "as a company" since the mid-2000s. Big companies (other than Apple under Steve Jobs) don't actually work like that; once you've got a product-focused org chart and strong executives that push their own focus areas, you will necessarily get product-focused initiatives that respond to resource availability & market opportunity. The executives are not doing their jobs otherwise.
I googled for "google magenta", and all the top hits are actually about an entirely separate (I assume?) project about AI music: https://magenta.tensorflow.org/welcome-to-magenta. So they didn't think very hard about the name for a start.
I'm also skeptical that a big new effort like this would be done entirely in the open. The Chrome team has something of a history of doing that and then throwing stuff away (e.g. Chromium mods and hardware configuration for a Chrome tablet that never got off the ground).
The Android team, on the other hand, seems to prefer developing stuff in private before open sourcing it. And their stuff seems to have more traction (or maybe we just don't see all the aborted efforts because they're private).
I feel like the Chrome team really believes in open source, and developing in the open, whereas the rest of the company (and especially Android) doesn't care as much and prefers being secretive. But as Sundar Pichai used to run Chrome, maybe he's changing things up a bit?
But golang could be started in internal use with incremental steps. Fuchsia/Andromeda, in contrast, have non-code barriers to entry like management approval and industry adoption. My guess is that it will pivot from a full-blown Android replacement into something more focused.
Their claim that "Dart is better" is the typical Google Kool-Aid before they attempt a market takeover, as we've seen over and over with Android, Chrome, and especially AMP. Google loves to make glass-house open source projects you can't touch. You're free to look at how great it is, feel its well-refined curves and admire the finish, but God help you if you don't like how the project is going and want to fork it for yourself.
Don't bother trying to commit a new feature to any of Google's software that they don't agree with. It will languish forever. Don't bother forking either, because they'll build a small proprietary bit into it that grows like a tumor until it's impossible to run the "open source" code without it.
Fuck Dart, I don't care how great it is. Microsoft is being the good one in this case by extending JS with TypeScript; Google is trying to upend it into something that they control.
Dart is a replacement for GWT at this point - see AdWords being written in Dart now. Though it's not clear how Flutter.io will play into all this (that's targeting mobile with no web target).
As for typescript, Google actually embraced that fairly heavily with Angular2 being written in it.
And yeah, ECMA is a totally open standard with committee members from all sorts of companies and backgrounds. Dart is not. I don't care if JS is slightly worse; at least I know that, for now and the foreseeable future, I won't be paying a Google tax to use it.
After the open source community "stole" MapReduce and HBase, Google began offering Maglev and Spanner as "services" rather than giving them to the OSS community. Maglev was supposed to be open sourced a while ago, and Google now offers a DDoS protection service on Google Cloud instead, most famously with their Krebs PR stunt. Maybe they forgot about it? Did I mention they removed "don't be evil" as their motto a while back because it was "immature"?
Google has headed down a decidedly different path since the Alphabet transition a while back. It's no longer the brainchild of Sergey and Larry; it's losing its soul and becoming a shareholder cash machine. Maybe the floundering of some of their moonshot projects is taking a toll on the company's confidence to remain a market leader while maintaining their traditional values of openness and shunning of questionable marketing tactics? I'll admit that's pure speculation, but I really wish I knew what happened to the Google I remember.
Since I'm being accused of FUD I might as well throw a bunch more speculation in for the hell of it. Their most recent papers are conspicuously lacking enough detail to make your own implementation, and read more like marketing whitepapers on how to use their services and how great they are. Their tensorflow library was probably released as truly open only because they couldn't hire enough devs with machine learning experience to meet their needs. They needed to introduce the world to enough of the secret sauce to meet their own demand and they remain completely silent on how their real moneymakers work.
My extreme speculation? They started using machine learning for search a few years back and found out just how easily their previous search algorithms, developed and perfected for years, were utterly outclassed within months. A start-up with these techniques could have been their undoing. This oversight cannot be repeated, they cannot offer too much of their technology back to the world anymore lest they risk being beaten to death by their own weapons. Thus google threw away a lot of what made them google, and rebuilt themselves as a semi monopolistic oligarch that's much more in line with traditional too big to fail companies.
They now spend more on political lobbyists than any tech company by far. They like to release nice things for free when a competitor just happens to be a making a decent living charging for the same thing. They engage in a lot of the typical corporate warfare now that doesn't seem natural for a company with a nice playful exterior and an original motto of "don't be evil".
As far as the FUD accusation, does it count that I don't work for or with any company that has anything to do with google or the other tech giants? These are just my opinions based on observations, and a lot of those opinions are backed by verifiable facts.
You're free to put the same data together and make your own conclusions, which would lead to more interesting discussion than dismissing my points just because.
"Don't be evil" is the first and last thing stated.
Why do you post stuff that's trivially searchable and trivially called out as bullshit? Why would I bother reading any of your rant if you can't get trivial details right?
Dart is an ECMA standard: https://www.ecma-international.org/publications/standards/Ec...
It is not clear how they would "contribute" that to OSS
Since they compile down to a Turing-complete language, there's really no limit to the heaps of dog shit they can abstract away. Historically, C++ is nothing more than an insanely complicated C preprocessor, and it has more than proven that such a strategy can be viable long term. In fact, the first C++ compiler, cfront, is still available and literally outputs raw C code from C++.
If it catches enough traction, browsers will begin implementing native TypeScript parsing, since it offers many potential performance optimizations on top of what JS is capable of. At that point you just maintain your TypeScript codebase and use some library to give your legacy clients transpiled JS on the fly.
It doesn't, unfortunately. TypeScript's type system is unsound, so the VM can't rely on its types for optimization.
TypeScript-to-JS transpilation is extremely similar to the strategy that produced C++ from C. We know it will work, and it's been done before to great success. C++ isn't perfect, but I think everyone agrees it's definitely a lot nicer to work with than C, and that's exactly how I'd describe TypeScript as well.
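The analogy holds in one concrete way: like cfront emitting plain C, the TypeScript compiler emits plain JavaScript with the type annotations simply erased. A trivial sketch:

    // TypeScript in: the annotations exist only at compile time.
    function area(width: number, height: number): number {
      return width * height;
    }

    // Roughly the JavaScript out -- types erased, behavior unchanged:
    //
    //   function area(width, height) {
    //       return width * height;
    //   }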
Having said that, my only exposure to TypeScript has been in Angular 2. Having used other tools like Ember, React, and Elm, Angular 2 seems like a magic step backwards to me. I will concede that my opinions on TypeScript may be tinted by my experience with Angular 2, though, so I'll give TypeScript a stand-alone, honest evaluation and adjust my opinions as necessary.
You can take as hard a look at Google as you would like, but choosing Microsoft over Google (one for-profit company over another) while not caring how the technology, the licensing or the workflow compares is a bit hypocritical (e.g. they are both open, and they both have rules for commits).
I'm wondering, why do you need a throwaway for such heavily invested FUD? Your other comments here are in similar tone, and I'm surprised to see such hatred without any obvious trigger. Maybe if you would come forward with your story, it would be easier to discuss it?
disclaimer: ex-Googler, worked with Dart for 4+ years, I think it is way ahead of the JS/TS stack in many regards.
In what ways do you consider it ahead of Typescript? Personally as someone who's particularly fond of static type systems (Haskell and the like), Typescript's type system seems way more advanced and powerful than Dart's (union and intersection types, in particular, and non-nullable types). Map types (introduced in Typescript 2.1) also seem pretty interesting.
Personally I don't get the hype around union types: at the point where you need to check which type you are working with, you may as well use a generic object (and maybe an assert if you are pedantic).
Intersection types may be a nice subtlety in an API, but I haven't encountered any need for it yet. Definitely not a game-changer.
I longed for non-nullable types, but as soon as Dart had the Elvis operator (e.g. a?.b?.c evaluates to null if any of them is null), it became easy to work with nulls. Also, there is a lot of talk about them (either as an annotation for the Dart analyzer or as a language feature), so it may happen.
Mapped types are interesting indeed. In certain cases it really helps if you are operating with immutable objects, and mapping helps with that (although it does not entirely solve it, because the underlying runtime does allow changes to the object).
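For anyone following along, the three features under discussion look roughly like this (a TypeScript sketch):

    // Union type: the checker forces a narrowing check before use.
    function label(id: string | number): string {
      return typeof id === "string" ? id.toUpperCase() : id.toFixed(2);
    }

    // Intersection type: a value must satisfy both shapes at once.
    type Named = { name: string };
    type Aged = { age: number };
    const person: Named & Aged = { name: "Ada", age: 36 };

    // Mapped type: mechanically derive a read-only view of another shape.
    type Frozen<T> = { readonly [K in keyof T]: T[K] };
    const snapshot: Frozen<Named & Aged> = person; // writes now rejected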
I dislike nulls, though. I always wish people would just use a flag or error handling when objects are undefined, instead of "hey, this object is the flag and sometimes it's not actually an object!"
You'd think language designers would learn after dealing with null pointers :)
Self types (the "this" in the return type) are handy.
I can see us adding some of those to Dart eventually.
Non-nullable types are great, which I've said for a very long time. We are finally working to try to add them into Dart. It's early still, but it looks really promising so far. It kills me that I've been saying we should do them for Dart since before TypeScript even existed and still they beat us to the punch, but hopefully we can at least catch up.
The main difference between TypeScript and Dart's type systems (and by the latter I mean strong mode, not the original optional type system) is that Dart's type system is actually sound.
This means a Dart compiler using strong mode can safely rely on the types being correct when it comes to dead code elimination, optimization, etc. That is not the case with TypeScript, and at this point it likely never will be. There is too much extant TypeScript code, and JS interop is too important, for TypeScript to take the jump all the way to soundness. They gain a lot of ease of adoption from unsoundness, but they give up some stuff too.
In addition to the above, it means they'll have a hard time hanging new language features on top of static types because the types can be wrong. With Dart, we have the ability to eventually support features like extension methods, conversions, etc. and other things which all require the types to be present and correct.
For example, if you have a string-typed foo and a number-typed bar, "foo + bar" is still a valid statement in TS, because they have to maintain backwards compatibility with JS's unfortunate language design choices.
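That example, and the soundness point above, are both easy to demonstrate (a sketch; this should compile under current TypeScript):

    // 1. '+' on a string and a number type-checks, for JS compatibility:
    const foo: string = "total: ";
    const bar: number = 42;
    const joined = foo + bar; // "total: 42", no compile error

    // 2. And the static types can be wrong at runtime (the unsoundness
    //    mentioned above), e.g. through array covariance:
    const strs: string[] = ["a"];
    const mixed: (string | number)[] = strs; // accepted
    mixed.push(7);                           // puts a number into strs
    const s: string = strs[1];               // typed string, actually 7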
Dart is a different language; it has no fallback to something familiar. I don't doubt that it's many years ahead of TS in every way, but it's still rather proprietary compared to TS, which I can shut off at any time with minimal effort.
The openness of typescript and dart are comparable. Both being run primarily by their champion companies with code free to review and fork but with limited ability to commit changes. They both require you to sign over copyright of code committed which I don't like for my own reasons but the license is open source.
I'm not ex-Google, MS, or any of the tech giants. I'm not smart or dedicated enough to work anywhere you've heard of :). Most of my comments on throwaway accounts are unpopular; that's why I don't use my normal account. I'm not some invisible super shill - Hacker News knows all the accounts I use, and I'm fine with that.
I've just got my own opinions and when they're controversial it's not in my best interest to comment using my normal account. It wouldn't be for anyone. It would be utterly stupid to hurt my open source projects or reputation as a developer just because somebody doesn't like my opinions. My code and my work have no opinions, and I like to keep it that way. Throwaways are my way of keeping my opinions to myself, and I don't see anything wrong with that. Separation of church and state if you will.
I'm not totally against Google or any company in general. Microsoft in particular has an extremely rocky history when it comes to open source projects. They've probably done more harm to Linux than any company in existence. If typescript and dart both had equal migration paths I would choose dart in a heartbeat. I love tsickle and the closure compiler and the fact that the angular team is using typescript. Still, I feel like my criticism of dart has some truth to it at least.
I've taken aim at Google for the past week for what they've done to the openness of Android, AMP, and Dart. Am I wrong? It's hard to argue that any of Google's platforms are as open as they were a few years ago. Some of my really unpopular opinions were posted in response to other posters calling me FUD or a shill, and can you blame me? It's one thing to say "I disagree and this is why" but pretty rude to just say "I don't believe you because you're obviously lying or getting paid to say that". To that I say: well, screw you, I'll post what I want without being polite at all if you're going to be so rude. I'm replying nicely to you because you genuinely asked why I used a throwaway and said that you worked with Dart at Google, which is way more than most would admit.
Having an unpopular opinion just gets you labelled as a shill or FUD and that's a lot of the reason I use throwaways. I've actually gotten death threats before for disagreeing with people on the internet. It's hard to say I would be better off getting death threats from people that can easily find my name, occupation, and address. Look at more of my post history and you'll see that I'm probably not a shill, or a really sneaky one if you don't want to believe that.
I just bashed Intel for removing ECC support from their desktop lines a few days ago, and Facebook and Microsoft a day before that for their shittastic timeline algorithm and irrational fear of Linux taking over corporate clients respectively. A day before that I trashed PayPal for some of their recent cronyism and praised the quality of Google's Guava libraries.
In my more ancient post history I mention how much C#'s ecosystem sucks compared to Java's, and how even open sourcing the language doesn't mean much when it only targets Windows using Visual Studio. I bitched about the baby boomers screwing the millennials and asked how it's possible to start a side business. I said I don't like Python because it's slow, and gave people online marketing tips on how I write high-ranking blog articles. I mentioned that .NET Core sounds great but is alpha quality, and that it sucks that I have to use the new version of Windows Server to have HTTP/2 support. I talked about how WordPress is absolute shit for anything even medium-traffic. Mentioned a quip about using monotonic Gray codes. Brought up some arguments about Monsanto and GM windblown crops. Mentioned that Uber doesn't give a fuck about their drivers, and some examples of this despite their press releases saying otherwise.
I'm getting a lot of comments about how I'm some full of shit corporate mouthpiece again so I guess it's time to cycle the old throwaway again.
I'm not asking you to agree but keep in mind that some people will have opinions completely contrary to your own. Sometimes their reasoning will be logical even though you come to a different conclusion, people are just different
Dart has an ongoing project (the Dart Developer Compiler) whose goals include producing readable, idiomatic ECMAScript 6. That is as close to your TypeScript fallback as it can get. (2)
Somebody also demonstrated that Dart-to-LLVM compilation is possible. The language has a decent library for parsing Dart sources; worst case, if you are that heavily invested in your product, you could also write something that transpiles your codebase. I did try it on a small scale with specific examples; it is actually not _that_ hard to do, and if my business relied on it, it would certainly be within reach.
(1) I'm not sure you can call it lock-in, as it is entirely open source: you can fork it, build it for yourself, change it if you have special needs. The same goes for Perl, PHP, Python, Go, whatever language you prefer. Yeah, most people don't do it. Why? Because most people don't need it. If you become Facebook-sized, it may look better to invest in the PHP toolchain and VM than in transpilers. YMMV.
Does Dart have a pluggable compiler framework similar to Roslyn or ANTLR ASTs? That would make it a lot easier to write your own conversions.
One more point in TypeScript's favor, though... it would be a lot easier to modify the JS VM in browsers to support native TypeScript than Dart. In my mind it's a lot more likely to happen because of this (less work).
It's still got bugs, of course, but we have internal customers working on real projects using it on a daily basis.
I agree totally that picking a language is a huge commitment and you want to do that with an organization (company, standards committee, group of open source hackers, whatever) that you trust.
Google is a huge company and has done lots of good and bad things, so it's easy to find enough evidence to support assertions that we should or shouldn't be trusted based on whichever view you want to demonstrate.
One way I look at it is that instead of answering the absolute question "Can I trust Google to shepherd the language well?", consider the relative question "Can I trust it to shepherd the language as well or better than the maintainers of other languages I might choose?"
Assuming you've got some code to write, you have to pick some language, so the relative question is probably the pertinent one. I hope that we on the Dart team are a trustworthy pick, but different reasonable people have different comfort zones.
> Does Dart have a pluggable compiler framework similar to Roslyn or ANTLR ASTs?
All of our stuff is open source, including all of our compilers and the libraries they are built on. Most of it isn't explicitly pluggable because plug-in APIs are hard and Dart in particular doesn't do dynamic loading well.
But it's all hackable, and much of it is reusable. In particular, the static analysis package that we use in our IDEs also exposes a set of libraries for scanning, parsing, analyzing, etc. that you can use.
But every open source project has standards about what they merge. Try getting a patch into Linux and see what they say; it won't be a rubber stamp.
And Android. Android used to honor the promise of being open. Years ago. This was before every manufacturer was encouraged to lock bootloaders, and back when platform SDKs and drivers for hardware were generally available, even if they were kinda hard to get. This was also before the Android kernel heavily diverged from mainline Linux, and before "Google Play services" grew from a tiny app to a framework that powers half the OS features.
Nowadays you can only run your own Android on devices specifically built for it. Open distributions like CyanogenMod are dead or dying. Google Play services is closed and proprietary, and probably about 95% of popular apps require it to work. Even if you manage to get your own Android distribution built and running you will need to side load all your apps, and most apps just don't work because they've been built to depend on proprietary bits that Google has snuck in all over the place.
Google is better at the "embrace, extend, extinguish" strategy than Microsoft ever was. So good, in fact, that they have many well intentioned people defending them to the death even as they choke off the very open source projects they created. Virtually every platform that Google runs for more than about 5 years goes from completely open to something impractical to run yourself. If you don't believe me look into any of their older projects that are "open source".
After a certain point it's free software as in "free coupons". Somewhere in the mix, eventually, the price of their "charity" is passed on to you.
We can't have the 90s Linux revolution for handhelds because they each need customized kernels and drivers. Many fall into disrepair and go unmaintained, even in things like Cyanogen. (On two phones I tried running newer CM images on old hardware and ran into speed and performance issues).
This is why things like Plasma and Ubuntu mobile have such limited phone support. Porting is difficult.
Also notice that I said "PC" above. There are plenty of x86 systems that are just as difficult to port to (PS4, WonderSwan, those old T1 cards with 4x486 processors on them). At least Microsoft forced their ARM manufacturers to use UEFI. Too bad those platforms have locked bootloaders. I'd love to see some Lumia running Plasma.
Google making Google Play Services was a natural reaction to manufacturers never updating Android on their phones for years, leading to all kinds of vulnerabilities and bugs that kept Android far behind iOS in quality and features. Let's face it - Android used to be sneered at, the red-headed problem-child OS that used to be the butt of jokes, until it grew out of puberty and pimples in Ice Cream Sandwich. If manufacturers had truly honoured OS updates, Play Services may never have been built - it allows Google to update core parts of Android without an OS update. And yes, they will retain full control over Play Services - I completely understand the need to fully possess it and ensure a high level of quality assurance.
Also, blaming the fall of CyanogenMod on Google is ridiculous. CM fell because of mistakes made by Kirk McMaster and several others. He attempted to be a dictator, even going to the extent of trying to ban OnePlus from selling phones in India - this was fought and resolved in the courts. All goodwill for CM was destroyed. OnePlus ditched CM and moved to OxygenOS. CM had a stroke and died. Now Lineage is the new shiny OS rising from the cooling corpse of CM.
I still disagree with Play Services, because it wouldn't be that hard to force manufacturers to support updates when you command such a large part of the market.
There are these things called contracts, and if there are clauses for an OEM to be allowed to have access to Google services, Google lawyers could certainly add a few more sentences regarding compulsory updates.
Android isn't open source, except in the hearts and minds of fanboys everywhere. =)
Good luck getting all those apps running without Google services, or getting the devs to rewrite them to use alternative APIs.
It only works in countries like China because of the way the government controls everything.
I pretty much doubt anyone cares about Amazon's or Jolla's fork, or cared about BlackBerry's.
It is just that they don't care one second about enforcing updates.
Google's projects all seem very inviting from a distance. Usually it's not until you're ready to implement something that you find out that you're fucked, and how.
Serious ranting below but something I never get a chance to say:
I'm a born skeptic and avoid the silicon valley mindset even though I'm a driven person. I used to find myself often in disagreement with others because they can't, or refuse to, see the truth. Some people don't like to be told they're wrong. Many of those will fight other opinions just to justify their own decision, but will secretly reconsider. Others will hang onto beliefs with every ounce of strength as their mistake builds into a maelstrom that consumes everything they care about.
With some people, after challenging their beliefs, they will end a friendship rather than admit you were right in the first place. Especially if you refused to do something their way and it saved them from disaster. Some can't stand to be THAT wrong. As if I was some asshole who saved them from their fate, and now they're a spirit left wandering the earth until they can fulfill their original destiny. It's like I helped them cheat without telling them about it, stealing the joy from victory. This is something I learned the hard way more than once.
In real life I keep my opinions to myself to avoid this nastiness, and offer opinion only when asked. The people open to advice even if they disagree learn to ask my opinion since I always tend to have one. The majority of people I know, including some good friends, have no idea what my personal opinions are on many subjects. It would cause pointless pain and argument with people I care about regardless of their beliefs.
I'm not loyal to any platform or company and I will freely throw a strongly held notion to the wind if I find disturbing evidence that I was mistaken. Most people are not so malleable.
A lot of people take their beliefs too seriously, to the detriment of society. At least on the internet I can express my opinion, however "uncool", using throwaways.
In the real world, the best and most meticulously researched advice I've ever given has been at exit interviews - the one time you can be open, honest, and politically incorrect with coworkers. Multiple companies made serious operational changes after hearing my exit interview. Others have told me, in nicer words, "that's really fucking great to hear, I'm pretty happy I never have to talk to you again".
The problem is, you never know how somebody will respond. During exit interviews I'm treated more like a person than a subordinate since the boss relationship is formally over, which helps I'm sure.
In real life, the way to influence a strongly held opinion is best described by watching the movie Inception. You introduce nothing more than minor inconsistencies while outwardly expressing little opinion, then wait to see if your clues are enough to lead them towards the promised land.
My other common tactic is to do things without asking any opinions first. You at most come off as insensitive or aloof, rather than as someone who intentionally disregarded their advice. Usually the opinion matters less in practice than if you had asked in the first place. Classic "forgiveness is easier than permission".
I've sometimes wondered if this makes me a psychopath or if that's just how some people tick. Anyways, god bless throwaways and the internet.
Fighting the good fight, fighting for the things that are just, and true, and good - are nearly always worth it, the key is to back off before it becomes a pyrrhic victory.
That's a lesson I had to learn the hard way.
It's not being evil or that I'm always right. The comment was mostly in reference to those that have been calling me a shill the past few days and how they should keep in mind that their opinion is not fact.
I gave up the good fight years ago. The worst was when I helped turn around a failing small business. We all wanted the same goal, the company to be successful. It sucked so bad that I learned that it's better to be nice to your friends than to dedicate yourself to a cause or try to fix all their problems.
If that means letting them fall sometimes that's okay, as long as you don't let them get any deeper than you can reach. If you help pull them out in the end you're still a good friend.
So the company turnaround, it worked in the long run but at great cost. Cutting employees that sucked at their jobs but were friends and helped us with the initial plan. Cutting moochers that I loved but were sucking the company dry with constant unscheduled time off and freebies. Redoing our systems to automate as much as possible made us our first profit in years but a lot of that was from jobs eliminated. Hiring people of a higher caliber than existing employees by raising application requirements above what most of the current employees would meet. Offering our new more qualified people more money than Bob who's been here for 15 years but did our financials on pieces of scrap paper.
By the end of that process a few years later, my lesson was that I made the owners a lot of money at the expense of losing about half my friends. Most of the other half resented me for what I had done and thought I was a traitor, even though I had just helped implement exactly what we had agreed upon a few years back.
We planned to cut dead weight and streamline and automate operations. To add new talent with up-to-date skills. To cut our benefits slightly to free up money to invest in the company's future. Everyone wanted this until it was their benefits or their job being automated. I followed through with the cause, and at the end I felt like a Judas figure and packed up and left in shame.
You could say it was a pyrrhic victory for sure, but after that I'm very wary of setting anything in motion that's too heavy for me to stop on my own.
Omega Man is an interesting term, never heard of that before. You're totally right that it's how I try to operate but only when I'm doing controversial things. Perhaps I'm doing it right if I seem to be going about it in the most quiet and passive way possible :) .
You don't have to worry about me running any communities online. I'm a productive member of a bunch of online communities, including HN, and I don't use my throwaways to respond to, upvote, or otherwise sockpuppet my regular account - except a couple of times I admittedly may have upvoted the same thread on different accounts by mistake. Most of my less opinionated stuff is under my real name.
The only reason I respond sometimes is because I disagree. Sometimes my controversial opinions prove to be a lot more popular than I thought. And possibly miraculously, all of my throwaways eventually gather substantial positive karma despite the fire and brimstone rained upon some of my comments :)
It continues to be a weakness of the Four Freedoms model.
How diverged is it? Would they ever be merged back together?
Android is missing a ton of new Linux features on many devices, and the vanilla kernel is becoming increasingly unusable on ARM devices because of these badly done third-party modifications.
OO is a bad word these days and functional is all the rage, even though functional languages were largely superseded by OO languages eons ago for many reasons people are slowly rediscovering.
There's a huge push to put more structured language concepts into js now that it's being used for substantial projects and it's out of necessity more than convenience.
When I'm hacking together a quick Python script, all that stuff gets in the way, but when working on larger systems, strong typing and object syntax are practically a necessary evil for maintaining readability.
No, it's not like that.
You can write readable code in any language as long as you can write readable code. It sounds tautological, but what I mean is that the ability to write readable code is a skill separate from writing code or knowing a particular language.
Strong static typing - like just about any tool and language feature - can have both good and bad effects on code readability. In the end, readability (and with it maintainability and other related metrics) depends for the largest part on the skill of the particular developer.
Both OO and FP techniques, as well as all the language features, are the same. You can misuse (or ignore) them all.
What we need is to make an "average developer" better at writing code, not more bondage and discipline in our tools. The latter is (a lot) easier, so that's where we focus our efforts, but - in my opinion - it's not going to solve the problem.
Functional programming has origins in lambda calculus and academia because mathematical problems map more easily from pure math to functional programming. It's really popular in the circles where it's more useful/easier than OO.
Honestly I don't think the people 20 years ago chose OO for most business languages over functional out of ignorance. They had a choice and decided that OO was better for business problem solving languages like Java even though a large majority of programmers from that era were math majors and familiar with functional syntax.
I feel like we're in one of those cycles where a large number of a previous generation have retired and it's time to learn some of these lessons all over again.
Notice how many wood commercial buildings have been going up in the last 15-20 years? A lot, and just long enough after everyone involved in all the great city fires of WW2 to be too dead to object.
I'm going to ignore the social component... that said, we work in a wonderful profession where the world is changing completely every decade and many design decisions from the previous generation make no sense anymore. The business case for developing your application in COBOL rather than Common Lisp may have been sound 20 years ago, but today many of the reason why you didn't choose lisp are invalid (e.g., garbage collection takes milliseconds rather than seconds).
Note that this is not the case in more mature fields such as construction.
I don't understand half the decisions outlined in the article.
> I also have to imagine the Android update problem (a symptom of Linux’s modularity)
I seriously doubt the Linux kernel is anything but a minor contributor to Android's update problem. Handset developers make their money by selling physical phones. In two years, your average consumer probably doesn't care if their device is still receiving software updates. They'll jump onto a new phone plan, with a fresh, cool new mobile, with a better screen, newer software (features!), and a refreshed battery.
Maintaining existing software for customers costs handset manufacturers $$$, and disincentivizes consumers from purchasing new phones (their cash cow). The money is probably better spent (from their POV) on new features and a marketing budget.
This might be true for the US, where 75% of subscribers are on post-paid (contracts). It's not true for the rest of the world.
* Europe: < 50% post-paid
* Rest of the world: < 22% post-paid
I'd also argue that Android users will be more likely to be pre-paid than post-paid customers (compared to iPhone users) in all of these regions, but I have no data to back it up.
Anyway, I agree that it's probably not very profitable, if at all, for android handset makers to support their devices for > 2 years. But I think many customers would benefit from it ...
Moreover, cellular companies put a lot of bloat on the phones, some of which would make malware creators proud (e.g. an app that, if you open it after the 1-month trial period, adds a recurring cost to your cell bill).
The Linux kernel is at the very heart of Android's update problem - not because of "modularity" but because it lacks a stable ABI. Because of this, Android requires handset makers and SoC manufacturers like Qualcomm to provide updated drivers; these parties are perversely disincentivized to do so as they would rather sell more of their latest model/chip. If the Linux kernel's ABI were stable, Google could bypass the manufacturers altogether when sending out updates.
The Android ROM community struggles with this - you can update the userspace or overclock the kernel to your heart's content, but you will forever be stuck with whatever kernel version the OEM-supplied drivers support.
Edit: added ROM paragraph
Qualcomm is dominating the SoC industry, in part due to how it uses its IP. Fortuitously, Qualcomm are being sued by the DoJ and Apple for anti-competitive practices in multiple jurisdictions and this might blow the SoC field wide open, depending on the rulings. If other chip makers can license Qualcomm's patents on a FRAND basis, Google could offer the deal to MediaTek if Qualcomm declines.
What are you referring to when you say the Linux kernel ABI is not stable? I ask because the A in ABI means application, and Linux has maintained a consistent ABI for decades.
I have a suspicion that you're trying to suggest that in-kernel interfaces be kept rigid and unchanging to satisfy some unspecified number of out-of-kernel driver developers. The simpler solution would be for out-of-kernel driver developers to get their code up to quality and merged into mainline so that they'd be ported automatically whenever there's an in-kernel api change.
Or we could drive Linux into irrelevance and not have to worry about that anymore. Windows and macOS users do not have to suffer from this; why should users of an open source operating system? I for one welcome Google cleaning out this mess with a competing kernel. You know your suggestion will never happen: some hardware manufacturers will never open their drivers, and some just can't be avoided (people doing GPGPU work will not switch from NVIDIA, most gamers won't switch from NVIDIA, and NVIDIA owns the GPU market for machine learning with their SoCs and is working on a lean Vulkan driver for Linux for things like Tesla's self-driving cars).
During the early days of constant wifi API churn within the kernel, there were many, many out-of-tree drivers that ran perfectly fine but took a very long time to get mainlined, because hey, the kernel devs have """standards""" as they like to call it. I couldn't use a distribution that was prone to upgrading its kernel version, like Fedora, because those out-of-tree drivers broke on a regular basis. One of Ubuntu's major advances in its very early days was to provide fresh software every 6 months while keeping a kernel that was in sync with all those drivers, which they integrated into the distribution. That gave a middle ground between using something like Fedora or Debian unstable with constant breakages, and suffering the glacial pace of Debian stable.
... until I discovered NDISWrapper, an implementation of the Windows network-driver API for Linux. Yes, really. It worked so well, and because it was a well-maintained project, it quickly got updated to new kernel APIs and allowed anyone with a working Windows driver to be freed from this suffering. It was great. Nowadays the Linux kernel has support for most wifi chips in-tree (Broadcom is still a problem though), but in those days it was truly liberating to be able to use Windows drivers on Linux.
Talking about wifi drivers, the Windows XP driver for my first usb wifi adapter still worked in Vista and 7 despite the manufacturer never updating it. Identical driver binary working on 3 OS generations. Talk about commitment to not breaking things. NDIS 5.1 was "deprecated" for NDIS 6.0 but MS kept the support for it for as long as reasonable for the age of the hardware.
I know this is going to sound flippant, but believe me it's not: who's "we" in your sentence? Google? Because I wouldn't want to see Linux "driven into irrelevance" unless it gets replaced by another free software OS which works and is not completely owned and controlled by a single corporation. Actually, I'd rather see Linux's problems fixed or improved rather than adopting Google's brand new OS.
The one on the Android device is a fork with quite a few APIs trimmed off, like for example UNIX System V IPC.
Nobody says anywhere that it will replace Android. It looks just like a lot of these other Google projects: they put people on it, and if it turns into a contender then they might use it, but if in the meantime Android introduces features that make it more competitive then maybe they will throw Fuchsia away. That's my understanding anyway.
Also: "I have very strong reservations about Dart as the language of the primary platform API" and then later... "I am not a programmer"
For the past year I've been looking, every now and then, for a mid-level Android tablet, and I've always given up, since they all ship with an already outdated version of Android and slim chances of even getting an update to the current one (let alone future versions).
Maybe Google should split play store/app ad income with manufacturers, adding some incentives (50/50 profit split for phones on latest Android, 20/80 for one trailing major version - and then nothing. Or something along those lines).
That might have the added benefit (for Google) that manufacturers would have a stake in a healthy play ecosystem.
It's the ability to impose a private tax on ads and software that's supposed to be the monetizing strategy for walled gardens, isn't it?
If hw is commoditized - manufacturers need another leg to stand on.
I'm not sure the alternative is much better - lots of unpatched insecure phones in use?
Well, that's exactly why I've bought an iPhone this time. Exactly €1000 lost for the Android ecosystem, just because I want encryption. I know, Android 6.0 has encryption by default, but 4.4 doesn't, and that's the point. The whole story displays a lack of professionalism, in my opinion.
I've been running Android since the Nexus One so I'm no newbie to the platform, but the ease with which iOS manages to keep all UI interactions at smooth, consistent FPS while delivering outstanding battery life is staggering when you're used to Android. It feels like some really fundamental choices were made badly on the platform that make it incredibly inconsistent and unreliable. A fresh start would be fantastic.
I was very happy with the 5, even with the intermittent lags, especially considering its price at release. I suppose I'm not a very heavy phone user, and I never play mobile games, but I've been very happy with the 6P on Android 6.0-7.1. Battery life could definitely be better, and it does get fairly warm at times, but overall it's been a very good experience for me considering the Snapdragon 810 it's using is generally poorly regarded.
But bad I/O is the killer (e.g. fun things like triggering 2 image load requests at once, which then take 2500ms, vs. doing them sequentially, which takes 400ms on some Samsungs; this also happens when several processes collide). Apple side-steps that by throwing money at the problem (good controllers, expensive flash), which probably won't happen in the budget Android market.
I don't read about Android and iOS [hardware] nearly as much as I'd like to, and HN doesn't seem to generally cover the subject too well. What are some sources you could recommend I read to stay updated?
Battery capacity:

* HTC One M9: 2840 mAh
* iPhone 6: 1810 mAh
* iPhone 6 Plus: 2915 mAh

In streaming video playback, the iPhone beats out all of the other devices:

* Galaxy S6: 6.3 hours
* LG G4: 6 hours
* HTC One M9: 5.5 hours
* iPhone 6: 8.8 hours
* iPhone 6 Plus: 11.1 hours
I get greater than 60 fps with my existing Vive three.js WebVR-ish electron/chromium linux stack. Even on an old laptop with integrated graphics (for very simple scenes). Recent chromium claims 90 fps WebVR, and I've no reason to doubt it. So 60 fps "up to 120fps" seems completely plausible, even on mobile.
Slightly in those people's defense, it is true that while GC relieves you of the need to track lifetimes and worry about using dead pointers, it doesn't relieve you of the need to consider performance as one of many factors that go into your code. So while I think the performance issues of GC'd languages are very frequently overstated, it definitely is true that a UI framework written in a GC'd language by someone who isn't giving any thought to performance implications of allocation can very quickly exceed its targets for 60 fps, let alone 120 fps, even on very simple GUI screens. But that's only maybe 10% the fault of garbage collection... 90% is that someone is writing a GUI framework without realizing they have to pay a lot of attention to every aspect of performance because GUI frameworks are very fundamental and their every pathological behavior will not only be discovered, but be encountered quickly by all but the most casual programmers. It doesn't take long before someone is using your text widgets to assemble a multi-dimensional spreadsheet with one text widget per spreadsheet node or something, just as one example.
In a manually managed language, the performance of the application's memory management code is limited by the skill of the application developer. In a managed language, it's limited by the skill of the GC developer.
GCs have gotten a lot better in the past twenty years, in large part because they have the luxury of amortizing their work across a million applications. That makes it financially viable to throw a ton of person-years at your GC. That's not the case for the malloc()s and free()s in a single application.
So just through economies of scale, we should expect to see, and indeed have seen, managed languages catch up to the memory performance of the average manually managed app.
I'm curious; what would be an example of something you would describe as an "average garbage collected language using a virtual machine"? Java would certainly be the first language I'd think of for that description.
I italicized "average", and wouldn't include Java, because most languages are very small efforts, and thus are different than the top few. A person-century or person-kiloyear of optimization effort has an impact. Observations that "implementations of strategy X generally have characteristic Q" can be true, but there's a hidden context there of "severely resource-limited implementations of X".
But two caveats:
Sometimes you are trapped. In CPython and PyPy (but not in Jython or IronPython), parallelism remains defined by the GIL.
Language implementation tooling sucks less than it used to. Now even toy languages have JIT and rich compilation infrastructure.
Aside: In the late 80's, before ARPA was hit by Bush I, project managers had a great deal of autonomy. There was discussion of "what more neat things could we do to accelerate progress?" One observation was that JIT expertise was highly localized, and we could either wait many years for it to slowly spread, or pay someone to stand on people's desks and catalyze its being written up. But that kind of micro-grant didn't yet exist, and time ran out on creating it. Society chose option 1, but a human generation has now passed, and we finally have accessible JIT infrastructure. So, yay?
(In fairness, note autonomy and "old boy network"ness was a less happy thing for potential researchers at other than the few main research institutions. Some change was needed, it's just not clear it required societally critical tech to remain largely unfunded for decades. We don't even have a (language) wiki. Though even national science education improvement efforts have failed (but oh so close) to attempt one.)
Ruby 1.8, Lua, or CPython.
What do you think makes Ruby and Lua more "average" than Java?
The status quo right now among Android hardware vendors is to violate the GPL, and they have faced few if any repercussions for doing so. I wonder if Fuchsia is sort of viewed as the way forward to addressing that.
Anyone care to speculate why there isn't a community version of chromium os? I'd donate to it for sure. It sounds like getting android apps working on it would be pretty easy: https://groups.google.com/a/chromium.org/forum/?hl=en#!topic...
No, it's not the status quo. The major OEMs do release their code. Yes, there are some Chinese OEM violators, but that's typical of China.
The zeitgeist is moving towards conservatism in general, so it doesn't surprise me, but it's still sad.
> You can use ML like deep learning variations to learn association between your wishes and corresponding code blocks
I suggest you read up on ML.
So for example, for simple programs you don't need to understand what individual code blocks are doing. All you roughly need are some well-defined procedures/visual components (able to ignore unsupported operations) that can be composed LEGO-style. Here AI can learn to associate certain compositions of code blocks with your sentences: e.g. you can teach it with touch what it means to resize, move left, or change color, and even provide those code blocks. To help it, you have to annotate those code blocks so that you maximize the chance of valid outcomes. ML by itself is not capable of inference, so inference must be done differently. Yet what your AI learns by associating certain sentences with outcomes in your code blocks will persist. And for making associations you can unleash millions of developers who might be working on your goal unknowingly, e.g. by creating a safe language like Go for which you have derived nice rules that you can plug into your system. Initially you could only do pretty silly things, but the level of its capabilities would rise all the time, and there would be a way forward in front of you, even if a bit dim.
Developing software requires understanding completely open-ended natural language. NLP is nowhere near that level of AI, and I doubt it will be in the next 30 years.
The point is that only really good SW engineers have any chance to survive, those low-skilled ones will be gradually replaced by automated reasoning.
Mostly they are unbearably vague and based on tons of false assumptions about how things work. Separating the trivial parts of a user request from the unbearably complex parts is itself often unbearably complex. It requires a conversation with the user to make it clear what is simple or what could be done instead to make it simpler.
The examples of trivial user needs that you have given are all within the realm of what we now use WYSIWYG editors for. Not even that is working well. The problem is that you can't lay out a page without understanding how the layout interacts with the meaning of the content on the page.
The logic capabilities of current ML systems are terrible. It's like, great, we have learned to sort numbers almost always correctly unless the numbers are greater than 100000!
Even in areas where AI has advanced a lot recently, like image recognition, the results are often very poor. I recently uploaded an image I took of a squirrel sitting on tree branch eating an apple to one of the best AIs (it was Microsoft's winning entry to some ImageNet competition).
It labelled my image "tree, grass", because the squirrel is rather small compared to the rest of the picture. Any child would have known right away why that picture was taken. The tree and the grass were visually dominant but completely unremarkable.
ML by itself is incapable of inference, hence you need some guiding meta-programming framework that could integrate partial ML results from submodules you prepare.
As for squirrel example, it was probably one of "under threshold" classifications of ResNet, i.e. tree was 95%, grass was 90%, but squirrel was 79%, so it got cut out of what was presented back to you. Mind you, this area went from "retarded" in 2011 to "better than human in many cases" in 2016. I know there are many low-hanging fruits and plenty of problems will still be out of reach, but some are getting approachable soon, especially if you have 1M ML capable machines at your disposal.
That's not a conversation, that's a statistic. A conversation might start with a user showing me visually how they want something done. Then I may point out why that's not such a good idea and I will be asking why the user wanted it done that way so I can come up with an alternative approach to achieve the same goal.
In the course of that conversation we may find that the entire screen is redundant if we redesign the workflow a little bit, which would require some changes to the database schema and other layers of the application. The result could be a simpler, better application instead of a pile of technical debt.
This isn't rocket science. It doesn't take exceptionally talented developers, but it does require understanding the context and purpose of an application.
It's not general AI but even less-than-general AI can erode our ability to earn money from developing software.
Imagine you run a small business and need just some simple site with your contacts, and you are able to assemble it with voice in 10 minutes. That would be a complete game changer for most regular people.
For anyone interested, I intend to write quite often about consumer technology on this blog. Topics will include hardware, software, design, and more. You can follow via RSS or Twitter, and possibly through other platforms soon. Sorry for the self promotion!
Thanks for reading. Please do send any corrections or explanations.
I think you alluded to this, "cue endless debates over the semantics of that, and what it all entails," but it might be worthwhile to add the official statement.
Maybe a long term project? I think Google is at a position where they can write a great OS from scratch, learning from the mistakes of others, and it has a chance of becoming the greatest OS that ever was.
With the talent of its engineers, they can bring new ideas that can be better implemented from scratch on a new OS. They already have a bunch of languages, web frameworks, and so many more technologies from Google that can be well integrated into this.
And looks like the project is mostly BSD licensed, which is great! I'm excited for just that alone.
It's typical marketing-speak.
They are not actively working on this :
For no reason.
I currently "add to home screen" for most things. I edit my images online, and develop code using cloud9 ide, etc. There are few things I need apps/programs for right now, and that's improving day by day.
iPhone is dropping heavily in world wide market share, but they still have a lot of the wealthy users. There is a non-zero chance they get niched out of prominence by Android (aka every other manufacturer in the world), at which point network effects start encouraging Android-first or Android-only development. There might be a point where Apple needs to double down on the web, and/or maybe kill off apps, like they did flash, to still have the latest "apps".
No HTML5 UI/UX comes close to what is possible to achieve with native APIs in any platform.
For old dogs like myself, it always seems that younger web dev generations are rediscovering patterns and features we were already doing in native applications during the 90's.
Also, solutions like service workers look like some sort of kludge to make offline applications work in browsers.
WebOS, ChromeOS (barely used outside US) and FirefoxOS are all proofs that the experience is substandard.
The way it works is to funnel all the profits into a few huge conglomerates that benefit from exclusive access to all personal data and train users to never depend on anything that isn't a core product of one of these conglomerates.
Using their 80% margins they can afford to at least give us some time before scrapping software that doesn't look it's ever going to reach 4bn consumers.
The result is stability. Until they all get toppled by the next technology revolution. Years later, regulators will crack down hard on some of the side issues of their former dominance and once again miss the currently relevant issues :)
> iPhone is dropping heavily in world wide market share
That's just my anecdotal view, but I have never tried a web-based app (Electron native-app thing, or webapp in the browser) that is as great an experience (UX and UI) as the best of the best native apps on Mac and iPhone. And I'm not sure it's possible to push web tech that far without reimplementing everything in the web stack and making it so close to native that we'd be better off just writing native apps.
I would say I have never tried a web based app better than average native apps. (Except for GMail, because I don't like the sync feature of mail clients).
I only see this as a good thing if this ensures an easier upgrade path than in Android; and if vendor ROMs can easily be replaced by a stock OS (like on Windows).
Google doesn't have to support all hardware, they can pick to support only the hardware they want. That's what they already do with ChromeOS. Installing ChromiumOS on unsupported hardware can have its issues. The reverse is true too, installing not-ChromeOS Linux or another OS on Chromebook does not always work well, although it's fine on some specific models.
Android is like that too, and in a much worse way than Chromebooks. We're not talking about stellar Linux kernel support for all the custom ARM SoCs out there. All manufacturers write their own closed-source hardware support for Android, and this is how Android ends up having issues with updating: whenever Google updates the Linux kernel, it breaks the ABI and all the support manufacturers wrote for the previous version, and manufacturers do not want to spend time on needless busywork such as keeping up with kernel API churn that exists just to satisfy the dev team's sense of perfection.
My favourite IDE to use today is IntelliJ, and I prefer it over my experience with Visual Studio (though to be fair, I did not use VS intensively in the past 3-4 years).
I don't experience IntelliJ as "slow". It launches faster than VS did when I used it, and once it is running I keep it open pretty much the entire work-week without any issues.
It's really hard to tell if this is actually something that will ship, or yet another Google boondoggle to be swiftly discarded (like the first attempt at ChromeOS for tablets). Google under Larry Page built and discarded a lot of stuff; I wonder if it's the same under Sundar Pichai.
Neither Android nor Windows nor Chrome OS nor your favorite Linux distro have ever been able to truly compete with the NeXT legacy as it lives on in Apple.
Google is smart enough as a whole to see this, and so it's not surprising that they're attempting to shore up their platform's competence in this particular area. What IS surprising is that it has taken them this long.
Perhaps what's truly surprising is just how much mileage Apple has gotten out of NeXT. It's astounding, and I know Apple realizes this, but I question whether or not they know how to take the next step, whatever that may be. And if Google manages to finally catch up...
I find this a funny statement. Apple has not seen runaway success in terms of market share, not on desktop platforms (where the top OSes are various versions of Windows), not on mobile platforms (where it is a distant second to Android in the worldwide market), not on server or supercomputer platforms (where it's effectively nonexistent).
Nor is it influential in terms of operating system paradigms. The only thing I can see people citing as a Darwin innovation is libdispatch. Solaris, for example, introduced ZFS and DTrace, as well as adopting containers well before most other OSes did (although FreeBSD is I think the first OS to create the concept with BSD jails)--note that Darwin still lacks an analogue.
Market share won't feed anybody; profit will, and that's all Apple needs to care about. Just look at their market cap and P/E ratio.
Apple has been worried, and actually threatened, by Google every day since 2008, when the first version of Android came out.
Without Android, Apple would probably have a 90% monopoly on mobile phones today. Saying they might be "worried" is beyond an understatement.
They are absolutely furious at Google, as Jobs was until he passed away.
Having your bottom level language semantics be dynamically typed seems to place a real cap on application performance. Given that the underlying machine code is typed, I don't think it makes sense for the lowest level language you can target to be dynamically typed.
(WebAssembly is basically an acknowledgement of that fact.)
> The biggest point being the ability to code in any language that compiles to wasm.
Conversely, I don't think WASM is a great target language either. It doesn't include GC, and I don't think it makes sense for each application to have to ship its own GC and runtime. WASM is a good target for C/C++, but not for higher level languages like Java/C#/JS/Python/Ruby/Dart.
They say they intend to support GC, but they've been saying that for a while and I haven't seen much motion yet. I don't think Fuchsia can afford to wait around for that.
Now instead of improving the linux stack and the gnu stack (the kernel, wayland, the buses, the drivers), they rewrite everything.
They put millions into this. Imagine what could have been done with it on existing software.
They say they are good citizen in the FOSS world, but eventually they just use the label to promote their product. They don't want free software, they want their software, that they control, and let you freely work on it.
This does not work. They were initially working on WebKit, but the technical decisions they wanted differed from the Apple/WebKit team's, so they had to fork. It is much better that they implement their technical ideas with their own resources.
How's that going to work? iOS, specifically? Is Dart a supported language?
"The engine’s C/C++ code is compiled with LLVM, and any Dart code is AOT-compiled into native code. The app runs using the native instruction set (no interpreter is involved)."
GCC is still in the NDK today because some of gnustl's C++11 features were written such that they do not work with Clang (threading and atomics, mostly). Now that libc++ is the best choice of STL, this is no longer blocking, so GCC can be removed.
Having said that, the reports I've seen contain scant actual evidence that Google actually plan for Andromeda to replace Android, or even that Andromeda is at all important to Google. I take these reports with a mountain of salt. I remember that the "Pixel 3" was definitely just about to be announced back in late 2016, and was definitely going to be running Andromeda OS.
A good example of that is Visual Studio Code. I am sure someone at GitHub (Atom's parent) is pissed.
(or that's the story as I remember it)
Imagine how much easier contribution would be if you could write the OS parts in fewer lines, guaranteed not to introduce most of the security and concurrency bugs we know about.
A bit weird to use the past tense here since it's not reached 1.0 yet. You can try it out today (tech preview) to create apps in Dart that run on Android and iOS:
(Googler, not on the Flutter team itself, but working on related developer tools.)
A company the size of Google, with all its internal politics, doesn't work like a startup. Starting a third operating system project and hoping it will replace two major ones means convincing people inside the company to lose part of their influence. Now, that might happen if Chrome or Android were failing, but they're clearly not.
I use Andromeda equivalently with Fuchsia in this article. Andromeda could refer to combining Android and Chrome OS in general, but it's all the same OS for phones, laptops, etc. - Fuchsia. I make no claims about what the final marketing names will be. Andromeda could simply be the first version of Fuchsia, and conveniently starts with "A." Google could also market the PC OS with a different name than for the mobile OS, or any number of alternatives. I have no idea. We'll see.
Won't Dart's single-threaded nature make it hard to take advantage of multi-core processors? Or are they embracing web workers?
This is worrying for Apple. I can see the following playing out
- Apple continues releasing machines like the TB MBP, much to exasperated developer dismay.
- Other x86 laptop industrial design and build quality continue to improve.
- Fuchsia/Andromeda itself becomes a compelling development environment
- Developers begin switching away from Mac OS to Fuchsia, Linux and Windows
- Google delivers on the promise of a WORA runtime and the biggest objective reason not to abandon Mac OS, i.e. writing apps for iOS, disappears.
- Apps start to look the same on iOS and Android. iOS becomes less compelling.
- iOS devices sales begin to hurt.
Granted that the App Store submission requires Mac OS (Application Loader) and the license agreement requires you only use Apple software to submit apps to the App Store and not write your own, but it seems flimsy to rely on that.
It would really surprise me if Apple got there first. Tim lacks vision and will keep on milking iOS even if the iPad Pro is a failure as a laptop replacement.
Windows is still king in the desktop space, at least as far as user base goes, but it's terrible on tablets and phones. MS has all the tech in place with UWP, but it's still pretty far behind in the race in terms of simplicity and usability.
Chrome OS ticks all the right boxes, and is experiencing a huge growth, but it's not universal. If Andromeda is real, and it's able to become a universal OS that merges Chrome OS and Android it might be the best thing since sliced bread.
I don't buy that a finger and a mouse/trackpad pointer are equivalent input devices. One obscures the display and is imprecise. The other, well, isn't.
I'm fine with a different server OS than desktop. I see no compelling reason why I need a single OS for all of my personal devices.
Not only does the OS UI reflect the input method, every single application you run does as well. A touch-optimized application will be clumsy and primitive when using a mouse; a mouse-optimized application will be miserable when using a finger.
If an application can be written once for both, often it'll be a poor compromise or only properly support one of the two input methods.
Look at how awful early Java "write once run everywhere" applications were. Yes, some of that was the Java platform itself, but developers were given the opportunity to ignore platform-specific UX concerns and many eagerly embraced it.
But what if you could use the same language and frameworks on all devices? The same tools (IDE, code editor, compiler, etc)? That's what I'm referring to.
I think a single OS for them all is fine.
Consider gnome+linux vs android+linux. They're both linux, and they're not the same platform.
That's clearly not what the original poster was arguing.
It's obvious that the UI has to be different depending on the device, input, etc.
Apple will have to go through all of the growing pains Microsoft has already weathered, so they're miles off, but maybe they can do it faster because they don't care as much about what people think of them, and are willing to do whatever they think is right (whether it is or not remains to be seen).
Google is well placed because, while they have a bunch of platforms, the only one in the desktop space is basically dead, and really didn't ever have too much investment in it, so not too many feel put out if they get rid of it. But that's not even the case, because they're able to engineer everything so that it stays relevant.
Might be limited to your specific environment though. As a counter-example among the people I know there are far more .NET devs than the total sum of iOS/Mac/Android devs.
Then I have to agree with you; UWP will go the way of Windows RT & the dodo, unless MS forces it as the main dev paradigm (which seems unlikely).
Also, I really don't see how a developer could argue against the notion of having a single language, API, IDE, compiler, etc, for working on all devices.
> Combining Chrome OS and Android is uninteresting to me, since I wouldn't be able to develop on it or really do any of the things I use a computer for.
Time will tell.
Wait, do we? Why?
I think why they would _not_ have the same interface between similar hardware and similar applications is a better question.
As for the user interface and bundled applications, let's not confuse "Operating System" with that although it's popular for some stupid reason. The one and same OS could of course have e.g. completely different window managers adapted for different human-device interactions and use cases. But that's very distanced from the actual OS, that is pretty much only interested in how to run and expose the hardware to the software.
All hail Universal OS!!
As of now, I never use Google drive sync, I use syncthing to sync folders between mobile and my Mac.
Universal OS rocks! (I hope it is in line with my vision of what universality should be.) I want to just visually ssh between the phone and the machine - or rather, the OS lives in the phone and I just connect it to the machine. There was some project like that recently; I don't know if it is available now.
Edit: Why the downvotes?
Beginners are typically terrified of anything they don't understand. I used to teach computers 101 to university students, and they'd pretty much freeze up every time they saw something they weren't expecting.
Then, only after I was comfortable with everything, did I start learning AJAX. I chose Vue.js, migrated the app I wrote in pure HTML to use Go, and wrote a guide about it.
I pissed off a lot of people on the Vue.js project when I raised this issue. Yes, in hindsight I do realize that I was rude to them, but their documentation had a problem (and I had already said sorry if I was rude).
I still don't know how to use websockets and whatnot, or how to write load balancers, distributed databases, web proxies, caches, or the other things needed at scale. I choose to ignore them for now, as I build up my ability to understand them, or until I need them.
This is what I meant by that statement.
>they'd pretty much freeze up
You know, I still remember the first C programming class I had in college in 2010 (I had not understood a single word); I froze and for a minute questioned if I had made the right decision to enter computer science. The point is that newcomers need to know what they can ignore while they learn the basics; it is not possible to learn everything in one go.
Nobody teaches you real life, it just happens, and as a newbie it is my responsibility to learn in the best way I can without getting overwhelmed, and no, it is not a disparaging statement!
Also, this is why I started Multiversity, this is a YouTube channel which teaches by example.
There is a small terminology issue here: a "server" is a program that offers services to remote "client" programs. The clients make requests and the server responds to them. A client program will make a request like "allocate me a chunk of the screen and put these here bits in it", or "let me know about any of these events that happen". The server manages the screen and notifies the clients about things they're interested in.
IT MAKES PERFECT SENSE, DAMMIT!
I agree, it actually does make total sense - but that doesn't mean I won't get confused :).
My only prior exposure to "GUIs over the network" were web applications, where the roles are essentially reversed. That is, the part responsible for accepting user input and rendering the UI is the client (the browser), and the part that performs the application logic is the server.
I naively assumed that X would work the same way, but it wasn't too hard to unlearn that misconception.
...about that steering wheel...
It's sad that ReactOS doesn't get more support from the community.
Apple's CEO lacks the vision and is milking the status quo. Their iOS market share is a lot smaller and limited to a few devices.
Microsoft's CEO lacks the vision and is aggressively milking the status quo. They aggressively try to enforce a switch to software-as-a-service, plus they are now in the gray adware/spyware business, capturing way too much end-user data, which casts them in a bad light. And they killed their QA. Their products since 2010 have been a disaster. That's why the Xbox One tanked in every market besides the US, Windows Phone tanked worldwide, and Win8 and Win10 market share is a lot smaller than Win7's, which is still the major desktop OS - and there is little reason to switch away from Win7. MSFT would profit from a 180-degree U-turn with new management.
That seems very, very dangerous to me.
Actually we already are in the red zone, I just wish we don't go crimson. But I doubt people will care. They didn't up to now, no reason it changes.
We need competition. A single OS controlled by a single corporation with their own conflicting interests is a million miles from what we need.
Obviously different devices need different UIs.
Surface tablets seem to be doing OK though.
Also, it would be mind-boggling if they didn't actually fix the update problem this time, and if it wasn't a top 3 priority for the new OS.
There is little beyond syntax that a different language can offer because a modern OS cannot afford features like garbage collection. Indeed, this was one of the research aims of MS's Singularity project.
Today, there is no reason Rust should ever be 3x slower, especially in an OSdev context, where you currently have to use nightly.