Android Is a Dead End (osnews.com)
55 points by MBCook on July 16, 2017 | 68 comments



This is a very confusing argument. The summary seems to be that Android, defined as "a Linux kernel with libraries," is a dead end. Well, okay, although given its deployment on 2B devices it's probably the best argument for Linux.

A couple of years ago Android replaced Dalvik, the runtime for all apps. Did anyone say Android was a dead end then? Nobody defines Android in this way, by a single component, and in a way the author even acknowledges this. You might as well say that, given enough time, Linux itself is a dead end.

What Android is is what Andy Rubin originally said it would be - an open source mobile operating system. And I doubt this will be a dead end any time soon, certainly not in our lifetimes. Important libraries and frameworks will themselves be improved and replaced, as they have been for many years now, and yes, perhaps Linux itself will be succeeded. But why would anybody care, especially if it's transparent to users and even to developers (I'd imagine maybe a Linux emulation mode for NDK users). To me, this sounds like a healthy project, which is sort of the opposite of a dead end.


> Well, okay, although given its deployment on 2B devices it's probably the best argument for Linux.

One could have said more or less the same about Symbian not that long ago.

> Nobody defines Android in this way, by a single component, and in a way the author even acknowledges this.

That's right, it's not defined by any particular software component. Any particular component can be replaced until the whole thing is replaced, a la Theseus's ship. So what is Android - what can't be replaced?

The answer is the development model. This can't be replaced because it's definitional to Android. You said it yourself: Android is "an open source operating system." More specifically, it's an open source OS that follows a particular development model:

1. Development of mainline Android happens behind Google's closed doors, and is thrown over the wall to OEMs.

2. Development of Android for particular devices happens behind OEMs' closed doors, and is thrown over the wall to customers.

#2 is the rub. It annoys customers, who have to wait to get updates and fixes. It annoys developers due to the fragmentation problem. And it annoys Google because they don't control the update cycle, and instead have to shoehorn things into Google Play Services, etc. But critically it doesn't annoy OEMs, who love being able to apply their kernel "fixes," shovelware, and other familiar Android condiments.

Google would love love love to transition to a ChromeOS-like model, where they can exercise a lot more control. But they can't just do that outright - OEMs would revolt. They probably can't do it at all within the confines of Android and the development model they've established. That's their key problem.

> But why would anybody care, especially if it's transparent to users and even to developers

It won't be. There is no universe where Google can provide a new OS that works on all their customers' devices and supports all the apps. The best they can hope for is a new OS with a compatibility layer that works well enough - think Apple's transition from classic Mac OS to OS X.

But as soon as OEMs get wind of those, you can bet they'll be tripling-down on Tizen or forks or whatever. That's the dilemma that Google finds itself in.

Android is a sort of asymptotic dead-end, where Google can keep providing fixes like Dalvik->ART, but those can't address the essential problems. Eventually those problems will be severe enough to open a space for a new offering to thrive. Google hopes to own that offering, cannibalizing Android instead of letting someone else eat it.

(Oh, but then there's China...)


> OEMs would revolt

But... where would they go? What other options do they have, realistically?

> you can bet they'll be tripling-down on Tizen or forks or whatever

Really? I'm just not that sure. I think many OEMs would happily give up all the engineering expense/effort they carry right now to compete in what is essentially a commoditized market.


Windows Mobile.


> Development of mainline Android happens behind Google's closed doors, and is thrown over the wall to OEMs

You don't know much about Android, do you?

Here's Android: https://github.com/android

Developed behind closed door? I don't think so.

Maybe you meant iOS, though? Because that OS is certainly being developed behind closed doors.


I am sorry to inform you that Google engineers are not doing their daily pushes to github.com/android.

Yes, Android is developed behind closed doors. Primary development occurs in a Google-private repository, and the code is only made public at or near the point of a release.

This is different from the way Chrome, ChromeOS, Linux, Swift, etc. are developed, where primary development occurs in the open.


Ok.

So how do iOS and Windows Mobile fare in your opinion?


When did the OP ever mention iOS or Windows Phone being OSS? They've never claimed to be either.


"provides a read-only mirror" sounds like "developed behind closed doors" to me.


I would say apps are at a dead end. The way we are using our mobile devices with rows and rows of single purpose apps seems so... primitive now.

I mean, you need a "Pizza Place" app to order pizza, a "TicketMaster" app to order concert tickets, a "HotelWhatever" app for hotel reservations, a "Parking Meter" app, a "Fitness" app, a "News" app, etc...

Like I said: rows and rows of information silos taking up space on your phone, unable to talk to each other, each requiring a new account, each asking for your credit card again and again. It's terrible.

Now let's say you want to order pizza. Instead of downloading yet another app, you could just type "order pizza" in the search box, an interface is created by the OS, and you can choose the pizza place you want, toppings, etc., and the OS handles everything (address, payments).

AI is getting more and more capable, and I think we've reached the end of the "There's an app for that" era. Can Android evolve to this post-app paradigm? I don't think so. Same thing with iOS.

That's why, I think, Google is working on Fuchsia.


> you could just type "order pizza"

"Type"? What year is this, 2005?

That's a surprisingly narrow view of what the future looks like.

Today, you can already ask your phone "order pizza" and it will do just that. Whether it's done by an app or an intent or a web site is irrelevant: it just works.

Which is why the OP article "Android is a dead end" is so preposterous. There is no analysis, no reasoning, no evidence. Just a silly claim criticizing the OS used by 80% of the phones in the entire world, for simple clickbait.


Sure, I said "type" because that's what I would do, but you can speak to the phone or Assistant and achieve the same result.


I'm not going to have an argument with my phone on the train, talking and constantly trying to clarify the exact words I said. I use speech when I'm driving, and that's about it; most of the time I'm going to be typing/tapping.


The year is 2017, and speech-to-text still isn't perfect.


And it will never be.

For English, and this specific kind of sentence, it's above 99%, though. Good enough for you?

What you are doing is called the Nirvana fallacy. Look it up, and stop doing it.


I actually meant the speed more than the accuracy. At least on my phone, it takes a fair bit longer to listen to me and process my voice than to swipe in two words.


Having to create and track separate identities for each app, particularly transactional/infrequently used apps, does indeed seem dated. But I think it's a mistake to confuse the identity issue with apps as a whole.

AI won't solve every use case. There will always be a need for experiences tailored for niche needs in productivity/creativity, utilities, entertainment, etc.


They kind of got a second lease on life with ChromeOS support for Android. However, with WebAssembly and progressive web apps, the ChromeOS support for Android is a transitional move; eventually WebAssembly will eat the desktop market and the mobile app stores.


Isn't this what Google Wallet and Apple Pay were supposed to solve?


Extend, engulf, devour.

Most of the comments here miss the point of the article. The article author is saying that Google will replace the Linux component of Android with something proprietary. Then Google will have total control, and will be able to prevent Android clones.

Already, Android apps make such heavy use of Google proprietary code that open source Android systems such as Cyanogen and F-Droid have mostly given up.


Except that the only evidence provided is Fuchsia, which is open source.


Also, how does Google prevent Samsung & co. from taking over the AOSP codebase?

Sure, they could add rules to the "Google Play agreement for Fuchsia", but they are already in hot water because of the current Play Store contracts.


I don't think Google will do that anytime soon, because they don't know how to make and sell a phone for profit. The only player that can do that is Samsung, and Google cannot cut them off. I bet Samsung would sell more Tizen phones than Google would sell Fuchsia phones, if Google were to lock Android up.


Google lacks fundamental business and product management skills.

Creating an "open platform" encourages device makers to polish it only for their needs but not upstream features and fixes to competitors (OS fragmentation) and let multiple, insecure versions run forever that rarely get updated (legacy fragmentation). Plus, Linux security is usually an oxymoron which Fuchsia may address. If Google wants their platform to survive, they're going to have to require approval for apps and devices, and stipulate things like UX usability and mandatory OS patches.

PS: There are thousands of servers run by handset manufacturers and telcos that are required for some features of mobile devices to work at all. For example, Motorola Mobility smartphones depended on Motorola's own servers for many social media features to work. These backend-heavy designs make it supremely easy to brick devices, disconnect them, and invade users' privacy, en masse, for profit and for government spying.


> The article author is saying that Google will replace the Linux component of Android with something proprietary. Then Google will have total control, and will be able to prevent Android clones.

Sounds like a great conspiracy theory. Looking forward to the evidence to back up this link bait article.


So... there's absolutely no proof of what this article says, other than the existence of Fuchsia? Whose purpose we don't even know yet...


Don't know if it's a dead end or not, but both the APIs and the developer tools are a massive clusterf#%k compared to Apple's and Microsoft's. Just one head-scratcher after another, especially if you dare to support older versions. In comparison, Apple has a relatively pain-free API, and lets you access it with one of the most elegant languages I've ever programmed in (Swift). After delving into Android just for a bit, it is no surprise that there are so few decent, non-trivial apps for it, and that even Google's own apps work better on iOS.


Interesting POV, but I'd much rather develop for Android than any other platform. You are correct that currently developing for Android is not for the faint of heart. Happily, it also turns out that understanding Java and Gradle and IntelliJ is generally useful for a software engineer, and lets me develop for many more targets and vendors than one. And since all the components are open source, the community can provide excellent updates, bugfixes, support, libraries, frameworks, etc.

Not to bash on Swift, but even with the incoming Swift 4 they still don't have an ABI compatibility story (leading to bigger apps, since they have to bundle the runtime, and framework/library devs can't offer binaries), and they even had to break source compatibility (though the compiler will have a flag to compile Swift 3). And since most Android app development actually depends on the Android Support Libraries, you can provide great compatibility to current devices. Whereas - and this is according to iOS devs - supporting the latest version of iOS usually means abandoning support for previous versions. Also, I heard the newest Xcode 9 just got refactoring support - cute!


Your argument is moot if you replace Swift with Objective-C. Also, you don't have to abandon previous iOS versions to support the latest; it just doesn't make sense to when 90% of the users are on the latest.


Pretty silly.

Let's take a look at the landscape:

- Windows Mobile: irrelevant market share, closed source.

- iOS: 10-20% market share depending on the country, closed source.

- Android: 80-90% market share. Based on Linux, open source.

I know who I'm rooting for.


> over the coming two to three years, Android will undergo a radical transformation. This transformation will be mostly transparent to users - their next Android phone won't actually be "Android" anymore, but still run the same applications, and they literally won't care - but it won't be a Linux device, and it won't suffer from Android's core problems.

How laughable. Google can't even get most users onto a two-year-old upgrade, let alone an entire rewrite.


createArticle(template='IsDead', target=buzzwordList.get(random()))


Apple's iPhone is central to the product/value they deliver. And it's treated as such by Apple.

Google's Android is not. They deliver ads, and now some services that don't depend on ads. They needed to keep the channel open for these, in the face of a mobile transition that could otherwise be dominated by Apple and maybe Microsoft. Thus, Android.

Of course, this was also back when Google was still willing to really take a flyer on something.


Surprised at the lack of comments about how Kotlin might help the situation.


Android is slow? That's news to me. Been using Android for years.


Try out an iPhone for real world use.

I did the opposite. Android is extraordinarily slow ("feels unresponsive") even on high end hardware compared with a comparable iPhone. I had to use a Galaxy S8 for a spell at work, and every non-keyboard tap seemed to generate a half-second-plus delayed response.

I actually thought something was wrong (malware on a recycled company device?) and wiped the phone, installed all patches, and... still slow as hell, despite hardware that is 4x more powerful than my iPhone 6S.

It's as if they somehow ported the as-yet-unfixed lagginess of all Linux desktop managers to the phone on purpose, because they didn't know the world could be better.

And no, I'm not a fanboy, my phone is my only iThing. And no, there was no MDM or AV installed on the phone by my company that caused this lagginess.


"X Is Dead"

straight to the front page with this one!

I, for one, love my Google Pixel :)


Seems to me the long-standing "Java is slow" versus "that's an old wives' tale, Java software can be as fast as native" debate converges right down to the evidence of Android: compared to iOS, it's slow. If Java were just as fast, this would never come up.


Java can be performant, but still typically has a huge footprint and memory profile. Others will point out that there are multiple implementations of Java, which is true, but to what adoption? IMO, the only thing propping up Java is legacy enterprise and Android. The adoption of Go, Rust, et al show that people just aren't that into VMs anymore. They had a place in the self hosted world, but in the microservice/cloud world, it's a -lot- of overhead. The same is true in smartphones...we have a huge memory use problem in Android. It may be 'performant', but it's still a resource dog.


A key difference between an ELF binary and a Java .jar is that ELF contains a process image. The loader literally mmaps portions of the executable file into memory, and control is immediately transferred to this image (dynamic linking and runtime relocations complicate this a bit, but mainly involve patching the image).

Java, on the other hand, builds its process image on the heap, using the executable file as input. Nothing is executed out of the binary file directly, afaik. I believe classloading (defined in the Java spec) actually prohibits doing too much of this work ahead of time. The JVM has to defer work until startup in case the user wants to define a custom classloader.

My knowledge of the JVM is very basic compared to my knowledge of ELF, but I believe the basic gist of this is correct. I'm interested in digging into this difference more deeply and writing a well-researched blog entry about it.


I have good news for you! In Lollipop, Android (which never had a JVM) replaced its old runtime Dalvik with ART, which is on around 1.5B devices. ART actually compiles Android's bytecode to ELF, though Android calls their variant OAT.

OAT, in turn, is a playful anagram of AOT, because what Android now does is even more powerful than the JVM or ELF. Android includes a JIT, but it also does nightly profile-guided optimizations. So not only does Android perform the preloading optimization you mention for ELF, but as the user continues to use their apps, Android will continue to optimize the compiled binaries.


Good to know. Does that mean Java classloaders are not supported on Android? And does this approach require breaking compatibility with that aspect of the Java spec?


> The adoption of Go, Rust, et al show that people just aren't that into VMs anymore.

Did you actually look at the numbers?

Go is certainly gaining some popularity, and Rust too (although an order of magnitude below Go) but both of these still have a fraction of the mind share that Java has.

You're being dazzled by the hype and forgetting to look at hard data.


> people just aren't that into VMs anymore

Bytecode can be translated quite well to native code. So I'm wondering if this isn't about "VMs versus native" as much as it is about "garbage collected versus unmanaged memory".


What's an example of an ahead-of-time compiler that takes bytecode and spits out native code? Can you do inline assembly for the target architecture? I've heard a million times how JIT and the JVM "could be" just as fast as AOT (or, some claim, faster), but in practice it hasn't happened. High performance code is written in AOT/native languages.


You can definitely do ahead-of-time compilation with .NET bytecode. That's why you can write iOS apps in C# with Xamarin's SDK.

I think the performance differences between JIT-compiled code and native code mainly come from the 'convenience' features offered by languages such as Java or C#. I am thinking of garbage collection, (default) virtual functions, and reflection.

In, let's say, C++ you can do all of these things too, but the cost is not hidden from the developer.


>What's an example of an ahead-of-time compiler that takes bytecode and spits out native code?

I believe that is how .NET works (the bytecode gets distributed, and the user compiles it to native immediately before execution).

Also, Android's ART is the replacement for Dalvik that compiles Dalvik bytecode into native code at installation.


Actually, ART is now "JIT guided AOT." The JIT will (re)compile code as needed but the JIT code is stored to disk, thus creating an AOT binary on future runs.


>I've heard a million times how JIT and the JVM "could be" just as fast as AOT (or, some claim, faster), but in practice it hasn't happened.

Why do you say this? There are lots of examples out there, and Android is one of them.


How is Android an example? What AOT are we comparing it to here?


Possibly. The default strategy, at least in the server-side world, simply uses too much memory. Compare Go RES memory to an equally implemented Java version; it's typically orders of magnitude smaller.


> The adoption of Go, Rust, et al show that people just aren't that into VMs anymore.

Or, you can also say that the adoption of LLVM compilers, JavaScript/Node, Python, and Ruby shows that people are more into VMs than ever.


LLVM is not a VM, despite its name. Node is popular mainly because frontend people want to do backend work. Python is Python; it's been around forever and is available everywhere. It's also unmatched in scientific realms. Ruby seems to be falling out of favor.


Between GitHub, Metasploit, and Rails, Ruby is falling out of favor the way that Java has fallen out of favor.


Well, I don't mean 'dead' by falling out of favor; I'm just trying to gauge the pulse of new projects. Of course Ruby still exists, heck, even Fortran has new development jobs, but I wouldn't call it a 'hot' language. Sure, that comparison isn't fair or accurate, but it's an observation. Rails props up Ruby in a similar fashion to how existing codebases and Android prop up Java.


>Ruby seems to be falling out of favor.

Please elaborate, I'm interested.


Well, I typically look at a few metrics, as programming languages fascinate me a lot. However, I'll be the first to admit I'm not an expert, so take everything below with a grain of salt.

First, I judge by new projects/tools that interest me. I don't see much Ruby there, but a fair amount of Python, admittedly. This could be biased by what interests me, obviously.

The second is people I know. Some are ex-Ruby devs; none still use it in their line of work.

The third is job trends. Ruby, specifically Rails, is just down by any metric. A quick Google search sent me to http://www.indeed.com/jobtrends/q-Ruby-on-rails.html

Again, I said 'seems' because it seems that way to me, not because it is fact.


JavaScript is moving away from VMs in a sense, since WASM has basically unmanaged memory (no garbage collector). That's a good thing, because in theory you could build your own GC on top of it. (I say in theory because the technology is not there yet, in particular on the concurrency front).


Wasm code is bytecode; it targets a VM.


True, but it's a very simple VM, as it doesn't have a GC.


This post is just ignorant of iOS and Android and the dozens of differences between them that lead to user perf complaints. Everything from Apple's hardware advantage to the way Android handles background apps to the difference in built-in library quality (Core Data vs. 'use SQLite or something', for example) all make way more of a difference.

These days, the Java code in an Android app is compiled to native anyway.


But it also uses GC even when "compiled to native", so you're not free of things that could make your app stutter. iOS does not use GC. WinPhone did use GC, but they spent a lot of time optimizing the hell out of the stuff that runs on the UI thread.


> Everything from Apple's hardware advantage to the way Android handles background apps to the difference in built-in library quality (Core Data vs. 'use SQLite or something', for example) all make way more of a difference.

I'd like to go off on a tangent here. On LineageOS, under privacy settings you can deny apps permissions such as run at boot and run in background. I wish deny were the default mode in all of Android, and that apps had to ask for permission to run at boot. This is good for the end user, and it's good for Google, as Play Services would have this turned on by default, meaning greater adoption for Google Cloud Messaging. This can't happen in one step, because the proverbial cat is currently out of the bag and we have to sort of slowly put it back in. Something similar needs to happen with all permissions. File storage: afaik all apps have access to their own data store. Most apps should not need access to device storage at all. Almost no app should require access to the phone permission.

For example, look at this page by Capital One: https://www.capitalone.com/applications/mobile/android/permi...

> Don’t worry, we aren’t going to call you to find out your plans for the weekend. This permission allows us to use unique phone information (SIM ID and phone number) to guard against unusual sign-in activity. It’s a little extra security to help protect your money.

What is this madness? If it were up to me, I'd ban the Capital One app for this amount of ridiculousness.

That being said, I am not an iOS fan either. I have problems with iOS as well. Why isn't there a way to limit data usage to, say, n MB a month? Why in the world should it take an OS update to update iMessage or Safari? How is this still a thing now that the iPhone is almost ten(!) years old?


>What is this madness? If it were up to me, I'd ban the Capital One app for this amount of ridiculousness.

Fingerprinting devices to detect unusual login activity? This is common. Heck, I've had Google lock me out until I confirmed with my recovery email after logging in from a new device. Much more common is simply getting an email after logging in from a new device. The most common method of doing this for desktop sites is, I assume, IP address, which obviously does not work for phones.


> Fingerprinting devices to detect unusual login activity? This is common. Heck, I've had Google lock me out until I confirmed with my recovery email after logging in from a new device. Much more common is simply getting an email after logging in from a new device. The most common method of doing this for desktop sites is, I assume, IP address, which obviously does not work for phones.

This is too heavy-handed. Capital One lets me log in from any computer browser anywhere using just my username and password. As of Marshmallow,

> The most straightforward solution to identifying an application instance running on a device is to use an Instance ID, and this is the recommended solution in the majority of non-ads use-cases. Only the app instance for which it was provisioned can access this identifier, and it's (relatively) easily resettable because it only persists as long as the app is installed.

https://developer.android.com/training/articles/user-data-id...

The only pertinent use case the document details for using the Phone permission for a credit card app is:

> Abuse detection: Detecting high value stolen credentials

> In this case, you are trying to detect if a single device is being used multiple times with high-value, stolen credentials (e.g. to make fraudulent payments).

> We Recommend: IMEI/IMSI (requires PHONE permission group in Android 6.0 (API level 23) and higher.)

> Why this Recommendation?

> With stolen credentials, devices can be used to monetize multiple high value stolen credentials (such as tokenized credit cards). In these scenarios, software IDs can be reset to avoid detection, so hardware identifiers may be used.

But the Capital One app does none of those things! Even if it did, I'd imagine I am a pretty average user, and all I want to be able to do is look at my list of transactions, pay my credit card bill every month, and maybe cash out my rewards. The phone permission can wait until you really need it.

If we criticize Amazon for encouraging users to make their devices less secure https://www.amazon.com/gp/help/customer/display.html?nodeId=... then we should criticize Capital One for permissions proliferation as well.


If it's any consolation, in the latest version of Android the permission model is mostly how you describe.


I don't know, man... I think you might be overlooking the fundamental flaw in the Android ecosystem: the hardware makers are not incentivised to "care".

I understand what the author of the article is saying. Android seems to be flawed, on a business and an architectural level. I'm just not sure changing a programming language will fix that. The resource problems, in fact, arise from the lack of tight integration between hardware and software. Basically, iOS is more than just an OS. It's very tightly integrated with the hardware of the various iPhones. Those integration points are all revisited in every update to ensure no regressions, and often, to implement improvements.

Speaking of updates, that's another problem with Android. (If we're being fair, I think that problem too is a problem of the looser hardware integration.) But the fact remains that as long as the OS is yoked with the business requirement of being able to be deployed by any OEM, on any hardware they slap together... Android will have update problems. OEMs are sometimes trying to throw a 2.5 cent network chip onto a 40 cent phone and expect Android to handle it with all the speed and grace that iOS handles the rigorously vetted hardware and professionally customized, hand coded firmware that Apple puts in their phones. This is very difficult to do. In fact, I think it's unreasonable to expect that this can be accomplished at all in the market that Android OEMs have to operate in.

Google needs to decide what they want... do they want Android in the hands of the majority of device users? Because if they do... they are going about it absolutely correctly. (After all, Microsoft beat Apple SOUNDLY in the PC market... but the reality is that it required many Windows installations to run on some very unstable hardware builds.)

Alternatively, does Google want to make better DEVICES than Apple? If this is what they want to do... yeah... Android is just NOT going to get them there. It probably is a dead end.

But either way... it's not because of Java, or Rust, or whatever programming language we're blaming it on today.

It really is a problem of... let's be honest... a lot of substandard hardware that OEMs are pawning off as being the same as the good stuff, as well as firmware that many of them just don't want to invest much time and money into perfecting. (Because the return on that investment is so low.)

You could change your VM and programming language tomorrow... you'd still have the problem of OEMs slapping cheap hardware and barely functional firmware into something that looks just like all the other Android phones and saying, "Buy MINE!!! It's cheaper!!!"


There are videos showing the Nexus 5, a mid-tier phone of its generation, outperforming the iPhone of the subsequent generation in launching apps and getting to first interaction: https://youtu.be/hPhkPXVxISY

My own experience is that iOS feels more sluggish than vanilla variants of Android.



