Both mechanisms operate over a persistent TCP/SSL/XMPP connection that Google maintains between its servers and Android handsets via a service called GTalkService.
What? That's rather naughty if it is true (the article doesn't give any more details than this).
Does this mean that non-technical users cannot turn off the connection to Google?
It's the connection all your push notifications come down: GMail push, inbound Google Talk messages, and so on. Google piggybacks any number of things onto it.
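For the curious, that channel is plain XMPP under the hood. Here's a minimal sketch of what such a persistent push connection looks like, written against the open-source Smack 3.x library and the public Google Talk endpoint. This is an illustration of the protocol, not Google's actual GTalkService code, and the credentials are obviously placeholders:

    import org.jivesoftware.smack.ConnectionConfiguration;
    import org.jivesoftware.smack.PacketListener;
    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.XMPPException;
    import org.jivesoftware.smack.packet.Message;
    import org.jivesoftware.smack.packet.Packet;

    public class PushChannelSketch {
        public static void main(String[] args) throws XMPPException {
            // The public Google Talk endpoint: one long-lived TCP socket
            // on port 5222, with TLS negotiated in-band.
            ConnectionConfiguration config =
                    new ConnectionConfiguration("talk.google.com", 5222, "gmail.com");
            XMPPConnection connection = new XMPPConnection(config);

            connection.connect();  // the persistent socket opens here
            connection.login("someone@gmail.com", "password");

            // Whatever the server decides to push arrives asynchronously
            // on this same connection; there is no polling.
            connection.addPacketListener(new PacketListener() {
                public void processPacket(Packet packet) {
                    if (packet instanceof Message) {
                        System.out.println("pushed: " + ((Message) packet).getBody());
                    }
                }
            }, null);  // null filter: listen to everything
        }
    }

Once you have one always-on pipe like that, pushing an "uninstall this package" instruction down it is no different from pushing a chat message.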
Why turn off a safety feature? If you're non-technical enough to install a malicious app that slipped into the Market, this mechanism is practically made for you.
I am a massive Google fan. But I know a number of people who are not and prefer not to "trust" them with data.
Most have android phones because they feel similarly about Apple. This would horrify them.
Admittedly that's an overreaction. But I can't help feeling that a persistent connection you have little choice about is not the sort of thing a free platform should have.
I think it's actually a necessity, especially for an open platform: imagine if a malicious app got loose and they couldn't do anything about it. It's a 'damned if you do, damned if you don't' situation.
I'm tired of these excuses for software makers to have more and more control over the devices we own.
Palm OS phones like the Treo never had any remote kill switch or other such nonsense. Anyone could develop for it and post their apps wherever they pleased. There was no gatekeeper to be paid or placated, nor were there any super-scary viruses that brought down the network.
If this kind of Orwellian control is "a necessity, especially for an open platform", why not allow Linux vendors the same power over your servers and workstations? After all, "imagine if a malicious app got loose and they couldn't do anything about it".
Seriously: it is up to the users to exercise good judgement and safe computing practices. If an infected phone is causing problems, let the carrier disconnect it (just like ISPs disconnect problem accounts). Turning power over to someone else is just another example of the nanny mentality. Not to mention that the back doors themselves may be exploited by malware.
It's not the same thing: those apps come pre-bundled when you purchase your BlackBerry, whereas the Google mechanism is a safeguard for when a malicious app slips through, and it only applies to apps downloaded from the Market.
To be fair, they could build the backdoor into the phone when it's manufactured. No need to use Google as their middleman.
All you need is a routine in the radio firmware that recognizes a specific signal and either turns off the radio or floods the towers with traffic. Better yet, have it request instructions from a server and deploy resources according to the plans it gets (something like the dispatch loop sketched below): communications meltdown, massive DDoS on critical services, you name it.
And since it lives in the radio controller, it's pretty much hidden from view. You can root your Android phone or jailbreak your iPhone all you want; the radio controller is effectively a separate computer.
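Purely to make the point about how little logic this takes, here's a hypothetical sketch of such a sleeper's dispatch loop. Every name in it is invented for illustration; it isn't based on any real firmware:

    // Hypothetical illustration of the "sleeper in the baseband" idea above.
    // None of these types or commands correspond to anything real; the point
    // is only how small such a dispatch loop would be.
    public class SleeperSketch {
        enum Command { NONE, RADIO_OFF, FLOOD_TOWER, FETCH_PLAN }

        public static void run(ControlChannel channel, Radio radio) {
            while (true) {
                Command cmd = channel.awaitTrigger();  // waits for the magic signal
                switch (cmd) {
                    case RADIO_OFF:
                        radio.powerDown();             // silent communications blackout
                        break;
                    case FLOOD_TOWER:
                        radio.transmitGarbage();       // saturate the local cell
                        break;
                    case FETCH_PLAN:
                        channel.downloadAndExecute();  // take fresh orders from a server
                        break;
                    default:
                        break;                         // stay dormant and invisible
                }
            }
        }

        // Invented interfaces, sketched only so the loop is self-contained.
        interface ControlChannel { Command awaitTrigger(); void downloadAndExecute(); }
        interface Radio { void powerDown(); void transmitGarbage(); }
    }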
Plausible deniability is an important difference. Factory-implanted backdoors ruin a commercial relationship -- and could be discovered before deployment.
On the other hand, subverting Google's own official 'kill switch' at a later date could be the work of a lone vandal or disgruntled employee, and it reflects more negatively on Google than on the manufacturers.
(BTW, I have nothing against Chinese hackers specifically; they're just a usefully vivid example from recent events. The same observation goes for any person or entity that gets momentary control of the official platform-wide revocation mechanism. Its mere existence, for either the iOS or Android ecosystems, makes it a super-juicy target for evildoers.)
> Factory-implanted backdoors ruin a commercial relationship -- and could be discovered before deployment
Only if they are discovered.
You can hide the firmware in ways that not even the "official" firmware can access; only a mask inspection would show that you have a small amount of ROM where none was supposed to be (or twice as much as stated in the chip specs). If I were paranoid, I would be seriously investigating whether such a plan could actually be carried out: how many processes would have to be compromised, and how many people would have to be involved, to introduce a feature like this into, say, a popular cellphone radio controller? And can we vouch that the hardware/software stack in the towers themselves is free of any backdoor/sleeper code or logic?
Again, I don't imagine this as being the work of gangs, but of governments. It's like having your communications blocked as soon as tanks cross the borders and planes start dropping bombs. It's a very nasty scenario.
The Chinese have balls when it comes to hacking, but not that much balls. If the bricking were traced back to the Chinese (and it would be), there would be a major international shitstorm about China "declaring war on open technology and American consumer electronics."
They haven't had to because they stop this stuff before it happens.
The cost of keeping the Apple App Store clean of malicious software is placed on the developers who are forced to wait for approval. The cost of keeping the Android App Market clean is placed on the users who will have to deal with the malicious apps that haven't been pulled yet.
Freedom isn't free it seems.
Does anyone know if Google can pull apps that haven't been installed by means of the Market?
I did a little googling and found plenty of articles about malicious software in the Android Market, but for the iPhone only articles along the lines of "Researcher says iPhone Data Model Could Lead to Malware" (http://www.pcworld.com/article/183741/researcher_says_iphone...).
As far as I can tell, the iPhone has only been hit by malware when jailbreaking is involved. Maybe I'm missing it, but considering that minor infections in the Android Market have made headlines, I imagine I would find something for Apple too.
P.S. I'm hardly pro-Apple; I have a Droid. I'm just saying that they probably do check for this sort of thing, because it's impossible in my mind for someone not to have tried to submit malware to the App Store.
Apple has rejected some apps for uploading the user's contacts to a third-party server, using private APIs, etc. It seems like they do dig a little deeper. Who knows if that's enough to stop all malicious software, but considering we've seen a number of malicious Android apps (the fake Bank of America app, a proof-of-concept botnet), it does seem like Apple's review process is providing some real benefits to go along with its real disadvantages.
The description of "malicious" seems a little nebulous.
I have never heard of an Android application that breaks out of the sandbox. When people talk about "malicious" applications, they mean apps that don't actually do what they promise and that exploit the trust of an overly generous user who okayed excessive permissions.
This is similar to a web site saying "Hey, add me to your trust zone" (in Internet Explorer) "and I'll be extra awesome", and then exploiting that access.
Another poster mentioned that location has a special confirmation security grant, which is interesting to learn; for other permissions, however, there is no guarantee that the app is doing everything in your best interest. You can at least see exactly what each app asked for, as in the sketch below.
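Here's a minimal sketch using the standard PackageManager API to dump every permission each installed app requested at install time. The activity name is mine, but the SDK calls are the real ones; this is the same information the Market shows on the install screen, where the user's "OK" is the only gate:

    import java.util.List;

    import android.app.Activity;
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import android.os.Bundle;
    import android.util.Log;

    public class PermissionAudit extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            PackageManager pm = getPackageManager();
            // GET_PERMISSIONS makes requestedPermissions available on each entry.
            List<PackageInfo> packages =
                    pm.getInstalledPackages(PackageManager.GET_PERMISSIONS);
            for (PackageInfo info : packages) {
                if (info.requestedPermissions == null) continue;
                for (String perm : info.requestedPermissions) {
                    // Everything the user okayed at install time, per package.
                    Log.i("PermissionAudit", info.packageName + " requested " + perm);
                }
            }
        }
    }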
No conspiracy or anything to be alarmed about, if you ask me.