
The mother of all Android malware has arrived - anon1385
http://www.androidpolice.com/2011/03/01/the-mother-of-all-android-malware-has-arrived-stolen-apps-released-to-the-market-that-root-your-phone-steal-your-data-and-open-backdoor/
======
ekidd
This is an epic screwup by Google and the mobile carriers, but it's also a
useful warning to Google's competitors.

1) This problem was reported to Google a week ago, through multiple channels,
by one of the app vendors who got ripped off:
[http://www.reddit.com/r/Android/comments/fvepu/someone_just_...](http://www.reddit.com/r/Android/comments/fvepu/someone_just_ripped_off_21_popular_free_apps_from/)
Apparently, Google has an unofficial policy of ignoring copyright and
trademark complaints, allowing lots of skeevy software to linger on the
market.

2) The phone carriers are apparently very slow to patch root exploits.

3) Users should be able to trust everything in a curated app store, or else
there's not much point to those 30% fees.

But a word of warning to iOS, WebOS, and Blackberry users:

4) Although the lax behavior of Google and the carriers made this exploit
easier, we'll eventually see problems like this on most mobile platforms.
Apple has allowed (benign) root exploits to slip through their approval
process in the past. If your phone is vulnerable enough to be rooted, it's
vulnerable enough to be owned by a malicious app.

~~~
YooLi
"3) Users should be able to trust everything in a curated app store, or else
there's not much point to those 30% fees."

Isn't the appeal of Google's app store that it isn't curated? Anyone can sign
up and submit anything.

And Apple has never let a "(benign) root exploit" through approval.

~~~
ekidd
_Isn't the appeal of Google's app store that it isn't curated?_

Yes and no. As a developer, I certainly love the relative ease and freedom of
Google's Market. But as a user, I don't want to fear clicking "Download" or
"Buy". Apple succeeds, in part, because their App Store provides a certain
measure of trust, and users are willing to open their wallets.

At a minimum, Google needs to respond _quickly_ to malware reports, and to
enforce anti-piracy rules vigorously. Letting attackers pirate and republish
apps allows them to reach many more users than they could by developing
malicious-but-popular applications from scratch.

 _And Apple has never let a "(benign) root exploit" through approval._

Yeah, I think that was an incorrect statement on my part. Apple has passed
major, hidden functionality that violates their policies, but I was mistaken
in thinking a root exploit was involved. For a mea culpa, see the thread at
<http://news.ycombinator.com/item?id=2279823>

------
iuguy
Bear in mind that this is just something casting a wide net by using the
Android Marketplace.

There are enough bugs and 0days floating around in WebKit, as well as
unpatched exploit code, to take on Safari, the Android browser and Chrome at
the moment.

From a mail over the weekend about a pentesting exploit kit we subscribe to:

    This release introduces two new exploits for the webkit CSS rule
    deletion vulnerability. Use safari_parentstylesheet to exploit
    all those pesky OSX machines (fully up to date and patched)
    and android_parentstylesheet for anything running android 2.2 and
    below. Moreover, using android_hotplug you can further escalate
    your privileges to root. Being offensive has never been so good!

    ==New Modules==

    o safari_parentstylesheet (Safari <= 5.0.3 64bit webkit css rule deletion vulnerability)

    o android_parentstylesheet (Android <= 2.2 webkit css rule deletion vulnerability)

    o android_hotplug (Android privilege escalation vulnerability)

That's just one pentesting tool we use and that's _a legitimate toolkit_.
Malware targeting webkit in general is on the increase, with various payloads
for safari, chrome, osx, ios and android. It's still the minority by far, but
it is growing. Heck, even Metasploit's getting in on the game
([http://blog.metasploit.com/2011/01/mobile-device-security-
an...](http://blog.metasploit.com/2011/01/mobile-device-security-and-android-
file.html)).

Incidentally if you want to see a video of the safari bug in action, you can
download one from
[http://partners.immunityinc.com/movies/Lightning_Demo_Safari...](http://partners.immunityinc.com/movies/Lightning_Demo_Safari01.zip)

There's also one on owning android at
[http://partners.immunityinc.com/movies/Lightning_Demo_Androi...](http://partners.immunityinc.com/movies/Lightning_Demo_Android.zip)

~~~
mrcharles
I'm not sure, after your paranoia-inducing post, that anyone is going to
download zip files from links you provide.

Are these videos somewhere on the web in viewable form?

~~~
iuguy
If you watch the android video you'll see that a link will do the trick, not
even a download. So it makes no difference really - either you trust the link
or you don't. Scary, huh?

I'm not aware of these vids being anywhere else but they are from the guys who
make the aforementioned framework.

------
JacobAldridge
"I demand a walled garden, and I will gladly pay 30% to ensure all apps are
reviewed, approved, and subject to the whims of the closed-shop store
providing them."

This is obviously a serious issue - as the OP notes, the double-edged sword of
openness. Still, the speedy response of Google makes me feel warmer than the
(not-so-common now) decidedly un-speedy application process Apple put many
developers through.

Edit: As jevans points out in response, and others have noted in this
discussion, Google's 5 minute response time might be better characterised as
'1 week of sitting on their hands when the developers complained, and 1 rapid
response when it went public in a loud way'.

I'm feeling less warm now, and looking fondly at the non-smartphone Nokia I
own which is so clearly targeted at the 11-year-old-girl's-first-phone market
that it came inside a pink cardboard handbag. But has no malware.

~~~
sid0
The Apple walled garden is way more dangerous, because it's almost as easy to
slip malware into your code (remember, Apple doesn't do full source code
audits), and the false sense of security makes users even more complacent. It
really is a perfect example of security theatre.

~~~
jedsmith
They do static analysis of your executable and check what you call, which is
why they know if you're using a private API and reject you because of it.
They've also caught bugs in my app and sent it back.

I believe the advantage of Objective-C for them is that all messages pretty
much go through one point in the runtime (as far as I know). That signature is
probably very obvious, and they can probably do a lot of looking at your code
with very little effort. If you have malware in your code, there's a pretty
good chance they'll find it based on what it does. I imagine they can see a
lot more about our apps than you'd think, since they probably run them in a
debug build of iOS.

I can think of literally dozens of things they can look at, and I don't even
have access to their systems to know what data is available. They have
hundreds of engineers who have probably figured all of that out, and automated
it for the approvers, too.

And how would we know? Is a malware author going to blog and say "I tried to
slip malware into the App Store, and they denied it?" Since an attempt costs
$99 (I can't imagine you'd keep your account after a failed attempt), that
raises the bar to trying. I seriously doubt we're the first to think of
exploiting iOS, and you haven't heard a word about it...

I think _security theater_ is a bit of a stretch, frankly.
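As a rough illustration of the kind of scan being described, here's a toy Python stand-in: it searches an app binary's bytes for denylisted selector strings. The selector names are made up for illustration, and Apple's actual tooling is not public.

```python
# Toy sketch of a private-API scan: look for denylisted selector
# strings in a binary's bytes. The names below are hypothetical;
# Apple's real analysis pipeline is far more sophisticated.

DENYLIST = {b"_privateSetWallpaper:", b"_launchBackgroundTask:"}

def find_private_selectors(binary: bytes) -> set:
    """Return which denylisted selector strings appear in the binary."""
    return {sel for sel in DENYLIST if sel in binary}

# A binary that embeds a forbidden selector as a literal gets caught:
app = b"\x00lots of code\x00_privateSetWallpaper:\x00more code\x00"
print(find_private_selectors(app))  # {b'_privateSetWallpaper:'}
```

A scan like this only catches names that appear as literals in the binary, which is why the dynamic-dispatch question raised below matters.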

~~~
sid0
_They do static analysis of your executable and check what you call_

Doesn't Objective-C have dynamic binding? I'm sure you can determine what
function to call at runtime, which means you can always get past a static
analysis.

 _I can think of literally dozens of things they can look at_

I can think of literally dozens of ways any analysis can be subverted. Even
dynamic analysis wouldn't work if the thing uses a time bomb, or something as
simple as downloading data from the internet (apps do that, right?) and
sending in a special payload that instructs the application to do something
evil once it reaches, say, 1,000 users. Apple wouldn't test your application
by installing it on a thousand phones, would it?

~~~
jedsmith
> Doesn't Objective-C have dynamic binding? I'm sure you can determine what
> function to call at runtime,

You need a message name (@selector), and the names are strongly typed --
meaning, they're pretty obvious with reflection tools. If there's a way to
send a dynamic message without putting a @selector in your code, I don't know
it, and I'm willing to learn.

When you start talking about the POSIX layer and stuff near the bottom (C),
traditional wisdom applies there: what does your app link against? If you're
linking against the dynamic libraries near the bottom of the stack and walking
their contents (to avoid putting a string in your binary of what you're
looking for, perhaps?), Apple's probably going to check that disassembly
pretty heavily.

> Even dynamic analysis wouldn't work in case the thing uses a timebomb

You keep on writing time bomb as if it's some magical device that circumvents
all security. A time bomb _needs a callback_ in the binary, and Apple's going
to wonder why your app registers a timer for a specific date. This is what you
don't seem to get: Apple has a disassembly of your entire binary, and they can
see when they run it that you register a callback for December 2012. Where
does that callback go? Code in the binary.

Same thing with your 1,000 phones case: clearly something needs to count, and
the obvious candidate is a Web server of some kind, and then the app needs to
_actually do something_ if the response comes back as 1,000. Which means that
code needs to be _in the binary_.

Or you need to download code from the Web server to execute. Which is _easily_
detectable by Apple, and you'd never get approved.

> something simple like downloading data from the internet (apps do that
> right)

You will be rejected if it's used as any part of execution, and they can (and
do) check that. If you even touch APIs that try to sneak data into executable
pages, I bet they'd terminate the app in record time.

Trust me, they've thought this through. The reason that I asked if you're
speaking from experience is because you're making a lot of FUD claims which
are easily fixable. Seriously, buy a Mac and try doing something malware-like
with the iOS API. Then try submitting it to Apple. Otherwise, you and I are
both bags of hot air, theorizing about hypotheticals.

~~~
blibble
I find it pretty hard to believe the iOS review monkeys have the technical
ability to reason about the disassembly of a binary (if it's sufficiently
obfuscated, there are maybe 1000 people tops on Earth with the ability to do
this).

As long as the operating system has a dynamic linker, all bets are off w.r.t.
static analysis, and the halting problem definitely applies to automated
analysis.

If the OS allows writable pages to ever be executable then you can pretty
easily hide your nasty code in the low order bits of an innocent looking data
file (say an image), then pull them out and execute them at runtime.
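A minimal sketch of that LSB trick in Python, using a byte string as a stand-in for real image data (one payload bit per carrier byte):

```python
# Hide a payload in the least-significant bits of "pixel" bytes,
# then recover it at runtime. One payload bit per carrier byte.

def hide(carrier: bytes, payload: bytes) -> bytes:
    bits = [(b >> i) & 1 for b in payload for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the low bit
    return bytes(out)

def extract(carrier: bytes, n: int) -> bytes:
    bits = [b & 1 for b in carrier[: n * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n)
    )

pixels = bytes(range(200))        # stand-in for innocent image data
stego = hide(pixels, b"evil code")
print(extract(stego, 9))          # b'evil code'
```

The carrier still looks like plausible data to a casual inspection; only the low bit of each byte has changed.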

~~~
robterrell
iOS doesn't allow writable pages to be executed. As I recall, that was one
reason why Android 2.2's V8 JavaScript JIT was so much faster than iOS's.

Also, use of dlopen() is not allowed. A long time ago I heard of two AR apps
that used it (they dynamically linked against CoreSurface, to copy the bits
from the camera preview into an opengl texture) but I haven't heard of anyone
sneaking an app using dlopen() into the store in over a year.

~~~
blibble
hence "to ever be executable": they don't need to be writable _and_
executable at the same time. As long as at one point they're writable (for
you to dump your code in), they can at some later point be executable
(writable or otherwise).

as for dlopen(), you could just compile it directly into your app rather than
calling it from libc/libdl, bypassing that limitation entirely.

~~~
darren_
On non-jailbroken iOS you can't mark a section of memory as executable once it
has been writable (and vice-versa, apparently). Executable pages are also
checked against a signature by the kernel before they're used.

~~~
blibble
that's pretty incredible, I'm genuinely surprised.

I suppose it's possible when you start from scratch, there's no way they could
do that on the Mac.

------
nailer
This was inevitable - Google takes a laissez-faire attitude towards copyright
violation on the Android Market, which is full of ripped-off IP - games with
names and artwork belonging to other companies, rip-offs of Rolex logos for
clocks, etc. - that I've personally reported and that Google has always
ignored.

If they don't care about the small stuff - and it seems they don't - something
nastier was always going to come along.

Hell, if they'd bothered to notice that one of the submitted apps was 'Spider
Man', perhaps this would already have been averted.

~~~
babblefrog
The way it is supposed to be done is with a DMCA complaint, and I don't think
you can do that unless you own the IP in question. Or are you saying that they
are ignoring DMCA complaints? That is a much more serious accusation.

~~~
nailer
Android Market has a 'flag' button that lets you specify, e.g., that
'Wolverine' isn't anything to do with Marvel but uses Marvel characters and
artwork. They ignore these and similar copyright-infringement flags.

You have to own the content to file a DMCA complaint.

------
JonnieCache
The fact that they're collecting IMEIs is interesting. One of the little-
discussed facts about smartphones is that they make it trivially easy to
change the IMEI.

On the Galaxy S you can simply mount the NVRAM where it is stored as r/w and
change it, along with any other data you want.

For those who don't know, the IMEI number is what physically identifies an
actual handset, like a MAC address, except that the networks/authorities view
it as a more watertight way to identify someone, since up until recently
changing one required a soldering iron.

This malware's behaviour implies what I have suspected for some time: that
there is a black market for IMEIs, likely used by organised criminals to
remain anonymous, or to enable the resale of stolen handsets.

Anyone fancy taking a guess at what an IMEI is worth on IRC these days? CC#s
are meant to be about $0.10 each, aren't they?
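For what it's worth, the last digit of a 15-digit IMEI is a Luhn check digit, so candidate numbers can at least be sanity-checked. A quick sketch (the sample number is a widely circulated test IMEI, not a real handset):

```python
def luhn_valid(imei: str) -> bool:
    """Validate the Luhn check digit of a 15-digit IMEI."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))  # True  (well-known test IMEI)
print(luhn_valid("490154203237519"))  # False (check digit broken)
```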

~~~
peterwwillis
Haha, I don't know; it's always been easy to get a new IMEI. Go to one of
those "cell phone support forums", advertise an unlocking service, ask for
the IMEI, receive it in your email. With a little more work you could monetize
it, but botnets for CC#s are probably a lot easier, and 419s provide even more
pay for the effort involved. But then again, I'm not a kid in some 3rd-world
country looking for a quick buck, so who knows - Android apps could be a
really interesting proposition.

------
sedev
I think that this illustrates a point that's worth making about the difference
between Android devices and iOS devices. There is no 'better' - there is a
tradeoff. This is the same tradeoff that Linux elsewhere offers, really.

This is the tradeoff: "if you are willing to invest your time, mental energy,
and vigilance into bending your device to your will, avoiding traps such as
the one mentioned in the article, and doing upkeep, a Linux-based device will
give you enormous and awesome capabilities, leveraging the full power of
having a general-purpose computer in your pocket."

I am glad that that's available, because for some people, that's a great
tradeoff! But it's important that there be another tradeoff available, which
iOS is currently the flagship of: "if you are willing to accept more limited
capabilities, you can have those capabilities in a form that Just Works and
does not require your vigilance, time, and mental energy."

I propose that the market shares of Android and iOS roughly reflect the
number of people for whom each of these tradeoffs is the one they want. Of
course we
may want different things at different times - but when we buy these devices,
we are voting with our dollars as to which tradeoff, overall, serves us best.
I hope that both retain vigorous market share, because different people are
best served by different tradeoffs.

------
kilowatt
Coincidentally, this is exactly the kind of thing Google's recent Zynamics
acquisition is meant to find automatically through binary code analysis.

~~~
zmmmmm
If I were Google, I'd have a giant server room full of VMs running Android,
hosting every app in the app store with monitors on their state. Then you can
monitor not just what the code is predicted to do but what it actually does.

------
jasongullickson
_"as well as remotely removing them from user’s devices"_

...well that is an interesting feature!

~~~
dean
This is surprising. Does anyone know if Google can actually do this? Maybe
provide a reference.

~~~
megablast
They said they could and would do it, just as Apple and Microsoft have said
they could do it.

~~~
blub
Nokia can't do it, but people don't like their Symbian UI. Those are the
priorities, I guess.

------
jimrandomh
Completely missing from both the article and the comments: the list of
permissions these malicious apps requested. I'd really like to know whether
they found a way around the permissions model, or if this is a case of users
clicking 'OK' to a prompt that says "do you want to let this app root you?"

~~~
antrix
The apps included root exploits. The thread on the Android subreddit has lots
of details.

------
IgorPartola
I am for the openness of the Android platform. I don't believe that a review
process like Apple's would be beneficial. However, the Android Market is not
like the web. On the web, if you are selling a downloadable piece of
software, and I come along, buy a copy, break your DRM and start selling it
myself, you have a surefire way to get me to stop: talk to my payment
processor and tell them I am committing fraud. They don't like that, and will
shut things down. While the law is fuzzy and I am innocent until you can
prove I am ripping you off, the payment processor has a contract with me that
says I will not do anything illegal, and they will enforce it.

Now, on the Android Market there is, effectively, only one payment processor:
Google. And they have a conflict of interest, since they also own the
platform. They can either police all the apps (a la Apple) or ignore all
fraud reports. The in-between gets them into a lot of grey territory about
what constitutes fraud, etc.

------
ReadyNSet
There is one thing everyone is missing: Apple does have a process for
verifying the identity of the author. Even if malware were slipped in, it
would be easy to identify the author and report them to the authorities, and
that in itself is a good deterrent against trying. Remember, malware writers
want low-hanging fruit first, and Android is extremely ripe for it.

------
unwind
I found this: _"Update: holy cheeseballs, they've been pulled already! Took
less than 5 minutes from first contact to pull!"_ utterly amazing. That goes
against ... well, pretty much everything I've ever heard about Google's speed
and ability to respond to humans saying something.

Too bad it was over a security incident, but at least it proves that Google
can react.

~~~
trezor
Actually, reading the thread on the Android subreddit, one of the affected
developers said he filed complaints over a week ago with no response or
action taken. That is, until it reached the (Android sub?)reddit front page.

Google doesn't really do customer service so much as PR management, and that
seems to be the way Google usually handles these things. Very few problems
are addressed before they turn into high-profile cases of bad publicity. And
really: that is pretty bad.

Let's not let Google-fanboyism blind us from the facts: Google is one of the
worst companies on the planet when it comes to customer-service.

~~~
jodrellblank
_Very few problems are addressed before they turn into high-profile cases._

Really? How would you tell if that wasn't true?

~~~
trezor
Good question. If I answer it 100% honestly, I would say I've reached that
conclusion through very unscientific means: personal experience and possible
confirmation bias.

Examples of this would be me having issues with Google services or
applications, Googling for answers and then finding forum-posts on the Google
forums discussing the problem.

These posts would mainly consist of 3 things: 1. Original poster asking for
advice on a problem he has. 2. Other people experiencing the same problem,
stating so and in some cases offering workarounds of various quality and
reliability. 3. A complete lack of response from Google's end. No confirmation
that it is a known bug with or without workarounds or a confirmation that this
is a bug and that it is being investigated. Nothing.

These threads, when I find them, tend to be at least a year old, if not more,
with a staggering number of people experiencing the same problem, and all
this with the issue at hand remaining unfixed. To quote a comment I read in
one of those threads: "It's pretty obvious that Google doesn't listen and
Google doesn't care".

Moving back to your question: "How would you tell if that wasn't true?". I
guess in these cases I actually wouldn't, so it is indeed a good question.

But I know that for all the issues I've had, I still have them, and they are
still entirely unaddressed by Google. Reporting problems to Google feels like
shoveling data into a write-only device, and I can't think of any other
company I've dealt with that has been so completely absent from its customers
when it comes to the products and services it offers. None.

Reliable customer service is why companies like to send their money towards
Microsoft: with Microsoft you may have Microsoft costs, Microsoft problems
and Microsoft quirks, but there is a huge support apparatus ready 24/7/365 to
help you get your problems solved. To put it bluntly: customer support is not
an "I'm feeling lucky" button.

------
jsz0
Google needs to do something. They absolutely do not need to mimic Apple's
policies or impose any radical new limitations but the current situation is
starting to spiral out of control. Once this starts hitting 'normal people' it
could have massive repercussions. They need to stop this before the average
person fears installing software onto their device or the entire third party
ecosystem is going to be stunted. When you think about the depth of personal
information, location awareness, microphone, camera, etc., the ramifications of
this are huge. The first virus that leaks SMS messages, owns Facebook
accounts, or turns on cameras to spy on people could basically end Android as
a viable consumer product. Of course we've seen this all before on PCs but I
think the stakes are higher these days especially with devices that are so
personal and highly connected.

------
mariuskempe
If someone who knows a lot about these things has a few moments to spare, I'd
love if you could answer [http://www.quora.com/iOS-vs-Android/What-are-the-
relative-me...](http://www.quora.com/iOS-vs-Android/What-are-the-relative-
merits-of-iOSs-and-Androids-application-sandboxing-models) on Quora.

------
Derbasti
Never mind walled gardens - Google could at least run a basic virus scanner
over submissions before posting them to the Market.

~~~
dpcan
Smartest post on this entire thread.

They clearly analyze the APK on upload and look at the manifest, etc., so why
not do a rootkit scan of some kind?
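Since an APK is just a zip file, even a crude automated pass could flag suspicious contents, such as native binaries bundled where none are expected. A hypothetical sketch (the filename heuristics allude to the exploit binaries reportedly bundled by this malware, but are purely illustrative; this is not how Google actually scans):

```python
import io
import zipfile

# Names associated with known Android root exploits of the era.
# Purely illustrative heuristics, not Google's actual checks.
SUSPICIOUS_NAMES = {"rageagainstthecage", "exploid"}

def flag_apk(apk_bytes: bytes) -> list:
    """Return APK entries matching the naive heuristics above."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for info in apk.infolist():
            name = info.filename.rsplit("/", 1)[-1].lower()
            # flag known exploit names, or any packaged ELF binary
            if name in SUSPICIOUS_NAMES or apk.read(info)[:4] == b"\x7fELF":
                hits.append(info.filename)
    return hits

# Build a fake APK containing a packaged exploit binary:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"dex\n035")
    z.writestr("assets/rageagainstthecage", b"\x7fELF....")
print(flag_apk(buf.getvalue()))  # ['assets/rageagainstthecage']
```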

------
sp332
I saw that there was a rootkit for Android botnet C&C presented at Shmoocon.
Is this a similar attack, or something different?
[http://www.grmn00bs.com/2011/01/30/smartphone-code-
release-f...](http://www.grmn00bs.com/2011/01/30/smartphone-code-release-for-
shmoocon)

PDF:
[http://www.grmn00bs.com/Shmoocon2011_SmartphoneBotnets_Georg...](http://www.grmn00bs.com/Shmoocon2011_SmartphoneBotnets_GeorgiaW.pdf)

Video: <http://vimeo.com/19372118>

------
chibea
Actually, the fact that the malware contained a root exploit is fixable
sooner or later.

The next insight will be that even if the sandbox had worked, this type of
attack would still be possible: it uses the user's trust in the brand of a
well-known app to turn the permissions granted to that app to malicious ends.
There's no easy way to avoid that up front automatically.

------
Estragon
So, was there any app-scanning software capable of detecting this before the
warning? I have Lookout installed on my droid, but I have always wondered at
its effectiveness. It has a "there are no crocodiles, so it must be a great
crocodile repellent" feel to it...

------
jgroome
I know it's probably off-topic, but would it have killed them to link to the
original reddit thread?

They were more than happy to copy the whole story and quote verbatim from the
thread. Strikes me that a direct link would have been the polite thing to do.

------
ordinaryman
For developers who need to provide mobile access to their hosted apps, this
event makes it easy to decide on a path with regard to the "native apps vs.
web apps?" question.

~~~
JonnieCache
Except that as discussed, there are plenty of code execution flaws in mobile
builds of webkit that will do just as well as any flaw in the native app API.

One XSS or SQL injection vuln in your webapp and your users could be just as
rooted as the victims of this malware. Exactly like in the desktop browser
world.

------
sigzero
As far as I know (I was at a security conference in January), there is no
code signing and no sandbox on Android. That is not a good scenario for
"security".

~~~
metageek
You are absolutely wrong. All apps in the Market are signed, and apps are
sandboxed by running them with separate Unix userids.
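A rough simulation of that principle: each app's private files are created owner-only, so the kernel's ordinary permission checks keep processes running under other apps' UIDs out. (This sketch only inspects the mode bits; actually switching UIDs requires root.)

```python
import os
import stat
import tempfile

# Simulate an Android per-app data dir: files are created mode 0600,
# so a process under a *different* app's UID cannot read them. We
# inspect the permission bits rather than actually switching users.

def create_private_file(directory: str, name: str, data: bytes) -> str:
    path = os.path.join(directory, name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # owner-only
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

def readable_by_other_uid(path: str) -> bool:
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

with tempfile.TemporaryDirectory() as d:
    p = create_private_file(d, "prefs.xml", b"<secret/>")
    print(readable_by_other_uid(p))  # False: group/other bits unset
```

This is exactly why the root exploits matter: once code runs as root, the UID boundary the sandbox relies on no longer applies.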

~~~
sigzero
Okay I am wrong.

