AppleDoesntGiveAFuckAboutSecurity iTunes Evil Plugin Proof of Concept (put.as)
85 points by mafuyu on Feb 15, 2014 | 49 comments

People unfamiliar with the Apple ecosystem of products might not realize what a problem this is.

It's not an attack in its own right, but a powerful step in a non-elevated attack against Apple products.

If someone were to end up running malicious code (say, due to a recent Flash Player zero day code execution exploit ...), without the need for administrative rights, they could install this plugin into the iTunes environment for that same user.

Why is this a problem?

This plugin successfully MITM attacks an iTunes purchase and extracts your Apple ID and plaintext password. Network proxying at the machine level to attack SSL traffic would normally require admin rights.
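The danger of an in-process plugin hooking a credential flow can be sketched with a toy example (Python stand-ins; the class and method names here are illustrative, not the PoC's actual iTunes internals, which use debugger-style breakpoints on iTunes code):

```python
# Sketch of why in-process plugins are dangerous: the "plugin" wraps a
# host function and sees its plaintext arguments before passing them on.
# All names here are hypothetical stand-ins for iTunes internals.
class Store:
    def sign_in(self, apple_id, password):
        return f"token-for-{apple_id}"

captured = []

def evil_plugin(store):
    real = store.sign_in
    def hooked(apple_id, password):
        captured.append((apple_id, password))  # exfiltrate plaintext creds
        return real(apple_id, password)        # behave normally otherwise
    store.sign_in = hooked

store = Store()
evil_plugin(store)            # what loading the malicious bundle amounts to
store.sign_in("user@example.com", "hunter2")
print(captured)
```

Because the hook forwards to the real implementation, the purchase succeeds and nothing looks wrong to the user.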

Now that the attacker has the Apple ID, they can:

- Send a remote lock or wipe command to the user's Apple devices that are using Find My Mac/iPhone.

- Log in to Messages and receive copies of everything sent to the user and everything they say to everyone. (The user will get an email notification about a new device, mind you.)

- Potentially find out their home address, phone number, and real name from the Apple service portal if they've ever taken a device in to be worked on (turning this into an identity-theft attack of sorts)


Just an FYI if you're not familiar with what an Apple ID does for the average Apple user.

Mind you, without admin rights, fake alert software could also be installed on the Mac, prompting a user for credentials of some sort with a dialog specially crafted to imitate the real thing.

The difference here, however, is that it is all legit dialogs. There are no crazy unknown processes listed on the machine that a more paranoid person might notice (once the iTunes plugin is dropped). The iTunes process itself has been made malicious.

I, for one, will be changing ownership and permissions on the plugin folder on my machines to make it non-writeable without elevation.
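For anyone wanting to do the same, here's a minimal sketch of stripping the write bits (the plugin path is the one from the PoC's README; note that as long as the user still owns the folder, code running as that user can simply chmod it back, which is why changing ownership, requiring sudo, is the stronger half of the fix):

```python
import os
import stat
import tempfile

def make_non_writable(path):
    """Clear all write bits so a non-elevated dropper can't add bundles here."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# Demo on a temp dir standing in for "~/Library/iTunes/iTunes Plug-ins"
demo = tempfile.mkdtemp()
make_non_writable(demo)
print(oct(os.stat(demo).st_mode & 0o222))  # no write bits remain
```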

Changing ownership and permissions on the plugin folder won't help if an end user wants the plugin. Suddenly all iTunes plugins can't be trusted, and shouldn't have been installed in the past.

In fact, I now wonder how far that extends -- if I have Xcode plugins installed, should I worry about my developer account session in Preferences, which auto-downloads and can create signing keys for apps? It's a thought...

In my situation, I've never used an iTunes plugin and see no need to personally. So the solution helps me, but you're right that it's a stopgap measure that's not universally applicable.

There are three Macs, five iPhones, and five iPads in active use in my family. Additional Macs, iOS, and iPod devices sit idle or have been given away. I am intimately familiar with the ecosystem.

Anyone able to use this detail in an actual "attack" assuredly has many other avenues to carry out such an attack, and will continue to have such avenues unless and until Windows and Mac OS are at least as locked down as iOS.

... Nothing about this article or my comment said anything about security in iOS.

I have no idea what your statement is about.

If you think my comment was about the security of iOS, you need to read it again. Carefully.

You're right. I re-read what you wrote.

And all you did was echo the lower portions of my own comment where I admitted that "fake alert" style applications could also take these details. And I'm sure other styles of attack as well.

The author of the article was trying to make a simple point though: If Apple allows an iTunes plugin such low-level access that it can proxy a store transaction - arguably the thing they should be most paranoid about - then they should probably revisit their plugin architecture (possibly taking a page from web browser plugin sandboxing).

Claiming there will always be problems until the OS is as locked down as iOS is overkill.

There are a million ways to proxy a store transaction. Guess what else the user can do:

- Add trusted CA certificates.

- Configure proxy settings.

- Launch arbitrary applications.

- Delete all the user's files.

What you're talking about here is sandboxing to protect the user from themselves, and all the lost utility that results. In other words, iOS.

- Add a trusted CA: will prompt for credentials

- Configure proxy settings: prompt for admin credentials

- Launch arbitrary applications: already admitted as much

- Delete all the user's files: yup, that's a problem

But the problems I illustrated aren't limited to the single machine that gets infected.

Just clarifying.

> Add a trusted CA: will prompt for credentials

You've already social-engineered your way to getting somebody to download and run an application, ignoring warnings along the way. Prompts for credentials when installing applications are perfectly normal.

> Configure proxy settings: prompt for admin credentials

??? Not on my machine. There isn't even a padlock icon in the relevant window.

> - Add a trusted CA: will prompt for credentials

IIRC this isn't protected by the actual keychain file, and can be done by simply editing the file manually.

Regardless, peer commentator already noted that password prompts are hardly an issue here.

> - Configure proxy settings: prompt for admin credentials

Actually, it doesn't. Additionally, browsers have per-user local configuration.

Let's assume you're right and Apple should revisit it (I don't think they should; I prefer plugins that can, in fact, do anything they wish). How does that lead to the OP's hysterical conclusion?

Browser plugin sandboxing is a very new phenomenon. Apple doesn't give a shit about security because... they said they'll investigate doing something that has only recently been done for the first time at all? What?

This sounds like a "Other side of this airtight hatchway" security non-flaw. If you can convince a user to run arbitrary code, then it doesn't matter whether it's an iTunes plugin or something else. You're owned either way.

It's tempting to be this reductive but it ignores context and specifically the implications of the word "plugin."

The context of downloading and executing a full fledged app fits with your implication that the user should know she'll be "owned" by a mistake. (Despite this, Apple offers mitigations in terms of warnings and code signing, but I take your point.)

The implication of downloading and installing something that uses what Apple itself calls a "plugin" API is different. There is an implication of sandboxing, and while you can debate the limits, most reasonable people would NOT expect an iTunes plugin to get access to naked Apple account credentials.

It's easy to now say "you should trust no sandbox." But practically speaking, most of us need to do this all the time. Do we verify the security of all the JavaScript sandboxes we run, to ensure the millionth Chrome update didn't introduce a sandbox hole? No, we do not. We trust that if the sandbox has a hole, someone will blow the whistle. Which is happily what is happening here.

There is no implication of sandboxing. Even sandboxing browser plugins is new. You should never assume a plugin in any application is sandboxed, because that has never been true about any application at all until the past few years, and is still not true about virtually everything except browsers.

Sandboxing of plugins is the exception, not the rule.

It's strange that people have so much trouble understanding this.

So.. If a user runs arbitrary HTML or JS in their browser, they are owned?

A bit different from something that inherently has full access to the parent process's memory space, like an iTunes plugin, no?

At the technical level that is the mechanism that makes it possible.

That does little to advance the point though. It is like saying "well exploits happened because in C you can write over the array boundary". Yes that is how some of them happen. But explaining the mechanism doesn't justify or solve the problem and it takes too low of a view.

I guess one can ask:

1) Should plugins be allowed?

2) Should plugins run in the same memory space as the parent process? There are other ways. How come a Google Chrome tab doesn't usually succeed in corrupting or blocking another Chrome tab? Hmm, interesting. Maybe set up a socketpair, shared memory, or a message-and-mailbox system.

3) Should signing, verifying and vetting of these plugins be done differently?
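The socketpair idea in point 2 can be sketched in a few lines: the host forks the "plugin" into its own process and only ever exchanges bytes with it, so the plugin never sees the host's memory. (A toy protocol, obviously, not Chrome's actual IPC.)

```python
import os
import socket

# Host and "plugin" talk over a socketpair instead of sharing an address
# space. The plugin process sees only the bytes the host chooses to send.
parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:                       # plugin process
    parent_sock.close()
    request = child_sock.recv(1024)
    child_sock.sendall(b"visualized:" + request)
    os._exit(0)

child_sock.close()                 # host process
parent_sock.sendall(b"track-metadata")
reply = parent_sock.recv(1024)
parent_sock.close()
os.waitpid(pid, 0)
print(reply.decode())
```

The trade-off is real, though: every capability the plugin needs has to be explicitly exposed over the channel, which is exactly the "lost utility" the other commenter is talking about.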

1) Yes, 2) Yes, 3) No.

This is what makes a desktop computer useful. If you don't trust code, don't run it.

I also recommend not aiming loaded weapons at your own foot.

I mostly agree; my only quibble is that iTunes should be split into the player component, which hosts plugins, and the store component.

Then plugins like MilkDrop, which you had even back in the Winamp days, can stay in the player, where they belong, and the store stuff stays separate. But even this approach is less than perfect, so I don't really know what a great solution would be.

The browser is unique for using the JS virtual machine to sandbox its executed code. It's very different from running a downloaded executable.

The browser is not unique. There are other virtual machines out there. There is a slew of IPC mechanisms, OS facilities, containers, virtualization, chroot environments. Maybe they are not practical for this particular problem, but saying the browser is magic because it uses a scripting-language VM isn't true in general.

Thinking about it, heck, a browser can even execute downloaded executables as well -- that is the PNaCl framework.

I think what the parent meant was that the browser is unique in actually having an attack surface (the JS VM sandbox) that has been hardened by millions of people hitting it with arbitrary foreign code all day. Just because something else on your system happens to be a virtual machine does not mean its security has been ascertained to nearly this extent.

I wouldn't expect, say, VMWare, or the OpenType rendering engine, to hold up if millions of people were throwing arbitrary foreign code at them each day. But people don't--the code these VMs see is mostly trusted code they either created themselves, or got from a reputable source, and it's heavily the same 1000-2000 blobs.

MacOS uses a sandbox security model for all major apps now -- but automatically and silently loads third-party iTunes plugins into the iTunes app's security context.

Can you install iTunes plugin via a browser with no user interaction?

Depends on the 0day.

Right, because I often expect visualizer plugins to not be sandboxed and have full control over my iTunes Store transactions... not. :)

Sounds more like AppleDoesntGiveAFuckAboutiTunesPlugins.

I think from their POV, as long as apps installed through the Mac App Store are safe (through the sandbox), what code you install (such as full-privilege unsigned code and iTunes plugins) is up to you.

> The plugin folder is writable by current logged in user so a trojan dropper can easily load a malicious plugin

A trojan could also replace the iTunes dock icon with a fake iTunes that does the same thing. Once you're running unsigned code on the machine, all bets are off.

The problem here is that it's "just" a plugin. Think of the problem like malicious sites targeting Windows that state you need to download a codec to view certain media. The casual user will not receive any sort of warning when they download it, and when iTunes opens, they have the keys to the castle. Also, given that a lot of users use the same password for everything, getting the iTunes user's password means you have a good chance at snagging root at the same time.

Download what, exactly?

Downloading the zip file linked to by the OP gives me a folder full of source code, not an "iTunes plugin". Even if it did give me a built plugin, I wouldn't have a clue what to do with it absent further instructions. This is a very poor demonstration of whatever they're trying to prove.

The README says "Copy to ~/Library/iTunes/iTunes Plug-ins to install."

So, they're expecting the "casual user" to copy a file to a hidden directory? Good luck with that.

I did a quick search for iTunes plugins. They all seem to come in some sort of executable installer, thus being subject to the ordinary warnings. The OP even suggests as much, with "a trojan dropper can easily load a malicious plugin". How does that "trojan dropper" get on the system?

This isn't a security hole, it's a wannabe scriptkiddy who wants to make noise.

It's a proof of concept showing a flaw in a core, non-sandboxed application. Someone with better social engineering and/or an interest in targeting specific people could turn this into something rather nasty.

No, it's demonstrating (poorly) the ordinary functionality of application plugins.

The most iTunes could possibly do is display a warning dialog on unsigned plugins. Not a bad idea, perhaps, but its absence is hardly a flaw. You're already postulating sufficient social engineering that I can't believe the warning would stop anyone.

So you're saying that a malicious iTunes visualizer plugin won't get installed and get past antivirus?

Or maybe that your visualizer/screensaver plugins should have access to your credit cards?

Worries the crap out of me, personally. :)

I'm saying once you download and run arbitrary native code, all bets are off. Hell, I just kind of assume any such code has full root access no matter what user it's started as, given the commonality of privilege escalation vulnerabilities.

No, you dumb ass, I am expecting a malware dropper to copy it, where the malware dropper can be any application that you download. Now go reverse the different CoinThief samples and learn a thing or two.

Thank you for confirming you're just an angry scriptkiddy looking to make a name for yourself. We all now know for certain that you can be safely ignored.

I forgot about Windows, and cannot speak to it.

On the Mac, iTunes plugins are generic ".bundle" files that don't open iTunes when double-clicked, and need to be manually installed via drag-and-drop into the appropriate folder.

These .bundle files DO seem to circumvent the untrusted code warning dialog though, which I'd say is an actual vulnerability here.

Am I missing something here? It sounds like all he's complaining about is that iTunes plugins don't execute in a separate address space from iTunes itself, which is hardly unusual for most applications that have a plugin architecture. Maybe I am missing something. The amount of hand wringing and hyperbole implies a much worse issue but I don't see what it is.

It seems that the only real issue is that it circumvents Keychain, which would normally prompt before giving any protected value to unknown code (not signed or already prompted). So a plugin can silently access protected data that iTunes can fetch from the Keychain.

I don't know if there are other ways to achieve that with local attacks, but it looks like a middle-of-the-road issue. Quite serious, but it also requires the target system to be already compromised.

As others have said, if you've already convinced the user to run untrusted unsandboxed code then it doesn't matter whether it installs an iTunes plugin to do its dirty business.

I have an even easier to implement attack - a menu bar-less app that shows an iTunes password window after using system APIs to bring iTunes to the front. Make up some excuse in the text for why it needs your password. You don't even have to mess around with breakpoints, most users will probably type their password - and you don't even have to wait until they buy something from iTunes!

The plugin folder is writable by current logged in user so a trojan dropper can easily load a malicious plugin.

Maybe I'm missing something, but if you allow for the fact that a virus is already on your system, then you've already lost. It'd be one thing if iTunes was allowing the equivalent of root escalation, but it doesn't sound like it's even doing that.
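For context, the "dropper" step the quoted line describes is literally just a copy into a folder the logged-in user already owns -- no sudo, no dialog. A sketch with stand-in paths (the real target is ~/Library/iTunes/iTunes Plug-ins, per the PoC's README):

```python
import os
import shutil
import tempfile

home = tempfile.mkdtemp()                    # stand-in for ~
plugin_dir = os.path.join(home, "Library", "iTunes", "iTunes Plug-ins")
os.makedirs(plugin_dir)

payload = tempfile.mkdtemp(suffix=".bundle") # stand-in malicious bundle
with open(os.path.join(payload, "Info.plist"), "w") as f:
    f.write("<plist/>")

# A plain copy is the entire "install" -- iTunes loads it on next launch:
shutil.copytree(payload, os.path.join(plugin_dir, "Evil.bundle"))
print(os.listdir(plugin_dir))
```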

From what I've seen, Apple is like any other company. A part of the company understands security. Another part thinks it understands security but only pays lip service to it and in reality doesn't understand security and considers it a pain.

Weird. At least on Mail.app and Mavericks, plugins need to be properly code signed, etc.

(I had to do that for https://github.com/rcarmo/HJKLPlugin to work in Mavericks)

Yes, OP mentioned that it was fixed in Mavericks. Since Mavericks is free, I don't really see the problem. If you are running a < 2007 Mac you really don't care about security anyway.

This article should be renamed, "How I hacked iTunes to steal your credit card information (probably)", to get more votes on Hacker News.

Why are you all justifying such a security flaw? If Adobe Flash can be exploited when you click a button, should it not be patched?

I am quite amazed at how a few people are defending a broken model and arguing about "arbitrary code". Every single piece of code you download is arbitrary. Hell, even the operating system is. You have no idea what really runs on your computer; you trust that there is nothing malicious hidden in most apps. Trust, nothing more than that. So it's very easy to own all your Macs and steal financial information without a single suspicious prompt for higher privileges. If you think it's OK to have the iTunes binary not writable by a normal user but then fully controllable from a plugin, then what can I say? I hope your data is not valuable ;-)

> Every single piece of code you download is arbitrary.

Apple's solution to this is the Mac App Store. If you can get an app into the Mac App Store and then break out of the sandbox to install this iTunes plugin, THEN you have a post.

iOS has been refreshingly malware-free using this model (even though there have been holes there as well obviously), and it's clear why they're bringing it to the Mac.

> If you think it's ok to have iTunes binary not writable by a normal user but then fully controlling it from a plugin then what can I say?

I agree that not having an unsigned code warning in iTunes for new plugins is a major oversight, and a break in the Apple Model. But with that in place, saying "but plugins can do anything in iTunes" is like saying "but a *.com-filtered Chrome extension can intercept all my passwords!". And guess what? Google are also limiting all the plugin extensions to their Web store...
