It's not an attack in its own right, but a powerful step in a non-elevated attack against Apple products.
If someone were to end up running malicious code (say, via a recent Flash Player zero-day code-execution exploit), they could, without needing administrative rights, install this plugin into the iTunes environment for that same user.
Why is this a problem?
This plugin successfully MITMs an iTunes purchase and extracts your Apple ID and plaintext password. Proxying network traffic at the machine level to attack SSL would normally require admin rights.
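To see why no proxy or admin rights are needed, here's a minimal sketch of the in-process-hooking idea. `Session` and `login` are made-up names for illustration, not real iTunes APIs: a plugin loaded into the host process can wrap the credential path and read the plaintext before TLS encryption ever happens.

```python
# Illustrative sketch only: Session/login are hypothetical names, not
# real iTunes APIs. A plugin loaded into the host process can hook the
# credential path *before* encryption, so no machine-level proxy (and
# no admin rights) is needed.

captured = []

class Session:
    """Stand-in for a client that encrypts credentials on send."""
    def login(self, apple_id, password):
        # In the real client the credentials would be TLS-encrypted
        # here, before ever touching the network.
        return "encrypted-blob"

_original_login = Session.login

def hooked_login(self, apple_id, password):
    captured.append((apple_id, password))  # plaintext, pre-encryption
    return _original_login(self, apple_id, password)

Session.login = hooked_login  # the "plugin" patches the host in-process

result = Session().login("user@example.com", "hunter2")
print(captured)  # [('user@example.com', 'hunter2')]
```

The same trick works in any plugin architecture that loads untrusted code into the host's address space.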
Now that the attacker has the Apple ID, they can:
- Send a remote lock or wipe command to the user's Apple devices that are using Find My Mac/iPhone.
- Log in to Messages and receive copies of everything sent to the user and everything they say to everyone. (They will get an email notification about the new device, mind you.)
- Potentially find out their home address, phone number, and real name from the Apple service portal if they've ever taken a device in to be worked on (turning this into an identity-theft attack of sorts).
Just an FYI if you're not familiar with what an Apple ID does for the average Apple user.
Mind you, without admin rights, fake alert software could also be installed on the Mac, prompting a user for credentials of some sort with a dialog specially crafted to imitate the real thing.
The difference here, however, is that it is all legit dialogs. There are no crazy unknown processes listed on the machine that a more paranoid person might notice (once the iTunes plugin is dropped). The iTunes process itself has been made malicious.
I, for one, will be changing ownership and permissions on the plugin folder on my machines to make it non-writeable without elevation.
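A rough sketch of that lockdown, demonstrated on a temp directory rather than the real "~/Library/iTunes/iTunes Plug-ins" path. Note that a real fix would also need a `sudo chown root:wheel` on the folder, since the owning user can always chmod their own directory back:

```python
# Sketch: strip the write bit from a plugin folder so dropping files
# into it requires elevation. Demonstrated on a temp dir, not the real
# iTunes path; chown'ing it to root:wheel (which needs sudo) would be
# required to stop the owner from just chmod'ing it back.
import os
import stat
import tempfile

plugin_dir = tempfile.mkdtemp()  # stand-in for the iTunes plugin folder

# r-xr-xr-x: readable and traversable, but no write bit for anyone
os.chmod(plugin_dir, stat.S_IRUSR | stat.S_IXUSR |
                     stat.S_IRGRP | stat.S_IXGRP |
                     stat.S_IROTH | stat.S_IXOTH)

mode = stat.S_IMODE(os.stat(plugin_dir).st_mode)
print(oct(mode))  # 0o555
```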
In fact, I now wonder how far that extends: if I have Xcode plugins installed, should I worry about my developer account session in Preferences, which auto-downloads and can create signing keys for apps? It's a thought...
Anyone able to use this detail in an actual "attack" assuredly has many other avenues to carry out such an attack, and will continue to have such avenues unless and until Windows and Mac OS are at least as locked down as iOS.
I have no idea what your statement is about.
And all you did was echo the lower portion of my own comment, where I admitted that "fake alert" style applications could also capture these details. And I'm sure other styles of attack could as well.
The author of the article was trying to make a simple point, though: if Apple allows an iTunes plugin such low-level access that it can proxy a store transaction (arguably the thing they should be most paranoid about), then they should probably revisit their plugin architecture, possibly taking a page from web browser plugin sandboxing.
Claiming there will always be problems until the OS is as locked down as iOS is overkill.
- Add trusted CA certificates.
- Configure proxy settings.
- Launch arbitrary applications.
- Delete all the user's files.
What you're talking about here is sandboxing to protect the user from themselves, and all the lost utility that results. In other words, iOS.
- Configure proxy settings: prompt for admin credentials
- Launch arbitrary applications: already admitted as much
- Delete all the user's files: yup, that's a problem
But the problems I illustrated aren't limited to the single machine that gets infected.
You've already social-engineered your way to getting somebody to download and run an application, ignoring warnings along the way. Prompts for credentials when installing applications are perfectly normal.
> Configure proxy settings: prompt for admin credentials
??? Not on my machine. There isn't even a padlock icon in the relevant window.
IIRC this isn't protected by the actual keychain file, and can be done by simply editing the file manually.
Regardless, peer commentator already noted that password prompts are hardly an issue here.
> - Configure proxy settings: prompt for admin credentials
Actually, it doesn't. Additionally, browsers have per-user local configuration.
Browser plugin sandboxing is a very new phenomenon. Apple doesn't give a shit about security because... they said they'll investigate doing something that has only recently been done for the first time at all? What?
The context of downloading and executing a full fledged app fits with your implication that the user should know she'll be "owned" by a mistake. (Despite this, Apple offers mitigations in terms of warnings and code signing, but I take your point.)
The implication of downloading and installing something that uses what Apple itself calls a "plugin" API is different. There is an implication of sandboxing, and while you can debate the limits, most reasonable people would NOT expect that an iTunes plugin gets access to naked Apple account credentials.
Sandboxing of plugins is the exception, not the rule.
That does little to advance the point, though. It's like saying "well, exploits happened because in C you can write past the array boundary." Yes, that is how some of them happen. But explaining the mechanism doesn't justify or solve the problem, and it takes too narrow a view.
I guess one can ask:
1) Should plugins be allowed?
2) Should plugins run in the same memory space as the parent process? There are other ways. How come a Google Chrome tab doesn't usually manage to corrupt or block another Chrome tab? Hmm, interesting. Maybe set up a socketpair, shared memory, or a message-and-mailbox system.
3) Should signing, verifying and vetting of these plugins be done differently?
This is what makes a desktop computer useful. If you don't trust code, don't run it.
I also recommend not aiming loaded weapons at your own foot.
Then plugins like you had even back in the Winamp days, like MilkDrop, can stay in the player, where they belong, and the store stuff can be kept separate. But even this approach is less than perfect, so I don't know really what a great solution would be.
Thinking about it, heck, a browser can even execute downloaded native code as well; that's the PNaCl framework.
I wouldn't expect, say, VMware, or the OpenType rendering engine, to hold up if millions of people were throwing arbitrary foreign code at them each day. But people don't: the code these VMs see is mostly trusted code they either created themselves or got from a reputable source, and it's largely the same 1,000-2,000 blobs.
I think from their POV, as long as apps installed through the Mac App Store are safe (through the sandbox), what other code you install (such as full-privilege unsigned code and iTunes plugins) is up to you.
> The plugin folder is writable by current logged in user so a trojan dropper can easily load a malicious plugin
A trojan could also replace the iTunes dock icon with a fake iTunes that does the same thing. Once you're running unsigned code on the machine, all bets are off.
Downloading the zip file linked to by the OP gives me a folder full of source code, not an "iTunes plugin". Even if it did give me a built plugin, I wouldn't have a clue what to do with it absent further instructions. This is a very poor demonstration of whatever they're trying to prove.
The README says "Copy to ~/Library/iTunes/iTunes Plug-ins to install."
So, they're expecting the "casual user" to copy a file to a hidden directory? Good luck with that.
I did a quick search for iTunes plugins. They all seem to come in some sort of executable installer, thus being subject to the ordinary warnings. The OP even suggests as much, with "a trojan dropper can easily load a malicious plugin". How does that "trojan dropper" get on the system?
This isn't a security hole, it's a wannabe scriptkiddy who wants to make noise.
The most iTunes could possibly do is display a warning dialog on unsigned plugins. Not a bad idea, perhaps, but its absence is hardly a flaw. You're already postulating sufficient social engineering that I can't believe the warning would stop anyone.
Or maybe that your visualizer/screensaver plugins should have access to your credit cards?
Worries the crap out of me, personally. :)
On the Mac, iTunes plugins are generic ".bundle" files that don't open iTunes when double-clicked, and need to be manually installed via drag-and-drop into the appropriate folder.
These .bundle files DO seem to circumvent the untrusted code warning dialog though, which I'd say is an actual vulnerability here.
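That warning keys off the "com.apple.quarantine" extended attribute that browsers set on files they download. A quick hedged sketch for checking whether a .bundle even carries it; note that the stdlib's `os.listxattr` is only available on some platforms (on macOS you'd typically shell out to the `xattr(1)` tool instead):

```python
# Hedged sketch: Gatekeeper's download warning is driven by the
# "com.apple.quarantine" extended attribute. This checks whether a
# given path carries it, falling back to False where the stdlib has
# no xattr support.
import os
import tempfile

def is_quarantined(path):
    if not hasattr(os, "listxattr"):
        return False  # platform without stdlib xattr support
    try:
        return "com.apple.quarantine" in os.listxattr(path)
    except OSError:
        return False  # filesystem without xattr support

# A file created locally (not downloaded) has no quarantine attribute:
f = tempfile.NamedTemporaryFile(delete=False)
f.close()
print(is_quarantined(f.name))  # False
```

If the plugin loader stripped or honored this attribute the way app launching does, the .bundle loophole would close.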
I don't know if there are other ways to achieve this with local attacks, but it looks like a middle-of-the-road issue: quite serious, but it also requires the target system to be already compromised.
I have an even easier attack to implement: a menu-bar-less app that uses system APIs to bring iTunes to the front and then shows a fake iTunes password window. Make up some excuse in the text for why it needs your password. You don't even have to mess around with breakpoints; most users will probably type their password in, and you don't even have to wait until they buy something from iTunes!
Maybe I'm missing something, but if you allow for the fact that a virus is already on your system, then you've already lost. It'd be one thing if iTunes was allowing the equivalent of root escalation, but it doesn't sound like it's even doing that.
(I had to do that for https://github.com/rcarmo/HJKLPlugin to work in Mavericks)
Apple's solution to this is the Mac App Store. If you can get an app into the Mac App Store and then break out of the sandbox to install this iTunes plugin, THEN you have a post.
iOS has been refreshingly malware-free using this model (even though there have been holes there as well obviously), and it's clear why they're bringing it to the Mac.
> If you think it's ok to have iTunes binary not writable by a normal user but then fully controlling it from a plugin then what can I say?
I agree that not having an unsigned code warning in iTunes for new plugins is a major oversight, and a break in the Apple Model. But with that in place, saying "but plugins can do anything in iTunes" is like saying "but a *.com-filtered Chrome extension can intercept all my passwords!". And guess what? Google are also limiting all the plugin extensions to their Web store...