Apple Support Allowed Hacker Access to Reporter's iCloud Account (macrumors.com)
314 points by antr on Aug 5, 2012 | 177 comments



It seems logical that the easiest attack vector for any type of cloud storage is through social engineering. You're essentially protecting potentially valuable or incriminating data behind millions of dollars worth of firewalls, encryption and other technology... or a customer service representative paid $10-15/hr, if that.

Depending on how valuable the data is to you, it might be easier to just pay off a CSR and then stage a phone call in which you "convince" that CSR that you are the account holder. The CSR will get fired, but probably won't go to jail unless collusion can be proven. And then they can either find a new job or, depending on which country they live in, live nicely off the money for a while.

I'm not sure how to solve this problem, except by having highly paid and specially trained CSRs that do the account resetting, or by never allowing resetting ever, and if you forget your password and your security questions, you're SOL.

I have to admit this only makes me more leery of putting anything on cloud storage, although my own personal data is pretty useless to anyone, which is my only saving grace. Others who are more important might need to think twice about relying on these types of services.


I'll confess, I honestly didn't even consider the possibility that the hacker just social-engineered Apple support. I mean, Mitnick wrote an entire book about that kind of stuff, and the whole HBGary thing went down in sort of the same way, but ... still, to be able to call up the support department of a major technology (!) company, in 2012, pretending to be someone else and get access to their account that way? Apple didn't send a text message to his number-on-file? They didn't try a callback? Were there any challenge-response questions at all?

That's absurd.

This should make every iCloud user reeeeeaally nervous.


A good social engineering attack knows more about me than I do. They know my first pet, my mother's maiden name, and where all my banking records are. Lots of ways of getting that. Notice - the call was because the guy's _phone_ was inoperable. A callback could go to a burner, and Apple would be none the wiser.

Very few, if any, defenses against social engineering, other than (A) Not allowing it, or (B) Requiring a Notarized-registered-letter of identification to start the process.

I'm a fan of using Notaries for password resets. Particularly to my email account, as it's the most valuable thing I own. Double-notarize in the event of two-factor resets. Make it a HUGE burden. Lock me out of email for a week or two if required, but don't give anyone access to my email.


This should make every user of every online service really nervous. It sort of makes the Google/Facebook model of "it's impossible to actually talk to a human" look good.


Indeed, but even companies who don't offer a phone-based customer support service can be susceptible to basic social engineering.

When Facebook was still granting new users access by checking that their email matched a school's domain, I was able to make accounts at multiple schools by sending a forged email (claiming to originate from the school domain) to Facebook support saying something like: "I never received the confirmation email. Will you please activate my account at fakename@targetschool.edu?"

And it worked 90% of the time.

I wonder how many websites nowadays would be susceptible to a targeted and personalized forged email to customer service (especially since emails are frequently used to prove account ownership).


You know, as much as I laughed at your comment I think you have a point here.

The long time it takes for them to even answer an email (if at all) would probably give you a heads-up about anything fishy going on in your account. Secondly, unless your account is actually worth the wait, attackers would probably go after an easier target instead of Google or Facebook.


True, however it also means if your account does get hacked, you will have to wait weeks until they respond to your plea for help (if they respond at all).

I think neither of these is the solution. If you can't talk to a human you'll never get help if you're locked out. If human support is available, there is always a chance they'll hand over your account to some scammer. Two-factor authentication means you'll be screwed if your second medium is unavailable or hijacked.

Maybe the only way to protect yourself is having independent (offline?) backups that only you can control. Sadly, that's not an option for a lot of walled-garden services such as Facebook or iCloud.


Bluehost requires ID on file before granting SSH access. Seems to me having that link to a meat-space audit trail should be required. In that sense, it would seem to me Google and Facebook should not only offer but demand two-factor authentication to a node in their social graph.


Security through support obscurity?


Not really. More like having a smaller attack surface: there are fewer people who could authorize a reset, they are better paid, and they are centralized (rather than being underpaid store clerks).


> Apple didn't send a text message to his number-on-file?

"Sir, we are just going to need to send an SMS to your phone number"

"Ok which number is that.. "

"it is the 065 488 48.."

"..that is my old work phone, which I no longer have access to"

Works all the time. The trick with social engineering is to ask for a little at a time. You don't call up and say "I don't have my phone, email, or password, nor do I know my mother's maiden name.. please let me into my account"

You take it all a step at a time and give it a narrative, just like a real user in the predicament would (and I have been in the predicament and called Apple). Works almost every time.


My father-in-law told me yesterday about a novel he's reading. The thief wanted to test the target's security system, so he threw clods of dirt over the fence until security came out and investigated. For a week. Then he threw cats over (good luck finding a pissed off cat in the middle of the night). Then they declared the alarm dysfunctional and posted a sign in the security shack: until further notice, disregard alarms between 0300 and 0500.

Makes you wonder whether, when you offer accessible customer service at scale, eventually it's not even the guy on the phone that's the problem: the person writing the policy cries uncle.


People have been doing this for years to steal cars. Set off its alarm every night till the owner disables it.


"Humans are the security hole that can never be patched."

Social engineering will always work.


There was an article about HFT recently [0] that mentioned a case where social influence is of small importance: the game is played beyond human capabilities even when really needed.

[0] http://news.ycombinator.com/item?id=4339531


On the positive side, perhaps the publicity will cause Apple to tighten up. They have demonstrated that they are serious about security.


I don't know, I have mixed feelings about this. It's akin to building even more inscrutable captchas or tightening up airport security measures every time a new breach happens. At best it might close one particular loophole, but at what cost and inconvenience to millions of people and billions of transactions?

I had the misfortune to lock myself out of my bank account once or twice and the process for unlocking it was so dreadful (a 30-minute interrogation with questions like "when and where did I make my last ATM transaction") that since then I keep the required sensitive info in a GPG-encrypted file so that I never have to call them again. Other equally frustrated but less tech-savvy customers are probably doing the same with post-it notes. Is this an improvement?


I was out of town and went to make a large cash purchase. (The retailer added a very hefty 10% for using debit or credit cards.) So I ran into the daily cash withdrawal limit at the ATM. I also did not have anything with me other than an ATM card and a credit card (no ID). Turned out the bank didn't even ask for my ID when I went in. I just explained my situation and they handed over a couple thousand...

Security at the bank seems discretionary at best.


> Security at the bank seems discretionary at best

No, it is a cost benefit decision. Do you know they don't check the signature on cheques or credit card transactions? Heck I bet if you mail in a change of address they will go ahead and do it, possibly sending something to your old address.

The reality is that fraud is at low levels compared to legitimate transactions. Putting in lots of extra hoops just makes the legitimate transactions harder, and chances are it won't affect those trying to commit fraud since they have a wide variety of things to try while tellers don't (eg fake id in this case).

In this specific case, anyone coming into the branch is on security cameras inside and out. TV shows, the Internet and technology make it increasingly easier to match up the footage with real people. And the bank doesn't bear the full costs of any investigation since they are passed off to the police/FBI.

If you ran the bank would you add a dollar in expenses and one minute per transaction that has a 10% chance of catching fraud, and fraud occurs one in every 25,000 transactions? Would you have the same measures in every branch across the country or have their expense and severity proportional to the amount of fraud that does actually happen at any location?
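
Back-of-the-envelope, using those (made-up) numbers:

    # Hypothetical figures from the paragraph above.
    transactions = 25_000        # one fraud expected per this many transactions
    catch_rate = 0.10            # chance the extra check catches that one fraud
    cost_per_check = 1.00        # dollars of added expense per transaction
    minutes_per_check = 1        # added time per transaction

    dollars_spent = transactions * cost_per_check           # $25,000
    hours_burned = transactions * minutes_per_check / 60    # ~417 hours
    frauds_caught = catch_rate                               # 0.1 per 25,000 checks

    print(f"${dollars_spent:,.0f} and {hours_burned:.0f} hours to catch "
          f"{frauds_caught:.1f} frauds")

That works out to roughly a quarter of a million dollars of expense per fraud actually caught, which is why the check gets skipped.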

Despite what we see in films and TV shows, bank robbery is a pitifully poor way to make money:

http://www.thefiscaltimes.com/Articles/2012/06/11/Why-Robbin...

http://crimeblog.dallasnews.com/2009/04/new-bank-robbery-sta...


I totally agree with you, except on two points:

1) Cost benefit analysis and discretionary security is not mutually exclusive. It's cost benefit analysis ergo discretionary security.

2) Crime pays. You just have to be sophisticated and powerful enough to not be indicted. (TARP?)

The interesting thing about your comment is when you apply your logic towards combating terrorism. The cumulative harm of prevention of terrorism outweighs the damage and death caused by the terrorism itself. The 'cost-benefit analysis' must take into account the 'positive' externalities for those who advocate those policies.


On 1) that is what I meant. The level of security measures is proportional to the risks, and a realisation that every measure costs time and money.

For 2) white-collar crime certainly has shorter prison sentences in the US. It is a little harder to apportion blame as directly as with a bank robber. The general cause of problems has been the US government bailing out creditors. Because of that, creditors have been laxer in their standards, had lower oversight and a greater tolerance for risk. This is virtually US government policy and has been going on since the 1984 rescue of the creditors of Continental Illinois. Ultimately fixing this involves fixing the US government and the corruption of Congress - see Lawrence Lessig's talk about how they operate around money - and smaller things like regulatory capture.

The response to 9/11 has been to massively amplify the original effects, giving a huge return on investment to Al-Qaeda. In the positive column has been some of the security theatre - the appearance of improved security will be reassuring to some people. But everything else has been negative - the government expenditures, making new enemies in Iraq and Afghanistan, the loss of freedom for Americans, the massive invasive spying on Americans, the use of "terrorism" as an excuse for inexcusable things, the loss of American prestige (Guantanamo Bay isn't good PR), the additional friction on American life in both time and money (try taking a flight) and the list goes on.

I don't want to belittle 9/11, but the same number of people die each and every single month on American roads. It happened that same month, and every month since.

IMHO it would be a far better remembrance to the victims if we said "fuck you" to the perpetrators and lived free and open lives despite them, rather than the crippling effects that did happen.


Some of the people dying on roads are suicides.

Very few are murders. Murders and accidents are not the same thing and not equally bad.

One difference: if you don't do anything about accidents, the rate stays the same. If you don't do anything about murder and just let it happen, the rate goes up as more people realize they can get away with it and serial killers or terrorists get more bold.


Once, while getting a certified check, I was unable to sign correctly (you have to sign two or three times). After a couple of failures the teller turned her computer screen around to show me my saved signature and said "just sign it so it looks like this"


>Is this an improvement?

Yes. Without the strict checks by your bank, none of their customers would be secure. With their checks in place, some, including you, are now secure.


Absolutely it is an improvement. I would gladly be subjected to a half hour of questions to protect tens of thousands of dollars.


Sure - you need physical access to get to someone's post-its. Might as well install a keylogger then.


Have they? (Honest question.)


I would argue that yes, they have.

They removed the copy on their website that claimed that "Macs don't get PC Viruses"[1]. They disabled automatic execution of Java Applets in response to Flashback[2]. The introduction of Gatekeeper and the App Store model shows their intention to reduce the vectors by which average users can install random software (which reduces rogue installations like Flashback). ASLR is fully implemented in Lion now, and the inclusion of FileVault 2 suggests they are aware of, and trying to mitigate, offline attacks[3].

Regardless of whether you think this is enough, it does show that they are doing something. For every couple of steps forward in closing a security issue, incidents such as the one in this article show that more could be done. Security is hard; no OS or company will ever be Perfectly Secure(tm). Apple is not "doing nothing". Claiming that they are doing nothing is an uneducated answer; claiming that they could do more and be more transparent about it is a more valid argument.

[1] http://www.wired.com/wiredenterprise/2012/06/mac_viruses/ [2] http://support.apple.com/kb/HT5242 [3] http://www.securitynewsdaily.com/960-apple-mac-osx-lion-secu...


I would also argue that with iOS, they have the safest (big) mobile OS available as well. ASLR and DEP have long been implemented and with iOS 6 they are also implementing Kernel ASLR.

Almost everything is sandboxed and there are no known viruses out there (for devices that haven't been jailbroken).

Jailbreaks are still possible (like you said nothing is perfectly secure), but have been slowed down to a point where hackers wait for a big OS release, before they decide to burn the exploits.


I have no idea if this is related, but just now I attempted to purchase an app on the Mac AppStore and, after authenticating, I was given a prompt to re-enter my password and:

Improve Apple ID Security

- To help ensure the security of your Apple ID, choose three security questions and answers.

Just random because I don't have challenge responses on record, or immediate low-hanging fruit in response to this breach?


Probably just a result of having no challenge responses on record. I experienced something similar a couple of months ago.


Never considered? Try to call them someday if you forget your passwords, you'll see how easy it is.

My solution at the moment is to remove every password from iCloud. There are some nice scripts online - I just did that and blogged about it at http://en.blog.guylhem.net/post/28778777551/icloud-remove-ke...

It's obvious it can't be trusted until two-factor auth is implemented. Hell - if I manage to forget my password and lose my cellphone and home phone numbers, I WANT my iCloud data to be gone for good!


"I'm not sure how to solve this problem"

It's easily solved, banks and other institutions have been doing it for years.

The solution is trivial, too: Require physical ID.

In order to open a bank account you have to either show up in person, or provide equivalent proof (e.g. PostIdent).

Why should it be different with cloud-services whose stated goal is to silo all your life's data? Why are they excused on lax security?


That's a great idea, and I think they already have a platform to do that with: the Apple Store and their Genius Bar. Most major cities have two Apple stores these days, and for people who aren't near one, an option to fax or mail could exist. In addition, a callback or text just plain seems necessary, even if the number is inactive.

The less information the other person (hacker/user) has to offer, the more time it should take to reset. In the meantime Apple should be notifying all the contact information on file about what's going on and offer a way to stop it.


>>for people that aren't near [an Apple Store], an option to fax or mail could exist.

Almost ten years ago, I asked an old friend (that got rich doing security for online gambling companies) about verifying identity with VISA cards.

He told me that the Russian mob would open a new account in e.g. the English countryside. When the security people called the (non-mobile) phone number, then someone answered and verified that it was their VISA card and yes, they wanted to open an account.

Edit: If my point isn't clear -- it is that the present capabilities of the criminal networks are probably much superior these days. (Addition: I assume he knew where the criminals came from because of police reports.)


When I opened my Ing account I didn't need any of this. They verify your identity through a series of questions about your past, as well as your SSN. In fact, all US banks allow accounts to be opened over the 'net now. I've personally done it with several of them.


Yeah, they pull a credit report in real-time and then will do something like show you three cities and ask you to pick one you lived at in the past (with a "none of these" choice also). It's a good idea but I'd think often hackable for targets who are heavy social media users and basically have their life story online and public.

Still pretty good for now, though.

Edit: anyone know what the cost is per query for these services? I assume it's not free, thus likely not feasible for services that don't stand a good chance of providing enough revenue to offset the cost.


That does require a credit pull to accomplish. Not insurmountable (and can be done without damaging the consumer's credit score via a so-called "soft pull") but the company doing the pulling has to pay for access to that data.


You're right. When I forgot my battle.net secret question answer and wanted to change my password I had to send Blizzard two forms of ID. For some games.

The solution exists, it's in use now, and it is mind-boggling that for hundreds of dollars in apps, my FileVault password, my payment details, and of course a remote wipe facility for my hardware, it isn't even an option.


No, you had to send copies/images of your ID. ID that is trivial for a hacker to duplicate.


This is true.

There are lots of ways to mitigate this, but would drive up costs considerably.


Physical IDs can be faked.


Isn't that an area that cannot be controlled at all? There are government issued IDs and if a normal company cannot trust them, then there's no way out. Biometric identification can be the last unbreakable protection, but that's also only valid until you find someone who, for example, lost/damaged his eyes in an accident and is up for scamming the company you're targeting.

I mean, there's a reasonable limit of what companies may want to check, but once those proofs can be faked, it's not their responsibility to fix the issue anymore.

(PS. some countries have more restrictions than others too - for example in Poland you need two IDs with a photo to get any kind of mobile plan on a contract - that leaves plenty of ways to verify your identity)


It's a question of how closely the ID can be inspected for accuracy.

When I go to the store and buy beer, they want to see my license. They always make me take it out, which means the clerk can feel it, and the "feel" will often give away a fake to someone who has handled thousands of legit IDs. Next, they look at the pic on the ID and then look at me to make sure they at least kinda match. Nobody is really holding them up for a side-by-side, but you at least kind of have to look like the guy on the ID, which immediately limits the pool of people who could be faking my identity. Then, assuming it feels right and looks like me, they scan the ID and verify a record of that ID card with the state, which simply confirms that the state issued such an ID.

Between those three things, the system is actually pretty secure. But when you ask someone to scan in and email a copy of the front of their license as proof of ID, all three of those "checks" are eliminated.


Biometrics aren't unbreakable and can be spoofed quite trivially in many cases.

Here's a professor spoofing high-end fingerprint scanners with gelatin and a printer: http://vast.uccs.edu/~tboult/tmp/fingerprint-boult-koaa-medi... (sorry for the sensationalism at the beginning)


If the government database of IDs were available online (perhaps only queryable by a number on the card), the bank could look at the online version and verify it matches. Then you'd at least need to hire a lookalike to fool them.


Physical IDs don't get faked [in places where serious physical IDs are used; US driver's licenses in bars don't count] - it's simpler to make counterfeit dollars than counterfeit passports. From what I have seen of banking fraud statistics, if physical IDs are required, fake IDs are an extremely rare circumstance. You do get cases of (a) stolen IDs and (b) IDs bought off of homeless guys, which are then used to open accounts and register companies for money laundering, etc. But not fake IDs - it's apparently too much effort and risk when compared to stealing or buying identities.


Exactly, but as you say the reason is that there's an easier and less risky (although it depends on the quality of the ID) way to get/steal money.

But that doesn't prove that requiring a physical ID is a safer method, just that there is a better workaround.


However, showing up in person is a huge inconvenience for a hacker in another country. Another technique banks use is to send a physical piece of snail mail to a physical postal address containing a verification code or card.


"I'm not sure how to solve this problem"

Easy: the CSR sees exactly the same screen you do, with the same security questions you have. In this case, it seems those questions were never asked. You design the CSR frontend so the rep must themselves enter the caller's answers to those questions before proceeding. You may pay off that CSR, but they do not know the answers to those questions, so they cannot do a thing.

If you forget the answers to those questions, an alert is escalated: it takes two CSRs plus their supervisor together to unlock your account, you must make a FaceTime call, and the whole process gets documented carefully.

What did I miss?
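
Concretely, the reset screen would only unlock if the rep keys in the caller's answers and they match salted hashes on file, so the rep never sees (and can never sell) the real answers. A minimal Python sketch of that idea (all names made up):

    import hashlib, hmac, secrets

    def store_answer(answer):
        # Answers live only as salted hashes, so a CSR can never read them out.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", answer.strip().lower().encode(), salt, 100_000)
        return salt, digest

    def csr_may_reset(typed_answer, salt, stored_digest):
        # The rep types whatever the caller says; the reset proceeds only on a match.
        candidate = hashlib.pbkdf2_hmac("sha256", typed_answer.strip().lower().encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_digest)

    salt, digest = store_answer("Rosebud")
    print(csr_may_reset("rosebud", salt, digest))    # True  -> allow the reset
    print(csr_may_reset("no idea", salt, digest))    # False -> escalate instead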


That is an interesting approach. Given the retail presence Apple has the opportunity to ask you to go to an Apple store in person and talk with service personnel there. One could easily put a picture on file (every Apple device has a camera now) of the owner, and the two bits of information:

1) You have the device with you

2) You are the same person as the picture of the owner

Would set a reasonably high bar to cross.


I suspect there are a LOT of places in the United States that are a few hours' drive from the closest Apple store.


I agree, I bet there are a ton of Apple users for whom going to an Apple store could be a real pain. However, they could make it an optional account security feature. (Then they just have to handle "you got hacked? Well, that's your fault for turning down our free enhanced security feature!")


Perhaps what they need here is an optional 24-hour password reset delay. A user could only adjust this setting when properly logged in. Even if Apple gets social engineered, the user has 24 hours to notice the difference.

Although it's extremely inconvenient to wait one full day to get back in, forgetting a password should be a rare circumstance.
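
A toy in-memory sketch of the idea (function names are made up; a real version would persist this and notify every contact method on file):

    import time

    RESET_DELAY = 24 * 3600          # adjustable only from an authenticated session
    pending = {}                     # account -> (new_password, requested_at)

    def request_reset(account, new_password, notify):
        pending[account] = (new_password, time.time())
        notify(account, "Password reset requested; it takes effect in 24 hours. "
                        "Log in to cancel if this wasn't you.")

    def cancel_reset(account, authenticated):
        if authenticated:            # only the properly logged-in owner can cancel
            pending.pop(account, None)

    def apply_due_resets(set_password):
        for account, (pw, requested_at) in list(pending.items()):
            if time.time() - requested_at >= RESET_DELAY:
                set_password(account, pw)
                del pending[account]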


No. They just need to implement one of the common protocols. For example, they could just require ID.


If you are willing to take the time to social engineer a CSR to get a password, you are likely willing to take the time to acquire a fake ID. They aren't hard to come by.


They still require additional effort. Right now, in my pajamas, without leaving my house or spending money, I can do exactly what that hacker did. I probably wouldn't even try it if I knew I would have to get a fake ID just to punish some Gizmodo employee for shits.


Besides: banks videotape their customers. We would have the culprit on video.


Spending a few minutes tricking a CSR on the phone isn't even close to the difficulty of obtaining a workable fake ID. Barriers are useful even if they can be crossed with sufficient effort.


While it's hard to make a fake ID, it's trivial to make a fake image of ID.

Checking a physical ID would be a somewhat effective barrier. Checking an emailed or faxed image of an ID? Useless.


That depends greatly on what kind of checking is done on the other side. For example, if they can cross-check your driver's license number with the rest of your info, then making a fake image of an ID that will pass muster will be tough.


The security of an ID is protected by the state. Screwing around with that is a federal, put-your-ass-in-prison kind of breach, regardless of your intention or the context.


Jurisdiction is an issue - what if the attacker is from outside the USA (especially those countries w/o an extradition treaty)?

ID is not a panacea, especially in this case. Apple is probably best to roll out some form of multi-factor auth.


Maybe this is a great reason to stick with Google's cloud services.


The fact that this happens doesn't have to do with any particular brand. Every company, Google included, is susceptible to this kind of social engineering attack. Nothing is 100% safe. You can take every precaution possible and there will always be a weak link in the chain.

Apple will double down on security now, especially regarding iCloud, but even so there is a chance that this will happen again. Same for Microsoft, Sony and, yes, Google.

There's no magic solution other than being careful. And even with that, security is always an illusion. Your door lock is easily opened, no matter how much money you put into it; the only thing preventing you from being robbed is that there are more houses in your neighbourhood and some of those could seem like an easier target.


Google isn't susceptible to this kind of social engineering attack.

It requires the existence of customer service in the first place. Good luck trying to call Google.

There is no "magic" solution, there are just solutions. But to suggest it's all the same... That's just lazy.

Apple is more vulnerable because they do do customer support. Sony was more vulnerable because they just didn't give a shit, and didn't bother to secure anything.

Microsoft and Google still have a zero-incident record. After all this time. They even went beyond their own responsibility many times, getting police involved because they suspected targeted (political) malware.

And no, in the world of formal discrete systems (computers) there are provably correct and provably incorrect solutions. For example: DRM can always be hacked, but we can secure ourselves from the middlemen.

Any analogy with a "door" deserves only ridicule.


Microsoft and Google have zero incidents only for very large values of zero. Google "gmail account hacked" or "Xbox live account hacked" or "hotmail account hacked".

In the last case, the top links are to Microsoft's FAQ pages.


Irrelevant because those accounts are hacked by someone who acquired the password of the actual user by a keylogger etc. They were not exploiting flaws related to server side. In iCloud's case, there is nothing the user could have done to prevent this attack.


Nor were they in this case -- they were exploiting server-side features of Amazon (working as intended) and Apple's tech support policies (working as intended). If anything, this is a policy failure.

Not having any customer support worth a damn is a different kind of policy failure. (For example.)


Just because a company can have their FAQ pages SEO'd to the top of the SERPS doesn't mean their services haven't been hacked.

Most people who get hacked hardly ever report it, they just want their account(s) back.


I think you missed my point. If recovering a hacked account is an FAQ then presumably some accounts get hacked. Just maybe?


From the article:

The backup email address on my Gmail account is that same .mac email address. At 4:52 PM, they sent a Gmail password recovery email to the .mac account.

Here Gmail was only as strong as the weakest password recovery email service it was linked to. I consider this a failure on Google's part.


With two factor authentication enabled this shouldn't be an issue.


Why not require it then?


Not usable in all countries. Not applicable to IMAP or POP3. There's also the use case where an email account serves more than one physical person. Unusable while you travel.


"Unusable while you travel" or in other countries is simply false: one of the two-factor authentication options is the google authenticator app on your smartphone, which requires no internet/phone connectivity at all. It's time-based.

The imap/pop thing is still a legitimate concern. App-specific passwords let those continue working, but they have security issues of their own.
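
For the curious, the time-based scheme (TOTP, RFC 6238 - what the authenticator app implements) is small enough to sketch with the Python standard library; the phone and the server just compute the same code from a shared secret and the current time, so no connectivity is needed:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, step=30, at=None):
        # Time-based one-time password per RFC 6238 (HOTP over a time counter).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    shared_secret = "JBSWY3DPEHPK3PXP"    # example secret, base32-encoded
    print(totp(shared_secret))            # server compares this against the user's entry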


And yet another option is a set of emergency use codes that you write down or print on a card.


It's quite possible to add two-factor logins for any protocol, that works in any country with a cell phone network. Just demand a response to a challenge via cell phone before validating the password, you could even require one of those RSA token thingies if you want. Just a matter of cost and convenience.


I used 2-step verification for my Gmail. Our cell phone operator cannot receive international SMS (I know, that sucks), so I used my home phone: every time I logged in to Gmail I received a call from a Google Voice robot to tell me the PIN. It sounded perfect at first, but when I started to actually use it, I noticed that every time I needed to log in to Gmail, I wasn't home. I'd call my parents so they could read me the PIN, or use the backup codes that Google provided me with. That was so inconvenient I had to turn it off.


Not everyone has a cell phone.


Phone is not the only way to access customer support. And by the look of this, BTW, Mat's Gmail account got hacked first and then the social engineering on Apple's side took place (the password reset was sent to his Gmail account).

I'm not suggesting that doing nothing is the same as doing something. I'm just saying that no matter how secure and prepared Apple had been, this could have happened anyway.

A zero-incident record, in any case, seems very unrealistic. I don't know any particular case first hand, but then again I'm sure this is not the first time this has happened with an iCloud account either. Mat is a public person and has commented on his case publicly, and that's why we are openly talking about it here.


Security is a two-way street: both the user and the company have a responsibility.

When I claimed certain high-target technology companies have a zero-incident record, I'm talking about the fact that the companies were never themselves the weak link.

If this guy's account had been hacked because he tattoos his password on his forehead, Apple too would be in the clear. But here it was not the user but the company that screwed up.

There are many, many companies which do not have incidents, or take full responsibility when they do. The type of incidents we complain about often indicate plain gross negligence (and this is gross negligence by Apple).

You are repeating the claim that there is no watertight security. This claim is wrong. Software can be provably secure. Authentication can be provably secure, just as any type of content protection is provably insecure.

Now, you are also making the claim that we are talking about this because of the affected user's popularity.

Maybe that's why you are talking about it. But most of us are actually surprised because of the gross negligence. iCloud has no authentication; one can just call up and take over the account, as we now know.

And if this were any other company, I doubt anyone would argue against this obvious and pretty much indisputable fact. But this isn't Sony, this is Apple, and they can never do anything wrong, right? Even though, statistically, just like any other company, they can't excel in every way. Maybe this is just one area where they screwed up?

They'll learn from it, hopefully. But let's not pretend it didn't happen, or that it isn't as big a fuckup as it actually is, back here in reality.



If I got veeti's joke then I think what he was trying to say was that it's nearly impossible to get Google on the phone unless you're a corporate customer. If I didn't get his joke then I'm making it now. joke


It was the first thing I thought. "Hah! Good luck getting Google to answer a phone.." Then I thought, "wait, was that a joke?"


joke's on me then :)

But back to my point: social engineering doesn't require voice. You can do it via email just as easily.


Again there is no Google mail support to speak of (even with Google Apps for Business in my experience).


Sure there is. if you pay for your Google Apps account: http://support.google.com/a/bin/request.py


That's why I mentioned Google Apps for Business; that's the paid service you are referring to. There's support, but it's still very difficult to get in contact with an actual human being. Social engineering via Google Apps support therefore seems highly unlikely to me.

(I am a Google Apps for Business user and have had to contact Google Apps support a few times … the process has never been really pleasant.)


I agree. If it's a random attack, then the probability is relatively low to get attacked. But if you're targeted, quite frankly it's probably very easy to attack you, either virtually or physically.

One thing I do know, though, is that like you said, security is likely going to get tightened across the board, and that means that it's going to get a lot more inconvenient for all of us. I guess that's a good thing, but it will definitely impact the usability of these services.

If it means that all vendors will tie their services to a two-factor authentication scheme linked to our phone, well that might just stop me from using the services altogether.


I think the services can be improved without becoming too annoying. Someone in this thread suggested a 24 hour delay, which seems reasonable. You could also send a "last call" email and text message to make sure the right user is the one that has requested the password change. Apple could easily separate Find my Mac from "wiping", or add a second password for that.

None of these will be 100% effective, but it will make things more difficult for attackers and not too uncomfortable for users.


I once had to reset the password of my stock trading account.

All they needed for verification was my home address.

I am also pretty leery of putting anything online.


Didn't they call you back to your phone number prior to resetting your password?


This entire thing reminds me of the mud puddle test. (http://blog.cryptographyengineering.com/2012/04/icloud-who-h...)

In fact that entire blog post is pretty on point.


Didn't everyone learn this lesson from Hackers?


HR can screen for people that have been bankrupt, and (probably a lot trickier) personalities that might be susceptible to taking bribes.


Take two people, one went bankrupt 10 times, one never, both make minimum wage. Offer them a $10 million bribe. Is one really less likely to be bribed than the other?


I didn't mean to imply it was a good method but rather make the point that HR are actually doing this.


So people protecting $11 million probably shouldn't use such a weak check.

What about people protecting $500?


This _could_ be a lesson in not trusting cloud services where issues can be resolved by human intermediaries. Sounds a bit counter-intuitive but to me a bit more reassuring. But I could be wrong.


Whoa, wait, what? What occurred was a simple confidence hack, not some industrial spy escapade.

Anyway, to answer your initial point, two factor authentication helps with this problem, as you have to still have the security token to authenticate. And if the "Something you have" gets stolen, then you need a manager to work through it to get you set up again, and all resets are heavily monitored and audited.


My point wasn't that this particular incident was some great case of industrial espionage. But it's a rather easy slippery slope to that outcome.

But what if your website is secured behind an Amazon EC2 or Linode CSR? Isn't Instagram and Netflix run at least in part on EC2? I have no clue what the security schemes are for either of those service providers, but if they allow CSRs to change passwords, then it's the same thing. If the CSRs can be paid off, or fooled over a phone call, then it might be cheaper to just do that if they want to inflict potentially millions of dollars worth of damage to a rival.

Having the security of your entire business behind a single CSR or a cell phone is the equivalent of millions of dollars worth of Cisco firewalls being outdone by a $20 wifi-router plugged into the internal network.


Yea, that's all protected by multifactor authentication. Front line CSR's don't need a raise over this. Maybe they have other reasons, sure. But not this.


The thought hadn't crossed my mind, but after reading this post, it got me thinking:

Sensa

So, let's get this straight...a hacker "decides" to hack the account of a semi-high profile tech guy and then after committing several serious crimes like fraud that could land him in jail for an extended period of time repeatedly contacts the person he hacked when he must know that Apple will surely pursue this matter?

I smell a rat...

http://forums.macrumors.com/showthread.php?p=15405091#post15...


What are you even alleging? What is the rat?


I'm not alleging, I'm quoting a comment from MacRumors that got my attention.

The fact that a hacker would repeatedly contact his victim, and that Gizmodo has reasons for not being particularly fond of Apple (after the lost iPhone incident), was not something I had thought of at first, but it did strike me as odd.


Honan no longer works at Gizmodo. It says so right in the OP, along with the name of his new employer. So... ??

You say that post "got you thinking." Got you thinking what?


To connect or not to connect? I have been debating the advantages and disadvantages of coupling both personal and work IT systems for some time now. If you tie your IT systems together, you can manage them more easily and efficiently. On the other hand, as in Mat's case, a single node failure can cause an entire system to collapse. For another example, consider fully automatic self-updating servers. Without safe-guards, a configuration bug can bring them all down within minutes. At this point, I think some coupling, but not total coupling, is best. Too little coupling won't allow enough productivity; too much increases your risk of system-wide failure.


I hope he sues Apple for this and wins, behavior like this shouldn't be allowed without consequences.


From iCloud's ToS, it looks like it'd depend on whether a court finds this to be either "failure to use reasonable skill and due care" or "gross negligence":

APPLE SHALL USE REASONABLE SKILL AND DUE CARE IN PROVIDING THE SERVICE. THE FOLLOWING LIMITATIONS DO NOT APPLY IN RESPECT OF LOSS RESULTING FROM (A) APPLE'S FAILURE TO USE REASONABLE SKILL AND DUE CARE; (B) APPLE'S GROSS NEGLIGENCE, WILFUL MISCONDUCT OR FRAUD; OR (C) DEATH OR PERSONAL INJURY. [Blanket disclaimer of liability in all other cases follows.]

I'd be curious if there is any good precedent on product liability for cloud services.


Just because a clause is in a contract, doesn't mean it has any effect.

A lot of terms are flat out bluffing to scare off folk like you.

This is why it is always a good investment to ask your lawyer.


This is really, really important advice, and if more people understood it, corporations would have a lot less power over people than they currently do. You can open up just about any ToS and find a handful of unenforceable clauses they're hoping you won't realize are unenforceable.


I suspect the vast majority of people never read the TOS and decide to sue, or not, for completely independent reasons.


In general, I'd say that if you are getting a lawyer involved, you need to have fairly solid evidence of real loss of a value more than about $20,000, otherwise the fees are going to eat up any award you might eventually get (don't forget even if you get a favorable judgement the other side can appeal, and will if they have staff lawyers who are getting a salary either way).


I don't think I would label this horrible behavior on the part of Apple. When you provide customer service for something like iCloud, things like this are bound to happen. This is a case of social engineering, not some tech rep downloading plaintext passwords to a laptop and losing it. With a really targeted attack, someone is bound to succeed with some rep. It's a matter of when, not if. Having said that, they will improve their support after this. And the guy could end up suing Apple as well.


The tech rep shouldn't be allowed to reset your password. For all you know, that guy is your wife's ex.

This reminds me of Facebook and how all its employees were stalking people using the god password.

They can and should follow bank protocol. Require an ID, make every action reversible (like being able to undo a wipe), and have both employee and requester on tape, with IDs.


Honestly, the bank protocol is overkill for 90% of users. Most people using iCloud are using it to sync photos of their cat. The number who are keeping "their life" in the cloud is basically confined to techno-geeks.

Your average iCloud user is not necessarily going to want to a) prove their identity initially or b) do so again to get support.

I think you are better off taking the approach of "don't put something in the cloud if you can't afford to lose/expose it." Yeah, that pretty much limits its usefulness, at least for now. So it is.


If you don't put anything of value in iCloud, you don't care enough about a reset: you could just set up a new account.

Having every underpaid store clerk able to reset the account of every customer is just dangerously stupid.

Just not having a reset feature is even better.


> This reminds me of facebook and how all its employees were stalking people using the god password.

Wait what? Sorry to get off topic but when did this happen?


Possibly referring to this story. http://online.wsj.com/article/SB1000142405270230489870457747... If so, the quote only says they could stalk people with the master password, not that they were.


This is the internet, we don't distinguish between could do and did do.


Facebook employees were able to log in to any account using the password "chucknorris".

That included being able to read private messages of their friends, families, ex-girlfriends, etc.

This wasn't just true when Facebook was a university startup, but even when they were already the largest social network in the US.


I hope he sues Apple too. Not because I want any harm to come to Apple, but because I want Apple to have a significant financial incentive to push authorities to track down the villain.


And more important, a significant financial incentive to correct their own obviously inadequate procedures.


The remote wipe part is extremely scary. How do you disable this on your mac?


System Preferences - iCloud - Find My Mac (remove the checkmark)


Is there any way to remove the wiping bit without removing the whole finding functionality? That bit is extremely useful, and if a hacker managed to get into my iCloud I wouldn't be that worried about them being able to locate it. But being able to wipe everything as well is a different matter.


IIRC it does the wipe via the recovery boot, so wiping that partition would kill it.

BUT: you'd be hosed if you ever needed recovery, you wouldn't be able to use full-disk encryption, and there's likely other bits of the OS that would break in subtle and interesting ways without it there. Tread _very_ carefully.


>IIRC it does the wipe via the recovery boot, so wiping that partition would kill it.

Isn't this not an option in relatively recent Macs, which have the recovery functionality baked into the EFI firmware and not as a partition on the disk?

Newer Macs have that functionality out of the box, and a bunch from 2010 and early 2011 that did not originally ship with the recovery firmware ended up getting it later via update: http://support.apple.com/kb/HT4904


That's 'Internet Recovery', which is just enough to fire up WLAN and grab a recovery image to netboot from in the event your disk is totally unreadable. It's unlikely it'd still be able to trigger a wipe this way. Recovery itself is still its own partition (from my mid-2011 Air):

    apaulin:~/ $ diskutil list
    /dev/disk0
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *121.3 GB   disk0
       1:                        EFI                         209.7 MB   disk0s1
       2:                  Apple_HFS Macintosh HD            120.5 GB   disk0s2
       3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3


As long as you have a recovery USB, you'll be fine.


You need to disable 'Find my Mac' under iCloud's settings: http://support.apple.com/kb/PH2697


Creepy. Well, this book by Kevin Mitnick is still very relevant I guess: http://www.amazon.com/The-Art-Deception-Controlling-Security...


One of the main issues with the Apple ID is the ease of use vs security. Tying the remote wipe functionality with the ability to purchase low cost content (the primary use case for the Apple ID) is always going to have one group of users unhappy.

I frequently want to quickly purchase a song on my iPhone. I also frequently tell my friends my password so they can do the same. How many of you have typed your Apple ID password on your Apple TV with others watching? I wouldn't really ask my friends to exit the room to type in a super secure and long password with many character groups (one that should be required for remote wipe functionality).

How many users keep their password secure knowing the main place they enter it is on their iOS device? The many everyday Apple users I know set their passwords to something easy so they don't have to hit the keyboard too many times when entering them.

If Apple can separate the two authentication functions, as they do with OS X and FileVault, it would go a long way toward preventing these types of rare but high-impact events. Another suggestion would be to separate the remote wipe into two phases: erasing the keys and cleaning up the data. The initialization vectors (seed) do present a bit of a problem, but I think the FileVault solution is more than adequate. If the encryption keys and the key escrow system are cleared remotely, that would leave me comfortable that my data is still secure. If we really trust our crypto algorithms, then erasing data and removing the encryption keys should really be no different. Users who do not have iOS data protection and OS X FileVault turned on cannot be considered secure at any level anyway. And even with that data protection turned on, there are still many issues due to each app needing to implement security properly. It would be really great to see Apple improve their App Store to audit the security of each application more than they do today.
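
To make the key-erasure point concrete, here's a toy illustration of "crypto-erase" (using the third-party cryptography package purely as a stand-in, nothing Apple-specific): if the data only ever exists encrypted, destroying the key is as good as wiping the data itself.

    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"contacts, photos, mail ...")

    key = None    # "wipe" phase 1: discard the device key and any escrowed copy

    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
    except InvalidToken:
        print("ciphertext is unrecoverable; phase 2 (scrubbing blocks) can happen lazily")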

Most of the work lies with Apple but it is a hard problem that will take time. I think Apple is going in the right direction by centralizing on iCloud rather than the PC as the central hub. This will give them a lot more flexibility and agility to move quicker and deliver secure results to the masses.


Absolutely. Forcing users to input their password each time they buy something from iTunes, or log into iCloud in the browsers, encourages simpler passwords. To have a single account in control of everything from buying a $1 song to remote-wiping a computer is madness.


Social Engineering will usually win out as long as a person is in the loop. It's just not feasible to expect a poorly paid CSR to be able to cope with this type of threat.

In the end, a company has to constantly weigh the cost of strong protections versus the risk, and what this exposure will cost them in terms of customer goodwill as well as any civil penalties that may arise.


I am confused; did the hacker guess the security questions or bypass them entirely?

If the former, it's not Apple's fault. If the latter, that's inexcusable.


Actually, it appears to me that almost 100% of “security questions” used during support phone calls are completely insecure.

Usually they'll ask a few (2-3 is normal) questions like your full name, date of birth, address with zip code, email address, etc. Notice the problem with these? All of them, I mean ALL, are PUBLIC INFORMATION THAT ANYONE WHO KNOWS SOMETHING ABOUT YOU WILL HAVE.

This is almost as silly as credit cards, where you are supposed to give the card number, cardholder's name (not required most of the time), expiration date, and the 3-digit security code. Anyone who touches your card will have that information, once and forever. Yes, ANYONE: that includes your grocery store cashiers, your favorite bartenders, your mobile phone billing representatives, etc. The list could go on very, very long.

And I'm totally amazed that both systems persist as a fallback plan in this digital world with countless attack vectors.


The thing to remember here is that where the liability lies matters. The banks effectively take on all the liability for financial losses due to credit card fraud & they're free to setup their systems to constrain losses to a level that they're happy with. Yes, arguably not all the losses fall on the banks, particularly the hassle and time of recovering from a particular instance of fraud but the majority of them probably do and that's what matters because the interests of those who bear the losses and those who run the system are aligned.

The trouble with cloud systems is that all the losses fall on the end user who has no influence over the security systems put in place to protect their data. (Except with Google where you can at least choose to use 2-factor authentication.)


Banks are experts at pushing financial liability onto others. Much of the time, credit card companies inform the merchant that they aren't going to pay for a transaction that they have deemed fraudulent. The merchant can chase the fraud themselves or eat the loss.

The fact that 'identity theft' is a commonly used name for bank fraud is another example. When some bank opens an account for person Y, there should be zero consequences for person X (regardless of any fraud committed by Y), but the banking system isn't quite set up that way.


As I've said elsewhere: "Keep in mind though; you can answer anything you want. Use a 1password generated string for each and store the answers redundantly. That's what I did."


Based on my experience, that's not how it works at all in practice. They will ask for this info about your real identity as recorded in their CRM systems. I doubt you can list your name as BLAH BLAH BLAH there and still have your package delivered correctly.


If a human is in the loop and you need to call in to verify - this could get quite difficult unless you use the "pronounceable" option in password generation.


If the Apple-chosen security questions are reasonably guessable, that's still Apple's fault.


Here is the list; You tell me.

Keep in mind though; you can answer anything you want. Use a 1password generated string for each and store the answers redundantly. That's what I did.

---------------------------------

What was the first car you owned?

Who was your first teacher?

What was the first album you owned?

Where was your first job?

In which city were you first kissed?

---

Which of the cars you’ve owned has been your favorite?

Who was your favourite teacher?

What was the first concert you attended?

Where was your favourite job?

Who was your best childhood friend?

---

Which of the cars you’ve owned has been your least favorite?

Who was your least favourite teacher?

Where was your least favourite job?

In which city did your mother and father meet?

Where were you on January 1, 2000?


I can barely answer half of those for myself and out of those that I can answer I'm either not sure I'd answer the same thing a few years later or it will probably be something a lot of people know.

Those questions are terrible.

Answering with a random string is the only sensible solution. But it is just as mindbogglingly bad. Because then you could just as well write down the password - and bam, you'd never lose it (well, if you did lose it you would have lost the answers to these questions as well so either way you are screwed).


Just keep in mind that the ACTUAL answer to the security questions doesn't matter, just whatever you type into the box the first time.

What was your first car? "Spaceship" is a perfectly acceptable response, and it's not discoverable by public means.

It does, however, mean you need to know what you would have typed in for each of the questions.


I use the 1password pronounceable strings. Still plenty random, but you can say it over the phone to a customer service person if you ever do need.
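
For context, "pronounceable" generators basically alternate consonants and vowels drawn from a secure RNG so the result can be read out over the phone; a toy sketch (not 1Password's actual algorithm):

    import secrets

    def pronounceable(length=12):
        consonants = "bcdfghjklmnprstvz"
        vowels = "aeiou"
        # Alternate consonant/vowel so the string can be spoken to a support rep.
        return "".join(secrets.choice(consonants if i % 2 == 0 else vowels)
                       for i in range(length))

    print(pronounceable())    # e.g. "kazomirutena"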


Great idea: Updating my questions now...


For every single of these questions, my wife, mother and sister would know the answer as well. Ergo, they are not acceptable as a "secret" that can be used to grant access to my account; Apple should provide reasonable privacy also from my in-laws. The whole concept of "security questions" is stupid and useless, and shouldn't be used ever and anywhere.


That's why I always give a ten character random string as answers to those questions.


According to the guy's comments on Twitter, the hacker didn't have to answer the security questions.


Are they saying Apple sent the password reset request to a different backup email entirely? Or that they reset the password to a requested password while on the phone?

Even if someone had properly identified themselves as Mat Honan, neither of these should be permitted.


Mat posted a screenshot of his Gmail inbox which showed an email about Apple's password reset. So I'm guessing the hackers had compromised Gmail account BEFORE they called up Apple tech support. Or maybe that email was just an attempt and didn't help anyway with the actual password retrieval. I'm confused about this...


The original blog post makes it quite clear that the .mac account was used to compromise the Gmail account.

I think the screenshot is from after he regained control of the Gmail account.


Oh... in that case, how can a .mac account be used to compromise a Gmail account - using Forgot Password?


AIUI the .mac account was the backup email address of the Gmail account. So 1. The attacker compromised the .mac account. 2. The attacker used the I forgot my password feature of Gmail - to get an account reset email for the gmail account sent to the .mac account.


Yeah, one of the Gmail account recovery mechanisms is some process involving another email address.


Why isn't this part of every password-reset procedure? "We'll mail a reset code to the postal address you gave when you created your account"

This would mean that the attacker would have to commit mail fraud, which (a) is quite difficult; and (b) carries heavy penalties in law.


One problem is that I have no idea what physical address Apple has on file for me, and I'm sure I've moved at least three times (as many as five) since I gave them that address.

A better solution is to require notarized physical mail for password changes on high-security accounts. Everything else just goes to your email account.


I had this problem with a website from the Australian Government. I was actually trying to log in to update my address, but I didn't know the password. For that, though, I was able to visit a storefront and update it after providing ID. I guess Apple could do a similar thing.


Apple already prompts me to agree to new terms and conditions every few months. Surely it wouldn't be too difficult to add '...and is this still your address?'


Because it's very inconvenient. I'm not judging, just answering. :)


In all fairness to Apple and any support desk, it isn't hard to bypass a control system where one human talks to another exchanging information that is mostly in the public domain, or to get around it with emotion-based social engineering (sounding panicked because your mother is in hospital, for example). Support is human.

I helped a friend set up an account with some provider the other day, and one of the security questions was the classic choice of mother's maiden name, favourite colour or favourite number - none of which are very secure, since they can be obtained or guessed more easily than most, but that's another discussion. He wanted a question about his favourite football player, so I told him to pick mother's maiden name and answer with his favourite football player's name. He knows this, so even somebody who knew his mother's maiden name would still fail that security check.

What could Apple do? (And I suspect they will do something.) They could add voice recognition to their support call system, and/or accept calls only from pre-registered numbers such as your office phone (excluding the device's own number, to cover losing said device). I expect they will step up to the plate and turn this around; any good tech company does (even if it amounts to "oops, we've added password salting now" - they evolve).

The aspect of all this that concerned me most was how something you perceive as a cloud backup can be taken away along with your own copy of the data. That is a lesson for the user more than for Apple, though. It would be reassuring to find out they keep their own backups - and maybe also a little concerning. That is something for each individual to weigh for themselves; everybody is different.

I might also add that the chap who was initially hacked, and subsequently had his Twitter accounts hacked as well, said in a tweet that he is leaving the hacked tweets up, in the same way he does not go about removing scars from his body. That shows an insightful mindset, and it suggests pride played no part in this; we probably would not have read about it at all had he been burdened by pride. Respect to him for stepping up and saying "this happened" before he found out how it had been done, and before he knew it was not the result of anything he himself did.


I don't believe that things happened as they are being presented. This is (ex-)Gizmodo we're talking about - people with a long-standing grudge against Apple.

In the middle of a 'major crisis' this guy finds time to type up a story, on a computer? He can still access work machines to submit? And then the hacker is kind enough to tell him what happened? And oddly, there is no mention of involving the police or the FBI?

This episode is either an inside job or a complete fabrication. My prediction is it will fall apart within the week, rather like Gizmodo's exclusive story based on the purchase of stolen prototype equipment.


Large amounts of personal data are collected by data brokers like Intelius, Spokeo and Whitepages, which makes this easier to pull off. It's fairly trivial to find answers to questions like "What's your DOB?" or "What's your billing address?" by looking in one of these places. Most data brokers have opt-out pages where you can request removal of your data, though they don't make it easy. There are also services that help with this: MyPrivacy (reputation.com/myprivacy), which I work on, and Safe Shepherd (safesheperd.com).


We frequently see articles about well-connected or influential people like reporters getting preferential support from large companies. This might be the dark side of that special treatment.


Hopefully the article on Honan's experience will open some eyes and make everyone take the security of their personal accounts more seriously. The money in your bank is insured, your online presence is not, and there is a huge imbalance in how consumers address security for each. Some hackers don't want money or notoriety - they just want to watch the world burn.


I wonder if the attacker will be caught and end up in jail. Password-change requests like that must be carefully logged and are probably quite traceable. Considering the public nature of this exploit, Apple may well put considerable effort into investigating the incident.


The kid who hacked Sarah Palin's email got a year in jail. He was convicted of "the felony of anticipatory obstruction of justice by destruction of records and a misdemeanor of unauthorized access to a computer." [wikipedia]

The guy who hacked Honan is certainly guilty of the misdemeanor (which could still land you in jail), and depending on what he erased and how they want to interpret his motives, he could be guilty of the same felony.


The Sarah Palin hack was basically the same. People don't realize that what's available about them online gives away the answers to their security questions.


Can iCloud be enabled remotely? I know it shouldn't be possible, but could it be?


Everybody should read this account of the opposite situation with Apple tech support and password retrieval: http://www.pcworld.com/businesscenter/article/260414/how_did...


It's a good, interesting piece, but it could easily be that the employee in Mat's case didn't follow correct procedure or wasn't familiar with it (a new employee?). Even if he knew the procedure for these cases, there are all kinds of possible explanations: maybe the hacker paid him, maybe he was the attacker himself, etc.


I totally agree that customer support is a very inconsistent department. I was just hoping Apple would give these reps stringent training so they all follow the strictest possible security checks. Who knows...


Damn.. this is popcorn-worthy. Anti-Applites are gonna say "sue them!" and Fanboys are gonna post a rebuttal to each of those posts.


I'm sitting at home surrounded by a bunch of Apple hardware, and my first reaction is "This is why you shouldn't use iCloud!". This is also why I refused to connect my iOS device to an Exchange server at work (which grants remote-wipe capability).

I don't think this is Apple versus non-Apple. I think this is everything-in-the-cloud versus everything-local.



