Hacker News
Apple Developer Website Update
282 points by danielsiders on July 21, 2013 | 210 comments
Email from Apple

Apple Developer Website Update

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

In order to prevent a security threat like this from happening again, we’re completely overhauling our developer systems, updating our server software, and rebuilding our entire database. We apologize for the significant inconvenience that our downtime has caused you and we expect to have the developer website up again soon.




Here's my semi-educated guess for how the attack started: from casual observation (view source, URLs ending with .action, etc) a good chunk of the ADC is written in Java and uses WebWork/Struts2, a framework I helped create years ago.

Late last week a security advisory came out that allows for executing malicious code[1]. Atlassian, which uses similar technology, also issued announcements around the same time[2]. My wild speculation is this was the attack vector.
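For context, the bug class behind that advisory is that certain request-parameter prefixes were handed straight to the OGNL evaluator. A simplified, non-working illustration of the parameter shape (the host and the expression are placeholders, not an actual exploit):

```python
from urllib.parse import quote

# S2-016: Struts 2 evaluated the "action:", "redirect:", and
# "redirectAction:" parameter prefixes with OGNL, so an expression could
# ride along in an ordinary request. Placeholder host and expression only.
ognl_expr = "${#context['example']}"
param = "redirect:" + ognl_expr
url = "https://example.com/login.action?" + quote(param, safe=":")
print(url)
```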

Sadly, I feel some responsibility for this pretty major security hole. There have been a few like this and they are all rooted in the fact that almost 9 years ago I made the (bad) decision to use OGNL as WebWork's expression language. I did so because it was "powerful" but it opened up all sorts of extra binding trickery I never intended. I haven't been contributing to the project in 5+ years, but this is a good reminder how technology choices tend to stick around a lot longer than you ever imagine :)

[1] http://struts.apache.org/release/2.3.x/docs/s2-016.html [2] https://confluence.atlassian.com/display/BAMBOO/Bamboo+Secur...


technology choices tend to stick around a lot longer than you ever imagine :)

It amazes me how true this is. I've learned that assertions such as "this is a mockup and should be replaced ASAP for reasons X Y Z" tend to get ignored by inheritors of proofs-of-concepts for as long as (or longer than) possible. My coworkers wonder why now I fight tooth-and-nail to (from their perspective) over-engineer things from the start; I know that short-sighted decisions will never be revisited until it's too late.


This is the biggest reason that I hate all of the "go fast and break things" culture around here. There has to be a balance, but big names and investors don't seem to be encouraging one.


So what you're saying is that it's your fault that I can't update my provisioning profile? No, just kidding. Honestly, while you may feel responsible, the project has been up and running for a while. The current maintainers are responsible for the bugs that show up, regardless of who designed the original code base. Definitely appreciate you shedding light on the situation, since Apple hasn't really given us too much information.


Both are responsible, and responsibility can add up to more than 100%.


I think we, as a society, are ultimately to blame.


I think the universe, as quantistic thus probabilistic, is lastly to blame.


I think the human being, as imperfect, is finally to blame.


Are you people out of your minds? The person responsible for this is the unethical human being (or organization) that decided to abuse that security hole and use it to steal our information. Not the engineers, not Apple.

When you see an old lady walking alone in the dark, do you steal her purse because she did not protect herself well enough? I'm sure you don't... So please, don't over complicate things.


So when you entrust your belongings to a storage company, and they leave the doors unlocked and your stuff gets stolen, do you get mad at the thief? Or do you demand better security from your storage company?


The timing of this looks right: Struts 2.3.15.1 was released on 7/16/13, and developer.apple.com went down on 7/18/13 for "maintenance". Plenty of time for an enterprising black hat to notice and exploit the vulnerability.


> Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed.

So they can't rule out the possibility that sensitive personal information, which cannot be accessed, has been accessed. Got it.

Apparently our intelligence, which cannot be insulted, has been insulted.


By "sensitive personal information" they probably just mean passwords and credit card information, not names, email addresses and mailing addresses.


What I find slightly unnerving is that Apple didn't make this clearer.

If they know that credit card information was not affected, they should say that. E.g. "Sensitive personal information (such as credit card data) was encrypted and cannot be accessed, ..."

It's reasonable to suppose that 'sensitive' includes credit card information, but as it stands it's something we have to interpret.

I'd suggest we all check our credit/debit card statements more often over the coming days, just to be sure. =)


>What I find slightly unnerving is that Apple didn't make this clearer.

Apple is not a startup. They were ranked 6th in Fortune 500 for 2013. They are going to be rehearsed, political, and vague with their descriptions. Were you actually expecting them to release a postmortem on their blog with a link to the GitHub repo with the fix?


Sure, but I don't think it's a bad idea to raise the median level of expected standards. It seems reasonable to ask for clearer reports and some kind of a postmortem.

One hope for startups searching and optimizing in these kinds of niche spaces (transparency, communication on a more personal, no-bullshit level, and so on) is that those optimizations will eventually change what is expected from IT businesses in general, at least in terms of communication. One can at least hope.


If they say X wasn't leaked while it was, they will open themselves for litigation. A company of Apple's size cannot afford to make mistakes there. So, they have to put in weasel words such as "such as".

Also, this isn't a postmortem (yes, we may never see one, it is premature to comment on that _now_)


I do see your point. It just seemed odd to me that they didn't take that simple step to clarify what is arguably the most pressing question on everybody's mind: is my CC/bank data safe?

This is most likely just me being too paranoid and literal, of course. =) In general I'm not too disappointed with how they've handled this - it could've been far worse.


Passwords could be hashed, but credit-cards are the big one you have to keep in plaintext. If you want to bill the card without asking for the number to be reentered, there's no way to avoid storing the number and expiration date.

PCI does mandate that you keep less than necessary to initiate a new charge, though: you are not allowed to store the 3-digit verification code from the back of the card. Future charges from the same vendor can go through based on the stored information (without re-sending the verification code), but charges from a new vendor would need the code, so this is intended to make it harder for someone who stole the saved information to initiate a new charge.

A loophole is that in-person charges do not use the verification code, so someone could use the saved information to fabricate physical cards, and try to use them at stores (the U.S. doesn't typically use either chipped or PIN-protected credit cards, so cloning a card from the number is relatively easy, prevented more or less only by the heuristic fraud-detection algorithms).


Credit cards don't need to be kept in plaintext. I'm a big supporter of having the frontend encrypt the data using a public key. The private key is stored only on backend servers.

So now if something needs to kick off a billing process the frontend sends a signal using a defined service method (preferably something so simple that it is secure) and then the backend goes off and decrypts the data followed by doing the actual processing required.

If the frontend and backend are on two separate networks, and the frontend is only allowed to talk to the backend over, for example, TCP port 5930, you have reduced your attack surface tremendously while making customer data more secure.
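A stdlib-only toy of that split, using textbook RSA with tiny primes and no padding. This is an illustration of the architecture only, never real-world crypto; a production system would use a vetted library and hybrid encryption.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Backend generates the pair; only the public half (n, e) ships to the
# frontend web tier. Tiny primes for readability -- a toy, nothing more.
p, q, e = 101, 113, 17
n = p * q
d = modinv(e, (p - 1) * (q - 1))   # private exponent: backend only

def frontend_encrypt(block: int) -> int:
    """Web tier: can encrypt card-number blocks but never decrypt them."""
    return pow(block, e, n)

def backend_decrypt(c: int) -> int:
    """Isolated billing tier: the only place the private key exists."""
    return pow(c, d, n)

# The frontend stores and forwards only ciphertext blocks.
card_blocks = [4111, 1111, 1111, 1234]   # each block must be < n in this toy
stored = [frontend_encrypt(b) for b in card_blocks]
assert [backend_decrypt(c) for c in stored] == card_blocks
```

(With no padding this is deterministic -- identical blocks encrypt identically -- which is one of many reasons real systems use OAEP-style padding.)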


The last system I worked on that actually stored its own CCs (lately it's been all tokens, all the time, for me) did roughly this, with the added fun feature of no SSH or remote shells on the backend box.

You sent messages to add or charge our clients' credit cards from the front end, in the ultra-simple protocol, to the one (!) open network service on the backend. And that's all the input it took from the network.

If something more complex was needed someone with much higher permissions than I went to the server room and typed into the terminal. Which really minimized attack surface.


Couldn't you also do something like this? Store each user's key, transformed in such a way that you could get it back only if you have the password. Serve the key to the user's session on login (maybe -- depending on how long you store the session, you may want to require password reentry to initiate any charges). Encrypt all sensitive data with the user's key, such that only that logged in user can read it back.

The major drawback would be the same as the benefit. Since you can't know your users' CC numbers, you also can't make recurring charges.

Pipe-dream solution to that -- you should be able to get a token from your payment provider that authorizes you and only you to charge the CC. Should that token leak, you barely even need to revoke it. It can't be used by anyone else, because you need both the token and your company's api secret to charge anything, and even then, all they can do is send (easily refundable) money to your account.
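A stdlib-only sketch of the password-derived-key scheme above, assuming PBKDF2 for key derivation; the HMAC keystream here is a toy stand-in for a real authenticated cipher such as AES-GCM:

```python
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    # The per-user key exists only while the user supplies the password;
    # the server never stores it.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher (HMAC-based keystream) standing in for a real
    # AEAD like AES-GCM -- illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key("correct horse battery staple", salt)

ciphertext = keystream_xor(key, nonce, b"4111-1111-1111-1111")

# Only someone who can re-derive the key (i.e., knows the password) can
# read the data back; a dump of (salt, nonce, ciphertext) alone is opaque.
assert keystream_xor(key, nonce, ciphertext) == b"4111-1111-1111-1111"
```

As noted above, the drawback is exactly the benefit: the server cannot decrypt offline, so recurring charges are out.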


Even better - how about simply doing a form of OAuth with the service provider? A token would be authorized for recurring billing or anything else. I think Verified By Visa is an example of something like that...

If such a provider could also SMS you on your chosen # to confirm the purchase then the system would be secure!


> ...but credit-cards are the big one you have to keep in plaintext. If you want to bill the card without asking for the number to be reentered, there's no way to avoid storing the number and expiration date.

Not necessarily, if you're using a payment gateway that supports token billing...
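A sketch of what token billing looks like, with a hypothetical in-memory gateway standing in for a real provider's vault/token API (all names here are invented for illustration):

```python
import secrets

class Gateway:
    """Hypothetical stand-in for a tokenizing payment gateway."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str, expiry: str) -> str:
        # The card number is exchanged once for an opaque token;
        # the PAN lives only inside the gateway's vault.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = (pan, expiry)
        return token

    def charge(self, token: str, cents: int) -> bool:
        return token in self._vault and cents > 0

gateway = Gateway()

# At signup the card number passes through once and is swapped for a token.
merchant_db = {"developer_42": gateway.tokenize("4111111111111111", "12/16")}

# Recurring billing needs only the token; a breach of merchant_db
# leaks no card numbers.
assert gateway.charge(merchant_db["developer_42"], 9900)
```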


Yes, but then you're going to start running into the "don't keep all your eggs in one basket" situation. Sure, there are companies out there that will store that information for you, so you don't have to worry about living up to the standards of storing it, but what if that company is compromised? You can't just say "oh, they should just let someone else deal with storing that information..." SOMEONE is going to have to store the actual information in the end.


When it's the bank that issues you the token, the buck stops there. The bank has the card details anyway.


> A loophole is that in-person charges do not use the verification code, so someone could use the saved information to fabricate physical cards, and try to use them at stores (the U.S. doesn't typically use either chipped or PIN-protected credit cards, so cloning a card from the number is relatively easy, prevented more or less only by the heuristic fraud-detection algorithms).

This is untrue. The magnetic stripe contains significantly more data than what is printed on the card and much of that Discretionary Data (DD) is used during authorization of 'card-present' transactions.


Specifically, the magnetic stripe contains the CVV1, which is used for card-present transactions. The number on the back is the CVV2, used for card-not-present transactions.


> Future charges from the same vendor can go through based on the stored information (without re-sending the verification code), but charges from a new vendor would need the code, so this is intended to make it harder for someone who stole the saved information to initiate a new charge.

That's not true: "charges from a new vendor would need the code". Online credit card transactions only require a credit card number, expiry month, and expiry year. The verification code is optional and is used as a fraud check/deterrent. Payments with an invalid verification code are highly suspicious. Therefore, when Apple (or any merchant) asks for the verification code initially, the card passes the initial fraud tests and is stored as a "verified card" (or perhaps only verified cards are stored). Further charges are then most probably legitimate, since the card passed the previous fraud check.


Purchases of developer memberships are handled through Apple's online store. And that is still up.


There is also the bank account information for payments Apple makes to developers... that could be part of the compromised items.


That is in iTunes Connect, completely separate from the developer portal.


The Apple Online Store is a completely separate system. Different servers, very different codebase.


It was down last night though, possibly related?


Probably took it down to check and patch.


Most banks provide the ability to get a billing token to avoid having to store card details at all. In order to do recurring billing you only need to store the token and the CVV on the back.


You do not need the CVV for recurring billing. Indeed, card processing standards explicitly forbid you from storing the CVV at all.


I would imagine Apple would not trust the ability to bill its customers to a third party. They have touted their collection of CC numbers for quite a few years, it's treated as an asset to the company.


I think I would rather someone have my CC number than my home address (which would be the same as my mailing address).


You're not in the phone book, then? County property tax database? I mean, your home address is not exactly something you can really hope to keep secret.


Why?


Credit cards can be cancelled. Home addresses plus other information could be valuable to identity thieves. Maybe it's not that big a deal, but the idea of hackers (criminals?) knowing my name, email and home address seems a bit creepy.


Perhaps if it were a case of a targeted attack I'd agree. Identity theft etc. doesn't really scale as well as credit card fraud.


I imagine bank account numbers, rather than CC info/passwords, were the sensitive part.


Bank account numbers are not secret. If you write a check to someone they have your account number.


Well, they are in this weird space of being usable to pull money from the account through ACH/demand drafts, so they _should_ be secret, but then, as you said, they are on the bottom of every check. Which is super weird, and not the case in other countries' banking systems.


"First of all, this does not affect iTunes customer accounts—this is a different system and all iTunes customer information is completely safe, Apple told me.

It’s also important to note that the hacker did not get access to any app code or even the servers where the app information was stored. The hacker also did not get access to any credit card information.

The only thing that the hacker could have gotten access to was the names, email addresses and mailing addresses of the developers. At this point, Apple doesn’t know if the hacker even managed to see that information. Worst case, that is all the information they would have seen, according to Apple."

http://www.loopinsight.com/2013/07/21/apple-comments-on-deve...


Alternately, inside the reality distortion field, developers' names, mailing addresses, and/or email addresses are not sensitive personal information.


Anyone who has my name can find my email address with a simple Google search. My mailing address is on all kinds of public records.


"Give me a list of all registered developers email addresses" is a little harder though.


What use is the list without passwords?


Pretty sure it'd be easy to sell that list for several thousand dollars, over and over.


Spear phishing


Spamming them with something that may interest iOS developers.


If by "reality distortion field," you mean the legal community, then yes. Having been through something similar recently, the lawyers will likely be making sure everyone involved understands the concept of PII: https://en.wikipedia.org/wiki/Personally_identifiable_inform...


How are names, mailing addresses, and email addresses sensitive personal information?

I would imagine that for most of the people signed up, it wouldn't be that hard to track down their name and email just from knowing the name of their app.


Name and e-mail, I'm kinda with you. Everyone who uses my app knowing my current mailing address I look at a little bit differently.


One of the understood requirements of publishing an app on the App Store is that developers must provide some means for customers to contact them directly (support page, email address, etc). If you're selling apps on the app store, people can already peddle their wares to your email account.

So yeah, developer's names, addresses and emails are not secrets by any means. Why would anyone buy an app from someone they had no means of identifying?


iOS developers != App Store vendors. There are plenty of developers who work on other people's apps for a living.


Names, email addresses, and mailing addresses aren't particularly sensitive. These are all pretty easy to get for most people without hacking anything.


Apple apparently doesn't agree that, in context of the data they have on you, your name, mailing address or email address qualifies as sensitive.


"The intruder had good intent in trying to "secure" our personal information. But despite nothing being hacked, as it was only a 'threat', we still need to tear down and rebuild the system from scratch. In the spirit of transparency we've waited 72 hours before giving you this nonsense bullshit. Please note that some (that is, all) of you will from now on get regular Viagra offers in Cyrillic. Good for you!"


  secure
  verb [ with obj. ]
  
  2 succeed in obtaining (something), especially with difficulty


I downloaded the CRL for developer certificates [1] and quickly looked at it using grep:

  grep -E "Revocation Date: Jul 17 .{8} 2013" wwdrccrl.txt | wc -l
      3065
  grep -E "Revocation Date: Jul 18 .{8} 2013" wwdrccrl.txt | wc -l
      2289
  grep -E "Revocation Date: Jul 19 .{8} 2013" wwdrccrl.txt | wc -l
         2
  grep -E "Revocation Date: Jul 20 .{8} 2013" wwdrccrl.txt | wc -l
         0
  grep -E "Revocation Date: Jul 21 .{8} 2013" wwdrccrl.txt | wc -l
         0
These are the two certificates that were revoked on the 19th:

  grep -A 3 -B 1 -E "Revocation Date: Jul 19 .{8} 2013" wwdrccrl.txt
      Serial Number: 2628C7F90970D227
          Revocation Date: Jul 19 03:14:04 2013 GMT
          CRL entry extensions:
              X509v3 CRL Reason Code: 
                  Key Compromise
  --
      Serial Number: 1A51ABFA4844BD45
          Revocation Date: Jul 19 03:24:03 2013 GMT
          CRL entry extensions:
              X509v3 CRL Reason Code: 
                  Key Compromise
To generate the wwdrccrl.txt file I used:

  openssl crl -inform DER -text -noout -in wwdrca.crl > wwdrccrl.txt
Just to be clear: every entry there lists the reason as Key Compromise. It's just interesting that they usually seem to revoke at least 2,000 certificates a day but suddenly stopped on the 19th, revoking only 2.

[1] http://www.apple.com/certificateauthority/


That's because the portal is down and people can no longer log in to revoke their developer certificates.


That's what I thought, but it was also down on the 19th, yet 2 were revoked.

Probably means nothing however. I doubt that anybody with the ability to get into the system would want to get only developer certificates.


"Completely overhauling our developer systems, updating our server software, and rebuilding our entire database."

That does not sound like an intruder "attempt" by any means.

They got hacked, and they got hacked bad if they're rebuilding databases and overhauling entire enterprise-class systems over there.

Transparent my ass. They're deep in the gutter: 3 days and counting with no fix, engineers probably working 24 hours a day, and the entire site still down. This isn't a small-time breach, folks. They had to go public considering it will probably be down for a few more days...


A little more info from TC: http://techcrunch.com/2013/07/21/apple-confirms-that-the-dev...

Update — Just got off the phone with an Apple rep, who confirmed a bit more:

- The hack only affected developer accounts; standard iTunes accounts were not compromised

- Credit card data was not compromised

- They waited three days to alert developers because they were trying to figure out exactly what data was exposed

- There is no time table yet for when the Dev Center will return


There is an interesting comment at techcrunch:

http://fyre.it/tjlVmC.4

"[...] One of those bugs have provided me access to users details etc. I immediately reported this to Apple. I have taken 73 users details (all apple inc workers only) and prove them as an example.

4 hours later from my final report Apple developer portal gas closed down and you know it still is. I have emailed and asked if I am putting them in any difficulty so that I can give a break to my research. I have not gotten any respond to this.. [...] "


Looks like Apple is using Google Web Toolkit; those are GWT RPC responses. He probably found a CSRF attack.


I read the comments dismissing Apple's handling of this. What would you have expected them to do? There is a LOT of forensics going on, probably even now, trying to get a handle on this. A massive corp isn't going to make an announcement until they have some idea what they're talking about. In my book, 4 days is a very quick first announcement from a company of this size.


These details are befuddling. "Personal information was encrypted and cannot be accessed". It can't be accessed because it's somehow stored elsewhere, or it can't be accessed because of the encryption? That is, does the intruder currently own my encrypted data?

I'm also disappointed that it took them 72 hours to tell us anything, and that the update doesn't even have a timeline for when the site may be back. "Soon" is meaningless.


Apple should have kept us informed from day one.

But, in their defense it may take days to ascertain what exactly happened.

Once a system is compromised it's nearly impossible to trust anything about it. Auditing the logs, and reviewing the code, crypto, and the mix of platforms they're using (see https://news.ycombinator.com/item?id=6078854) in order to understand what data could be accessed and fix all vulnerabilities is not an easy task.

The PlayStation store was not down for such a long time without reason.


Yeah, I'm confused why companies tell us DAYS after something serious happened as opposed to right away. I can understand waiting a day, but 3 whole days?! I just don't understand the delay.

It's our data, we should have the right to know what happened to it.


"why companies tells us DAYS after something serious happened"

Companies are people. And all the relevant parties involved in handling this may not be available to make a decision as quickly as it needs to be made. Or at least quickly enough to satisfy everyone.

Do you feel you suffered any harm in particular by the delay of three days?


I'm mainly complaining about the delay in telling us anything.

What I would love is an update whenever they suspect an intruder has accessed sensitive information. Many websites like Dropbox and last.fm do have a server status where they tell us if they have any planned maintenance or just general status of the server. Why can't Apple and the rest of the big companies do that?

Also, Apple first said it was just regular maintenance. I'm just confused as to why they said that instead of telling us the truth.


Likely because if you say "we are investigating a possible data leak" and then end with "we discovered it was an undocumented maintenance event by someone on the engineering staff, we have added more detailed logging as well as a better maintenance process so we can be clear about this in the future", many people will think the worst. It's unfortunate.


It can take more than a day to know what happened to your data.


Yes, but it doesn't take more than a day to know that an intruder had accessed the system in a way that may have compromised your personal information.

Basically, once the problem was serious enough that they felt like they needed to take the site down, I'm pretty sure they knew which machines had been accessed (or at least may have been accessed). They knew that some of those machines had developers' personal information. They could have posted as much up front, rather than waiting 3 days to do so.


"I'm pretty sure."

No, they likely took the portal down as soon as they knew there was a breach. Highly unlikely they left it up while they investigated, and it takes time to figure out what happened and how much information was taken.

What motivation would there be to wait anyways?


> No, they likely took the portal down as soon as they knew there was a breach. Highly unlikely they left it up while they investigated, and it takes time to figure out what happened and how much information was taken.

They still haven't said anything about how much had been taken.

My point is they knew how much could have been taken. They knew what machines were at risk; hence taking them down. If those machines that were at risk had sensitive personal information, they should have notified the people affected right away, not three days later.

Taking the site down, with no indication of why, and waiting three days to tell people that their personal information may be at risk (and remember, the possibly compromised information includes credit card numbers, as there are a number of things you need to pay for in your developer account) is just crazy.

You should be upfront and transparent when the breach first occurs. Of course you don't know exactly what has been compromised; but they are still being plenty vague even three days later. If they had posted three days ago what they posted today, it would be a lot more reassuring.


As I interpreted it, yes, the intruder does have your encrypted data.


I hope you were using good encryption, Apple. This guy cracked 400,000 MD5-hashed passwords: http://www.youtube.com/watch?v=0WPny7wk960 He says that even with password salts, it could be done.


Unsalted MD5 has been extremely easy to crack for years. Not sure why the claim would be impressive at this point.


To be fair, unsalted MD5 hashes are only one step above storing passwords in plain text.
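A quick stdlib demonstration of why. An unsalted MD5 database falls to one precomputed lookup table; a per-user salt forces the attacker to redo the wordlist per account, but MD5 is fast enough (billions of hashes per second on GPUs of that era) that salting alone barely slows a dictionary attack -- hence the "even with salts" claim above. Real systems should use a deliberately slow KDF (bcrypt, scrypt, PBKDF2) instead.

```python
import hashlib

wordlist = [b"password", b"123456", b"hunter2", b"letmein"]

# Unsalted: one table cracks every account in the dump at once.
leaked = hashlib.md5(b"hunter2").hexdigest()
table = {hashlib.md5(w).hexdigest(): w for w in wordlist}
assert table[leaked] == b"hunter2"

# Salted: the precomputed table is useless, but a fresh per-user pass over
# the wordlist still cracks it -- salt only multiplies work by user count.
salt = b"a9f3"
leaked_salted = hashlib.md5(salt + b"hunter2").hexdigest()
assert leaked_salted not in table
cracked = next(w for w in wordlist
               if hashlib.md5(salt + w).hexdigest() == leaked_salted)
assert cracked == b"hunter2"
```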


I could crack your unsalted MD5 password with my Game Boy Color.


> "In the spirit of transparency, we want to inform you of the issue."

Ha, what a joke, I can't help laughing at that.

With so many third-party Apple developers drinking the Kool-Aid and dreaming of becoming rich, I'm not surprised Apple treats them like fools.

Just yesterday on Twitter, some developers were speculating that the site was taken down to be updated with new SDKs for exciting new features and product lines.


I'm pretty sure in a fully honest world that would read "In the spirit of not risking running afoul of the security breach statutes in various states we do business in, we want to inform you of the issue".

Apple is great at a lot of things, but I don't think even their most ardent fans would argue that transparency is one of them.


Hmm so it only takes a few days to "completely overhaul" their developer systems? Not sure I believe this is what they're actually doing. And why haven't they updated their server software before? I know mistakes can never be completely avoided, but this seems slightly amateurish for a company with so much cash.


> this seems slightly amateurish for a company with so much cash

I know there are people here who probably have been in the start-up space for all of their working life, but never underestimate how piss poor architecture can be at big companies.

I would place a large amount of money that every single person here who has done a stint at a large corporation has a horror story about terrible, awful architecture, outdated practices, and shoddy insecure software. And yes, that includes companies like Facebook, Google, etc.


Case in point: http://bugreporter.apple.com still reflects the UI style of OS X 10.0


> I know there are people here who probably have been in the start-up space for all of their working life, but never underestimate how piss poor architecture can be at big companies.

Systems that are over 5/10/etc years old and are part of a multi-million/billion dollar system even more so.

> who has done a stint at a large corporation has a horror story about terrible, awful architecture, outdated practices, and shoddy insecure software.

My god that rings so true. ;_;


There was some discussion in previous threads about how their systems seem (like many others, to be fair) to be a taped-together mess of old perl scripts and server-side includes and CMSs and who knows what else. So they're probably trying to come back online with a cut-down version that presents a smaller attack surface and contains only components that have been vetted in the last 10 years, which is overall not a bad idea.


I've worked on some of those systems. This isn't true.

Most of their web apps are WebObjects/Java apps. Sure some of them would be classed as legacy but it is hardly different from most other enterprise companies. And their systems are completely separate from each other e.g. iTunes, Apple Online Store, ID auth, Developer Portal.


>a cut-down version that presents a smaller attack surface

... or simply a cut-down version they know how to run.

"Mhh, what did that old cron job do, again?"

"I don't know; it was written by Mike, he left 3 years ago."

"oh ok, so not important then."

<12 hours later>

"OMG OUR DB EXPLODED AGAIN! That script was clearly essential! Can you rewrite it??"

"Er, I've looked at it: 2000 LOC of obscure Perl. Gonna take a while..."


>> "Hmm so it only takes a few days to "completely overhaul" their developer systems?"

They've already been down a few days and haven't said when they'll be back up so it's impossible to know yet how long it'll take.


I understand everyone's frustrations with this, and the fact that Apple haven't been immediately clear on exactly what happened. As a developer, I too am alarmed by what has happened.

But these things are complex, and it takes time (i.e. a few days) to fully and properly evaluate what has happened and what information leaks/security breaches have occurred.

Let's give this a reasonable amount of time, and only then pass judgement on their handling of the case.

I don't want to appear like an Apple apologist - and maybe it is a serious fault on their side. But in fairness I do think it's reasonable we give them time to evaluate & respond appropriately.


No reason to be up in arms, folks. They've got the marketing team working on this too.



Marketing has financial responsibility but engineers still develop and maintain it. And there is lots of cross collaboration with iTunes Store and Apple Online Store teams.

WWDR is all about evangelising the platform to a technical audience. Of course it belongs in marketing and not engineering, i.e. it involves road shows, presentations, outreach activities, etc. Not everyone who is technical is a developer, remember.


Never underestimate the skill of the 2AM janitorial staff, too.


Uh, how does this "encryption" work?

For the website to show these details (and it does, in part, use these details in the interface) it must be able to decrypt them on the web application side. Ergo the keys for decryption must also be on the server, or derived from the users' passwords, both of which make the use of encryption a fairly worthless venture.

ED: As another commenter mentioned in an earlier thread, lots of other AppleID facing applications are gone as well ( https://ecommerce.apple.com/ ), so it would be interesting to find out how far this all goes. The websites don't seem that far disconnected from the information in iCloud.


I think the 'sensitive personal information' is passwords, so the names, addresses etc which are displayed on the site were not encrypted.


Maybe they phoned for a ransom after breaking in?

Your post is pure speculation and depends heavily on what Apple means by 'sensitive'. I'm guessing that Apple means your CC numbers, certs, shared keys, etc.

Possibly also your support tickets, your bank numbers, etc.

As for how encryption works, I'd suggest Applied Cryptography by Schneier. I think there's a problem in that book about Bob keeping speculative posts to Alice secret from Eve. After reading that book, I'd suggest applying for a job at Apple to give you first-hand knowledge of what they're actually doing, and then you could make an informed judgement about what may or may not have been exposed.


AFAIK, it was a white (or grey) hat hacker. See the comment on the techcrunch article. He nabbed Apple employee details, as proof. But they are probably worried that someone else has also done it.


I am aware how crypto works. I was commenting on there being no source for a key in this situation, rendering it a fairly useless venture.


In a case like this, there would (normally) only be one secret to find. (IVs, or at least the information IVs are derived from, and so forth would be stored with the data.) That doesn't necessarily make it easy to break if the key was securely handled, but it does make it catastrophic if the key is determined.

Apple's email essentially says, "we don't think they have the key, but..." And a complete investigation, along with changes to the system and an opportunity for users to change data as soon as possible under the new system, is the right way to go about it.
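The "one secret to find" point can be sketched in code. This is a toy illustration only (NOT real crypto; a production system would use AES-GCM or similar), with invented field values:

```python
# Toy illustration of the point above (NOT real crypto; a production
# system would use AES-GCM or similar): every record is protected by
# one key, while each record's nonce/IV is stored in the clear next to
# its ciphertext. Whoever determines that single key reads everything.
import hashlib
import itertools
import os

def keystream(key, nonce, length):
    # Derive a pseudo-random byte stream from the key and per-record nonce.
    out = b""
    for counter in itertools.count():
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)  # not secret: stored alongside the ciphertext
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key, nonce, ct):
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)  # the one secret to find
records = [encrypt(key, email) for email in (b"alice@example.com", b"bob@example.com")]
# The server can decrypt -- and so can an intruder who obtained `key`:
print([decrypt(key, n, c) for n, c in records])
# [b'alice@example.com', b'bob@example.com']
```

Note that the nonces sitting in the database hurt nothing on their own; the whole scheme stands or falls on how that one key was handled, which is exactly why Apple's "we don't think they have the key, but..." phrasing matters.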


I got this email about an hour ago. I feel sorry for the folks who are "updating our server software, and rebuilding our entire database". Songs will be sung in the opsen bars about about this battle.

From the sound of the email it suggests they have records of some data (perhaps not sensitive data :-) being compromised but no root cause on how it was compromised, so they are rebuilding systems from the ground up, validating and configuring each one before moving to the next. There are times when this is faster than spending time trying to root-cause the exploit.

That said, this is where privacy and security collide: logs going back months of what everyone has done on every system really help reconstruct things, but of course if you keep those logs, it means someone else can abuse them.


Good to see some transparency on Apple's part here.

I understand this must be a very challenging situation for them to deal with, and I appreciate the notification. As I'm sure many developers feel, I'd like to know more details, but I'm sure these will come in due course.


It's a strange world we live in when every time we're told by a big corp that our personal info was compromised, we're grateful for being told.

This is the world's most cashed-up corporation. They could buy entire countries, yet they made a conscious choice not to update their server software or hire more competent sysadmins.

There shouldn't be a way for them to gain marketing wins out of this. There should be a law requiring notification when personal information is compromised.


This isn't about marketing. It's about a security breach. And security breaches take time (> 2 days) to properly investigate and report.

It's entirely possible that this is a massive oversight by Apple and they've been extremely negligent in their security policies.

It's equally possible that there's some bug (that either you or I could easily have made the mistake of introducing) that's resulted in this being possible.

Let's calm things down, give it a few days, and then evaluate. Nobody can make an immediate judgement about the exact causes of problems like this. If you're making judgements at this point, you really have no idea whether you're being accurate or not.

And yes, if it turns out to be negligence on Apple's part, I'll be very angry. But let's wait and see.


My argument was that you were impressed by the press-release. Your opinion of Apple was improved such that you made a post in public expressing your admiration of them for telling you that they'd lost some data that you'd entrusted to them.

This shouldn't happen.

When my 4yr old tells me he did something "wrong" without any prompting (eg. "Dad, I broke your phone"), I'm impressed because he didn't have to out himself, but did so because it was the right thing to do.

Large corporations rarely think in terms of right and wrong... they have a duty to their shareholders, and nobody else. As far as their shareholders are concerned, they shouldn't release damaging information unless not doing so could potentially negatively impact profits down the line. So when Apple tells you they messed up, they're only doing so because they're worried you might find out some other way, which would be worse for them. They aren't doing it out of the kindness of their hearts.

Now if there were a law requiring the disclosure of incidents such as this when personal information is compromised, then Apple wouldn't have a choice in the matter, and they wouldn't be able to fool people like you into thinking they're awesome when they just lost your data through negligence.

> It's entirely possible that this is a massive oversight by Apple and they've been extremely negligent in their security policies.

They just said they'll be updating their software. Why would they do that if they didn't think it would make the data safer? It's pretty much an admission that they chose not to update the software earlier, i.e. someone made a decision to use outdated software.


Your post basically amounts to a conspiracy theory.

"It took them 3 days to tell us something happened. Obviously this means they would have kept it secret if it were at all possible."

It takes time to figure out what happened in a breach. That doesn't mean that Apple is some evil company trying to hide the fact that there was a breach.


No, I made no mention whatsoever about how long it took them to tell us. Did you even read my comment?

> That doesn't mean that Apple is some evil company trying to hide the fact that there was a breach.

I never said that they were trying to hide anything. Again, did you even read my comment?

> Obviously this means they would have kept it secret if it were at all possible.

Well yes, that is logical. A corporation would keep such a thing secret if they had a guarantee that there was no other way people could find out. There are good people working at Apple, but they are not Apple. A corporation doesn't have morals. It will not damage itself and threaten profits just for fuzzy feelings, any more than it will drop the price of the iPhone 6 to $20 because that would be a good thing for the poor.


> if they had a guarantee that there was no other way people could find out

Do you believe this is unique to corporations? Would "real people" always do the right thing even if they had a guarantee nobody would be able to tell?


Yes, many people would. I for one.


> And security breaches take time (> 2 days) to properly investigate and report.

We can see that they did their job correctly by the overwhelming amount of details they provided us with.


>> It's about a security breach.

Well, you're very much mistaken. It's not about a security breach at all. If you read carefully, you'll notice it's merely about a "security threat".

:)


> if it turns out to be negligence on Apple's part, I'll be very angry

I doubt you'll need to get angry - because if it's negligence, you/we won't be told.


There is a law that requires them to notify people immediately. I suppose their excuse for waiting three days will be that they did not know for sure whether any personal data was acquired by the intruder. There's going to be a debate on what they needed to know to "reasonably believe" that data had been accessed. http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_13...

(b) Any person or business that maintains computerized data that includes personal information that the person or business does not own shall notify the owner or licensee of the information of any breach of the security of the data immediately following discovery, if the personal information was, or is reasonably believed to have been, acquired by an unauthorized person.

(c) The notification required by this section may be delayed if a law enforcement agency determines that the notification will impede a criminal investigation. The notification required by this section shall be made after the law enforcement agency determines that it will not compromise the investigation.


Transparency? After 3 days? Transparency would be telling us right away (with an update as soon as they know more). They should also tell us what kind of info was taken and what does sensitive information mean to them since they don't seem to be sure about that.

> Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed.

Edit: and seriously, what does this "updating our server software, and rebuilding our entire database" mean?


> Edit: and seriously, what does this "updating our server software, and rebuilding our entire database" mean?

my WAG: paving systems; reinstalling the server OS; updating packages; restoring db from known clean backup; replaying logs/binlogs that are known clean?

That would be my guess.


Also, let's not jump to conclusions until we have more info. This is clearly a very serious and sensitive incident.


Any idea what "rebuilding our database" means? Reticulating the splines? I hear those go out of alignment sometimes.


There are lots of moving parts in a database, and lots of places one can hide back doors for later access -- triggers, stored procedures, etc. If you're not sure how hard you got owned, nuking, paving, and auditing is usually the best course of action.
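A minimal sketch of that auditing step (SQLite as a stand-in for the demo; the table and trigger names are invented for illustration):

```python
# Sketch of auditing a database for planted triggers, one of the hiding
# places mentioned above. Uses SQLite as a stand-in; the table and
# trigger names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE exfil_log (email TEXT);
-- Something an intruder might leave behind to capture data later:
CREATE TRIGGER shadow_copy AFTER INSERT ON users
BEGIN
  INSERT INTO exfil_log VALUES (new.email);
END;
""")

# Enumerate every trigger so each can be diffed against what's expected:
triggers = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'trigger'")]
print(triggers)  # ['shadow_copy']
```

Doing the equivalent sweep across every schema object (triggers, procedures, scheduled jobs, accounts) is tedious enough that "rebuild from a known-clean state" often wins.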


"In the spirit of transparency". Right, Apple.


In the spirit of transparency, we're giving you a vague warning that some information might have been accessed _4 days ago_.


That's not a long time to receive a letter like this. That's as fast as Apple instantly responds to anything, esp. considering the weekend.

And the site was down, so it was clear something was going on.

It is also extremely transparent in the sense that people were wondering this exact thing even earlier today, and now received a response detailing that this is an extremely severe breach, as opposed to something else. What more do you want on a Sunday?


Well I'm giving them a hard time due to the massive schadenfreude, obviously. Still, this is very vague about what the 'sensitive personal information' is (passwords?), what was encrypted, what was hashed, was it using a proper hashing scheme, etc.

And announcing it just because people have started to speculate is damage control, not taking responsibility.


>> "And announcing it just because people have started to speculate is damage control, not taking responsibility."

It's possible it took them a few days to figure out exactly what was taken, and they waited until they had as much info as possible to make a statement. I doubt Apple would be following blogs during a situation like this, with someone making the decision: "oh, people have started to speculate about what's happening - I think we should make a statement."


Schadenfreude? Do you mean something other than just enjoying their misfortune?

And 'taking responsibility' means solving the problem. They have told us what they are dealing with. What more do you want from them?


There's a security researcher commenting on TechCrunch claiming he's responsible for the breach here: http://fyre.it/tjlVmC.4

His proof uploaded to youtube: http://www.youtube.com/watch?v=q000_EOWy80


Taking 40k records is more than just penetration testing.


For what it's worth, Wednesday morning at 4am I had an email account associated with my developer account compromised (they both stupidly used the same password). This account was used for almost nothing but accessing my developer accounts at Apple. At the time, I thought my Apple accounts might be in trouble and I immediately changed all my Apple-related passwords as well as regained control of my email account. I'm now wondering if the breach might have gone the other direction...


At the time I re-secured the two accounts, I also changed my Apple developer account to a new email with 2-factor auth. Apple is still sending these announcements to the email I changed >72 hours ago.


> In order to prevent a security threat like this from happening again, we’re completely overhauling our developer systems, updating our server software, and rebuilding our entire database.

I am wondering what the thought process behind this gem was. I think this looks like a knee-jerk reaction, and it's particularly lacking polish coming from Apple. I mean, clearly Apple knows that "overhauling" systems and updating software is no guarantee of future security. It's not a one-time fix - it's an ongoing process. And rebuilding the entire database - that's just crazy talk! This is especially inexcusable because the target of this update is developers!

Security is hard - you've got legacy crap, 3rd party/unsupported code, you've got open source code and then you have your own code that has evolved to be a Frankenstein. I don't have a problem with Apple getting it wrong once - but the statement does nothing to make developers confident that Apple will finally get web services right.


`rebuilding our entire database`. So the database was... destroyed...?


I'd let them slide on that if they had the brass to announce it was running on CoreData.


> intruder attempted to secure personal information

haha "secure". Am so using that word next time my site gets hacked.


get a dictionary.


Could it be related to CVE-2013-2251, which was released on 07/20? The URL developer.apple.com/devcenter/ios/index.action looks Struts-like.
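The bug class behind CVE-2013-2251 is double evaluation: a request parameter intended as data gets run through the OGNL expression evaluator. A toy Python analogy (not actual Struts/OGNL code; the payload string is invented):

```python
# Toy analogy in Python (not actual Struts/OGNL code) for the bug class
# behind CVE-2013-2251: a request parameter meant to be data is fed to
# an expression evaluator and executes on the server.
def render_unsafe(param):
    # Vulnerable pattern, roughly what "redirect:${...}" style payloads exploit.
    return str(eval(param))

def render_safe(param):
    # Safe pattern: user input is only ever treated as data.
    return param

payload = "__import__('os').getpid()"  # invented stand-in for an OGNL payload
print(render_safe(payload))    # echoed back verbatim: harmless
print(render_unsafe(payload))  # actually executes, returning the process id
```

Whether this particular CVE was the vector at Apple is pure speculation, but the timing lines up with the advisory mentioned at the top of the thread.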


Jeez people, a company identifies a hack attempt, stops it, and makes sure it never happens again. How often do you hear that one? Most companies don't even tell you anything happened and if they are forced to, they don't even admit anything bad happened (we only exposed 80,000,000 credit cards, no biggie).

If my employer suffered this I doubt they'd even tell the employees.

What do all of us do when we find a security issue?


The thing that people are getting annoyed at is Apple are claiming this update to be in the name of transparency, even though it's 3 days late and is worded quite poorly.

>Most companies don't even tell you anything happened

Seeing as you're confident with the "most companies" part, name me 5 big tech companies that suffered a data breach and didn't tell the public, or only did so when forced to.

There are laws you know, about informing people about (potential) security breaches.


You can't have it both ways. They were transparent. Complaining that it was 3 days after the incident is irrelevant since we don't know how much investigation was required for them to understand the problem.


>You can't have it both ways.

Why not? See: Ubuntuforums

>we don't know how much investigation was required for them to understand the problem.

I agree, but I find it hard to believe a company the size of Apple, with the talented workforce they have, couldn't have identified that they might've been breached within 3 days.

3 days.


Thanks Apple! This email was super helpful; now I know exactly what's going on.


I sense some sarcasm here, but I don't get it. Yesterday the site was just down, now we officially know why and have some sense of a timeline. It seems reasonable enough to me—what more do you want?


Then let me clarify...

1) The site has been down since Thursday, not yesterday.

2) You can't "overhaul" and expect to deploy "soon", so wtf are you doing, Apple?

3) "Soon" is not a timeline, at least not in the real world.

4) What info got owned? What could be affected?


This may explain some strange occurrences I had yesterday.

Starting at 7am, I received an Apple ID password reset request every 4 hours and 19 minutes, ending last night at midnight.

This Apple ID is also the login for my personal developer account (several years old). My developers IDs used for work never received a password reset request.


I highly doubt the hacker's plan was to get email addresses and try to brute-force from there... it just doesn't make sense.

If you search Google, people report receiving password reset emails all the time, going back years, even repeated ones.

Email addresses are in the clear all the time, and I've never heard of them being considered sensitive before. You should assume everyone has your email address.



I wonder if the hackers managed to get code signing keys out? Ultimate jailbreak?


I highly doubt Apple keeps their master keys anywhere near a public facing web server.


The dev center seems to be able to autogenerate code signing certificates at least. But maybe those can be revoked via online checks. I wouldn't mind having a wildcard enterprise cert with a 20 year expiration =)


Access to an API that issues certificates from a different server one at a time is a different thing from having the actual private signing key.


Certainly, but an unrestricted code signing certificate would be quite useful too (until they are revoked)


Mmmm... maybe. Okay, let's say you can sign code as anyone, even Apple itself, and create rogue apps.

Now, how do you use that information to compromise iOS devices? You probably won't be able to get it in the App Store, and the iOS devices won't install from anywhere else. You could make an Ad Hoc distribution package, but for that you need to know the UDID of each device and convince your victim to download the rogue app from somewhere other than the App Store.


You can install .ipa files easily via http and mobilesafari, and enterprise certs are valid for ALL UDIDs :)

So (again, assuming no revocation), you could set up a web based alternative app store, or re-sign cracked apps/games, or just enjoy being able to run code on your own devices (and distribute to others without going through the app store) without maintaining the $99/year subscription, or you could start linking/redirecting unsuspecting web browsing users to install malicious apps (would only need 1 confirm click)


If you have Apple's code signing keys, you can boot whatever software you want to on an iPhone. This has never been possible outside of Apple, except in the very rare cases of bootrom exploits (see limera1n).


Which wouldn't take very long. I'd be amazed if Apple wasn't using a hardware security module to store their intermediate signing keys and logging the keys generated from it.

I wonder where the CRL for the dev certificates is -- we might see an update to it soon.


Some info on enterprise adhoc ocsp services are here: http://stackoverflow.com/questions/9216485/how-to-manage-ent...


I can't feel too bad for Apple. They use WW/Struts but when was the last time they contributed to the project? They never have. Open source volunteers do their best but unless big corporations want to spend their own money, and do their own security assessments, and contribute back anything they find, what do you expect? It's great when you get things for free, but when you're sitting on billions, send some back to the community you're using code from.


Could it be related to the similar attack on the Ubuntu forums? Maybe it was a single group of hackers targeting servers where they know a lot of developers have accounts.


Is the encryption not good enough (and I mean in general, when sites get bcrypt'd passwords stolen, etc.) when owners are worried the encrypted data is in the hands of intruders?

As a developer I'd still be concerned if I lost such data even when encrypted - so I understand - but what measures can be put in place so that, as a developer/site owner, you can be certain the encrypted data will never be decrypted by the attacker (e.g. it would take trillions of years)?
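For passwords specifically, the usual answer to the "trillions of years" question is a salted, deliberately slow hash rather than reversible encryption. A minimal stdlib sketch (PBKDF2 standing in for bcrypt/scrypt; the iteration count is illustrative):

```python
# Sketch of the property asked about above: a salted, deliberately slow
# password hash (PBKDF2 here; bcrypt/scrypt embody the same idea) turns
# a stolen database into a per-user, per-guess brute-force problem.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative cost factor

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # unique per user, stored with the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("letmein", salt, digest))                       # False
# Same password, different users -> different digests, thanks to unique salts:
print(hash_password("letmein")[1] != hash_password("letmein")[1])  # True
```

Names, addresses, and certificates can't be hashed this way because the site must read them back, which is why those fields can only ever be as safe as the key management around them.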


If anyone thinks this is the complete truth, well, be prepared to be fooled many times more. I mean, the thing has been down for 3 days now. This must be a huge breach.


No. That tells us nothing about how big the breach is. Only how much effort it is taking for them to be confident that they've properly patched it.


I bet Forstall did it.


Yep, I can confirm I just got this as well.


+1


Same. It'd be helpful if they had a permalink for the email, as I'd imagine a lot of developers (including myself) have posted or will post it to Hacker News.


I figure that's probably why they don't have a link for the e-mail - they don't want their developers posting it publicly.


Imagine what you could do here:

- break into facebook or twitter or any other high-profile dev account

- reissue new code signing keys

- crack the latest public app and patch in a backdoor

- code sign with new keys and submit as an app update


How? They took it offline.


Yes, when they discovered it. We don't know for how long they have actually been compromised. Also, imagine if it happens again and is not discovered.



That's why I started the original post with "Imagine..." :)

(Plus, a single PR release about one incident doesn't exclude the possibility of other (known or unknown) incidents taking place)


Well at least it was "only" the dev center, and not iCloud and iMessage!


It's not uncommon for developers to use the same credentials for both their developer account and their iTunes/iCloud account. I do.


Apple use a centralised credential system. If I were to speculate, I would assume what's happened here is that the metadata attached to the developer portal (developer contact information, company info, etc.) was compromised, not the actual Apple ID. This would explain why Apple are saying no 'sensitive' information (passwords?) was taken.


Good scary point (I don't). At least I've pre-emptively disabled Find my Mac to avoid another Wired-like remote wipe. Imagine that being pushed to all ios and mac developers at once!


I'm more interested in the identity of the intruder for some reason. Who/what are they? Presumably there are easier targets to steal credit card numbers from, for example.


>> and rebuilding our entire database.

maybe someone dropped or polluted the database after hacking it, so they need to rebuild the entire database from other sources?


Manage your Apple ID/password/security questions here: https://appleid.apple.com


What use would changing passwords be if passwords weren't stolen?

If you think having your email address out there means you are at higher risk of being attacked, I've got news for you...


They said sensitive information is encrypted and can't be accessed. My interpretation of that is that the plain text can't be accessed, but attackers may have the encrypted sensitive information (e.g. passwords). Depending on the strength of their encryption and the key used, though, it might be perfectly accessible. In the absence of transparency on actual encryption details, you're probably better off assuming the data is compromised than not.


"Depending on the strength of their encryption though, and the key used"

I trust that Apple is competent when it comes to encryption at this point. I agree that the statement was ambiguous as to whether the data was actually taken.


Hopefully, but from the comments this is an old, hacky system based on old software with critical vulnerabilities. I don't imagine their encryption reflects that, but until it's clarified it's probably better to assume it does.


Apple jargon for "oh "


If the intruder is a patent troller, getting developers' names and mailing addresses can be pretty harmful.


How exactly would that be harmful?

If a patent troller wants to find out who is behind an app, they would go through the legal system and use a subpoena.

Literally no reason whatsoever for them to hack a website to get it.


Nobody would take that risk. It wouldn't be difficult to figure out how they got all the names and addresses and would seriously backfire.


Thanks Apple. Now what really happened?


Is there any other source that this actually happened, besides a guy posting some text on HN?




I received the same email.


I also got the same message. It's a real email from Apple. Also, expect to see stories about this on the usual tech sites within minutes.


Wow, if they're "overhauling" everything, that means Apple knows that hackers got some or all developers' info. So it's not just that they can't "rule it out" - they just don't want to publicly announce it.


I get the feeling that the most outraged have never used the Apple developer portal in their lives.


Glad that I use a password manager, and that I disable no-paste (via Firebug) in order to log in.


> Sensitive personal information was encrypted

sigh Tell us exactly what was and what wasn't encrypted.


How is a developer's mailing address not sensitive information for that developer? How does a tech company get away with making a blanket assumption like that?


Is there a database of intrusion attempts (and successful ones too) made at tech companies?


Until I see an email from Apple myself I will not see this info as credible.


I got the email, then came straight to HN to check the discussion.

  Received: by 10.50.11.202 with SMTP id s10csp27972igb;
          Sun, 21 Jul 2013 16:01:44 -0700 (PDT)
  X-Received: by 10.68.172.34 with SMTP id az2mr27321730pbc.201.1374447703980;
          Sun, 21 Jul 2013 16:01:43 -0700 (PDT)
  Return-Path: <developer_bounces@insideapple.apple.com>
  Received: from msbadger0508.apple.com (msbadger0508.apple.com. [17.254.6.162])
          by mx.google.com with ESMTP id yo6si9958126pac.15.2013.07.21.16.01.43
          for <XXX>;
          Sun, 21 Jul 2013 16:01:43 -0700 (PDT)
  Received-SPF: pass (google.com: domain of developer_bounces@insideapple.apple.com designates 17.254.6.162 as permitted sender) client-ip=17.254.6.162;
  Authentication-Results: mx.google.com;
         spf=pass (google.com: domain of developer_bounces@insideapple.apple.com designates 17.254.6.162 as permitted sender) smtp.mail=developer_bounces@insideapple.apple.com;
         dkim=pass header.i=@insideapple.apple.com;
         dmarc=pass (p=REJECT dis=NONE) d=insideapple.apple.com
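Headers like these can be checked mechanically; a small sketch using the stdlib email parser (the header value below is abridged from the ones quoted):

```python
# Sketch of mechanically checking headers like the ones quoted above,
# using the stdlib email parser. The header value here is abridged.
from email.parser import Parser

raw = (
    "Authentication-Results: mx.google.com; "
    "spf=pass smtp.mail=developer_bounces@insideapple.apple.com; "
    "dkim=pass header.i=@insideapple.apple.com; "
    "dmarc=pass d=insideapple.apple.com\n"
    "Subject: Apple Developer Website Update\n"
    "\n"
    "Last Thursday, an intruder...\n"
)
msg = Parser().parsestr(raw)
results = msg["Authentication-Results"]
checks = {mech: f"{mech}=pass" in results for mech in ("spf", "dkim", "dmarc")}
print(checks)  # {'spf': True, 'dkim': True, 'dmarc': True}
```

SPF, DKIM, and DMARC all passing for insideapple.apple.com is about as strong a signal as you'll get that the mail really came from Apple.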


Ok, don't think it's credible. The rest of us who've received the email can talk about it in the meantime.


You will get one soon...

http://i.imgur.com/PNMZrtn.png


Mine just arrived as I was reading this. I'll bet yours will be there soon.


I got email from Apple about 15 minutes ago with this text. It probably takes some time to send to everyone.


I got kicked out of an Apple store. I questioned a manager's managerial expertise. I took his angry picture at the door (Eric in Corte Madera). I am tempted to post it on YouTube, but feel working there is punishment enough? Oh yea, the reason he was furious at me is because I didn't like the way he was treating my salesman. I've never understood people who let a title go to their head? Off topic, just venting.



