P.S. I am sure I will get smashed in the comments, so let me say right away that NSA actions should be controlled and audited by the public (e.g. through our representatives in Congress). I think the biggest "evil" here lies with the members of Congress who either approved NSA actions or failed to do their job of monitoring/auditing the NSA properly. In particular, I would point my finger at Sen. Dianne Feinstein [D-CA], who should have been ousted from office a long time ago.
The assumption isn't bad - it's a private network line, not a public internet connection. Nobody else had access to that line, or at least they weren't supposed to. Splicing a fiber line is a bit outside the scope of your random attacker. You can't blame Google for not anticipating a hostile break-in by the government. The discussion should absolutely, 100% be directed at the NSA here. To accept that a private network connection is open season for the government to tap is batshit insane.
> Moreover, it is not clear if other governments or criminals also had access to the users' data (e.g. in Google's data centers located outside of the US). So far Google has not produced any public post-mortem, so we have no clue how bad the problem was.
How is Google supposed to tell you if they themselves didn't know?
Although from the leaks it sounds like everyone is fucked thanks to the GCHQ and the NSA getting friendly with each other.
If Google indeed does not know, then that is just another sign of security failures at the company. Nobody is perfect and security incidents do happen. But good security has defense in depth and built-in monitoring/audit measures that would, at the very least, let you determine post factum what happened.
Do you have your own data center building? And if you don't have your own data center buildings, how are you guarding against physical attacks? Because just saying "encryption" doesn't actually mean anything. Encryption isn't free, and at Google's scale that can add up. Useless encryption is just wasted power.
> For example, I don't want our system administrators to have an easy way to look at the traffic: yes, it is still possible to do but it is harder and requires some very unusual actions that will trigger alerts everywhere.
That can be accomplished in many ways that don't involve encryption. And your servers are all capable of decrypting the data at some point, so you still have to trust your sys admins and/or have alternative systems in place as they still have access to the unencrypted data.
> But good security has defense in depth and built-in monitoring/audit measures that would, at the very least, let you determine post factum what happened.
How, exactly, do you detect cable splicing? Much less audit said splicing? You seem to be asking for a hell of a lot more than "good security".
Some types of encryption are pretty cheap, actually. I used to use special SSL cards in servers 10-15 years ago, but today my laptop would outperform those cards and wouldn't even get hot :) Plus you need to remember that the relatively expensive public-key encryption only needs to be done for the key exchange. After that you run block or stream ciphers, and those algorithms tend to be really fast.
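To make that split concrete, here is a minimal Go sketch (all function names here are mine, purely illustrative): a one-time X25519 key agreement stands in for the "relatively expensive" public-key step, and the derived shared key then drives cheap AES-256-GCM bulk encryption for everything after the handshake.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveSharedKey is the one-time, relatively expensive public-key step:
// an X25519 key agreement, hashed down to a 256-bit symmetric key.
func deriveSharedKey(priv *ecdh.PrivateKey, peerPub *ecdh.PublicKey) ([32]byte, error) {
	secret, err := priv.ECDH(peerPub)
	if err != nil {
		return [32]byte{}, err
	}
	return sha256.Sum256(secret), nil
}

// encrypt is the cheap per-message step: AES-256-GCM over the bulk data,
// with a random nonce prepended to the ciphertext.
func encrypt(key [32]byte, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:]) // 32-byte key => AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt reverses encrypt and also authenticates the message.
func decrypt(key [32]byte, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(ciphertext) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	// Each side generates a keypair; in practice the public keys travel
	// over the wire during the handshake.
	alice, _ := ecdh.X25519().GenerateKey(rand.Reader)
	bob, _ := ecdh.X25519().GenerateKey(rand.Reader)

	aliceKey, _ := deriveSharedKey(alice, bob.PublicKey())
	bobKey, _ := deriveSharedKey(bob, alice.PublicKey())

	ct, _ := encrypt(aliceKey, []byte("inter-DC payload"))
	pt, _ := decrypt(bobKey, ct)
	fmt.Println(string(pt)) // prints "inter-DC payload"
}
```

On modern CPUs with hardware AES instructions, the symmetric part runs at gigabytes per second per core, which is why the one-time asymmetric handshake dominates the cost.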
So far I haven't seen any evidence that there was cable splicing. So, applying Occam's razor, I would assume the hack was much simpler than that. To detect the issue, I would start by reviewing the visitor log for the data center (assuming there is a visitor log).
I'll reiterate that security should be built on the defense-in-depth principle. Every single protection layer will fail, or someone will go around it. The assumption that a data center is "safe" is a bad assumption, period. You have to play the "what-if" game and think like the attacker.
What? The evidence totally points to cable splicing. What hack involves getting all the inter-DC packets but nothing else? Obviously the machines weren't compromised, or they wouldn't have cared about reverse-engineering the wire protocol. So what are you proposing was hacked?
> I'll reiterate that security should be built on the defense-in-depth principle. Every single protection layer will fail, or someone will go around it. The assumption that a data center is "safe" is a bad assumption, period. You have to play the "what-if" game and think like the attacker.
And I'll re-iterate that you're asking for a goddamn magical pony.
Side note: if your data center isn't safe, go get a new one. Seriously. Most DCs have tons of security to keep them safe. That's not an assumption.
I don't think there is any evidence at all. As far as I know, the only known thing is that the NSA was able to obtain unencrypted Google traffic. For example, it could have been a backdoor in a router, one extra cable in a switch, or a few other similarly low-tech options.
> Most DCs have tons of security to make them safe.
Don't disagree. But this doesn't make them invincible to other attack vectors (e.g. rogue employees). I've actually heard the same argument from quite a few people during interviews, and I usually don't hire them, because you have to be paranoid to get security right :)
And "discussed" is not accurate. It was proposed by a few but rejected by most as paranoid.
So when someone says: "You're just being paranoid", my reply will be: "Better paranoid than wrong."
You don't use telnet when you access your home server(s) from your laptop ... that's basically what they were doing.
They skipped over a zero-cost, obvious best practice, and I think we should be suspicious. Either they've run that part of their network in a stunningly negligent fashion ... or this was the ingress they gave to the NSA which could be plausibly denied later.
This was a point-to-point cable. The only access possible was physical, by digging it up and splicing it.
Obviously that attack was possible, but arguing that this is somehow "the same kind of attack" as running tcpdump on a router to sniff packets is just insane, sorry.
"The" government? The lines were also being tapped by organized crime, China, France, etc. Google severely failed at data protection.
Basically, I think Google's decision not to encrypt the traffic is gross negligence, and I would love to see someone sue Google for it.
Wow. Just wow.
If the courts get involved, it should 100% be against the perpetrator of the hacking, not the victim.
(And this is why some of us were concerned about CISPA, which uses the identical language. Note CISPA's proponents have quietly faded into the woodwork post-Snowden revelations.)
1. Data at rest (Adobe) vs. data in transit (Google).
2. Software hack vs. hardware hack.
The Adobe data was sitting on a server in a datacenter; it was accessible from the internet on some level. The Google data was taken, apparently, from a dedicated, Google-owned, unshared link (quite likely a fibre-optic tap).
The methodologies, skill levels, and hardware required for penetrating those two types of setup are wildly different.
I blame Adobe for getting a server hacked; it happens a lot, and they ignored a large body of knowledge built up over the years. I do not blame Google for getting their inter-datacentre links physically compromised by a security agency of the US government.
Nor do I blame them for (incorrectly, as it turns out) deeming that an unlikely scenario and therefore giving it low priority.
I would blame them for not doing anything about it now that they know it is happening, but that does not seem to be the case.
(I fully expect companies to encrypt data between datacentres if they are not on dedicated unshared links)
Indeed they do! From personal experience, Cisco was hawking its TrustSec inter-DC encryption solution five or six years ago, even over dark fibre.
There are numerous network devices that can handle AES-256 on 10 Gbps links as a matter of routine, whilst doing 'mundane' switching for the day job.
If you have the money, there is dedicated hardware that can handle the same at 100 Gbps; IP Cores is one company, from memory, that produces the circuitry for that. They can throw compression in there as well if you like.
Encrypting data links isn't magic. Google just didn't do it.
You don't need appliances here, as they can't handle the load; build the encryption into your application.
"Google’s encryption initiative, initially approved last year, was accelerated in June as the tech giant struggled to guard its reputation as a reliable steward of user information amid controversy about the NSA’s PRISM program,"
* Database: SSL connections for MySQL
* Memcached, Gearmand, and other tools that don't have built-in SSL support: simple home-grown message-level encryption (AES-256)
And of course, there are VPN tunnels between data centers in addition to the above.
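For a rough idea of what "simple home-grown message-level encryption" for a cache could look like, here is a hypothetical Go sketch (not the poster's actual scheme; `sealValue` and `openValue` are made-up names): each value is sealed with AES-256-GCM and base64-encoded before being handed to memcached, and opened on the way out, so tampering is also detected.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// sealValue encrypts a cache value with AES-256-GCM and base64-encodes the
// result so it can be stored in memcached like any other string. A random
// nonce is prepended to the ciphertext.
func sealValue(key, value []byte) (string, error) {
	block, err := aes.NewCipher(key) // 32-byte key => AES-256
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nonce, nonce, value, nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}

// openValue reverses sealValue; it fails if the stored value was modified.
func openValue(key []byte, encoded string) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(raw) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := raw[:gcm.NonceSize()], raw[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // in practice: fetched from your key-management system
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, _ := sealValue(key, []byte("session=abc123"))
	// sealed is safe to hand to memcached/Gearmand as an opaque string.
	opened, _ := openValue(key, sealed)
	fmt.Println(string(opened)) // prints "session=abc123"
}
```

The hard part, as the follow-up questions in this thread note, isn't the cipher; it's distributing and rotating the keys without the sysadmins being able to trivially read them.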
> And of course, there are VPN tunnels between data centers in addition to the above.
Could you please be more specific about the VPN solution you are using? How do you manage the shared keys? How do you make sure system administrators can't easily read the traffic?
Which could be done legally via 'Cisco Service Independent Intercept (SII)' built into IOS to comply with CALEA (Communications Assistance for Law Enforcement Act). And not so legally via user-escalation exploits within the same service.
Anyway, props for making the effort. I too am interested in your key exchange methods.
Phone is, I think, an obvious (and now clearly wrong) choice, although maybe always suspect if you are concerned about dark fibre. The endpoint security of the device generating and transmitting the key is now also a risk. How far up the chain do you worry?
An airgapped device to generate the key and a single person travelling between datacentres seems the most secure (although costly) solution. Obviously, if TSA/customs remove the device for inspection or connect it to anything, it needs to be thrown away (or moved to insecure duties) and the setup process restarted.