
Let's start from the beginning: the NSA "hack" became possible because Google (and its security team) made bad assumptions about the security of the connection between Google's data centers and did not encrypt the traffic. Basically, this is security 101: protect data at rest and protect data in flight. So, sorry, but I think the better subject for discussion would be how badly Google screwed up, not how evil the NSA is. Moreover, it is not clear whether other governments or criminals also had access to users' data (e.g. in Google's data centers located outside the US). So far Google has not produced any public post-mortem, so we have no clue how bad the problem was.

P.S. I am sure I will get smashed in the comments, so let me say right away that the NSA's actions should be controlled and audited by the public (e.g. through our representatives in Congress). I think the biggest "evil" here is the members of Congress who either approved the NSA's actions or failed to do their job of monitoring/auditing the NSA properly. In particular, I would point my finger at Sen. Dianne Feinstein [D-CA], who should have been ousted from office a long time ago.




> Let's start from the beginning: the NSA "hack" became possible because Google (and its security team) made bad assumptions about the security of the connection between Google's data centers and did not encrypt the traffic.

The assumption isn't bad - it's a private network line, not a public internet connection. Nobody else had access to that line, at least they weren't supposed to. Splicing a fiber line is a bit outside the scope of your random attacker. You can't blame Google for not anticipating a hostile break-in by the government. The discussion should absolutely, 100% be directed at the NSA here. To accept that a private network connection is open season for the government to tap is batshit insane.

> Moreover, it is not clear whether other governments or criminals also had access to users' data (e.g. in Google's data centers located outside the US). So far Google has not produced any public post-mortem, so we have no clue how bad the problem was.

How is Google supposed to tell you if they themselves didn't know?

Although from the leaks it sounds like everyone is fucked thanks to the GCHQ and the NSA getting friendly with each other.


Well, I feel that encrypting traffic inside the data center is not a bad idea (and we do it at WePay, where I serve as CSO). The reason is that you never know who is listening (big smile here). For example, I don't want our system administrators to have an easy way to look at the traffic: yes, it is still possible, but it is harder and requires some very unusual actions that will trigger alerts everywhere.

If indeed Google does not know, then it's just another sign of security failures at the company. Nobody is perfect, and security incidents do happen. Good security will have defense in depth and built-in monitoring/audit measures that would at the very least allow you to determine what happened post factum.


> Well, I feel that encrypting traffic inside the data center is not a bad idea (and we do it at WePay, where I serve as CSO).

Do you have your own data center buildings? And if you don't, how are you guarding against physical attacks? Because just saying "encryption" doesn't actually mean anything. Encryption isn't free, and at Google's scale that can add up. Useless encryption is just wasted power.

> For example, I don't want our system administrators to have an easy way to look at the traffic: yes, it is still possible, but it is harder and requires some very unusual actions that will trigger alerts everywhere.

That can be accomplished in many ways that don't involve encryption. And your servers are all capable of decrypting the data at some point, so you still have to trust your sys admins and/or have alternative systems in place as they still have access to the unencrypted data.

> Good security will have defense in depth and built-in monitoring/audit measures that would at the very least allow you to determine what happened post factum.

How, exactly, do you detect cable splicing? Much less audit said splicing? You seem to be asking for a hell of a lot more than "good security".


At WePay - no, we don't have our own data centers just yet. At a couple of large companies I worked for before - yes (and we did encrypt the traffic as much as possible).

Some types of encryption are pretty cheap, actually. I used special SSL cards in servers 10-15 years ago, but today my laptop would outperform those cards and wouldn't even get hot :) Plus, you need to remember that the relatively expensive public-key encryption needs to be done only for the key exchange. After that you run block or stream ciphers, and those algorithms tend to be really fast.
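To put rough numbers on it, here is a toy benchmark sketch (Python with the pyca/cryptography package; the exact figures will obviously vary by machine):

    import os
    import time

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The expensive public-key part: done once, to exchange a session key.
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    session_key = AESGCM.generate_key(bit_length=256)
    t0 = time.perf_counter()
    rsa_key.public_key().encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    print("RSA-4096 key wrap: %.2f ms" % ((time.perf_counter() - t0) * 1000))

    # The cheap symmetric part: done for every message afterwards.
    aead = AESGCM(session_key)
    payload = os.urandom(1024 * 1024)  # 1 MiB of "traffic"
    t0 = time.perf_counter()
    for _ in range(100):
        aead.encrypt(os.urandom(12), payload, None)
    print("AES-256-GCM bulk: %.0f MB/s" % (100 / (time.perf_counter() - t0)))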

So far I haven't seen any evidence that there was cable splicing. Thus, using Occam's razor, I would assume that the hack was much simpler than that. To detect the issue, I would start by reviewing the visitor log for the data center (assuming there is a visitor log).

I'll reiterate that security should be built on the defense-in-depth principle. Every single protection layer will fail, or someone will go around it. The assumption that a data center is "safe" is a bad assumption, period. You have to play the "what-if" game and think like the attacker.


> So far I haven't seen any evidence that there was cable splicing. Thus, using Occam's razor, I would assume that the hack was much simpler than that.

What? The evidence totally points to cable splicing. What hack involves getting all the inter-DC packets but nothing else? Obviously the machines weren't compromised, or they wouldn't have cared about reverse-engineering the wire protocol. So what are you proposing was hacked?

> I'll reiterate that security should be built on the defense-in-depth principle. Every single protection layer will fail, or someone will go around it. The assumption that a data center is "safe" is a bad assumption, period. You have to play the "what-if" game and think like the attacker.

And I'll reiterate that you're asking for a goddamn magical pony.

Side note: if your data center isn't safe, go get a new one. Seriously. Most DCs have tons of security to make them safe. That's not an assumption.


> The evidence totally points to cable splicing.

I don't think there is any evidence at all. As far as I know, the only known fact is that the NSA was able to obtain unencrypted Google traffic. For example, it could have been a backdoor in a router, one extra cable in a switch, or a few other similar low-tech options.

> Most DCs have tons of security to make them safe.

I don't disagree. But this doesn't make them invincible against other attack vectors (e.g. rogue employees). I have actually heard the same argument from quite a few people during interviews, and I usually don't hire them, because you have to be paranoid to get security right :)


Seriously? If the NSA wanted to own WePay, they would have, even with your "security best practices". Sorry, bud.


Sure. A court order would do it no problem.


Also: Having someone in your outsourced datacenter splice cables, SPAN ports, install trojaned hardware, etc etc etc...


BTW, "at least they weren't supposed to" is not a good enough argument in security :) You have to think about people who are not following the rules or your security is only protecting from a well-behaving 1st grade student.


There is no such thing as perfect security, only good enough security. At some point you have to accept risks, and the risk of physical network attacks is incredibly small compared to all the other attack vectors. Nobody was well prepared for the NSA's physical network attacks.


Everybody who cared knew that the world's governments tap every fiber they can lay their hands on. It had been discussed on HN with great regularity for years before these NSA non-revelations. Physical attacks were and are a certainty. Anybody who ignores this fact has only themselves to blame. A good argument can even be made that they deserved to be pwned as punishment for their utter fecklessness.


No, what was discussed was the NSA tapping in at ISP points, not digging up cables to splice them.

And "discussed" is not accurate. It was proposed by a few but rejected by most as paranoid.


Wait a minute... "It was proposed by a few but rejected by most as paranoid," and yet it is most likely what happened? So the "most" in "rejected by most" were wrong. Wrong!

So when someone says: "You're just being paranoid", my reply will be: "Better paranoid than wrong."


Even a broken clock is right twice a day.


A broken clock is always unreliable.


"The assumption isn't bad - it's a private network line, not a public internet connection. Nobody else had access to that line, at least they weren't supposed to."

You don't use telnet when you access your home server(s) from your laptop ... that's basically what they were doing.

They skipped over a zero-cost, obvious best practice, and I think we should be suspicious. Either they ran that part of their network in a stunningly negligent fashion ... or this was the ingress they gave to the NSA, one that could be plausibly denied later.


No, it's not. Telnet sends packets in the clear over many networks controlled directly by third parties and accessible to them as an inherent part of operating their business.

This was a point-to-point cable. The only access possible was physical, by digging it up and splicing it.

Obviously that attack was possible, but arguing that this is somehow "the same kind of attack" as running tcpdump on a router to sniff packets is just insane, sorry.


Other attack vectors: a rogue Google employee, break-ins at the data center, ...


> You can't blame Google for not anticipating a hostile break-in by the government.

"The" government? The lines were also being tapped by organized crime, China, France, etc. Google severely failed at data protection.


Would you still feel Google had screwed up if the way the US government got the data was to burglarize one of their datacenters and tap directly into the machines' CPUs and memory buses?


Yes (search for SSAE 16 or SAS 70). However, I would not feel the same way if the US government had used Area 51 technology to break 4096-bit public-key encryption. The difference from my perspective is that in the "burglary" scenario (and in the unencrypted-traffic scenario as well) Google failed to protect against well-known threats. In the "alien technology" case, Google did everything possible at the known security/technology level, and the failure came from the aliens (aka "unexpected technology advances").

Basically, I think Google's decision not to encrypt the traffic was gross negligence, and I would love to see someone sue Google for it.


> Basically, I think Google's decision not to encrypt the traffic was gross negligence, and I would love to see someone sue Google for it.

Wow. Just wow.


I know that people don't like lawsuits. But for security to work, there have to be consequences for not doing security right. To give you an example: if company X faces no risks or damages from a lack of security, then the rational move for company X is to skip security and save the money. However, if there is monetary (or other) liability from a security breach, then company X will have to make a choice, and hopefully they will invest in security.


That would make sense if you had a signed contract with Google saying that they encrypt their internal traffic. I don't have one, and I doubt anyone else does either.

If the courts get involved, they should go 100% after the perpetrator of the hack, not the victim.


Side note: Sec. 702 of the FAA gives the NSA complete immunity from all federal, state, and local laws, criminal prosecutions, and civil lawsuits when doing this kind of fiber tapping. "Notwithstanding" is an extremely powerful word. A wildcard. It trumps all other laws.

(And this is why some of us were concerned about CISPA, which uses the identical language. Note CISPA's proponents have quietly faded into the woodwork post-Snowden revelations.)


What if the government kidnapped a Google engineer (or several) and hit them with a wrench until they retrieved the data? That's a known, low-tech threat too.


Absolutely. That's why you have to have logs and regular audits to make sure that employees are not doing things they are not supposed to do. BTW, one should consider not only kidnapping but also a simple "rogue" employee. For example, in Snowden's case, the NSA itself put too much trust in its system administrators and did not perform the audits that should have detected the downloads of the secure files.
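The audit doesn't even have to be sophisticated to catch that kind of thing. A toy sketch of the idea (the log fields, baselines, and threshold are all made up for illustration):

    from collections import defaultdict

    def flag_anomalies(events, baseline, factor=10):
        """events: iterable of (user, bytes_downloaded) from the access log;
        baseline: user -> typical bytes/day. Returns users worth a human look."""
        totals = defaultdict(int)
        for user, nbytes in events:
            totals[user] += nbytes
        # Users with no baseline get flagged too: a brand-new account pulling
        # lots of data is exactly what you want a human to review.
        return [u for u, total in totals.items()
                if total > factor * baseline.get(u, 0)]

    # Example: an account suddenly pulling ~500x its normal daily volume.
    events = [("alice", 2_000_000), ("bob", 1_000_000_000)]
    baseline = {"alice": 1_500_000, "bob": 2_000_000}
    print(flag_anomalies(events, baseline))  # -> ['bob']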


In my scenario, the log maintainers and the auditors were among the people being hit with wrenches.


I can't agree with that. This was on Google's own fiber connections between their own data centers, right? And no other company with multiple data centers encrypts all traffic between them, right? (Maybe you'll find a small counterexample, but no big one.) So I don't think this is "security 101".


I work for a company bigger than Google, and we encrypt everything in flight between datacenters. It is security 101.


Does your company have dedicated, unshared fibre between those datacenters?


Consider the recent password leak at Adobe: they stored the passwords in a dedicated, unshared datacenter. Did that make it a good security decision to encrypt the passwords instead of hashing them, just because nobody should have been able to access the encrypted passwords? I really don't think so.
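For contrast, here is what hashing looks like (a minimal sketch using Python's stdlib scrypt; the cost parameters are illustrative). The key property: there is no decryption key anywhere that turns the stored values back into passwords, whereas with encryption, whoever holds (or steals) the key gets every password back.

    import hashlib
    import hmac
    import os

    SCRYPT = dict(n=2**14, r=8, p=1, maxmem=64 * 2**20)  # illustrative cost params

    def hash_password(password: bytes):
        # One-way: you store (salt, digest). Even a full database dump plus
        # every key in the building won't give back the password itself.
        salt = os.urandom(16)
        return salt, hashlib.scrypt(password, salt=salt, **SCRYPT)

    def verify_password(password: bytes, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password, salt=salt, **SCRYPT)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password(b"hunter2")
    assert verify_password(b"hunter2", salt, digest)
    assert not verify_password(b"wrong", salt, digest)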


There are problems with your analogy.

1. Data at rest (Adobe) vs. data in flight (Google).

2. A software hack vs. a hardware hack.

The Adobe data was sitting on a server in a datacenter; it was accessible from the internet on some level. The Google data was taken, apparently, from a dedicated, Google-owned, unshared link (quite likely via a fibre-optic tap).

The methodologies, skill levels, and hardware required for penetrating the above two types of setup are wildly different.

I blame Adobe for getting a server hacked; it happens a lot, and they ignored a large body of knowledge built up over the years. I do not blame Google for getting their inter-datacentre links physically compromised by a security agency of the US government.

Nor do I blame them for (incorrectly, as it turns out) deeming that an unlikely scenario and therefore giving it low priority.

I would blame them for not doing anything about it now that they know it is happening, but that does not seem to be the case.

(I fully expect companies to encrypt data between datacentres if they are not on dedicated unshared links)


I hear what you are saying, but I think there are similarities. In both cases there was an assumption that "X is safe", and then the thinking stopped. I've heard different versions of how the data was taken from Google's link, and some of the ideas were pretty low-tech. Data links have been compromised in the past not only by the NSA (search for "Operation Ivy Bells" if you haven't heard that story before) but also by criminals and even competitors.


Yes, we do. We still use encryption.


> And no other company with multiple data centers encrypts all traffic between them, right?

Indeed they do! From personal experience, Cisco was hawking its TrustSec inter-DC encryption solution five or six years ago, even over dark fibre.


If you believe the threat is a government agency splicing private, unshared fiber to capture your traffic between data centers, why in the world would you trust equipment from Cisco (who lists "Government" as one of the industries they sell to) to protect you from that?


Well I certainly understand your point, but the question was 'are big companies encrypting their inter-DC traffic' and the answer is 'yes', even if it's backdoored without their knowledge.


Google's inter-DC links are way too big for any appliance-type thing to encrypt. Like most things at Google, the scale of their network is incomprehensible to most people.


> Google's inter-DC links are way too big for any appliance-type thing to encrypt

Frankly, no.

There are numerous network devices that can handle AES-256 on 10 Gbps links, as a matter of routine, whilst doing 'mundane' switching for the day job.

If you have the money, there is dedicated hardware that can handle the same at 100 Gbps. IP Cores is one company, from memory, that produces the circuitry for it. They can throw compression in there as well if you like.

Encrypting data links isn't magic. Google just didn't do it.


Those were very amusing tiny numbers you wrote in your post.


100 Gbps is nothing for a company at the scale of Google. They are probably closer to 10-100 Tbps on their backbones.

You don't need appliances here, as they can't handle the load; build the encryption into your application.


In addition to the numbers already posted, recall that Google decided to add encryption to those data links in September.


This is not true. Quoting from http://www.washingtonpost.com/business/technology/google-enc...

"Google’s encryption initiative, initially approved last year, was accelerated in June as the tech giant struggled to guard its reputation as a reliable steward of user information amid controversy about the NSA’s PRISM program,"


Well, I do :) Moreover, I encrypt all the traffic even inside the same data center.


Could you share which technology you are using to encrypt all the traffic?


* Mid-tier servers: standard HTTPS with nginx

* Database: SSL connections for MySQL

* Memcached, Gearmand, and other tools that don't have built-in SSL support: simple home-grown message-level encryption (AES-256); a rough sketch of the idea is below.

And of course, there are VPN tunnels between data centers in addition to the above.
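The message-level encryption is roughly this shape (a simplified sketch, not our production code; key management and the actual cache client are left out):

    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class EncryptedCache:
        """Wrap any get/set cache client so only ciphertext hits the wire."""

        def __init__(self, client, key: bytes):
            self._client = client     # anything with get/set, e.g. a memcached client
            self._aead = AESGCM(key)  # 32-byte key -> AES-256-GCM

        def set(self, cache_key: str, value: bytes) -> None:
            nonce = os.urandom(12)
            # Using the cache key as associated data binds the ciphertext to
            # its slot, so entries can't be silently swapped around.
            blob = nonce + self._aead.encrypt(nonce, value, cache_key.encode())
            self._client.set(cache_key, blob)

        def get(self, cache_key: str) -> bytes:
            blob = self._client.get(cache_key)
            nonce, ciphertext = blob[:12], blob[12:]
            return self._aead.decrypt(nonce, ciphertext, cache_key.encode())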


Thanks.

> And of course, there are VPN tunnels between data centers in addition to the above.

Could you please be more specific about the VPN solution you are using? How do you manage the shared keys? How do you make sure system administrators can't easily read the traffic?


We use Cisco appliances for VPN (a few different models), and indeed there is a shared key that we have to input manually. However, after the key is entered (and the configs saved), in order to decrypt the traffic one would need to print the Cisco configs, which is a very unusual operation that would be logged; then alerts will fire, audits will catch it, etc.


>> in order to decrypt the traffic one would need to print the Cisco configs

Which could be done legally via 'Cisco Service Independent Intercept (SII)' built into IOS to comply with CALEA (Communications Assistance for Law Enforcement Act). And not so legally via user-escalation exploits within the same service.

Anyway, props for making the effort. I too am interested in your key exchange methods.


Just out of interest, how do you transfer the key between datacentres for setup? Does the same person travel between them? PGP-encrypted email? Or over the phone?

The phone is, I think, an obvious (and now clearly wrong) choice, though maybe it was always suspect if you are concerned about dark fibre. The endpoint security of the device generating and transmitting the key is now also a risk. How far up the chain do you worry?

An air-gapped device to generate the key and a single person travelling between datacentres seems like the secure (although costly) solution. Obviously, if TSA/customs remove the device from that person for inspection or connect it to anything, it needs to be thrown away (or moved to insecure duties) and the setup process restarted.


I think public-key encryption with a long enough key is a pretty safe bet these days. Of course, the NSA might have non-public discoveries in math/crypto that make public-key encryption obsolete. Or they might have a device from Area 51 that breaks any encryption. However, I haven't seen any evidence of this yet.
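FWIW, the standard way to avoid couriering a shared secret at all is a Diffie-Hellman exchange: only public keys cross the wire, and both sides derive the same session key locally. A minimal sketch (Python with pyca/cryptography; authenticating the public keys, e.g. by reading fingerprints back over the phone, is left out):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    dc1_private = X25519PrivateKey.generate()  # never leaves datacenter 1
    dc2_private = X25519PrivateKey.generate()  # never leaves datacenter 2

    # Only the public halves cross the untrusted link between the sites.
    dc1_shared = dc1_private.exchange(dc2_private.public_key())
    dc2_shared = dc2_private.exchange(dc1_private.public_key())
    assert dc1_shared == dc2_shared

    # Stretch the raw DH output into a proper session key for the tunnel.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"inter-dc-tunnel").derive(dc1_shared)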


Amazon does that.


I wrote my thoughts on the subject in a blog post as well:

http://www.aleksey.com/2013/11/06/why-google-engineers-got-i...


The moral of the story is that there is always a bigger fish.



