China Telecom's Internet Traffic Misdirection (oracle.com)
410 points by dbelson on Nov 5, 2018 | 112 comments

Combine this with exploits into one or more broadly trusted certificate authorities (which surely exist) and it's pretty amazing how much data China would have been able to obtain.

Every time I bring up the following point, someone chimes in that it's a bad idea, but I still fail to understand why it wouldn't be easy to pick which CAs I trust: choose a list of entities/people I trust, and adopt their recommendations for which CAs to trust.

This would be a few clicks of UI to let me be intelligently paranoid while maintaining only a layperson's understanding of why (say) Bruce Schneier decides to trust some and not others.

This should absolutely be exposed in browser UIs, especially Firefox, which uses its own store. Why can't I easily select/deselect all, sort by country of origin or issuer, apply plain-text search filters, and so on? The ability to click through, or even to simply display the "insecure" badge, would still be there.

Or, as you said, being able to subscribe to other recommendations would be cool.

Because it is never the "China Government" CA that is the problem; it is a medium-to-large Western-based CA getting hacked.

China, Iran, etc. don't want direct attribution.

And by "Western", keep in mind Italy and many other "2nd world" countries (to use some dated terminology).

2nd World? Doesn't that usually refer to former members of the old Eastern Bloc?[1]

To my knowledge, Italy has never been part of the Eastern Bloc.


Huh, I never thought about Switzerland being a third world country.

Or indeed Somalia being a 2nd world country.

Could this be done as a browser extension? I'm not too familiar with Mozilla's browser extension API, but can you check the certificates of sites?

EDIT: Looks like you can - https://stackoverflow.com/questions/2402121/within-a-web-bro...

You could create an extension which checks the certificates of visited sites - which include information like country of origin, CA, etc. - and then provides an interface for configuring what to trust, warning about anything that doesn't meet your criteria.

The other part would be creating a way/file-format for experts to provide information about which CAs they trust and which are suspicious, then let the extension consume those.

It's not as good as built-in browser support, but it's a heck of a lot faster and more do-able.

Meanwhile, Google's Chrome is going in the opposite direction, trying to show the user less and less such info to "improve UX".

The last thing they'd want is for their captive users to get smart and disable tracking features that are essential to their business, such as the globally enabled referrer header, etc.

Just like Microsoft didn't want Windows users to be able to set a different default browser...

Exactly. And if you subscribe to the recommendations of more than one expert, a useful option would be to decline to trust any CAs that are mistrusted by any of the chosen experts.

I personally think Certificate Transparency and Expect-CT headers will do far more to detect China-in-the-middle.

Nothing is more embarrassing than getting caught with your fingers in the cookie jar.

Besides, I trust that CAs will be de-listed if proven compromised.

China would only be annoyed if you made it out to be a bad thing. Everybody knows that, like the US, they scoop up any information they can get their hands on. Which makes it even more likely they didn't mount an active attack, since active interception carries a much higher chance of being caught.

> Combine this with exploits into one or more broadly trusted certificate authorities (which surely exist) and it's pretty amazing how much data China would have been able to obtain.

The attacker doesn't even need to compromise a CA.

If someone hijacks the IP address of example.com, they could easily get a valid Let's Encrypt certificate for that domain.

If there's a CAA record for example.com that doesn't allow Let's Encrypt (an "issue" property with the value "letsencrypt.org"), then this won't work. Let's Encrypt will check CAA (using DNSSEC if your domain has it) and verify that its identifier is present before issuing, whenever a CAA record exists.
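The core of that CAA check can be sketched as a small policy function (a simplified illustration of RFC 8659 semantics; real CAs also climb the DNS tree to find the relevant record set and handle "issuewild" separately):

```python
# Simplified sketch of CAA-based issuance policy (RFC 8659 semantics).
# Real CAs also climb the DNS tree to the closest ancestor that has CAA
# records, and handle "issuewild" properties separately.

def may_issue(caa_issue_values, ca_identifier):
    """Return True if a CA may issue, given the domain's CAA 'issue' values."""
    if not caa_issue_values:
        # No CAA records at all: any CA may issue.
        return True
    allowed = {v.strip() for v in caa_issue_values}
    if allowed == {";"}:
        # An 'issue ";"' record forbids issuance by anyone.
        return False
    return ca_identifier in allowed

# example.com permits only Let's Encrypt:
print(may_issue(["letsencrypt.org"], "letsencrypt.org"))  # True
print(may_issue(["letsencrypt.org"], "evil-ca.example"))  # False
print(may_issue([], "any-ca.example"))                    # True: no CAA record
```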

If you're a larger outfit you should pick a trustworthy CA vendor (or two independent vendors, according to your risk management profile) and lock CAA to those trusted vendors. You can then agree whatever terms suit your business on top of the Ten Blessed Methods, so as to avoid bad guys stealing your names. For example, Facebook has an agreement with their chosen CA that all facebook.com and fb.com issuances get signed off by Facebook's network security people. No "But I'm the head of Asian marketing! I need this immediately" bullshit; it gets a sign-off from netsec or it doesn't get issued.

This can easily be fixed by having some sort of mandatory waiting period (e.g. 7 days) after issuance before a certificate can be considered valid. CT can be used to ensure no backdating occurs.
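The proposed hold could look roughly like this (a hypothetical policy sketch, not how any browser behaves today): a relying party treats a certificate as usable only once its CT log entry is at least seven days old, keying off the log timestamp rather than the certificate's own notBefore date.

```python
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(days=7)  # hypothetical mandatory waiting period

def usable_at(ct_log_timestamp, now):
    """Treat a cert as usable only HOLD_PERIOD after its CT log entry.

    Keyed off the log's timestamp rather than the certificate's own
    notBefore, so a compromised CA cannot backdate its way around the hold.
    """
    return now >= ct_log_timestamp + HOLD_PERIOD

logged = datetime(2018, 11, 1)
print(usable_at(logged, datetime(2018, 11, 5)))  # False: only 4 days in the log
print(usable_at(logged, datetime(2018, 11, 9)))  # True: 8 days in the log
```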

Or DNS records validating the CA?

Great point. Ideally it would also be possible to track which certificate was used last time and if the certificate has changed, verify somehow whether the change was intentional.

This did exist in the form of HTTP Public Key Pinning (now deprecated).

Unfortunately, the mechanism which replaced it (Expect-CT) can only help after the fact - someone would have to notice the extra issuance, and you would have no idea whether your traffic was affected.

The party you want to connect to chooses the CA, not you. Are you really not going to use YouTube because you don’t trust the Google CA?

Anyway, redirecting and sniffing traffic is one thing, intercepting and changing encrypted traffic while being undetected is another. It’s quite a stretch really.

> Are you really not going to use YouTube because you don’t trust the Google CA

Not because I don't trust it, but I might because Bruce doesn't. At least, if Bruce stops trusting it I'd like to know why before I decide to trust it again.

I think if you imagine the way that compromised CAs would be used in actual attacks it is clear why we can't expect compromised CAs to be widely observed (they would be used only for targeted traffic), and why we should not trust all CAs equally.

The idea being: stronger market forces leading to more competition among CAs to end up on more people’s trusted list. Combined with the option to serve multiple signatures for the same cert, this might actually just work :)

How does a user distinguish?

For instance, say you want to go to example.com, and it says "you need to trust X CA". Ignoring that a user doesn't know what that even means, all of history demonstrates that a user's goal when encountering a barrier in the way of something they want is to get rid of the barrier. Arguing for user education isn't the right response, because technology needs to be designed to work with humans, not the other way round.

Instead we defer to groups like the CA/Browser Forum to ensure that the trust stores contain only robust CAs. Historically there have been few teeth, as evicting CAs from the trust stores is hard - look at how long the Symantec distrust is taking, and look at how many people (incl. HN commenters) have argued against it.

That said, the addition of things like CT logs has finally given trust stores a view into what is being issued, so they can finally detect poorly run CAs at the time they screw up, rather than maybe catching them months later, if at all. That then provides the evidence needed to justify distrusting a CA or auditor.

That is all a large amount of exceedingly technical work, that no regular user could hope to grasp and make a reasonable choice about.

Instead the market pressure is as it should be for CAs now: ensure you are following the rules, or risk distrust. That is pretty much the most effective market pressure you can have in the CA market.

Remember that the resources, scripts and images can be loaded from hosts using different CAs. A page doesn’t even have one CA. The only thing that is even remotely feasible is blocking one or more CAs and that causes so much disruption normal people are never going to do that.

Exactly - that’s why Symantec was able to get away with it for so long.

Because tomorrow your bank might start serving only a cert from some CA you didn't trust before.

Having to manually select/add CAs is akin to not having SSL and using PGP everywhere. And you know how well that works out, even for technical folks.

If my bank appears to switch to a CA with a record of issuing fraudulent certificates or enabling same (cf. Comodo, Symantec), that's exactly what I want to happen!

I realize the desire to control my own security is thought by browser-makers to be the death of ecommerce.

I still want it.

Ok, what happens if it switches to a CA you haven’t heard of?

What happens when some 20 year old gets their first computer, it doesn’t have any CAs in the trust store so how do they find the good ones? How do they know what the good ones are?

You think you want it, but you don't necessarily know which CAs are good - have you read all the audits (which you can only trust if you got them over TLS, which requires you to trust it)?

What happens when you get a site saying “you need to trust ‘Google CA 1’/‘LetsEncrypt’” (or whatever their public CA is). I’m joking of course - the name is meaningless - there’s no root of trust to verify that it’s not someone making a certificate that just claims to be that CA. So you actually need to know the public key for each CA that you think is trustworthy.

If it was all CAs by default, but then allowed you to remove the ones you didn't trust, then that would help a lot. Then the vast majority of users will be going with the browser recommended CAs, but advanced users can customize. If there was a distribution format for experts to "vote/warn" on which CAs are trusted, then you could subscribe to their lists with a common configuration of (Trusted by All, Trusted by At Least One, Not Warned By Any, Not Warned by All). That way you could get updates automatically from your favorite experts. We often outsource things like reviewing all the advisories in this way. Now we just need someone to develop this in. Could it be done as an extension?
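A sketch of that subscription logic, with hypothetical expert-list data (the expert names and CA names are made up for illustration):

```python
def aggregate(browser_default, expert_lists, policy="trusted_by_all"):
    """Combine per-expert CA verdicts into one trust set.

    Each expert list maps CA name -> "trusted" | "warned"; CAs an expert
    hasn't reviewed are simply absent. Policies mirror the options above.
    """
    verdicts_for = {ca: [e.get(ca) for e in expert_lists] for ca in browser_default}

    def keep(ca):
        verdicts = verdicts_for[ca]
        if policy == "trusted_by_all":
            return all(v == "trusted" for v in verdicts)
        if policy == "trusted_by_one":
            return any(v == "trusted" for v in verdicts)
        if policy == "not_warned_by_any":
            return all(v != "warned" for v in verdicts)
        raise ValueError(policy)

    return {ca for ca in browser_default if keep(ca)}

# Hypothetical data: two expert lists over the browser's default CA set.
schneier = {"GoodCA": "trusted", "ShadyCA": "warned"}
other    = {"GoodCA": "trusted", "ShadyCA": "trusted", "MehCA": "trusted"}
cas = ["GoodCA", "ShadyCA", "MehCA"]

print(aggregate(cas, [schneier, other], "trusted_by_all"))     # {'GoodCA'}
print(aggregate(cas, [schneier, other], "not_warned_by_any"))  # GoodCA and MehCA
```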

I don't know about Windows or Linux, but you can already do this on Mac. Keychain Access lets you change the trust settings of each root, including individual "purpose" granularity (e.g. restrict to just mail servers, etc.).

You know deep down that's not the right solution. If you have to accept a CA for each site, then the next day you'll be complaining about why you have to click "yes" on that damn dialog for every new site - and then ask browsers to just accept it and not bother you :)

I understand the sentiment, but somehow I feel that more people will trust Alex Jones on this than Bruce Schneier.

My guess is that actual experts aren't respected enough by the general population for this to be a net positive change.

That’s basically PGP.

Distributed trust and vouching systems are going to be the next big thing.

Sounds a bit like how ad-block plus handles the blocking based on lists to which you subscribe, just with certificate white/blacklists?

I'm continually amazed at how insecure almost every aspect of internet routing is - it mostly boils down to a sort of "gentlemen's agreement" that everybody will follow the rules.

Such is the nature of BGP. Something like SPF (e.g. an authorized AS list for an IP block) and DMARC (reporting on who tried to announce which IP block and was rejected) would be great; perhaps the latter component could even convey attack info so ISPs could deal with infected clients automatically.
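The "SPF for IP blocks" part of this exists today as RPKI route origin validation. A simplified sketch of the core check (real validation uses cryptographically signed ROAs; the table below is illustrative, not real data):

```python
import ipaddress

# Hypothetical ROA table: prefix -> (authorized origin AS, max prefix length).
# Illustrative entries only, standing in for the signed RPKI database.
ROAS = {
    "205.251.192.0/21": (16509, 24),
}

def validate_origin(announced_prefix, origin_as):
    """Return 'valid', 'invalid', or 'unknown' for a BGP announcement."""
    net = ipaddress.ip_network(announced_prefix)
    for roa_prefix, (asn, max_len) in ROAS.items():
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            if origin_as == asn and net.prefixlen <= max_len:
                return "valid"
            # Covered by a ROA but wrong origin AS, or too-specific prefix.
            return "invalid"
    return "unknown"  # no covering ROA: validation can't say anything

print(validate_origin("205.251.193.0/24", 16509))  # valid
print(validate_origin("205.251.193.0/24", 4134))   # invalid: hijacked origin
print(validate_origin("8.8.8.0/24", 15169))        # unknown
```

Note that this checks only the origin AS, which is exactly the limitation raised further down the thread: a hijacker who prepends the legitimate origin to a forged path still passes.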

Basic security mechanisms when it comes to large ISP networks are a pipe dream, though; instead we get vendors pushing extremely vulnerable Juniper gear because it's reasonably priced, while these boxes have new root exploits found multiple times a year. None of the vendors give a crap about security; Cisco pays it some lip service (to win gov't contracts) but charges a premium for basic features.

I'm amazed telecoms let this happen. Routing massive amounts of traffic the wrong way must cause a lot of latency, right?

Sure, but how many users care about 20ms to their local data farm versus 70 cross country and maybe 200 to China?

Right, they generally say "my phone is getting slow, time to upgrade"

Internet routing (BGP), SMTP, and DNS (not inclusive, just off the top of my head) were developed during the very beginnings of the internet, without much thought into today's use and scale.

Today you'd do better, with hindsight being 20/20.

That's certainly true. But now that we have the benefit of hindsight, isn't the only reasonable option to start to take the steps to correct the obvious problems?

One of the best steps is modern protocols. China - or whomever - can collect all the QUIC packets they want and it won't tell them much. The incentive for these games goes way down when all you get is some connection metadata and cryptographic line noise.

Not if you control CAs. Cert pinning only works in a limited number of cases, and Certificate Transparency only works with CAs who have agreed to implement it (which is not the vast majority).

Um, you're aware that Chrome requires SCTs (the proof that a certificate has been logged) when connecting to a site, right? Do you think "the vast majority" of CAs deliver a product that doesn't work with Chrome?

CAs aren't mandated to log certificates for you (and indeed some offer the possibility to deliberately not do so for reasons I'll get to in a minute) but if you run a mass-market CA logging certs by default is the only possible way to remain in business since otherwise your entire customer service budget will be spent explaining to customers how to log the certificates and make them work with Chrome.

Firefox and Safari have announced plans to require SCTs, but without a specific version or deadline. Apple's language says "calendar year 2018", but that's probably ambitious. It scarcely matters; Chrome already has too many users for a commercial CA to ignore.

So, why aren't all CAs logging every certificate and baking the SCTs into the final certificate? Well, when a certificate is logged that makes it public, but power users may want the ability to sidestep that. For private systems they may just have decided to never run Chrome (and good luck to them in the future when IE6 on Windows XP is the only option left that doesn't check CT). But for public systems if you're technically capable you can get yourself unlogged certificates, then at launch time log them, collect the SCTs and deliver those to the TLS client rather than baking them into the certificate. Google does this, a few branding practitioners do it. It's very important to get it exactly right because if you screw up your certificates are worthless until you fix it. But if protecting naming is important to your business it's an option.

SCT does nothing by itself except some additional fields in a cert. By itself, it is of absolutely no protection whatsoever.

SCTs are signatures from log servers. So the presence of the SCT means now not only the CA vouches for this certificate, but also the signing logs vouch for having seen this certificate. Chrome has a policy baked into it about which logs it will trust.

Under current policy this "nothing" means Google plus at least one independent log operator claim to have seen it and logged it. This eliminates the scenario in which a powerful adversary obtains certificates but only shows them to a single victim or small victim group. Whatever they did, everybody will see it.
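That log-diversity rule can be sketched as a tiny policy check (a simplification of Chrome's actual CT policy, which also scales the required SCT count with certificate lifetime; the log names below are hypothetical):

```python
def satisfies_ct_policy(scts):
    """scts: list of (log_name, operator) pairs presented with the cert.

    Simplified version of Chrome's log-diversity requirement: at least one
    Google-operated log AND at least one non-Google log must have vouched
    for the certificate, so no single operator can hide an issuance.
    """
    operators = {op for _, op in scts}
    return "Google" in operators and any(op != "Google" for op in operators)

print(satisfies_ct_policy([("Pilot", "Google"), ("Yeti", "DigiCert")]))   # True
print(satisfies_ct_policy([("Pilot", "Google"), ("Icarus", "Google")]))   # False: no independent log
print(satisfies_ct_policy([("Yeti", "DigiCert")]))                        # False: no Google log
```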

CT will take time; I wouldn't be surprised if it catches on and becomes a requirement further down the line.

But sure, it won't happen overnight. Just saying the gaps are closing :)

Finishing the entire Certificate Transparency system will take time, but the elements that exist today already work fine. Install Google's Chrome browser. The browser checks for SCTs (the proof that the certificate was logged) and will reject new certificates that don't include such proof. It has been doing this since April.

Try this URL: https://invalid-expected-sct.badssl.com/

If you visit that in Chrome it gives you a full-page interstitial warning that it's bogus, and if you click past, the page is labelled "Not Secure".

In other popular browsers it works fine, because it has a perfectly nice certificate but the Bad SSL site is deliberately not presenting the SCTs for it. [[ It's hard to do this by accident, most places that give lay folk a certificate will assume your goal is to have your certificate accepted, so they will log a "pre-certificate" for you and bake the SCTs inside the certificate they give you and you can't remove those ]]

But yes, fully completing Certificate Transparency will be more work, we need a Gossip system so that monitors can consult each other to detect a split horizon, and mechanisms for clients to show summaries of what they know to determine if there are conflicts.

What we have now is like if you have a house you've half-built, there is no roof over two rooms, and no electricity, and the floor is bare dirt. But, it's still a house, and in a rain storm it's better to be inside that unfinished house than out in the cold and wet. The people outside in the rain don't think "That guy's house doesn't have triple-glazed windows" they think "Lucky bastard isn't out in the rain like me".

> Not if you control CAs

And so begins the ever greater attacker regression.

Yes, with a "but" the size of celestial bodies: it's a herculean effort. Witness how long IPv6 has taken to obtain traction (and the lack of any traction on DNSSEC, and the resulting DNS over HTTP shims). These are improvements that occur over years, if not decades and require substantial human and financial resources to deliver on.

The "DNS over HTTP shims" are not the result of DNSSEC taking too long to be adopted, but rather the fact that DNSSEC doesn't provide the protection that DoH does. People have a lot of weird ideas about what DNSSEC does; in particular: it doesn't encrypt queries.

Why do you think IPv6 never took off? Do you think the format of addresses was less human readable, and therefore that’s what led to its slow adoption? What if the address was instead displayed as a mapping using a data format like JSON?

Networks found ways to reduce IPv4 usage, or support dual stack early on when necessary. Turns out every internet endpoint doesn't need to be directly addressable, and most Internet use cases are one to many (CDNs to eyeballs).



Because it takes effort and CAPEX to deploy it, and most ISPs are for-profit entities.

No doubt. I didn't mean to trivialize the effort required to address the existing issues with scaling and securing the global Internet.

At this point I'm inclined to think you'd be more likely to get bogged down for decades bikeshedding behind proposals in a standards consortium that has no actual power to enforce them, and the results would be a horrific mishmash with terminal second-system syndrome...

Do you know of any interesting alternative protocols proposed recently?

And there are attempts to enforce that agreement using PKI: https://blog.cloudflare.com/rpki/

RPKI doesn't stop BGP hijacks.

Sure, it's better than nothing but RPKI doesn't validate the entire path, only the origin.

If you want path validation, you need BGPSec. But BGPSec is basically un-deployable in real-world networks.

Why is BGPSec un-deployable?

And state actors are proving themselves to not be gentlemen at all.

A "gentlemen's agreement" that was designed to sustain a nuclear strike...

The two aren't at odds. Packet routing was designed to survive sudden and severe loss of network paths, but it still assumes that participants on the network are cooperative players.

In other words, they were confident there wouldn't be a Russian evil maid anywhere on a US-wide network.

You got me here. I have no clue if this scenario was considered, and how this affected the design process. It was the Cold War after all.

I didn't know that. Was this the motivation for the current design? Is there a source to back this up?

BGP was designed well after that era of ARPANET.

I'd like to point out that governments used to run on agreements like that, and look what has happened in that domain. I say this as a warning about what the internet could become.

Your analogy is confusing. I have no idea if you are talking about international, national, or local agreements. I also have no idea what your opinions are.

Your comment simultaneously contains almost no information and is super off topic.

China Telecom and other Chinese ISPs have been hijacking user traffic for decades, profiting off of it by selling traffic dumps to data-mining companies, inserting ads into webpages, and stealing social media tokens (for follower boosting and ad retweeting).

I've found China Unicom openly hawking their data mining products. https://imgur.com/a/uNxA50K

Have you got a translated version of that screenshot?

This is one of the reasons TLS/SSL and crypto are so amazingly important.

Go ahead, monkey around with BGP, since I have the public key of the recipient of my packets I can detect this and block any type of misdirection.

> Go ahead, monkey around with BGP, since I have the public key of the recipient of my packets I can detect this and block any type of misdirection.

And how did you get that public key?

An attacker could pretty easily obtain a valid Let's Encrypt certificate using a BGP hijack.

Also, the CA system is in bad shape - CAs have been hacked and certificates were leaked. Not to mention that some of the CAs your browser trusts are not entirely trustworthy or are located in untrustworthy countries. Oh, and from time to time there are attacks against TLS itself (e.g. https://drownattack.com/)

Because the public keys are baked into the OS trust store. For the exact reason of not being able to get the keys from the internet if you don’t already have a root of trust.

The other issues (trustworthiness of CAs in countries that have the ability to compel a CA to issue a fake cert - Australia, say) are intended to be mitigated by the CT logging that is now required by the major trust stores. Sure, your Aussie CA might issue a fake certificate, but in doing so they ensure they get globally distrusted...

In order for CT to really work, we will need a better way to handle actually distrusting CAs. I think that includes a way for a site to have multiple different certs at the same time, so their one CA isn't a single point of failure.

Without this, we will always be dragging our feet in dropping CA trust, because it will leave some perfectly valid sites shit out of luck.

The dream is definitely not trusting certs which haven't been written to a log. I think the path is actually in sight, too. The CA/B Forum seems relatively on board.

You can experience this dream today by simply installing Google's "Chrome" browser. If you prefer a different browser you probably don't have long to wait, Firefox and Safari have announced plans to check CT (Apple says in Calendar Year 2018 but I won't be astonished if that slips) and it's something Microsoft's browser team are contemplating - if you care about trust in the Web PKI you obviously shouldn't use Microsoft's products anyway, but if you do...

The CAs are the only ones opposed.

We should definitely talk more about those CAs, and we should totally have a way to ensure that only certain CAs are able to issue certs for a domain. Oh wait, it's called HPKP and it's being removed D:

HPKP was a bad standard - there’s no way it could be used safely at scale. There are just too many ways to accidentally screw up, and that’s before you start dealing with actual attackers.

CT allows you to detect misissuance - theoretically you could have a monitor service that watched all the logs for changes to your domains.

Longer term something (no opinion stated on exactly what) needs to be done to rectify the trust model for BGP and DNS

Expect-CT, anyone?

Then we can delist compromised CAs, yay :)

(Sure, it'll take time, but gaps seems to be closing on so many layers)

> An attacker could pretty easily obtain a valid Let's Encrypt certificate using a BGP hijack.

Whoah, I have never realized this.

Is there some way to include some key in the DNS entry or something to mitigate IP hijacking?

Does HSTS protect against this?

Let's Encrypt is already taking steps to mitigate this. BGP hijacking is a noisy event - it should be possible to see that routes have changed recently and deny issuance. They can also perform challenges from multiple geos / networks, so that if there's a disagreement among routes, the challenge fails.
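The multi-vantage-point idea can be sketched as a quorum check (the threshold and vantage-point names here are hypothetical; Let's Encrypt's actual parameters differ):

```python
def issuance_allowed(challenge_results, quorum=1.0):
    """challenge_results: dict of vantage point -> bool (challenge succeeded).

    Require agreement from a quorum of vantage points. A BGP hijack that is
    visible from only some networks causes disagreement and blocks issuance.
    """
    if not challenge_results:
        return False
    successes = sum(challenge_results.values())
    return successes / len(challenge_results) >= quorum

# All vantage points see the same answer -> issue:
print(issuance_allowed({"us-east": True, "eu-west": True, "ap-south": True}))   # True
# A hijack visible from only part of the internet -> disagreement -> deny:
print(issuance_allowed({"us-east": True, "eu-west": False, "ap-south": True}))  # False
```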

More info: https://secure-certificates.princeton.edu/

Somewhat offtopic but which tool shows you the AS number + info alongside the traceroute in the screenshot?

I would guess that the author copied the results into a table and prettified them and added in details like location.

At the top of the screenshot it says "traceroute from London to ..." - no traceroute program knows where it is in the world!

The same goes for the locations of each hop (NY > Chicago > Ashburn, etc.): no traceroute program knows where in the world those IPs are. I suspect the author guesstimated based on the reverse DNS records for the IPs and the latency.

Traceroute does have the ability to show you the ASNs in a path, but that is based on a WHOIS lookup of the IPs it discovers. So it could be wrong, since it assumes the IP address of each hop was announced by the ASN that owns it.

thousandeyes.com, a network intelligence platform, gives you all that information in one place.

OK, so I'm sitting here, posting to HN in Firefox. And if I like, I can open a terminal and run something like:

    traceroute news.ycombinator.com | grep -f chinese-ipv4 -f chinese-hosts
And indeed, there could be a Firefox extension that did that, right? So at least, users would know.

It's difficult for the "average user" (define that as you please) to know what a path should look like, though. Lots of ISPs have private peerings with other ISPs/content providers/carriers etc. which aren't publicly listed anywhere.

I'm not suggesting that the (say) Firefox extension would show the path. It would just show whether the path included devices in whatever country. In this case, China. Users wouldn't need to know details. There are many sources of geolocation data that the extension could draw upon.
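The extension's core check could be sketched like this, using a toy prefix-to-country table as a stand-in for a real geolocation database (all the data here is hypothetical/illustrative):

```python
def hop_country(ip, geo):
    """Longest-matching string-prefix lookup in a toy prefix->country table.
    A real tool would use a proper GeoIP database with CIDR lookups."""
    for prefix in sorted(geo, key=len, reverse=True):
        if ip.startswith(prefix):
            return geo[prefix]
    return "??"

def flagged_hops(path, geo, watchlist=frozenset({"CN"})):
    """Return the hops whose country is on the watchlist."""
    return [ip for ip in path if hop_country(ip, geo) in watchlist]

geo = {"23.": "US", "202.97.": "CN"}            # illustrative GeoIP stand-in
path = ["23.1.2.3", "202.97.50.1", "23.9.8.7"]  # hops from a hypothetical traceroute
print(flagged_hops(path, geo))                  # ['202.97.50.1']
```

The extension would surface only the verdict ("this path transits China"), not the raw path, which matches the point above that users don't need the details.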

Tangent, but are traceroutes spoofable (barring timing differences), or would they break too many other things to be practical? I'm wondering if anyone might do that to hide their tracks.

Yes. You can set your reverse DNS to whatever you want if you own the IP blocks.

See also: https://news.ycombinator.com/item?id=5192656

"Loading..." the page doesn't work without JavaScript enabled for no reason.

It's using AJAX to fetch the actual article. Seems a bit strange, since it's static.

Imagine that: Oracle doing something more complex than necessary. /s

If BGP4 were designed today, it would look very different.

How about just globally blocking AS4134 and AS9318?

You would be surprised how many companies are already doing so.

Looking at the traceroute, I wonder what makes the United States safe and China not. Neither is safe.

Hanlon's razor has been raised on NANOG.

It's so stupid that we keep doing business with the Communist Party of China.

I just don't understand why the telecom agreements are not reciprocal. If no foreign nation is allowed to put a POP in China, then why is China allowed to put POPs all around the world?

Or the government of Australia which has laws allowing similar...

It's not as though our domestic technology vendors care about security. JunOS is constantly having new vulnerabilities found, and Cisco ain't much better, but charges a premium price as they are viewed as the market leader and pay some lip service to security.

Lol, typical anachronistic Oracle. Their blog fails to render on 2 out of 3 browsers I tested. What is this, 1995?

Can I ask what browsers? If you've disabled Javascript, I'd argue that's the anachronism.

Firefox mobile with uBlock Origin.

Edit: Ha! Ironically, Oracle's site about China spying on you won't load the content unless you allow Google Analytics code to run. If the Google Analytics code fails, the rest of their code also fails.

I can read the article just fine in Firefox for Android with uBlock Origin. It also loads with no problems through my Pi-hole, which blocks Google Analytics.
