BCP 38 is nowhere near universal. Lots of networks, including some very problematic big ones (cough Hurricane Electric cough), do not implement it as a matter of course. The AWS Route 53 hijack last month, which resulted in downtime for a number of sites plus a six-figure cryptocurrency theft, could have been prevented by adequate filtering.
They may also not even have a duty of care in the first place, as to the truth of any metadata they're passing on. As a sibling comment pointed out, it's not as if there are laws for this.
On an air-gapped lab it is bad practice. Same (though DNS-related) with using *.local as a LAN TLD.
An edge router is not one of these cases though.
Some of them were made more lenient because just as Microsoft finally stopped encouraging this nonsense, the Kubernetes people picked it up from ancient Windows Server folklore and apparently decided to make it web-scale.
It's been a while, but I can't count the number of times that I've seen that.
What's enormous about an IPv4 /12? :)
When the German army requested an allocation of IPv6 address space, they were given a /28, but complained that 2^100 IPs is not enough for them and they actually need a /22.
Did they want each bullet to have a /64?
I get the impulse to say "you used it wrong, now it's broken", but we didn't get to a functioning worldwide internet with that attitude. We got here by observing what people were actually doing and coming to a consensus view on what to break and what to carefully tread around (you know, UX). This is an obvious example of the latter, and the fact that APNIC let CF use this space for a production platform in the name of breaking shit is frankly disqualifying (in terms of their overall trustworthiness as curators of essential Internet infrastructure).
However, the RFC1918 IP ranges have existed for a very, very long time, as have the standard documentation/example IP ranges which Cisco, Juniper and others have been using in their training and example publications since 1995 or so. People have had more than twenty years to number their internal networks into the ranges that, also by consensus, the global internet community has decided to make non-globally-routable (192.168, 10.x, 172.16, etc). RFC1918 was published 22 years ago so there is really no excuse.
If you are using 1/8 in the year 2018 for your own internal production traffic, you are wrong and should feel bad. IANA and APNIC (and APNIC's contracted partner, Cloudflare) should be able to begin using ranges as granular as an individual /24 within this /8 on the public Internet without worrying that people who have misconfigured their shit will have a broken experience. It will take time for people to move their erroneous configurations into normal RFC1918 IP space, but it will happen eventually. Or maybe not: if v6 adoption speeds up, this becomes irrelevant.
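For anyone auditing their own numbering, checking whether an internal range actually falls inside RFC 1918 space is trivial with Python's standard library (a minimal sketch):

```python
import ipaddress

# The three RFC 1918 private ranges referred to above.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)
```

Anything in 1/8 fails this check, which is exactly the point: it was never private space.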
It is/was not uncommon either. It was never a concern since it wasn't allocated. It being a concern is a very recent phenomenon.
Yeah, no. It sounds like you don't really know the history of that block. Or maybe you missed the part where I said it was used as a loopback address. Maybe both.
1.0.0.0/8 was unallocated and was also part of many people's bogon filters at their edges.
Thanks for posting this.
IP : 1.1.1.1
Time: Tue May 29 15:28:30 BST 2018
What happens when you go to: http://1.1.1.1/ ?
That should come up with a 'domain for sale' page, that's the same server.
$ httpstat http://1.1.1.1/
2018/05/29 10:54:04 unable to connect to host 1.1.1.1:80: dial tcp 1.1.1.1:80: connect: connection timed out
UK : 1.1.1.1 : Tue May 29 16:06:38 BST 2018
What was the problem?
Thank you for all your assistance in this - and also everybody else that helped to pinpoint the problem.
Resolves for me.
They used AS55994 in mainland China.
In fact, China's ISPs do filter via prefix, and they all enable uRPF.
In China, IDCs can't announce non-CNNIC addresses.
a) don't announce shit you don't own
b) know how to set up ACLs and prefix-list filters on your own egress IP space announcements which face towards your peers and IP transit upstreams.
Conversely, as a big ISP which has many small ASNs downstream of it, be responsible and set up filters on your own ingress which prevent your customers from announcing mistaken shit to you.
Using an example of a clueful and attentive major ISP: if you are a small-to-medium regional ISP and buy IP transit from NTT, one of the world's top-ten global commercial transit providers, they actually take the time to verify each and every prefix you announce to them, and will require an interaction with their NOC if you want to announce a new /22.
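The filtering described above boils down to a simple covering check: only accept an announcement if it is equal to, or a more-specific of, a prefix the customer is registered for. A rough sketch of that logic (the prefixes used here are illustrative documentation ranges):

```python
import ipaddress

def build_filter(registered_prefixes):
    """Parse the prefixes a downstream customer is registered to announce."""
    return [ipaddress.ip_network(p) for p in registered_prefixes]

def announcement_permitted(announced, allowed):
    """Accept an announcement only if it is equal to, or a subnet of,
    a registered prefix; drop everything else at ingress."""
    net = ipaddress.ip_network(announced)
    return any(net.subnet_of(ok) for ok in allowed)
```

Real routers express this as prefix-lists with length ranges, but the decision being made is the same.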
I wouldn't be surprised if this becomes a semi-regular occurrence.
Maybe a wider question: is there some way to prevent BGP hijacking?
As for prevention, the only thing that will work is proper use of IRR/route registries and RPKI validation of peer announcements. Which a great many ISPs do not currently do.
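At its core, RPKI origin validation checks each announcement against published ROAs, each of which binds a prefix and a maximum length to an origin ASN. A simplified sketch of the validation rules (the ROA data below is made up for illustration, not pulled from the real RPKI):

```python
import ipaddress
from typing import NamedTuple

class ROA(NamedTuple):
    prefix: ipaddress.IPv4Network   # covered prefix
    max_length: int                 # longest permitted announcement
    origin_asn: int                 # authorized origin AS

def origin_validate(announced, origin_asn, roas):
    """Return 'valid', 'invalid', or 'not-found' for an announcement,
    following simplified RPKI origin-validation rules."""
    net = ipaddress.ip_network(announced)
    covering = [r for r in roas if net.subnet_of(r.prefix)]
    if not covering:
        return "not-found"
    if any(r.origin_asn == origin_asn and net.prefixlen <= r.max_length
           for r in covering):
        return "valid"
    return "invalid"
```

Note the max-length check: even the right origin AS announcing a too-specific /25 is invalid, which is what blunts more-specific hijacks.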
The other method is more blunt, and can be more effective if the people with 'enable' on various ASNs' core and edge routers actually have a spine. ISPs which repeatedly announce space that they're not allocated (as per RIPE, ARIN, APNIC, AFRINIC records) should be depeered by their local peers, and their owners/operators publicly shamed. It's a reputation thing. As a neighbor of other, more clueful ISPs, it's basically the same thing as being a bad neighbor by leaving garbage all over your front lawn and causing a public nuisance with loud parties and trashy behavior.
I've been out of the space for almost as long, so I would love to be wrong, but I think it's fair to say that not much has improved on this front since then.
My ping to that address went terrible for a brief window today - https://i.imgur.com/KjCcBeT.png
Wonder if this was the cause.
*edit: I'm in Cape Town, and the ping looks like what was routing to a DC down the road decided to go to Europe instead.
If malicious, this could be someone trying to redirect 1.1.1.1 traffic elsewhere.
There have also been a lot of historical examples of BGP misconfiguration (BGP being the way big internet networks talk to each other and decide where to send packets), such as a Florida ISP accidentally claiming the best route to some major internet services and getting flooded with everyone's traffic until it died.
BGP is also really insecure: back in 2008, Pakistan effectively brought down YouTube through BGP, which doesn't require any authentication to claim you have the best route to X.
BGP4, which is one of the fundamental building blocks of the global Internet, relies on trust between BGP peers. ISP A says to ISP B, their peer, "hey, I'm responsible for this chunk of publicly routable IP space, please send all traffic for this particular block to AS number N".
This works as long as everyone configures their IP space announcements and prefix-list filters correctly.
A lot of less clueful ISPs in the world do not verify the IP space announced to them by their peers (BCP38 is your friend!). This results in things like the time a telecom in Pakistan hijacked the IP space for most of YouTube about ten years ago and successfully DDoSed themselves, while also causing a major YouTube outage.
This will keep happening until various ISP peers properly implement prefix-list filtering, ACLs on their edge BGP connections, and verifying peer announcements via things like various route registries.
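The reason a hijack works so well is longest-prefix matching: routers prefer the most specific covering route, no matter who announced it. A toy illustration of that selection rule (next-hop labels are made up):

```python
import ipaddress

def best_route(dest, table):
    """BGP-style longest-prefix match over (network, next-hop) pairs:
    the most specific route covering the destination wins."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, nexthop) for net, nexthop in table if ip in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

So a hijacker announcing a /25 inside a legitimate /24 silently captures half that block everywhere the more-specific propagates.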
In the case of DNS it's particularly nasty, as the attacker would control address resolution (say, redirecting traffic for your bank to a phishing site) for everyone using 1.1.1.1 without more specific mitigations. Combined with long DNS cache times, this could be a problem for a while.
A bit of a Hanlon's razor situation.
TLS, on the other hand, does address this attack, because controlling all the traffic to a TLS-protected site still doesn't give you a private key that produces a valid signature on a certificate for that site.
Well, unless you can also fool Let's Encrypt from all their locations around the world. Then you can get a Let's Encrypt certificate.
If my host is configured to use DNSSEC would that prevent sites from resolving?
If DNSSEC is not employed and a connection is directed to a malicious site, wouldn't HTTPS prevent the connection?
(I'm afraid I'm out of my depth on the implications of this aspect of networking and wondering about the security implications for me since I'm using Cloudflare DNS servers.)
There’s a lot of ifs on both those roads, though.
I hope you're not trusting 8.8.8.8 either: https://twitter.com/bgpmon/status/445266642616868864
My goal was to provide actionable information quickly.
I never said it was specific to Cloudflare. :) It is specifically a mistake which would break an assumption: that putting 1.1.1.1 into your resolver results in an answer from Cloudflare. DNS doesn't necessarily have any protections (I'm not current, so maybe they were added?), so the only level of protection is that the IP address routes UDP traffic where we expect it to.
It also isn't a long-term problem, it only remains for the length of time the route is wrong.
It could also be argued that we're already trusting every router between the device and 1.1.1.1 anyway, so there's not much difference. Except that there's already a trust relationship between those parties, and the new route subverts it.
It's the same level of risk if someone had done a BGP hijack of any backbone router.
So no. The only thing protecting you would be to have the expected hash of the certificate you expect to see (TOFU, Trust On First Use; though you're screwed if you didn't contact 1.1.1.1 before the incident!).
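The TOFU approach is easy to sketch: record a fingerprint of the certificate on first contact, then reject any mismatch on later connections (helper names here are hypothetical, and a real client would pin the SPKI from the actual TLS handshake):

```python
import hashlib
from typing import Optional, Tuple

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of the raw certificate bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def tofu_check(presented: bytes, stored: Optional[str]) -> Tuple[bool, str]:
    """Trust On First Use: pin the fingerprint on first contact,
    reject any mismatch afterwards. Returns (trusted, pin_to_store)."""
    fp = fingerprint(presented)
    if stored is None:
        return True, fp  # first contact: trust it and remember it
    return fp == stored, stored
```

The obvious weakness is exactly the one noted above: if your first contact happens during the incident, you pin the attacker's cert.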
In addition to a signature of the parent cert, the DNS stamp for Cloudflare DNS says that validation must be done against dns.cloudflare.com so this would require getting a certificate for cloudflare.com.
TLS is not a silver bullet. If an attacker controls the host behind what everyone believes to be 1.1.1.1, nothing prevents them from applying for a legit certificate.
* They need to do that, and get the resulting certificate, and install it, during the attack. The weirder the product (and certificates for IP addresses are relatively weird) the more humans end up involved in your order, and humans are slow.
* This leaves a smoking gun in the Certificate Transparency logs. So we all get to know (in maximum 24 hours but usually the reality will be minutes) about this extra certificate.
2) Who exactly is staring at CT logs and going "oh, I don't remember this domain using this CA, maybe I should investigate this" ? Sure there's a record of it. Doesn't really matter during an attack, because public attacks like this aren't intended to last long.
All you need is a half hour or less to steal a couple hundred million from a bank, or cryptocurrency wallet, using this attack. That's more than enough incentive for most unscrupulous 3rd world hackers.
If you're an authoritarian government, you could require CAs in your country to selectively quiet CT logs by certain users, and just issue certs willy-nilly for your private government org for MITM purposes. Google Chrome would detect them for Google-owned properties, but smaller sites would never know. And spy agencies can use this at their leisure and basically never be held accountable, because world politics.
Let's face it. The CA system is a joke and BGP is the butt of it.
Let's Encrypt does not offer certificates for IP addresses. They choose only to offer DV certificates using methods 3.2.2.4.6 and 3.2.2.4.7.
How much of your hypothetical half hour will you spend trying to figure out why your chosen ACME client reports "Policy forbids issuance" when asked for an IP address?
As to who is staring at CT logs, well there's the fun thing about the design of CT, the _logs_ aren't where you would be staring, you would be looking at a _monitor_ and a monitor can be configured to do whatever it so happens you think is important. We know that commercial CAs already sell monitoring as part of "Enterprise security" type offerings.
Facebook took... I want to say minutes here, but I can't find an exact timeline, to spot that a certificate had been issued by Let's Encrypt for a name in their DNS hierarchy and begin investigating what went wrong.
Certificate Transparency isn't finished. As it stands today your authoritarian government "only" needs to ensure that nobody notices these shenanigans as unavoidably they create a "smoking gun" which would lead to their pet CA being distrusted. That's a tall order, but certainly not impossible in the short term, or for attacks in which the bogus certs are shown to a small number of individual targets rather than a broad population that will invariably notice by accident.
But longer term CT is intended for use with a gossip protocol so that it's impossible for the pretence to be kept up. Sooner or later a node somewhere will end up realising that there's an inconsistency, either it has seen SCTs that weren't logged, or it has seen logs that aren't consistent with the logs other nodes see, either of which is a matter for distrust.
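For what it's worth, the core of a CT monitor is just a filter over log entries: flag any certificate for your names issued by a CA you don't use. A toy sketch of that filter (the data shapes and names are illustrative, not a real CT log API):

```python
def is_under(domain: str, base: str) -> bool:
    """True if domain equals base or is a subdomain of it."""
    d = domain.lower().rstrip(".")
    return d == base or d.endswith("." + base)

def suspicious_entries(entries, base_domain, expected_issuers):
    """Given (domain, issuer) pairs parsed from a CT monitor feed,
    return those for names under base_domain whose issuing CA is not
    on our expected list."""
    trusted = set(expected_issuers)
    return [(d, ca) for d, ca in entries
            if is_under(d, base_domain) and ca not in trusted]
```

A real monitor watches the log's signed tree head, fetches new entries continuously, and parses X.509, but this is the decision it ultimately makes on each entry.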
BGP has no relationship to the Web PKI, which is what I presume you're referring to by "the CA system". The relatively small number of parties interested in BGP have developed their own PKI to fit their needs.
And sure, if you have a couple billion dollars, you can set up fancy infrastructure and response teams to monitor the whole web (that you are aware of, that participate in CT) for a strangely-issued cert.
Or you could just get a cert issued by the CA you normally use. In which case, now we have to track some kind of "customer number" per CA. If you use more than one CA (Google does) now you're tracking different customer numbers on different CAs. And all that has to be standardized.
Most of these "fixes" for web PKI's glaring holes are intended for large multinational corporations, or are optional, or specific to a particular browser, and don't address the main concern: _do not trust an IP address to be who it claims to be_.
1. Agree contractual terms with particular Certificate Authorities in which all certs for names in your hierarchy need explicit approval from your security people.
2. Set DNSSEC-secured CAA records for your names forbidding issuance by other Certificate Authorities.
This funnels requests from hypothetical bad guys into your security people, which is exactly what they don't want. It loses you the shiny capability to do "spur of the moment" issuance, but presumably if you want these sort of terms the phrase "spur of the moment" causes you to start writhing and clutching your throat anyway. Insider threats will usually be much _worse_ than outsiders.
As to things being "specific to a particular browser" we can't and don't want to be able to force, say, Microsoft and Apple to do things just because everybody else decided they're a good idea.
The same with the trust store programmes. I can't make Microsoft take this seriously, but Microsoft can't make me use their SChannel and associated trust store. Maybe you have a lot more pull with Microsoft than I do.
If we're in a world where the "easiest" way to get a bogus certificate is now to do global BGP hijack of an entire /24 then I think we're on the right track.
There are hundreds of CAs, and not all of them are going to verify from multiple POPs. You only need your attack to work on one CA for it to be effective against every client on the web.
Even if all CAs verified from multiple POPs (not likely) the attacker will just increase their attack to advertise from multiple ISPs/POPs. The attack is virtually the same, the only thing you need is more network access, which is not hard to get.
Last time somebody insisted on this I actually counted, I forget the answer I got, but it's less than three figures.
You can get a bigger number if your idea of "effective against every client on the web" is "It works in Internet Explorer". Microsoft is very... liberal in accepting new CAs controlled by corporate or sovereign entities.
But if you expect "every client on the web" to include Chrome on Android phones, Safari on iPhones and Firefox everywhere, not just Internet Explorer, then you're talking about dozens, not hundreds, mostly because of the work done by Mozilla.
And most of those CAs are fairly small. Forget a nice API you can just make an HTTP request to and get your certificate in seconds, some of them are going to expect you to wait until business hours and talk to them on the telephone.
I looked into pinning, but the big services warn against it since they could change their certs at any time, which is fair enough.
They'd have to convince a large enough percentage of the internet to accept their routes. Automated DNS-validating services like Let's Encrypt make sure to take measurements from many places around the world to prevent things like this, right?
If you're using DNS-over-HTTPS, you should be safe though.
dnscrypt-proxy enforces pinning (the parent cert signature is included in the DNS stamp required to connect), and I guess Firefox and cloudflared also do.