After reading their explanation, I don't understand the purpose of 1e100.net. Maybe someone who is more technical than I am can explain how it is used to identify Google's servers and why that's useful.
On that note, I should answer their question: Was this article helpful? with No.
Servers need a name that's different from the product(s) they're serving.
For example, when I look up google.com and then reverse-lookup the IP address, I get sfo07s13-in-f14.1e100.net (you'd get something different depending where you are). The name sfo07s13-in-f14 could tell someone where to look if that machine is misbehaving.
If they'd named it sfo07s13-in-f14.google.com, then browsing to that URL sends google cookies. If it's some server from a recent acquisition that may not be up to Google's level of security, that's dangerous.
Even fairly small companies are well-advised to have a domain name for their brand and a separate domain name for their infrastructure.
Adding to your last paragraph: medium-to-large ISPs and web entities are also highly likely to use a third domain, which never touches the public internet. You might have:
a) your public web presence, www.widgets.com for marketing, sales, customer web portal, customer billing, and so forth.
b) your domain name used for public reverse DNS for your ARIN, RIPE, APNIC, etc. IP space, for an ISP that has its own AS: something like as12345.net. This is the function that the 1e100.net domain serves for Google.
c) your internal domain name that is used by your management network to address every piece of network equipment, hypervisor, virtual machine and so forth. This could be something like "widgets.internal". The company internal DNS servers that never touch the public Internet will be authoritative master/slave for this, and your intranet clients will be set up to query these DNS servers. Your reverse DNS for all of your RFC1918 IP space (10.x, 172.16.x, 192.168.x, etc) will have forward/reverse matches for everything that you manage and monitor in your network, so that automated tools can crawl the network and auto discover/auto-provision new equipment.
Internet users outside of your intranet will never see item C, but it definitely exists in a lot of companies' infrastructure.
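As a small aside, checking whether an address falls in the RFC1918 space mentioned in item (c) is easy to sketch with Python's stdlib ipaddress module (the addresses here are made up):

```python
import ipaddress

# The RFC1918 private blocks (10/8, 172.16/12, 192.168/16).
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def in_private_space(ip):
    """True if ip falls inside one of the RFC1918 blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

print(in_private_space("10.20.30.40"))  # True: management space
print(in_private_space("8.8.8.8"))      # False: public internet
```

Automated inventory tools often use a check like this to decide whether a discovered interface belongs to the management network.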
Is it not recommended for part C to be done with a third, yet publicly owned, domain? Instead of widgets.local or widgets.internal, it would be widgets-internal.com or something.
Struggling to remember why, but I remember people being mad about .local.
To avoid stupid software breakage you should never use .local for anything. Choose any other TLD for internal use, doesn't matter what it is, as long as it isn't a real TLD that exists in the new gTLD and ccTLD system.
On a large scale for all internal stuff you are going to have your own, internal root and intermediate CA. The root CA signs the intermediate CA, the intermediate CA then signs SSL/TLS certificates for your internal infrastructure things.
As an example for a mid sized ISP with such a setup, the internal ticket portal for the NOC is accessible only while physically in the office or when on the VPN, speaks TLS1.2 only to standard web browser clients, and its URL is https://portal.tickets.burrito
where the "burrito" is obviously not the real name, I've changed it for this example, but the choice of TLD in your own internal DNS infrastructure is totally arbitrary.
Then the individual clients all have the public certs for the internal corporate root and intermediate CA installed in them, so that they trust certificates signed by the internal CA.
If you're big enough to care about having a serious multi-state-scale intranet, you probably don't want to be reliant upon external CAs to sign your stuff. Having your own internal CAs lets you do a lot of other things as well, without additional cost, such as sign per-client-device certificates as well.
Please do not suggest people use TLDs that are not part of the gTLD, ccTLD, sTLD, or any other TLD system. Just because they are not used now does not mean they will never be used. We just saw the effects of that with the 1.1.1.1 DNS service: people assumed that address would never be used, and now that it is, things break.
Currently, the only reserved TLDs are:
- [RFC 6761] example
- [RFC 6761] invalid
- [RFC 6761] localhost
- [RFC 6761] test
- [RFC 6762] local
- [RFC 7686] onion
With the exception of localhost, local, and onion, these can be used without any worry about future use. Any others should be considered real TLDs and not be used unless you actually own the domain name under that TLD.
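To make the distinction concrete, here is a minimal Python sketch that flags hostnames under those reserved TLDs (the function name and example hostnames are mine, not from any RFC):

```python
# Special-use TLDs reserved by RFC 6761, RFC 6762, and RFC 7686.
RESERVED_TLDS = {"example", "invalid", "localhost", "test", "local", "onion"}

def reserved_tld(hostname):
    """Return the reserved TLD a hostname falls under, or None."""
    label = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    return label if label in RESERVED_TLDS else None

print(reserved_tld("router1.test"))            # test
print(reserved_tld("portal.tickets.burrito"))  # None: not reserved
```

Anything that comes back None is either a real TLD someone owns, or a made-up one that could become real later.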
First, you're absolutely correct that people should not just choose random TLDs and attempt to use them. But I think you misunderstand me: I'm talking about doing purely internal DNS for things in private IP space that will never touch the internet. IP space that will never be announced to BGP peers and is thoroughly firewalled off from the Internet.
Let's say I have an ISP named Burrito Corporation.
I want to uniquely address equipment in my private management IP space VRF and have a proper hostname for every piece of equipment, both for "forward" and rDNS.
It's a pretty common setup to have internal BIND servers which are authoritative master/slave for the TLD .burritocorp, and which have an ACL allowing the internal IP space (10/8, etc) to query them. Then set up all internal DNS stub resolvers/client devices to query those servers.
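A rough sketch of what that BIND configuration might look like (all names, file paths, and addresses are placeholders; .burritocorp is the made-up TLD from this example):

```
// named.conf fragment on the internal-only authoritative master.
acl "internal" { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };

options {
    allow-query { "internal"; };    // refuse queries from everywhere else
};

zone "burritocorp" {
    type master;                    // the slaves would use "type slave"
    file "/etc/bind/db.burritocorp";
};

zone "10.in-addr.arpa" {
    type master;                    // rDNS for the 10/8 management space
    file "/etc/bind/db.10";
};
```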
Now I have a core POP in Toronto and I've decided to use airport codes for site names, so the internal management IP interfaces for things at that site might get a hostname which is hierarchically contained under yyz01.ca.burritocorp
It is unlikely in the extreme that ICANN is ever going to create a TLD named "burritocorp".
I am not suggesting that people go and try to use self-created TLDs for things that have real world IPs in the global BGP4 routing table. See section 6.1 of RFC6761.
I think the risk in that case is this: I, a cloud computing startup also called BurritoCorp, get .burritocorp registered as a real TLD to host my state-of-the-art cloud services, which perfectly solve the peculiar infrastructure needs of on-demand Mexican food delivery brands such as yours. Sadly, you then have to block the entire TLD to prevent collisions with your internal infrastructure, because when I first launched, I caused a four-hour outage for you.
Internally your OS resolver will be configured to point to a different DNS than anyone external to your network. You essentially have a private branch of the DNS at that point. The only real potential for conflict is when the FQDN of an internal server is the same as for an external server. In that case your internal DNS will serve up a record pointing to your internal server, and external users will get a record pointing to an external server. If staff want to order fresh burritos over the Burrito-over-TCP protocol that will have to use separate equipment. But nobody outside your intranet will ever see a name conflict.
I understand, but 1.1.1.1 was used by people for internal purposes, for example as a default for portals on wifi routers. Because of that, anyone who wants to use the 1.1.1.1 DNS server now cannot do so on those routers, as the traffic does not go where it was intended.
For example, my username is enzanki_ars. Let's say I set up enzanki.ars at home as a host for an in-home-only server. Now let's say that a new country registers .ars as a new TLD, for example Argentina, as ARS is the ISO 4217 currency code for the Argentine peso. Now I am stepping over someone else's TLD, and someone could even register enzanki.ars before me and start using it.
My personal policy is "just because you can doesn't mean you should." If I wanted to disrespect the process we have in place to prevent people from stepping over other people's domains, I would have set up enzanki.ars already. Until then, 192.168.0.0/16 is how I reference my hosts at home.
The 1/8 IP block, unlike anything from RFC1918, was never supposed to be used by anybody internally. RFC1918 was published 22 years ago so people have had ample time to stop being foolish.
With a totally internal DNS setup it doesn't matter even if ICANN does decide to create a ".ars" TLD someday. In such a theoretical setup, your clients are all querying your own internal DNS servers, and you are not publishing or pushing anything to the root nameservers. Your use of internal hostnames and rDNS for ".ars" internally does not conflict with or hurt anybody's use of their own domains in the real-world .ars out on the public Internet, and neither does their use of the domain affect your non-internet-connected management network. There is no "stepping over other domains".
> In such a theoretical setup, your clients are all querying your own internal DNS servers, and you are not publishing or pushing anything to the root nameservers.
And in the usages of 1.1.1.1, no one was publishing anything on BGP or pushing it out to global routing tables. However, they were building their own infrastructure (and, in the case of consumer routers, their customers' infrastructure) on the assumption that this portion of the namespace would never refer to an external resource they'd actually want to access.
Similarly, if you use .ars, you're taking up that namespace from the perspective of your internal users. If at some point it turns out that there is some resource under that name that you want to access, you're going to have a hell of a time rebuilding your infrastructure to not use that name.
tl;dr don't use random names in a globally-managed namespace. Even if you're the only person seeing those name -> resource mappings, you can end up with inconsistencies between your own usage and the globally-managed version.
> Similarly, if you use .ars, you're taking up that namespace from the perspective of your internal users. If at some point it turns out that there is some resource under that name that you want to access, you're going to have a hell of a time rebuilding your infrastructure to not use that name.
No, you're not, because an internal management network does not have a gateway out to the internet. The management VRF does not talk to the global routing table. There would be no way to access a public .ars website even if you did not take its namespace.
If it is never going to be exposed to the public internet, you might as well use .com, then at least when it does get connected to the public internet you’ll get feedback that it is broken pretty quickly.
I could see an IT department using these (misguidedly) for purely internal purposes: .cam .camera .coop .drive .equipment .institute .media .network .systems .wiki. These are now registered TLDs. You are begging for someone to misconfigure something if you use trivial TLD names internally.
There are two interwoven threads of logic in my post. You've mistaken the second as limited to the first when it is building upon it.
To clarify:
It /used/ to be that .local wasn't mentioned in any standard.
Don't do anything like .local (a non-TLD that is not part of a standard), because there is now a history of standards being created that break existing private deployments, and also because other TLD-like names can now be purchased as real TLDs. That is why the current best practice is to use an internal subdomain of a real domain you own (even if it is not globally published).
> The Internet Engineering Task Force (IETF) standards-track RFC 6762 (February 20, 2013) reserves the use of the domain name label local as a pseudo-top-level domain for hostnames in local area networks that can be resolved via the Multicast DNS name resolution protocol.[1] Any DNS query for a name ending with the label local must be sent to the mDNS IPv4 link-local multicast address 224.0.0.251, or its IPv6 equivalent FF02::FB. Domain names ending in local may be resolved concurrently via other mechanisms, e.g., unicast DNS.
A somewhat common pattern for (C) is using names like foosrv45.hide.example.com if you don't want to register another domain for that purpose. For the convenience of the DevOps team it even makes sense for such hostnames to be publicly resolvable (to your internal IPs).
One issue with doing this is that it tends to break in various wonderful ways when you internally use Windows and Active Directory.
For various security reasons it's a bad idea for your internal hostnames to be publicly resolvable from anywhere on the internet. They should resolve only when you're on a VPN, where it's possible for your client device's stub resolver to query the internal DNS server that only talks to internal IP space.
This holds even if the internal hostnames only resolve to a non-globally-routable IP address somewhere in 10/8 or 172.16/12, etc.
In my opinion the convenience gained is more significant than these mostly theoretical "various security reasons".
In other words, if exposing the hostnames and addresses of your internal infrastructure to the public internet has meaningful security ramifications, then you have a considerably more critical security problem.
I hope you never have to experience the non-theoretical side of that decision.
Being able to resolve internal IPs externally isn't even really a benefit: if I can't resolve internal.hostname.domain then I'm not on the VPN, which means I can't reach the machine anyway, so where's the benefit?
A perfect example of why this threat isn't merely theoretical is the $36k bug bounty that Google paid out recently[0]. With additional knowledge of their internal network exposed by the information leak from DNS, the damage a blackhat could have done is untold.
The problem isn't the exposition of hostnames and addresses, the problem is if blackhats do manage to get access to the internal network (through whatever means; breaking into the VPN, a hole in the firewall, social engineering, RCE in the public facing website as in Google's case), it's undoubtedly easier if they've been given a list of juicy targets, rather than have to discover them for themselves.
I couldn't have put it better myself. I don't think the person who wants convenience has enable on any ASes routers, or at least I hope not. One thing I find interesting is that they're using 169.254/16. Doubtful Google uses DHCP for anything, their automated provisioning tools are probably quite unique. My best guess is that their stuff in that particular part of app engine is using the space because they've actually near exhausted the rest of rfc1918, which is pretty impressive.
The only other organization I know of that has done that is Comcast, which has been a leader in forcing vendor ipv6 support because they literally exhausted 10/8 for their management networks.
And yet Google still believes VPN as a security boundary is an anti-pattern since the whole intranet is as weak as it's least secured node. https://cloud.google.com/beyondcorp/
If you already have a proper, working VPN solution for remote access into the internal IP space, which any ISP will already have for network engineer staff and noc purposes... What great convenience do you gain by making the DNS and internal IP space scheme public?
I said nothing about making it public, because I don't view the possibility of someone capturing a DNS packet of the kind "A? fooserv34.hide.example.com" as making anything public. (And even if your authoritative DNS server allowed AXFR for the whole of hide.example.com, you would only leak information that is useful to somebody who also has the capability to find out exactly the same information on their own.)
The convenience is about not having whatever VPN solution you use inject DNS recursors into the laptop's OS, which ends up being not reasonably solvable when you have two such VPNs you have to use at once.
Who said anything about making IP addresses public? You seem to be assuming registering a domain name implies delegating that domain to an actual name server that must respond to public requests. These assumptions are both incorrect.
Registering a domain prevents those names from being used for other purposes in the future. Your domain does not have to have valid NS addresses. Even if a domain has an NS record with a valid address of the authoritative name server, that server does not have to respond to requests from the public internet. It doesn't even have to be accessible on the public internet.
Register a domain for internal use and run an internal name server on e.g. 10.x.y.z that handles your private IP space, and configure the local recursive resolver name servers to hand out your internal server's address when asked about the associated NS record. At the same time, set the real NS records for your internal domain to e.g. a traditional DNS hosting service that returns a CNAME pointing to your public domain. (the public internet only sees *.private.example.com as a CNAME to www-public-name-com.example.com)
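A sketch of the two views of such a domain in zone-file form (all names and addresses are invented for illustration):

```
; Public zone for example.com, hosted at a traditional DNS provider.
; The internet at large only ever sees a wildcard CNAME:
*.private.example.com.    IN  CNAME  www.example.com.

; Internal zone, served only by the name server on 10.x.y.z:
gw1.private.example.com.  IN  A      10.0.0.1
db1.private.example.com.  IN  A      10.0.0.25
```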
Maybe .local was not the best example. In my opinion it's recommended to do this with something that's not a public domain at all. Since your DNS is entirely internal, you can set up bind9 servers to be authoritative for the root, and you can pick anything totally arbitrary as a top-level domain, as long as it is not a valid public domain or TLD, like widgets.burrito.
> In my opinion it's recommended to do this with something that's not a public domain at all ...
The recommended practice, AFAIK, is to register a public domain for the purpose. See the recent HN discussion of .home.arpa for all the reasoning and arguments and counter-counter-counter-points.
Really large ISPs have even planned for the disaster-recovery scenario of a collapse of public DNS infrastructure, so their internal management network and hostnames are not related to the existing gTLD or ccTLD, or legacy TLD in any way. There is no need to involve ICANN-approved TLDs in your internal DNS infrastructure, particularly if you have management interfaces for a lot of things that are firewalled off or air gapped from ever touching the public internet.
What does ICANN have to do with this? If the global DNS system crashed and burned, you would be in exactly the same place as if you had invented a TLD as you have suggested. But if that doesn't happen, and the way more likely outcome of your "made up TLD" becoming real happens instead, you're in a far worse position.
So far I've read you suggest .taco, .burrito, and .burritocorp as good TLDs to pick. This is terrible advice.
Purchase a public domain name from a real TLD - use it knowing it's yours, and never going to conflict.
I'm using made up words as placeholders like foo, bar, or anything else. Not suggesting you use your own tld of burrito. The point of using a nonsensical example is that your TLD choice for hostnames and rDNS in a management VRF can be entirely arbitrary, since it is not part of "the internet".
Typically the name would be something relevant to your needs, such as the name of the company, or AS number. I think what you don't get is that the public root name servers do not need to have anything to do with an entirely internal dns infrastructure.
Not suggesting icann is going to crash and burn either, but that "real" TLD relevance to what you do in rfc1918 IP space in a network that is not routed to the internet is minimal at best.
Bet you $5 that Google's actual internal DNS for the OOB, SNMP management, and automated provisioning tools which control those 1e100 hosts is not 1e100, and is not a zonefile you can find cached anywhere public. They built their own internal DNS for it. I have seen the same at two large CDNs. Their internal authoritative DNS servers for their management IP space have no connection whatsoever to the public internet or to the lettered root nameservers.
> I'm using made up words as placeholders like foo, bar, or anything else. Not suggesting you use your own tld of burrito. The point of using a nonsensical example is that your TLD choice for hostnames and rDNS in a management VRF can be entirely arbitrary, since it is not part of "the internet".
1.1.1.1 wasn't part of the internet either, and, guess what it does today?...
> and is not a zonefile you can find cached anywhere public
You seem to be conflating owning a legitimate domain name for the purpose of internal use with making the DNS records from that domain's zone public. One does not imply the other.
Buy the domain, know you will never conflict with something, and you're done (bar renewal!).
It's quite amusing that you are promoting the use of made-up TLD names for internal use when previously you were deriding others for doing the same with 1/8 IP space (pre-2010) for the exact same internal-only, isolated use case.
I'm trying to picture a co-worker telling the team they went ahead and put all the management interfaces in the .burrito domain in order to save the company $10 in domain registrar fees.
Especially in that disaster-recovery scenario, when you're figuring out how to combine the different namespaces that different disconnected fragments of the internet are using, you really don't want namespace collisions.
Namespaces are abstract things that exist outside of the realm of actual network connectivity, and enable you to plan for all kinds of hypotheticals.
So, for example, the domain google.com points to the domain sfo07s13-in-f14.1e100.net (depending on where you're browsing) which then points to an IP address of the server that's serving you the content.
google.com -> subdomain.1e100.net -> server IP address
Then when you run a reverse DNS on the IP address, you get sfo07s13-in-f14.1e100.net.
rDNS: server IP address -> subdomain.1e100.net
So, like you said, if something is wrong with the server, you can run a reverse DNS on the IP address to get the subdomain sfo07s13-in-f14.1e100.net.
Is this the correct understanding?
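For what it's worth, the reverse step is a PTR query at a name built by reversing the IP's octets under in-addr.arpa, which Python's stdlib can show directly (the IP below is just an example Google address):

```python
import ipaddress

# dig -x and nslookup build this PTR owner name behind the scenes.
addr = ipaddress.ip_address("172.217.5.14")
print(addr.reverse_pointer)  # 14.5.217.172.in-addr.arpa
```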
But don't you already have the IP address of the misbehaving machine?
Lots of people look at IP addresses. Most of the people who find the "1e100.net" records in their network logs would have no easy way of knowing that 1.2.3.4 (or whatever) was a Google IP. The reverse lookup allows you to easily find out what domain it belongs to (1e100.net, which a Google search shows is a Google address).
You can also embed other information in the reverse name, such as the region, datacenter, floor, switch, rack, or unit number of the machine. It can be incredibly easy to locate the machine by just looking at its reverse name. "Oh, 1.2.3.4 is in us-east-1, on floor 3, in rack 27, unit 5. Let me go check the network cable."
The name sfo07s13-in-f14 is mnemonic to Google engineers -- it's probably in San Francisco data center 7. Also, that server probably has both an IPv4 and IPv6 address. Also, sometimes IP addresses change but you want servers to keep their identity. So names are convenient.
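If the scheme really is site code + cluster + host (a guess; only Google knows the actual field meanings), a name like that could be split mechanically:

```python
import re

def parse_name(hostname):
    """Hypothetical parser for names shaped like sfo07s13-in-f14."""
    m = re.match(r"([a-z]{3})(\d+)s(\d+)-in-f(\d+)$", hostname)
    if not m:
        return None
    site, pop, cluster, host = m.groups()
    return {"site": site, "pop": int(pop),
            "cluster": int(cluster), "host": int(host)}

print(parse_name("sfo07s13-in-f14"))
# {'site': 'sfo', 'pop': 7, 'cluster': 13, 'host': 14}
```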
>If they'd named it sfo07s13-in-f14.google.com, then browsing to that URL sends google cookies. If it's some server from a recent acquisition that may not be up to Google's level of security, that's dangerous.
Sorry, I'm slightly confused.
I browse newproduct.google.com. My browser calls the DNS, asking for the IP. The IP comes back as 192.168.0.1 [1].
It connects to 192.168.0.1. Gets hit by an XSS, and sends your cookie value to evildoer.example.com.
How would it help you that the reverse-IP of 192.168.0.1 is sfo07s13-in-f14.1e100.net? The browser doesn't know that. It thinks it's going to newproduct.google.com.
Your example is not the same as the one in the comment you replied to. You picked a product hostname. The example was an infrastructure hostname.
The point is that Google (or any company with the same mindset) scopes down the number of machines that can receive your google.com cookie. Even their own machines often don't need it to do their job, so it's not worth the security risk to have your cookie sent more than necessary.
Sorry, but I don't follow. I hope you can clarify. What's the disadvantage of providing a PTR -> sfo07-blah.google.com instead of sfo07-blah.1e100.net?
The browser will send cookies to X.google.com and will not to X.1e100.net. OK.
But how is that problematic in this case? Why would X.google.com misbehave when it receives Google's cookies? And why would it be online in the first place if it is not yet up to appropriate security standards?
For normal purposes, DNS translates domain names into IP addresses (e.g. youtube.com to one of Google’s IPs). However, you can also do it the other way around: a reverse DNS lookup where you ask the associated domain name of an IP address.
Google decided to have all their IPs return a reverse DNS lookup under the 1e100.net domain name. It makes things simpler when you want to figure out who's connecting to your servers, for example.
Consider a scenario where you have a log file and in the log file is an IP address (172.217.5.14). You are not sure whose IP address it is. So you run the following commands:
# dig -x 172.217.5.14 +short
lga15s49-in-f14.1e100.net.
ord38s19-in-f14.1e100.net.
# dig lga15s49-in-f14.1e100.net. +short
172.217.5.14
The first command (dig -x) checks the PTR record for the IP address 172.217.5.14. It returns two PTR records: lga15s49-in-f14.1e100.net. and ord38s19-in-f14.1e100.net.[0]. Those are subdomains of 1e100.net, which we know Google owns. However, you can set a PTR to pretty much whatever you want, so we take an additional step as well: we run dig again to check the A record for the domain. This returns the same IP address we started with, which is good. Since Google controls the DNS for 1e100.net we can be reasonably sure that it is in fact a Google server.

This is called Forward-confirmed reverse DNS (FCrDNS) and is one tool you can use to determine the ownership of an IP address. For example, it is frequently used as a weight in email spam filters. Although, because of the intricacies of email, in that case it is usually not used for identification; instead it is used as a general-purpose check to determine whether a mail server is rogue, since spam servers very often do not have proper FCrDNS.
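A sketch of the FCrDNS logic in Python (the socket-based defaults need network access, so the lookup functions are injectable; the fake records below mirror the dig output above):

```python
import socket

def fcrdns_ok(ip, ptr_lookup=None, a_lookup=None):
    """Forward-confirmed reverse DNS: at least one of the IP's PTR
    names must resolve back to the same IP."""
    if ptr_lookup is None:
        ptr_lookup = lambda a: [socket.gethostbyaddr(a)[0]]
    if a_lookup is None:
        a_lookup = lambda name: socket.gethostbyname_ex(name)[2]
    for name in ptr_lookup(ip):
        try:
            if ip in a_lookup(name):
                return True
        except OSError:
            continue
    return False

# Offline demo using the records from the dig example:
ptrs = {"172.217.5.14": ["lga15s49-in-f14.1e100.net",
                         "ord38s19-in-f14.1e100.net"]}
addrs = {"lga15s49-in-f14.1e100.net": ["172.217.5.14"]}
print(fcrdns_ok("172.217.5.14",
                ptr_lookup=lambda ip: ptrs.get(ip, []),
                a_lookup=lambda n: addrs.get(n, [])))  # True
```

A spam filter doing the same check would call fcrdns_ok with the real socket lookups and treat a False result as a strike against the sender.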
There are other tools to determine who owns an IP address, like whois, but in some instances one will garner useful information and the other will not. So it's nice to have both at your disposal.
[0] As a side note: the trailing . in those PTR records returned by dig is not a typo. All domains actually end in a dot, it's just usually implied.
> Those are subdomains of 1e100.net, which we know Google owns
Sorry, but to the average user the domain name 1e100.net doesn't ring a bell at all at this point. They would still have to look up the IP in ARIN/RIPE/etc to see that the IP range is effectively owned by a company called Google.
Do you really need a hostname at all? Wouldn't the ARIN/RIPE/etc entry be sufficient to know who "owns" said IP address?
See, I probably would have reached for whois before dig. Partially because reverse DNS seems less likely to be populated with useful info, in my limited experience.
rDNS can be populated with a great deal of useful information, if you are trying to diagnose an asymmetric routing issue between two internet service providers. Particularly if both of them have had the forethought to give reasonable, understandable, hierarchical names to their globally distributed POPs. Other things like "ae" that show up in a traceroute can be indications of an 802.3ad aggregated link, which juniper calls an Aggregated Ethernet. Same as interface abbreviations for Cisco and juniper you will find like "hu", "te", "xe", etc.
One example: say you have a $200/mo dedicated server customer. As an ISP, you're giving them a /29 of public IP space. That /29 exists as a vlan subinterface on one of your Juniper routers and is trunked across the datacenter through various switches to the server. Let's say it's vlan 2659. Somewhere in the public rDNS for the default gateway IP of that /29, you would have the string "vl2659".
Their infrastructure probably allows them to host multiple products in the same subnet, it's possible any given (IP belonging to a) physical host could be hosting a youtube API one minute, google search next, gmail the next. So they picked a subdomain that belongs to none of these, to eliminate any confusion.
It's good practice to have working reverse DNS for all of your public IP space. Using a short domain name is convenient. There are a number of ISPs that own their AS number as a domain such as as12345.net and use that for their rDNS, so it will show up nicely in traceroutes either direction.
Google has done something a little bit different here, it's not their AS number, but same general concept.
It's the domain name they use for their servers. It's generally good practice to have a DNS name that maps to a particular server, apart from the website it's supposed to be serving, for administrative purposes.
Try this:
> dig google.com
;; ANSWER SECTION:
google.com. 299 IN A 172.217.164.110
> nslookup 172.217.164.110
Non-authoritative answer:
110.164.217.172.in-addr.arpa name = sfo03s18-in-f14.1e100.net.
I don't see any requests to 1e100.net when loading Google sites like google.com or youtube.com. I see domains like gstatic.com and apis.google.com, but not 1e100.net.
You can think of all user-facing domains in a system as an interface or API, which abstracts away implementation details about which specific servers are behind them.
This allows flexibility in infrastructure — you can swap machines in and out (e.g. by updating public DNS records, updating the machines’ IP addresses, or adding them to (or removing them from) a load balanced pool behind a reverse proxy). But you still need a way to reference individual machines regardless of whether they’re serving or not.
That’s where domains like 1e100.net come in — a system of concrete (non-abstract) references to specific machines in your infrastructure.