In practice, IPv6 often uses only 48-56 bits for global routing, with 8-16 bits usable for subnet IDs or routing within a customer site. The last 64 bits of the address identify a particular device on a subnet. So an IPv6 address is a lot bigger than an IPv4 address, but that's because those bits are doing more things than they do in IPv4.
IPv4 already has the concept of subnet routing within 32 bits. Let's say we add one more byte for country: 0 = legacy, 1 = USA, … 127 = Vanuatu.
IPv6 didn't add one byte, it added twelve! I think 64 bits is plenty (as mentioned, astronomically larger than 32), and I'm still looking for reasons to change my mind.
IPv4 subnetting and IPv6 subnetting don't really work the same way, though. With IPv4, there are (almost?) no ISPs handing out class A blocks to consumers. The vast majority of consumers get a single IP address, and then they have to use NAT to create subnets on their end.
IPv6 is _dramatically_ different. It's actually possible to get a /48 or a /56, sometimes just for free as part of general operations. That leaves 16 or 8 bits for the customer to create hundreds or thousands of subnets. Unlike with IPv4, where most customers don't get more than one IP address, you don't have to use NAT if you have a /48 or a /56. Even if you only get a /64, that still leaves 64 bits to give every device its own globally routable address without having to set up NAT.
Do we need 64 bits to identify individual devices on a subnet? I don't know, maybe not. But if you're doing an apples-to-apples comparison: with IPv4 you have 32 bits for global routing; with IPv6 you have somewhere between 48 and 64. The remaining 64-80 bits of the IPv6 address have no good analog in IPv4. So in a lot of ways, IPv6 is like taking IPv4, expanding it to something between 48 and 64 bits, and then treating the remaining bits almost like extra fields that encode information IPv4 doesn't support.
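For concreteness, the /56 case above can be played with using Python's standard `ipaddress` module (the prefix here is a made-up example from the 2001:db8::/32 documentation range):

```python
import ipaddress

# Hypothetical /56 delegation from an ISP (example prefix taken from
# the 2001:db8::/32 documentation range).
delegated = ipaddress.ip_network("2001:db8:aa00::/56")

# 8 bits remain between /56 and the /64 subnet boundary, so the
# customer can carve out 2**8 = 256 subnets, no NAT required.
subnets = list(delegated.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[0])     # 2001:db8:aa00::/64
print(subnets[-1])    # 2001:db8:aa00:ff::/64
```

Each of those /64s still has 64 bits of interface ID left over for the devices inside it.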
My answer to that is that there's no way to know in advance whether 64 (or any other number of) bits is enough. Tech history is full of examples of arbitrary limits being set in the belief that there's no possible way anyone would need to exceed them, only for everyone to need to exceed them later on.
Is 64 bits enough? Maybe. Maybe not. But if you want to future-proof, setting an arbitrary limit is not the way to go.
I'd agree, but the historical computing restrictions usually meant hitting ceilings at 8, 12, or 16 bits, back when those bits were very expensive.
32 bits is just about large enough (in an order-of-magnitude sense) if the space were used more efficiently.
I've heard 128 bits described as "every atom in the universe" big. If so, then 64 is probably enough for every atom on Earth.
Now I've just thought of another angle, similar to UUIDs: those are used because they can be assigned randomly without worry of collision. But I don't think IPv6 addresses are being assigned randomly, hmm.
> 32bits is just about large enough (within an order of magnitude sense) if the space was used more efficiently.
Well, not really. Between just the populations of the US, Europe and China (places with high levels of internet connectivity), you have over 2 billion people (this site claims over 5 billion internet users: https://www.statista.com/topics/1145/internet-usage-worldwid...).
> But I don't think IP6 addresses are being assigned randomly, hmm
That is, in fact, exactly how IPv6 addresses are assigned using SLAAC with Prefix Delegation. Your ISP assigns you a prefix, and your computer randomly picks an address within it. You can also self-assign a (non-routable, like 10.0.0.0/8) prefix from the fc00::/7 ULA block by randomly filling in the remaining 57 bits to form a /64 subnet.
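A minimal sketch of that random self-assignment within a delegated /64, using Python's `ipaddress` and `secrets` modules (the prefix is a made-up documentation-range example, and real SLAAC also does duplicate address detection on the link):

```python
import ipaddress
import secrets

def random_address(prefix: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Pick a random 64-bit interface ID inside a /64, roughly what
    SLAAC privacy extensions (RFC 4941) do."""
    assert prefix.prefixlen == 64
    return prefix[secrets.randbits(64)]  # index into the 2**64 addresses

net = ipaddress.ip_network("2001:db8:1:2::/64")  # hypothetical delegated prefix
addr = random_address(net)
print(addr in net)  # True
```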
> Between just the populations of the US, Europe and China (places with high levels of internet connectivity), you have over 2 billion people
You have to consider the context: back then, multi-user computers were common. Each user didn't have their own computer; instead, they had a terminal to connect to a central computer. So a single computer would serve tens or hundreds of people, and as computers became more powerful, you could expect each computer to be able to serve even more people.
Not really. Under ten billion would be plenty if used efficiently; a significant fraction of addresses we'd want to be private and not directly routable anyway.
Sure, if you want your refrigerator, oven, and dishwasher publicly addressable on the internet it isn't enough, but you don't actually want that.
Further, 64 bits is many orders of magnitude overkill already. So what does 128 bring to the party, besides making addresses harder to type?
Those estimates typically confuse the roughly 10^80 atoms in the observable universe with 2^80 or so. The 2^128 addresses in IPv6 (about 3.4 × 10^38) are clearly more than the latter, but dwarfed by the former. There are roughly 10^41 atoms in the universe per IPv6 address; by mass, that's approximately one address for each combined biomass of Earth.
Earth mass divided by 2^64 is roughly 357 tons. There are roughly 2^33 humans on Earth, so 2^64 is "only" about two billion addresses per person; that's far fewer addresses than the number of human cells out there.
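Those figures are easy to sanity-check in Python (assuming the common ~10^80 order-of-magnitude estimate for atoms in the observable universe):

```python
# Sanity-check the back-of-envelope numbers above.
ATOMS_IN_UNIVERSE = 10**80   # common order-of-magnitude estimate
EARTH_MASS_KG = 5.97e24
HUMANS = 2**33               # ~8.6 billion, close enough

print(ATOMS_IN_UNIVERSE // 2**128)  # ~3e41 atoms per IPv6 address
print(EARTH_MASS_KG / 2**64)        # ~3.2e5 kg (~324 metric tons) per address
print(2**64 // HUMANS)              # 2**31, about 2.1 billion per person
```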
> “why is 64 bits not enough?” Or: why wasn't 64 bits even an option?
Because then you don't really have enough bits to do some nice things:
1. It would be nice for your upstream network provider (and you) if they could delegate a network prefix and not concern themselves with the address plan inside your subnet. That way, if there's an address collision within your prefix, it's not their problem.
2. Assuming you have been delegated a network prefix, having at least 48 bits for within-subnet addressing greatly simplifies self-assigned addresses:
2.1 You can use the data link layer's address (the 48-bit MAC address for Ethernet), which has at least weak guarantees of uniqueness (though this isn't really advised today)
2.2 You can also randomly generate addresses or generate them by hashing things, and be reasonably confident of not colliding with another station.
2.2.1 Of course, you want to check that no other station is using the same address, but there's always the hidden node problem: How does station C cope with stations A and B both claiming address XYZ?
2.3 You can still use DHCP to assign addresses if you want. But you don't have to.
3. It would be nice if each station was not limited to a single address.
3.1 Among other things, having multiple addresses vastly simplifies building a multi-homed network. For example, you can (today) sign up for two (or more!) ISPs that support IPv6 Prefix Delegation and have your router(s) issue Router Advertisements (RAs) for each prefix.
If a link goes down, your router(s) don't need to do anything or share any state... it just works (yes, really, I've tried it!). Each of your routers, of course, only routes packets for the prefix it was delegated by its upstream.
(BTW, IPv6 also has some neat RFCs for letting foreign hosts know about a prefix change, so you can even transfer existing open connections initiated from an address on prefix A through a different router with connectivity on prefix B)
3.2 For privacy, wouldn't it be nice if you could generate a new address for every external server you connect to? You can do that with the 64 bit subnet address space--remember, your ISP has no say about the address plan within your network.
4. With 128 bit addresses, IPv6 allows you to delegate a /64 to yourself from the ULA block (the IPv6 equivalent of 192.168.0.0/16 or 10.0.0.0/8). You can just randomly pick a 57-bit suffix to append to fc00::/7, and you can be pretty darn confident that nobody else will pick the same one. And you get all the same advantages of self-assigned addresses (mentioned above) within that prefix.
4.1 Having a (statistically) unique local prefix maybe doesn't sound like such a big deal until you've tried to merge two enterprise networks that both have hosts on 10.0.0.0/8, and clients hard-coded to connect to those hosts.
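A sketch of that self-delegation in Python. (One nuance: RFC 4193 actually structures the block as fd00::/8 plus a 40-bit random Global ID, giving you a /48; the RFC derives those 40 bits from a time/MAC hash, but plain randomness is fine for illustration.)

```python
import ipaddress
import secrets

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Self-assign a ULA /48: the fixed fd00::/8 byte followed by a
    random 40-bit Global ID (RFC 4193 style, simplified)."""
    global_id = secrets.randbits(40)
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

ula = random_ula_prefix()
print(ula)  # e.g. fd4f:9e3a:2b1c::/48
print(ula.subnet_of(ipaddress.ip_network("fc00::/7")))  # True
```

Two organizations that each did this and later merged would collide with probability about 2^-40, which is why the renumbering pain in 4.1 mostly goes away.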
Finally, think about how a 64 bit global address space would be split up:
We're running out of IPv4 addresses, even *with* NAT (and "CGNAT", which is just regular NAT with a fancy name). Most of those endpoints are users, not servers; you need at least 30 bits just to give one prefix to each household with a current internet connection. Between the expected growth of internet users and all the current enterprise networks, you'd need well in excess of 40 bits for the prefix.
So you'd realistically get no more than a /48 prefix at home (16 subnet bits). That's not enough to do the cool autoconfig and privacy things I mentioned. More likely, you'd get a /56, because 256 hosts "ought to be enough for anybody" and you can't do the cool stuff anyway.
> For example, you can (today) sign up for two (or more!) ISPs that support IPv6 Prefix Delegation and have your router(s) issue Router Advertisements (RAs) for each prefix.
As a user of OpenWrt (which does exactly this) I strongly disagree that it is the right solution. If you announce on the LAN the prefixes obtained from a fast fiber and a slow LTE link, then devices will, well, get an address for each prefix. The problem is that they have absolutely no information to choose the source address correctly - it's all just numbers! So, just by bad luck, they choose the source address from the LTE prefix and waste the LTE data and my money if I use the default setup. Which is why I don't. My network uses IPv6 NPT, but announcing one prefix at a time would have been even better (because I only want fail-over, not real multihoming), although impossible with OpenWrt.
Interesting, but it still looks like most of this could be done in 64 bits: ~4 billion IPv4-sized internets, one for each household.
IPX had a lot of convenience features in a similar number of bits (80, but 24 were wasted on the manufacturer ID, so 56 useful bits).
A rotating address number will not provide privacy if the prefix attached to you stays the same. That would be like saying a rotating port number provides privacy, which is not the case.
No organization will ever need a /64's worth of addresses (not even close, not even wastefully; that's 18 quintillion).
The slow uptake of IPv6 seems to imply that it's over-engineered and that people don't care about these potential additional features. I'm a lifelong geek and can barely get interested. Network engineers are maybe 0.01% of the population.
I guess bits are only getting cheaper in the future, so "why not spend them incredibly wastefully on anything we can think of" is ultimately the answer. Although that doesn't answer why 64 bits was not even an option.
It’s astronomically larger than 32, not double.