What do people do with v6? Do they have better memories than I do? Cut-and-paste repeatedly as needed?
While I do less and less sysadmin work over time, even on my home network there are opportunities to get more comfortable with v6 that I've skipped because of this, and I never see anyone talking about it.
I've got no beef with IPv6; it seems pretty great that there are enough addresses for everybody to get one.
That said, being unmemorable is not a feature. Doing nothing is an alternative to doing things 'the right way' and people will choose it. The situation will backslide a little bit - with IPv4 people would do things, with IPv6 some people won't. Unless the goal is to configure IPv6 systems for their own sake, this is an unfortunate and unwanted side effect.
IP addresses change all the time in cloud environments and container orchestration systems. Even if they didn't change, you get better systems if you pretend they might. It doesn't matter if you use containers or not -- if you assume IP addresses could change at any time, you end up with infrastructure that is tied to your service graph rather than your hardware.
If a private domain and/or server is only reachable from private IPs, then its IP should be in an asset management system somewhere (it should be if it's public, too): a full-blown orchestration system if you run cloud servers or containers, or some form of asset register, even if only a spreadsheet, if it's physical servers in your own data centers.
If things are properly automated, that asset register should drive configuration deployment: you shouldn't ssh in to fix a broken DNS configuration, you should run your deployment with a fixed config.
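As a sketch of that flow, here is a trivial generator that renders /etc/hosts lines from an inventory; the hostnames and addresses are invented for illustration:

```python
# Minimal sketch of "the asset register drives the config": render /etc/hosts
# entries from a small inventory. Hostnames and addresses are made up.
inventory = {
    "router.home.lan": ["192.168.1.1", "2001:db8:42:1::1"],
    "nas.home.lan":    ["192.168.1.10", "2001:db8:42:1::10"],
}

def render_hosts(inv: dict) -> str:
    lines = []
    for name, addrs in sorted(inv.items()):
        for addr in addrs:
            lines.append(f"{addr}\t{name}")
    return "\n".join(lines) + "\n"

print(render_hosts(inventory), end="")
```

The point is only that the file is an artifact of the register, not something you edit by hand on a box.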
I get that this is an idealized setup, and that a lot of the time life intervenes and it's quicker and easier to do things manually then and there. Been there, done that. But in the long run it pays to invest the time in cleaning that up afterwards (up to and including wiping setups that have been manually updated, to verify that the automated setups are correct and complete).
The router's IP and a nameserver IP are the only ones you should have to remember in the event of problems, and both of those can be short and simple.
If you refuse to pick a memorable address and refuse to use the tool which is designed to let you handle unmemorable addresses, then you don't get to complain about how hard your addresses are to remember.
$ drill AAAA mayeul.net
;; ANSWER SECTION:
mayeul.net. 900 IN AAAA 2a01:cb14:cce:1200::11
If you need an IP address in multiple places on the same host, it would be better to use /etc/hosts anyways, or use a DNS server, as it makes changing configuration way easier.
I've been playing around with the idea of giving each of my services a dedicated IPv6, so that I can seamlessly switch them from one machine to another.
The very nice part is that I get more IP addresses than I will ever need. Probably. Unless I am into nanobots. I could take an IP address and reroute it over a tunnel, or even assign an address block the size of the whole IPv4 internet and make my server tunnel that to the right target. Waste IP addresses ranges just to get a convenient prefix (::f00d:7ea ::f00d:cafe for the teapot & coffee machine, for instance, or ::1:???? for pro stuff and ::6666:??? for personal things. I don't know). There are so many possibilities, I'm barely scratching the surface.
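A sketch of how such a suffix scheme could be laid out, using Python's ipaddress module; the documentation prefix and the service names stand in for a real allocation:

```python
import ipaddress

# Sketch of the suffix scheme floated above: carve per-service addresses out
# of one /64. The prefix (documentation space) and service names are invented.
prefix = ipaddress.IPv6Network("2001:db8:1200::/64")
services = {"teapot": 0xF00D07EA, "coffee": 0xF00DCAFE}

# Indexing a network yields network_address + offset
addrs = {name: prefix[suffix] for name, suffix in services.items()}
for name, addr in addrs.items():
    print(f"{name}: {addr}")
```

Moving a service then just means moving its address to another machine; the name-to-suffix mapping never changes.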
Now, a couple of minor complaints I have:
- I don't have IPv6 at work
- My ISP doesn't have IPv6-enabled DNS
- My ISP-provided router sometimes resets IPv6 firewall rules, which is annoying.
C'mon, how many bits of information are there in your example vs. remembering the last three digits of 192.168.1.xxx?
Compare these two:
The full address I wrote above is 72 to 80 bits, depending on how you count (whether the ::11 counts as 8 or 16). So that's more, but not much. And the common part is always 32 bits more than the common IPv4 part.
An easier alternative to hopping on the router would be to assign a port to each computer behind the NAT. That makes 32 bits for the IPv4, plus 16 for the port. Contrast that with 64+16 in my IPv6 case (I actually have a /56, but that's no help for remembering it). With one computer on the network, you have to know 66% more bits with IPv6. Bump that to ten, and it's just 16%. This is because there is always a common part, and that common part is twice the length with IPv6, so it amounts to a constant offset.
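A quick way to sanity-check those percentages; this minimal sketch assumes a shared 32-bit IPv4 prefix with one 16-bit port per host, versus a shared 64-bit IPv6 prefix with one 16-bit suffix per host:

```python
# Checking the arithmetic above: a shared 32-bit IPv4 plus a 16-bit port per
# host, versus a shared 64-bit IPv6 prefix plus a 16-bit suffix per host.
def extra_v6_bits(n_hosts: int) -> float:
    v4 = 32 + 16 * n_hosts
    v6 = 64 + 16 * n_hosts
    return (v6 - v4) / v4

print(f"{extra_v6_bits(1):.1%}")   # one host: ~66.7% more bits
print(f"{extra_v6_bits(10):.1%}")  # ten hosts: ~16.7% more
```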
When configuring a network, I typically do not know the prefix by heart, so I make sure to have that written on a paper sheet that I take with me.
Plot twist: I do not know my IPv4 by heart. I actually know my IPv6 better, as I have a dynamic IPv4 at home. And I make intensive use of DNS/mDNS. IPv6 actually makes it easier to use DNS for local network machines, I think, as it feels more worthwhile to fill in the DNS entries with it, in case I later want to open some ports in the firewall for the outside world to access.
We go into the cloud because it provides infrastructure as a service, which is a huge benefit for small and big companies alike.
This is true in terms of the raw total number of IPv6 addresses.
Everything I've seen discussing them assumes they'll be assigned hierarchically, though, which is a great way to ensure most of them are never used. Block assignments through a hierarchy can chew through any number of addresses no matter how big the space is.
Since an IPv6 subnet can hold effectively any number of hosts, and it's easier to give bigger chunks of networks, the chances that a given org needs multiple assignments is much lower. In turn we use that real limit (routing table size) more responsibly.
If routing tables can be much bigger in the future, we also have a whole lot more room to grow / allocate it differently. The chunk from which allocation is occurring (2000/3, plus the multicast and link local blocks, etc) is less than 1/4th the space; furthermore, more than half of the space within this is reserved.
But while hierarchical assignment allows organization, including route summarization, it undermines the argument that there are too many addresses to exhaust. Addresses can easily be (in fact, are guaranteed to be) exhausted by the organizational system without ever being in use. The goals are in conflict.
But yes, we do need to be careful not to assign blocks that are too large. We're doing a good job of this (see any /8s being assigned?). We're also doing allocations only from 2000::/3, and there are five more completely untouched /3s available, so if we do somehow run out of space in 2000::/3 then we can start over with tighter allocation polices... five times over.
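For a sense of scale, the pool being allocated from today:

```python
# Scale of the current allocation pool: unicast allocations come out of
# 2000::/3, one eighth of the 128-bit space, with five more untouched /3s.
slash3_fraction = 2 ** -3            # a /3 is 1/8 of the address space
slash32s_in_slash3 = 2 ** (32 - 3)   # typical ISP-sized /32 blocks in one /3

print(slash32s_in_slash3)  # 536870912, about half a billion /32s
```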
Pretty much this, though with IPv4 as I don't trust myself to remember that either.
I find myself increasingly frustrated with tools that force GUI flows or context switches that require me to do that, though. I prefer an 'edit a bunch of flat files in repos, push, magic config engine makes it happen' flow.
It's quicker for me to copy-paste than it is to type even an IPv4.
I've spent way more time troubleshooting various obscure network problems that ended up being a typo in IP address/port number, than I'd be proud to admit.
While automatically assigned IPv6 addresses will usually use the full 128 bits (last half is random), manually assigned addresses usually don't. So e.g. one of my servers is 2a00:1234:8::10, which is pretty easy to remember. There's going to be no way around memorizing the first two or three colon sections, which are assigned to you, but the last part is going to be as easy to remember as you make it.
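The short form is just standard zero-compression; Python's ipaddress module shows both spellings of the address above:

```python
import ipaddress

# The "memorable" form of a manually assigned address is zero-compression:
# the exploded and compressed spellings of the server address from the comment.
ip = ipaddress.IPv6Address("2a00:1234:8::10")
print(ip.exploded)    # 2a00:1234:0008:0000:0000:0000:0000:0010
print(ip.compressed)  # 2a00:1234:8::10
```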
2600::1, owned by Sprint, which is the shortest pingable IP address I know, is exactly as long as 1.1.1.1, the shortest possible IPv4 address.
$ ping 1.1
PING 1.1 (1.0.0.1) 56(84) bytes of data.
64 bytes from 1.0.0.1: icmp_seq=1 ttl=56 time=26.0 ms
$ ping 0
PING 0 (127.0.0.1) 56(84) bytes of data.
All zeros is self, which is to say generally the same as 127.1.
$ ping ::
PING ::(::) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.071 ms
It's also designed so that the IPs themselves don't need to be particularly hard to remember. Compare 2001:db8:42:1::5 to 203.0.113.42+192.168.1.5 -- the v6 address is shorter than the pair of addresses you inevitably end up with in v4. If you're one of the tiny percentage of humans that actually do need to remember a v6 address, then pick an address like that rather than something long and hard to remember.
I doubt there is much discussion because there isn't really a way around it. The world is full of situations (MAC addresses, VIN numbers, software serial numbers, content hashes, UUIDs, etc, etc, etc) where a global address space doesn't fit in Human short term memory.
Computers are pretty good at remembering numbers for us; that would be my first choice.
The IP internet seems to depend on some degree of manual configuration, which requires identifying numerical IP addresses.
It's puzzling that the IETF hasn't recommended a human-memorable, word-based scheme for prefixes, something like what3words for IPv6.
We could also use syllabic encoding; for example, with the consonants 'bdfghjklmnprstvz' and the vowels 'aeio', each consonant-vowel syllable encodes one of 16*4 = 64 values, i.e. 6 bits, so a 12-octet (96-bit) prefix becomes 16 syllables.
Which is somewhat easier to read and retype.
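A minimal sketch of that syllabic encoding, using the alphabet proposed above; the example prefix bytes are arbitrary:

```python
# 16 consonants x 4 vowels = 64 syllables, i.e. 6 bits each, so a 96-bit
# (12-octet) prefix becomes 16 pronounceable syllables.
CONS, VOWELS = "bdfghjklmnprstvz", "aeio"

def encode_syllables(octets: bytes) -> str:
    n = int.from_bytes(octets, "big")
    nbits = len(octets) * 8
    assert nbits % 6 == 0, "length must be a multiple of 6 bits"
    out = []
    for shift in range(nbits - 6, -1, -6):
        chunk = (n >> shift) & 0x3F          # next 6 bits
        out.append(CONS[chunk >> 2] + VOWELS[chunk & 0x3])
    return "".join(out)

# e.g. the first 12 octets of 2001:db8:42:1::/96
print(encode_syllables(bytes.fromhex("20010db80042000100000000")))
```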
Cute idea, although somehow in my opinion the IPv4 haikus tend to sound better than those representing IPv6 addresses.
It's only marginally harder than ipv4.
We could totally just re-represent v6 addresses as 4 sets of 4 octets, but that's basically remembering 4 IPv4 addresses.
The hex approach used today is a pretty good compromise, but people haven't regularly worked in hexadecimal as a day-to-day representation for a couple of decades. Between hex and a direct binary representation there aren't many standard representations. Even, say, 16 ASCII characters won't work, since the set contains quite a few whitespace and non-displayable characters.
Base85? Still 20 digits... not much easier to remember. They are truly huge.
E.g. if you had a /32 and two DNS servers, the second one might be reachable at 2400:123:d::2, which is exactly as long as, say, 203.0.113.254. And because you only get at most one octet to play with on v4, you can't easily make your addresses more memorable either.
If that's not the case, always copy&paste.
The IPv4 shortage definitely had a huge impact on those designs.
I write them down.
I really think the size of the IPs is a major cause of slow V6 uptake.
Put the “interesting” addresses in DNS immediately upon acquiring/configuring them.
When you need a literal, look up the DNS name and copy-paste from one terminal into the other. I do this for both IPv4 and IPv6 addresses.
Turns out the human brain sometimes also has difficulty spotting the difference between 192.168.129.1 and 192.168.192.1 when one of them is mistyped....
The only configuration item that should require manual entry of an IPv6 address is a NIC. Even that has other mechanisms. Everything else should support DNS.
Edit: doh, the second place is the dns zone file
My answer would be to use my phone to take a photo of the address (if not easily accessible via web browser), and then use that to manually enter it. Probably not a situation where you can rely on memory, unfortunately.
APNIC has "0.32" of a /8 left: https://www.apnic.net/community/ipv4-exhaustion/graphical-in...
ARIN reached zero on Sept 24, 2015: https://www.arin.net/resources/guide/ipv4/
LACNIC has 0.09 of a /8 left: https://www.lacnic.net/1039/2/lacnic/ipv4-depletion-phases#t...
What kind of presence do you need? I will soon have to do a small CDN with a POP in ZA. I might as well get the IPv4 from AFRINIC if they have plenty.
Cloudflare abused this long ago by creating a subsidiary on paper in Seychelles just to get as many IPs as possible, while doing no business there.
It might seem like it makes more sense to switch to IPv6 than to deal with the issues of IPv4, but the entities that have to deal with problems that result from IPv4 depletion usually aren't the same entities impeding the adoption of IPv6.
a) struggling to implement proper IP based access control with their available address space.
b) as everyone uses the same 16 or so bits of Private IPv4 space, every acquisition likely causes new IPv4 address space collisions which must be worked around.
I mostly agree, but this overstates things a bit. 10/8 offers 24 bits, 192.168/16 offers 16 bits, and 172.16-172.31 offers 20 bits.
The latter are particularly frequently unused; if you pick one of 172.20-172.28 to cram into you have a high chance of not crashing into someone else when you integrate networks. Particularly if you use the upper half of the resultant class b.
Of course, RFC 4193 for IPv6 is very nice; pick your own random 40-bit prefix, and get to run 65536 subnets of arbitrary size. You'd need to integrate ~2^20 (a million) networks before you have a 50% chance of having encountered a collision.
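A sketch of the Global ID derivation RFC 4193 suggests (SHA-1 over a timestamp plus an EUI-64, keeping the low 40 bits under fd00::/8); the MAC address and timestamp here are arbitrary example inputs, not any particular tool's defaults:

```python
import hashlib
import ipaddress

def ula_prefix(mac: str, timestamp: int) -> ipaddress.IPv6Network:
    octets = bytes(int(b, 16) for b in mac.split(":"))
    # EUI-64: insert ff:fe in the middle and flip the universal/local bit
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    digest = hashlib.sha1(timestamp.to_bytes(8, "big") + eui64).digest()
    global_id = int.from_bytes(digest[-5:], "big")        # low 40 bits
    return ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))

print(ula_prefix("52:54:00:12:34:56", 1_600_000_000))
```

Whatever comes out is a /48 under fd00::/8, which is what makes collisions between independently generated networks so unlikely.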
haha, I wish. We decided to use them two decades ago exactly because we thought they were less frequently used, but then a decade later docker comes along and uses exactly that as a default NAT subnet. So we get multiple tickets about that a week.
"we recently set up service x, and it seems to be working fine. However, when we try to connect from the campus wifi, or a certain department, the service does not respond".
What happens is the server gets a request from a private IP within the range assigned to docker0, and then the container replies, but because the IP is believed to be local, linux correctly tries to find something attached to the docker0 interface with that IP and fails.
IPv6 lets us build in the sky, where space is no longer a constraint.
Before IANA runout in 2011 we were going through one /8 every month; given that demand has only gone up since then, 20 /8s would be, what, a 12 month supply or so? v4 is just too small, and no amount of pushing allocations around is going to change that.
Does the UK Ministry of Defence actually have 16 million devices that need publicly routable addresses? Or could they be doing what almost everyone else does: using a handful of publicly routable addresses with private IP ranges behind them?
It would take them years to renumber off of their /8, at great cost and with the result of an increased ongoing operational cost, and for what benefit? That /8 would last the public internet for less than a month. It's not like the people who are yet to deploy v6 are suffering a lack of time, so buying them an extra month which they would then squander isn't going to be very helpful.
In any case, many of those "wasted" /8s have been resold for $100M+ in the last decade. Grep this page for "formerly" https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_addre...
No, we're not out of oil or IPv4 addresses. It's just getting a lot harder to come by either one, and the writing's on the wall that the invisible hand is able to slap us silly if we don't seriously start migrating to the replacements ASAP.
It would be more accurate to say one continent has run out of IPv4 addresses, but even that isn't entirely accurate. As IPv4 addresses have become scarcer and scarcer, it's become common for companies to obtain IP addresses from registries other than the one that handles their region.
In other words: RIPE is in charge of assigning IP addresses. They're running out of IPv4 addresses to assign to other organizations. That's different from a single company running out of IPv4 addresses, which just means they need to buy more addresses (or find an alternative solution).
One of the many things that would speed up adoption is "IPv6 by default". Often it's just a matter of turning on the "IPv6 thing" at the same time as the "IPv4 thing". We need IPv6 name servers added to DHCP leases. We need load balancers to expose IPv6 addresses. We need to provide an AAAA record at the same time as an A record. We need firewall rules for IPv6 at the same time we add one for IPv4. We need SLAAC or DHCPv6 enabled. And we need applications to natively support an "ip address" field that accepts both IPv4 and IPv6 format addresses.
This is precisely why I end up searching for the knob to turn off IPv6 in the kernel after installing a new system. That is, if I remember to do so. And then I curse because the system's (or application's) name resolver still keeps performing AAAA queries and breaking, because IPv6 doesn't actually work.
Recent example: https://bugzilla.mozilla.org/show_bug.cgi?id=1582686
Please don't enable anything by default unless it's actually working.
For DNS, if your device decides to perform a AAAA lookup (despite not having an IPv6 address), it would succeed just fine, even over an IPv4 connection. (You can query any record type over the DNS connection/protocol, regardless.)
If your device attempted to connect to the resulting address (I don't think most would, but let's assume) it would instantly fail, as there wouldn't be a route to that destination, and fall back to the next record — the A/IPv4 record.
There are literally millions of dual stack laptops, phones, tablets, etc. all out there on backwater residential ISP networks that have no idea what IPv6 is. If dual stack didn't work — it'd be a real bug that would have long since been fixed. Similarly, if a dual-stack device had issues transitioning between dual-stack networks and IPv4 networks, we'd also know, as many phones and laptops do this all day. (E.g., my home network was dual-stack, but my work is not. Laptops — and using multiple OSes, too — moving between those networks can auto-configure themselves appropriately.)
Like other posters are telling you: if you have a device that has issues when IPv6 is enabled, you have some sort of localized issue with your network setup.
One third of traffic into Google comes from IPv6 sources. And they work fine.
When major ISPs around the world are enabling for both their fixed and mobile customers and not getting issues.
The problem is either in your ISP, your network or client configuration.
Googling "what is my IP" from Telekom or MNet results in an IPv6 address. Virtually all routers support this by default.
Unfortunately, mobile/4G is still on IPv4. Can't really explain it.
Thank you for telling me that it's broken. That's kinda my point.
Again, I don't have IPv6 connectivity to the world, at all.
Random applications trying to enable and use it per default is bound to fail.
Also that bug seems to occur when IPv6 is disabled in the kernel.
Random applications should be fine with trying the IPv6 address, and if it fails using the IPv4 address.
That should happen transparently. There are millions of Linux deployments that don't require manually disabling IPv6.
You not having an IPv6 address is a separate issue.
`getent hosts` returns v6 addresses (only, no v4) if any exist on the hostname you give it, and thus its output is useless for debugging here. You need to check `getent ahosts <hostname>`, which uses the same getaddrinfo() call that most software (including Firefox) uses for DNS lookup. If you have no v6 connectivity then the v4 addresses should be sorted first in the output of that, and v4 will be tried first by your software. Another good test is `wget <url>` which prints the IPs in sorted order (limited to the first 3 though), tries to connect to them one by one and prints any errors it encounters.
If `getent ahosts` is showing v6 addresses sorted first, I'd be happy to take a look at your config -- if you pastebin the output of `ifconfig` and /etc/gai.conf then I should be able to tell you what's up. Either you'll have a v6 address you didn't notice or you've somehow configured your system to sort them first even when you only have v4.
For some reason, there's a "I saw a colon somewhere, therefore it's IPv6's fault" trap that people keep falling into. Your posts look like an example of that.
My ISP offers IPv4 only. I've used Windows, various flavors of Linux, macOS, iOS, FreeBSD, Playstation 4, Apple TV, and Android, and they have all "just worked" flawlessly without me ever having to manually tune anything. I'm curious what issues you are running into.
Then again, we know news of v4 running out and v6 being the only usable replacement has been running for .. 20 years?
If people start _now_ and test what v6 means, then we get what we deserve. Not pointing only to you, but equally the server ends mentioned in this thread also.
IANA released its last allocations 8 years ago, how much more forewarning do people need in order to actually move their butts?
If you get v6 issues today, it is only because people put their heads in the sand and decided "hey, I have this one machine with a v4 on which I can serve data, I don't need to worry about this issue ever".
Having to fiddle with do-or-do-not-resolve-v6 to make stuff work is something we should have spent time on in 2003 or so.
IPv6 feels like the tech version of climate change. Requires too many to do too much with no or negative short-term payout.
Agree. The difficulty is that the internet is big and has too many actors, so it's impossible to get everyone on board to plan the roll out.
So the next option of just chaotically enabling things here and there without carefully designed fallbacks or coordination in the larger ecosystem is causing precisely the issues that make people scream and look for the switch to turn it all off.
It's pretty crazy if you think about it. Nobody deploys large production systems with so little coordination and so much random switch-flipping by different unrelated (but nevertheless interconnected and interdependent) parties. I can't imagine why the largest system on planet Earth would just work if people do that.
Check if you have options inet6 set in your /etc/resolv.conf, and remove the option if you do. You can also try setting options single-request and options single-request-reopen. Note that the latter two violate the "don't enable it if it's not working" principle; broken middleboxes are screwing with your client, so if these middleboxes were never fixed, we could never use IPv6. Instead we have shitty workarounds. Sometimes that's just the best option available.
As it turns out, not even the Ubuntu repositories are reachable over IPv6.
I believe this was the PPA server, which held some package I needed for some workaround for all the other connectivity problems.
So let's hope a few important people/companies/institutions start to be slightly inconvenienced. I bet as soon as this rises to the concern of, say, some peoples' Netflix stream stuttering it's a matter of weeks to solve it.
It's common for hosting providers to modify system images to point at a nearer mirror or even a local one they own. Perhaps it wasn't working on your system because of some change you or the host made.
I don't see any reason why, with incoming shared v4 http load balancers and outgoing NAT64, most standard http applications shouldn't be able to move to ipv6-only. We've been slowly turning off ipv4 for our load-balanced web apps with little issues so far.
When your ipv4 peering sessions have problems, as that accounts for >90% of your traffic, you notice and you notice fast. If you don’t, your customers will notice for you.
When your ipv6 peering sessions have problems, you may not notice for days, weeks or even months.
Source: it took 3 months before the first customer complaint came in about the entire v6 network being down on a network otherwise passing over 1Tbps at peak daily.
Which is kind of ironic, because you don't get that with most wireline connections, as they're all done through IPv4 NAT.
 Comcast is surprisingly good here -- they'll give you a /60 (16 /64 networks) if you ask for it with the right DHCP option!
If you set a machine to try just one IP address per second in IPv4 you'll explore a significant fraction of the whole public address space per year. In IPv6 doing that is extremely unlikely to find even a single valid address to connect to in a human lifetime.
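The back-of-the-envelope numbers, at one probe per second:

```python
# Scanning math from the comment above, at one probe per second.
seconds_per_year = 365 * 24 * 3600

v4_fraction_per_year = seconds_per_year / 2**32   # share of all of IPv4
v6_subnet_years = 2**64 / seconds_per_year        # years for ONE /64 subnet

print(f"IPv4 covered per year: {v4_fraction_per_year:.2%}")
print(f"Years to sweep a single /64: {v6_subnet_years:.1e}")
```

Under one percent of v4 per year is still "a significant fraction" over a scanner's lifetime; a single v6 subnet, by contrast, takes hundreds of billions of years.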
DDNS is something that I once thought would be an artifact of a less civilized time, but ISPs gotta upsell, I guess.
At the very least, IPv4 fallback takes a few seconds for each DNS query, and I've experienced it failing completely in some cases (can't recall if I ever figured out why). So regardless it's a PITA when it happens.
I tried disabling the option for it to advertise itself as a DNS and instead insert the local IPv4 address in the DHCP options but to no avail, clients still ended up with public IPv6 as main DNS.
After a few hours of trying to fix it I just turned off IPv6 again.
Can you go into more detail or point me to instructions? I'm on Comcast and would be interested in getting v6 at home.
I run a little OpenBSD router and my dhcpcd.conf is here, but I've done this before on a Linux router box too -- here are some instructions from Arch's wiki. IIRC, the stock firmware on my old consumer router was also able to request a prefix and advertise it on the network.
Any user that wants more than a single subnet at home is supposed to receive a /48 by the same recommendation base. A lot of ISPs settled on /56s. Comcast is a bit more stingy than some.
That introduces a whole other set of problems.
And finally, the space is huge -- an ISP would typically get a /32 at least (Whois says that Comcast is using a /26 just for the Bay area), so if every customer gets a /60, that's 2^28, or 256M, delegations. (With the /26, Comcast could hand out 16 billion /60 delegations to their SFBA subscribers.)
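The delegation arithmetic, for the record (the "256M" and "16 billion" figures are powers of two):

```python
# /60 delegations available per ISP allocation size.
per_slash32 = 2 ** (60 - 32)   # 2^28 = 268,435,456 ("256M" binary-style)
per_slash26 = 2 ** (60 - 26)   # 2^34 = 17,179,869,184 ("16 billion")

print(per_slash32, per_slash26)
```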
I actually played with NAT64 for a bit (IPv6-only internally, all IPv4 space is mapped to a special /96 in IPv6-space and NAT'd at the router, and local DNS server on the router returns translated AAAA records for IPv4-only hosts), but dropped that when I simplified my router config. It works, it just takes a bit of config :-)
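The address mapping in such a setup is just a bitwise OR of the IPv4 address into the /96; a sketch using the RFC 6052 well-known prefix 64:ff9b::/96 (the comment's setup used its own local /96, but it's the same idea):

```python
import ipaddress

# NAT64-style embedding: all of IPv4 mapped into one IPv6 /96.
def embed_v4(prefix: str, v4: str) -> ipaddress.IPv6Address:
    p = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(p | int(ipaddress.IPv4Address(v4)))

print(embed_v4("64:ff9b::/96", "192.0.2.1"))  # 64:ff9b::c000:201
```

The DNS64 part then just returns these synthesized AAAA records for v4-only hosts, and the router translates at the prefix boundary.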
Practically, my laptop is in some Wifi network and has an IPv6 address, but that isn't accessible from the outside because the Wifi/router box blocks incoming connections. That is a wise decision overall, but it unfortunately replaces "can't connect because NAT" with "glorious IPv6, but still can't connect".
And I'm not the owner of those boxes.
I've had a static IPv4 subnet allocation from AT&T on AT&T U-verse, and tried using it on a couple of consumer routers for the local network. Well, guess what — all these devices from major brands that are called "routers" don't actually route between the interfaces, after all — all they did was NAT between my public IPv4 allocation and the main IPv4 address assigned by DHCP. And disabling the NAT simply disconnected the networks — instead of routing one interface to the other. I actually opened up a support ticket with ZyXEL, which got escalated to their engineering managers, and they did confirm my finding that their routers had a bug in sysctl settings that stops them from routing the interfaces (e.g., sysctl net.ipv4.ip_forward not set to 1).
Anyhow, back to T-Mobile US IPv6 on a hotspot — when I originally tried ssh'ing back to my own box via the public IPv6, things didn't work, either; but I then found out that it was some sort of a local policy on the local machine, because another laptop without a firewall was able to receive connections on IPv6 from the internet without any issues.
SSH is normally not enabled, either.
You can configure firewall rules on that.
In fact, the default is almost always to disallow all inbound and allow all outbound.
sudo ufw default deny incoming
and your laptop is now secure.
Yes, and that's precisely why I use it in my home network, and will continue to use it with IPv6. I explicitly don't want any IP addresses in my green zone to be directly accessible from my red zone.
You need to be using a firewall. NAT is just an extra, useless, complicated and unnecessary thing to be adding to the top of that; one that makes it hard to understand how your network is even working, and which makes it harder rather than easier to secure.
Since you already have a firewall you can just add a single rule that block incoming connections and that's all you actually need.
However, NAT is just a mapping from an internal IP to an external port number. With pure NAT, once you have created a connection to the outside, anyone can send packets to that external port and have them delivered to your device.
Of course, this mapping is inherently lossy. You are mapping the port space of one device onto N devices. This does mean external IPs can only send your devices packets on ports they have previously used as source ports, but you are still not safe. You might own devices that, for example, have vulnerabilities in their networking stack. Not that rare. You also might not want to disclose the number of devices active in a network, allow them to be fingerprinted by their response, or similar. Some UDP client applications might also not check the IP addresses of incoming packets. UDP server applications also commonly use the same socket for connections to other hosts as it listens on.
So, to elaborate on that last example, you might run, say, a video surveillance server. It might listen for RTP on UDP port 12345. It then uses that same socket to send a UDP packet to licensecheck.example.com. That packet will have a source port of 12345. NAT will then map internal_ip:12345 to external_ip:xyz. Anyone can now view your cameras on external_ip:xyz.
So either way, you'll want a stateful firewall. Deny incoming, allow outgoing. That's what firewalls are there for, and that's what gives you your security, NAT or no NAT.
$ telnet <ip> <port>
$ ip route add 192.168.1.0/24 via <nat-external-ip>
$ telnet 192.168.1.123 <port>
Indeed, I never said otherwise.
What I'm saying is that I want to disallow all incoming connections to any machine that isn't in my yellow zone, and selectively forward some traffic to servers in my green zone. I can absolutely use router and firewall rules to accomplish that. But when I do, I've essentially implemented a NAT.
Think of it like this. You need to go to the internet for something. Your NAT will just kludge the packet to make it look like it came from the NAT machine, altering the port and IP.
That port and IP combo needs to get back to your PC so the router keeps the mapping it used in a table and looks up all inbound packets to match on that table.
So if you have sent a packet, you’ve opened a random port.
NAT tries (lazily) to keep the external source port the same as the internal source port. So if your machine sends from port 12345, most NAT implementations will map external port 12345 back to your machine.
But the stateful firewall that usually comes with your router will block random inbound connection attempts or packets from your destination that are not acknowledged.
Tbh, NAT+firewall doesn’t give you anything extra (security-wise) than a plain stateful firewall gives you.
Most people, when they say NAT, mean PAT: it takes an outbound connection, allocates a port, and maps that port to the local computer.
That creates an illusion of security, but only as a side effect; NAT inherently doesn't care about security. There are multiple ways to bypass it, the best known being UPnP, where hosts can actually request that an inbound port be opened.
The correct way to block inbound traffic is actually to use a firewall. If you have a stateful firewall (currently pretty much all of them are stateful), you simply block inbound traffic to your network and that's it.
UCLA has /16 address space and it doesn't use NAT it just gives static IP addresses to computers. By default all incoming communication is blocked, but you can submit request to have certain ports opened.
Also from my own experience, if you have network with multiple devices, and you want to do something more advanced like traffic shaping, NAT actually makes things much more complicated. You need to keep in mind if you're doing filtering before or after NAT is applied.
For example, a person on my network caught a virus and started sending mail, which got my mail server blacklisted. So I decided to block outbound connections to port 25, forcing everyone to use the local SMTP server. In PF the filtering rules apply after NAT, so at that point the user has the same IP as the server. To filter correctly I had to tag the connection when NAT happens and then filter on the tags. Without NAT I could simply use the source IP address to do the filtering. Things become even more complex when you also want to do QoS and label traffic.
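The tag-at-NAT-time trick described above looks roughly like this in pf.conf (older FreeBSD-style nat-rule syntax; the interface, addresses, and tag names are all made up for illustration):

```
ext_if  = "em0"
int_net = "10.0.0.0/24"
mailsrv = "10.0.0.25"

# Tag at translation time, since filter rules run after NAT and
# can no longer tell hosts apart by source address.
nat on $ext_if from $mailsrv to any tag MAIL    -> ($ext_if)
nat on $ext_if from $int_net to any tag CLIENTS -> ($ext_if)

# Only the mail server may speak SMTP to the outside.
block out quick on $ext_if proto tcp to any port 25 tagged CLIENTS
pass  out quick on $ext_if proto tcp to any port 25 tagged MAIL
```

Without NAT, the block rule could just say `from !$mailsrv` and the tags would be unnecessary.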
To do NAT you need a firewall anyway, and if you have a stateful firewall you can just do the filtering with it. NAT was never designed as a security feature and shouldn't be treated as one.
With multiple hosts it’s going to depend on the implementation, but it will likely forward it to something it thinks can handle it, because most NATs try to be as transparent as they can, to not trip up unknowledgeable consumers.
NAT is not, and has never been a security measure. Just run a damn firewall if you want to drop incoming traffic. The firewall gives you everything you want; you aren’t gaining a damn thing from the NAT.
NAT != firewall
Again, assuming I am not running any special software, what sequence of commands would you type to send a packet to 192.168.1.123 natted behind 126.96.36.199 ?
https://tools.ietf.org/html/rfc2663 might be a good start to see how it works and where the flaws are. And it's also a great way to see how a destination machine that you're talking to can punch through your NAT, because you already did.
And Pwnat didn't strike you as "interesting" in that it could join 2 internal networks as if they were the same collision domain, but not really?
This is exactly what the job of a firewall is.
NAT is a hack due to IP address exhaustion. Any security is a side effect. Nothing that says "NAT" on the tin is obligated to protect you.
NAT on your own network if you really drink that kool-aid is fine. I'm more concerned about CGNAT. If ISPs implement NAT (called CGNAT), it means you can't accept incoming connections without your ISP approving/supporting it. I don't want to have to beg my ISP to open ports for me and question/approve why I'm doing it.
Well, I agree (if you include the router in that). I never asserted otherwise. In fact, this is how I've implemented my NAT.
NAT itself would only be security-through-obscurity.
If someone (your ISP, someone who compromised your ISP's router, someone who happens to be your neighbour on an ISP with misconfigured routers ...) sends a packet directly addressed to your internal address range to your WAN interface, that is of no interest to the NAT function and it will simply be routed to the LAN interface unchanged.
That is: Unless the device happens to also have a stateful firewall that would prevent that. But then, the firewall would work just as well without NAT.
If you have a router with NAT and without a (stateful) firewall or RPF, and that router receives a packet on the WAN port with a dst address from your private LAN range, then such a packet is just passed to your LAN (as NAT does not apply here). Obviously, such a packet is unroutable on the general Internet, but it could be sent by your ISP, or by other customers of your ISP if they are on the same network (e.g. wifi AP) as your WAN port. There are also more sophisticated ways, like abusing automatic decapsulation of packets (if enabled). Fortunately, in most cases a router with NAT also has a stateful firewall enabled, so this is moot.
You buried the lede here. This makes a huge difference. I am much more willing to trust my ISP (a major company who would be quickly found out if they were trying to hack random people) than the entire set of people on the global internet.
Including HN, moments before reading this page.
It's true that outside networks usually cannot send a TCP SYN to your local network without IP spoofing or other hackery.
But a firewall is sufficient; there is no need for network address translation.
No? Oh, okay then.
I believe it's less than 50 lines of Terraform code to enable it for the standard Application Load Balancer use case.
I sell some software that runs on servers, and we used to get so many tickets that came down to the server's IPv6 transit being busted in sometimes obvious, sometimes subtle ways. And then we have to fight with their NOC to convince them something is broken.
IPv6 is an afterthought in many networks. There is less peering, less monitoring, less users overall to uncover issues.
It got so tedious to deal with that support burden that I just changed our software to not dual-stack any of its outgoing connections. Not heard a peep since.
The more you use ipv4, the more reliable things are.
Someday that will hopefully change.
I noticed this at my parents' house, where their new wifi setup was toggled to "ipv6 only" mode, and HN didn't work.
ping: news.ycombinator.com: Name or service not known
There is no AAAA record :/
As a user, this sounds like a feature -- but really, there are a ton of other metrics to track most users by (assuming we're talking about abuse done via the actual website interface rather than just flooding ports with packets from a specific IP).
If you're accessing a website, you're almost always going to end up using its ipv4 address. This is significant, because the entire internet still doesn't support ipv6. (I'm looking at you, Sonic.) If website providers can't get a dedicated ipv4 address, they could use some sort of 6to4-like tunnel to support ipv4. But because of the way current dns software works, by default all of their traffic will run through that ipv4 tunnel.
This leaves ipv6 almost exclusively used for p2p or other types of home connections. The future of the internet is shaping up in such a way that websites run over ipv4 and end users have a sort of firewalled ipv4 access, then use ipv6 for everything else.
A law that requires ISPs fully support ipv6 may be beneficial.
Even the companies that were given Class-B netblocks with about 65,000 addresses seem a bit wasteful. I remember when I started an ISP in 1996 it was like pulling teeth to get a Class-C with 256 addresses, but there Ford sits with 16 million of their own.
There aren't enough addresses in v4. Reclaiming one or two /8s here or there won't fix that. Before IANA runout in 2011, we were going through over one /8 per month, and demand has only gone up since then. Those class Bs you're so worried about would last the internet maybe 2 hours each.
You can push as many allocations around as you like, but there just plain and simply isn't enough v4.
All of the client resolvers I've ever used wait for both records to come back, and your DNS resolution returns both sets of addresses. I think OSX does some sort of racing here, but it still gives v6 some amount of head start (~150ms?).
Also, Google's published v6 stats say that 29% of their users reach them over v6. That doesn't seem to match with "you're almost always going to end up getting its ipv4 address".
And that 3ms is likely smaller than the latency in the lookup, which is generally going to be the dominating factor.
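The "racing with a head start" behaviour mentioned above (Happy Eyeballs-style) can be sketched in a few lines. This is a toy model with fake connect functions, not real sockets; the ~150ms figure is just the number quoted above, and real implementations vary:

```python
import threading, time

def race(connect_v6, connect_v4, head_start=0.15):
    """Start the v6 attempt immediately, delay v4 by head_start, return the winner."""
    winner = []
    done = threading.Event()

    def attempt(name, connect, delay):
        time.sleep(delay)
        if connect() and not done.is_set():
            winner.append(name)
            done.set()

    threading.Thread(target=attempt, args=("v6", connect_v6, 0)).start()
    threading.Thread(target=attempt, args=("v4", connect_v4, head_start)).start()
    done.wait(timeout=2)
    return winner[0] if winner else None

# If both paths work, v6 wins thanks to the head start;
# if v6 is broken, v4 takes over a moment later.
print(race(lambda: True, lambda: True))
print(race(lambda: False, lambda: True))
```

The point being: a working v6 path gets preferred, but a broken one only costs the user the head-start delay rather than a full timeout.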
Still happy with my XS4ALL connection on fiber. Proper /48 subnet and static legacy IPv4 address since ~2011.
AFAIK, it's a handshake between the two ends, made through a third-party server. Both parties need to contact the third-party server, and then they get an IP:port combination.
So you would have to do that, then quickly enter that information in the game client and server, while making sure that the TTL doesn't expire.
You might also have to fight your OS a bit.
Moreover, STUN doesn't work through symmetric NAT, where the router allocates a different external mapping for each destination, so the mapping the STUN server observes doesn't apply to other peers.
I don't know PCP; I'll look into that. IMO, the solution to the GP's issue is a VPN. Hamachi used to be a popular solution among gamers, I think.
Yes: "symmetric NAT" is useless, but I have not yet seen a CGNAT deployment use symmetric NAT. I would just go so far as to say "symmetric NAT is an example of a technology that is not and never will be good enough to be called carrier grade" ;P. So like, that NAT can and has sucked shouldn't be a demonstration that all NAT everywhere is entirely useless.
It could very well be that CGNAT deployments don't have support for PCP, (or the related NAT-PMP and UPnP mechanisms that it is a successor to), which would suck. I haven't checked this yet. However, if they do, you should be able to do just about anything you could on any private NAT setup: these protocols let you ask your NAT to open a reverse port mapping to your system on a routable IP address, and you can use them with external tooling (they don't require you to be inside the app listening on the port or anything).
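For a sense of how simple these protocols are, here is the shape of a NAT-PMP mapping request (RFC 6886, the predecessor of PCP): version 0, an opcode (2 = map TCP), then internal port, suggested external port, and requested lifetime. The port and lifetime values below are arbitrary examples; the 12-byte request would be sent over UDP to the default gateway on port 5351:

```python
import struct

VERSION, OP_MAP_TCP = 0, 2
internal_port, external_port, lifetime = 8080, 8080, 3600

request = struct.pack("!BBHHHI", VERSION, OP_MAP_TCP, 0,
                      internal_port, external_port, lifetime)
print(len(request), request.hex())  # a 12-byte request
```

The gateway replies with the external port it actually granted, which the host can then hand out to peers, and nothing about this requires being inside the listening application.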
Most people who just browse Facebook and Instagram, watch YouTube and Netflix won't notice a thing.
Specifically what I need is something which can handle my ISP only handing me a /64, and can smoothly handle prefix changes (update firewall rules etc).
Ideally also being able to register/update LAN clients on dyndns service so I don't need to install an update client locally on every device.
IIRC, you can fetch GitHub Pages custom domains over IPv6 with HTTPS too, but you don't get the correct certificates, because they simply don't have an IPv6 range pointing at the newer HTTPS-supporting infra.
And they've had it in a few select residential areas as a trial since autumn 2018.
I am sure there must be some tools to smooth the efforts, but it is not native in the protocols.
Is the price going to keep appreciating as supply falls, until there is no more supply? I.e., is there potential to treat ipv4 addresses like a commodity?
I don't see ipv6 being widely adopted (major cloud providers only offering ipv6 for new servers) in the near future.
* that is, to avert impending address exhaustion; but exhaustion happened anyway, and ipv6 sat idle and did nothing.
This is the first time that RIPE has ever had more applications for v4 space than v4 ranges to satisfy those applications with (even with the strict limits on how much people can request).