NSA CSI IPv6 Security Guidance (2023) [pdf] (defense.gov)
64 points by codesniperjoe on Jan 22, 2023 | 53 comments



From the recommendations document:

> The assigned IPv6 address incorporates media access control (MAC) address information from the network interface and may allow for host identification via interface ID, network interface card, or host vendor.

How long has it been since NSA last looked at generally-available OSs with IPv6 support? IPv6 "Privacy Addresses" are on by default everywhere (and a damn thorn in my side). For ages now, SLAAC has been generating the interface identifier from a combination of a randomly-generated ID and the subnet the address is being generated for, rather than from the MAC address of the NIC. (This is yet another thing where I revert to the old behavior.)
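
For reference, the legacy behavior NSA is describing is the modified EUI-64 interface ID, which embeds the MAC almost verbatim; a minimal sketch (the MAC below is made up):

    # Sketch of the legacy "modified EUI-64" interface ID: the MAC address is
    # embedded nearly verbatim, so the host and often the NIC vendor are
    # identifiable from the IPv6 address alone.
    def eui64_interface_id(mac: str) -> str:
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                             # flip the universal/local bit
        iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
        return ":".join("%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2))

    print(eui64_interface_id("00:11:22:33:44:55"))    # -> 0211:22ff:fe33:4455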

They go on to recommend disabling SLAAC and using only DHCPv6. Does NSA know something exploitable about common DHCPv6 implementations that we don't? ;)

> ...a dual stack DNS implementation may need to support both A and AAAA records.

It's weird to say "dual stack DNS implementation". DNS servers can store A and AAAA records, regardless of whether their host is doing "dual stack" addressing or not. (If yours cannot, then by golly, you fucked up when you wrote your DNS server.)


> They go on to recommend disabling SLAAC and using only DHCPv6. Does NSA know something exploitable about common DHCPv6 implementations that we don't? ;)

This is what they say:

> NSA recommends assigning addresses to hosts via a Dynamic Host Configuration Protocol version 6 (DHCPv6) server to mitigate the SLAAC privacy issue. Alternatively, this issue can also be mitigated by using a randomly generated interface ID (RFC 4941 – Privacy Extensions for Stateless Address Auto-configuration in IPv6) [1] that changes over time, making it difficult to correlate activity while still allowing network defenders requisite visibility


Debian 11 VMs that I was setting up last week were getting non-privacy SLAAC addresses. So I am skeptical of how common it is to default to privacy addresses.
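
On a Linux box you can at least check what the kernel will do; a minimal sketch, assuming the usual sysctl paths and that IPv6 is enabled (use_tempaddr: 0 = privacy addresses off, 1 = on but the stable address preferred, 2 = on and preferred for outgoing connections):

    # Quick check of RFC 4941 privacy-address settings on a Linux host.
    from pathlib import Path

    for conf in sorted(Path("/proc/sys/net/ipv6/conf").iterdir()):
        value = (conf / "use_tempaddr").read_text().strip()
        print(f"{conf.name}: use_tempaddr={value}")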


Strange:

> A solution to this are IPv6 privacy extensions (which Debian enables by default if IPv6 connectivity is detected during initial installation), which will assign an additional randomly generated address to the interface, periodically change them and prefer them for outgoing connections. Incoming connections can still use the address generated by SLAAC.

* https://debian-handbook.info/browse/stable/sect.ipv6.html

* https://manpages.debian.org/bullseye/ifupdown/interfaces.5.e...


Yeah. I've run several major Linux distros, Windows 10 and 7, and several major versions of OSX. All had "privacy addresses" on by default, which is annoying as shit.


Piece of advice: set up a dedicated firewalled VLAN for IoT and obsolete Lin/Win devices, regardless of v4 or v6.


Interesting that they prefer dual stack to tunneling. I would have thought running your own 6to4 at the network edge would have been preferable.


What would be the advantages of running 6to4 on your network edge?


My thinking was that it would be a single point of ipv4 traffic, rather than having to maintain all the components for dual stack. But thinking about it more, 6to4 probably increases the complexity of firewalls.


Ah, yes I see what you were thinking.


TLDR: Avoid it if you can!


That is the exact opposite of what it says. The US government has actually mandated IPv6-only networks in the next few years.

> a. At least 20% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2023;
> b. At least 50% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2024;
> c. At least 80% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2025

https://www.cio.gov/assets/resources/internet-protocol-versi...


Aren't these comments getting a bit old at this point? Running dual stack should not be any more difficult than just running IPv4. There is a plethora of automated deployment tools, and I'd hardly think people are DHCP'ing addresses to their servers. You don't have to use SLAAC; you can statically assign addresses just like IPv4. Even dual-stacked devices getting IPv6 addresses via RA can be tracked back to their IPv4 DHCP/BOOTP requests.

I'm making the assumption here that anyone concerned about their network attack surface is actively capturing network or netflow data, in which case tools like openargus[1] or Arkime[2] make all of this collectable/searchable. Additionally, most network devices support mirror/monitoring ports to offload data if you aren't working at a scale that needs dedicated taps/aggregators.

[1] https://openargus.org/ [2] https://arkime.com/


In the context of network intrusion detection and providing secure online services, I agree with you.

However, if this guidance is trying to influence government office routers and internet gateways... It's a different story.

A transition from IPv4 to IPv6 creates a new per-device tracking capability that leaks internal network structure. This, in my opinion, is worse than internal domains getting certs from Let's Encrypt https://crt.sh/?q=twitter.com cr: https://shkspr.mobi/blog/2022/01/should-you-use-lets-encrypt...

Dual stack, DHCP, and SLAAC can go a long way toward adding some anonymity.


Realistically though, what information can you glean from a host's IPv6 address that wouldn't already be part of WHOIS? With IPv4 you already know there are only three RFC 1918 reserved ranges. Anyone can use them as they see fit, so seeing a 10/8 address in an email header doesn't automatically mean the company is huge; it's just what they picked. Myself, I've just never really bought into the idea that "DNS naming" or discovering private address ranges gives anything away. With existing NAT, device tracking has moved onto more unique features such as browser, screen size, etc., so IP-address-based tracking is probably not as accurate.


> A transition from IPv4 to IPv6 creates a new per-device tracking capability that leaks internal network structure.

I doubt it. Your load balancers will be the only addresses that will be addressable anyway. Your IPv4 load balancers will also be "leaking" IP addresses.


You're thinking of the server side, not clients.


Clients that aren't misconfigured will use random IPv6 addresses that rotate. The usual default is once per day, but that's a mere preference; you can make your computer take a new IP every minute if you want.
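
On Linux that rotation interval is just a pair of sysctls; a small sketch reading them (paths assume a typical kernel, values are in seconds, and the usual defaults are one day preferred / seven days valid):

    # Read the temporary-address lifetimes that control how often a Linux host
    # rotates its RFC 4941 privacy address.
    from pathlib import Path

    base = Path("/proc/sys/net/ipv6/conf/default")
    for knob in ("temp_prefered_lft", "temp_valid_lft"):  # kernel spells it "prefered"
        print(knob, "=", (base / knob).read_text().strip(), "seconds")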


You can still see subnets though, which was the original point.


With many ISPs handing out /64s and others handing out /48s and /56s to households, it's difficult to tell a subnet from another IP.

Even still, this information is pretty useless. So what if you know my current subnet is 3a80? That won't help you get past the firewall.


Clients use random IPv6 suffixes.


They do feel a bit old, especially considering that is not the "TL;DR" of the paper. The paper makes no statement on whether or not it is a good idea to use IPv6; it only notes that the US Government is transitioning and gives some guidelines on how to do that.


That's not at all what I got from this.

IPv4 security guidelines do not look much different.

TL;DR: be aware of the differences and prefer IPv6-only instead of dual stack if you can, to reduce complexity.


You are getting downvoted, but IPv6 was ratified in 1998. The sunk cost fallacy is real here. At what point or threshold should there be a proposal for a simple address-length extension of IPv4? Even cloud providers, who have an army of sysadmins and netadmins, don't support v6 in private networks.

Let's be very honest here: does anyone have a good reason to believe another 25 years would mean IPv6 displaces IPv4, or even solves the address shortage, when CGNAT and other workarounds are profitable to network vendors?

https://en.m.wikipedia.org/wiki/Sunk_cost_fallacy -----

My controversial solution is to stop using numbers for addressing at layer 3. A new IP protocol should have hierarchical domain-name addressing. So for google.com, .com would be the top domain: you would have routes for each TLD, with non-ISPs default-routing TLDs like .com, while ISP networks would resolve the route for .google under the .com routing table, and so on. Upper layers would be oblivious, except that you have less code now. On LANs you can create whatever domain hierarchy works for you, so long as the TLD is part of a predefined list. TLDs would have a fixed maximum length of 128 bits for routing performance and such. PKI/TLS would work just fine, except now you have an extra layer of security in that ISP routing tables would also have to route to the wrong AS, and ISPs can implement source-route (customer1244.telecast.isp) validation to make MITM only slightly harder and address-spoofing DDoS impossible. So forget about numbers; ASCII is also numbers. You are already doing this with v6 and 2600:: and other prefixes. As for layer 3 translation, I have an even more controversial idea that will also solve WiFi security and LAN-based MITMs for good, but that's for another comment.


> ratified in 1998

While this is true, World IPv6 Launch Day in 2012 is the date most people point to for earnest IPv6 deployments. It was also not completely ratified until 2017.

> At what point or threshold should there be a proposal for a simple address-length extension of IPv4?

If you pass an IPv4v2 packet, it will not be routed. You'll need to replace all networking equipment to support IPv4v2... which is what we've done / are currently doing w.r.t. IPv6. The engineers who wrote the spec were very much aware of how much of a "we've got one shot at this" moment it was.

> another 25 years would mean IPv6 displaces IPv4

We're at over 50% deployment in the US. Again, it's closer to 10 years.


A few notes on the timeline.

The IETF has a two-level standards system consisting of "Proposed Standard" and "Internet Standard". IPv6 was first published as "Proposed Standard" in 1998 and finally transitioned to Internet Standard in 2017. Although officially Proposed Standards are supposed to be treated as "immature specifications", as a practical matter people routinely deploy on them. Whether an RFC is advanced to Internet Standard is less a question of whether it is mature than whether the editors and/or WG bother to advance it. Here are a number of examples of widely deployed protocols that never advanced beyond Proposed: (1) all versions of TLS, (2) HTTP/2, (3) SIP, (4) QUIC.

I think choosing 2012 as your start date is pretty generous. Proponents of IPv6 were telling people to start deploying long before that. In fact, the IETF sunsetv4 WG, dedicated to sunsetting IPv4, was formed in 2012 several months before World IPv6 launch day. Arguably, World IPv6 Launch Day was a reaction to the failure of v6 to get large-scale organic deployment 12ish years in.


> If you pass an IPv4v2 packet, it will not be routed. You'll need to replace all networking equipment to support IPv4v2... which is what we've done / are currently doing w.r.t. IPv6

That was never the difficult part. Most core routers and expensive gear supported IPv6 many years ago.

> We're at over 50% deployment in the US. Again, it's closer to 10 years.

That means almost nothing. Even if you have 100% deployment, v6 is more expensive to maintain for server admins, developers, and consumers alike, especially in the not-so-rich countries. It just adds more maintenance cost; it isn't economically practical to expect it to hit critical mass so that everyone stops writing v4-specific code and config. IPv42 or whatever will be a good solution will be the one that is economically viable, requiring the smallest change by end users and producers. V6 was developed by a committee of network engineers that only saw things from a network operator and vendor perspective. The lesson from the sunk cost fallacy is that existing investment cannot be used to justify continued investment, and in this case the problem of the v4 shortage has been addressed by other means, in a way that will keep it alive for decades more.

In my opinion, a solution that requires only a firmware update, works with existing ASICs, and is economically viable is possible, but the discussion about that isn't even happening. Billions will be wasted on the hope that decades from now IPv6 can stand on its own.


Slightly off topic, but IPv6 is a massive security hole for regular consumers. NATs sucked when you were trying to connect to your favorite MMO, but that is because they created a default drop rule for all unsolicited inbound ports.

I was shocked to see that as soon as your ISP switches to IPv6, your host is now directly addressable. As a byproduct of skipping NAT, you are now relying on every machine having proper firewall settings. [UPDATE: or the router drops incoming IPv6 connections with its firewall]

Just think about how many Windows machines out there have Remote Desktop enabled but were only safe because they were not publicly accessible, or the hospital machines that are still running Windows XP. God help us.


> I was shocked to see that as soon as your ISP switches to IPv6, your host is now directly addressable. As a byproduct of skipping NAT, you are now relying on every machine having proper firewall settings.

When my ISP started handing out IPv6 addresses, my Asus RT-AC68U by default blocked incoming IPv6 connections unless they were replies to previous outgoing connections.

That is to say: stateful firewalls exist in the IPv6 world just like they do in the IPv4 world.

Just because your laptop or desktop gets a globally routable address does not mean that anyone can hit it.


Thanks for sharing that, good data point for drop incoming.

I had a Nighthawk; I ended up setting up the IPv6 rules.

The TL;DR on the debate so far is whether routers shipped over the last 20 years drop incoming connections by default for both IPv4 and IPv6.

In my opinion, NAT was an added layer on top of firewall rules because inbound ports had to be mapped to a particular host and port, since the router would not otherwise know which host to send traffic to. This created a closed-by-default experience: for a port on your machine to be reachable, a packet must pass the inbound rules and match a port-map table entry.
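
A toy sketch of that closed-by-default behaviour (the table entry below is hypothetical, not any real router's logic):

    # Toy model of NAT port forwarding: unsolicited inbound packets are only
    # delivered if someone has explicitly mapped the external port to a host.
    port_map = {
        25565: ("192.168.1.10", 25565),  # hypothetical entry: a forwarded game server
    }

    def route_inbound(external_port):
        # Return the (internal host, port) for a mapped port, or None to drop.
        return port_map.get(external_port)

    print(route_inbound(25565))  # ('192.168.1.10', 25565) -- explicitly opened
    print(route_inbound(3389))   # None -- RDP stays unreachable unless mapped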


> In my opinion, NAT was an added layer on top of firewall rules because […]

… there were not enough IPv4 addresses to go around, and so you only got one, and if you had more than one system at home, too bad… until NAT got invented.

Back in the dial-up days, you had only one system connected to the Internet—the one that was connected to the modem—and it got the IP address directly. It was only later, with the always-on nature of cable and DSL ISPs, that sharing a connection became a thing. IIRC, you used to connect your computer directly to the {cable, DSL} modem without an intervening router, sometimes using USB, as computers having built-in (Ethernet or WiFi) networking wasn't really a thing:

* https://support.dlink.ca/ProductInfo.aspx?m=DSL-2320B


I'm not so concerned.

For one thing, a /64 issued to a house is a pretty daunting search space for the scanning worms of yesterday.

Another, computers today do come with firewalls that are enabled by default and tricky to disable.

Third, the industry really had to start taking memory safety and attack surface seriously after the Blaster/Sasser/MyDoom days. We see another article here on HN every week about another company categorically solving memory problems by adopting Rust.

Finally, having remote desktop shouldn't be a problem if people don't know your password, no? It's not like there is a firewall stopping baddies from guessing your Gmail password.

I realize that a NAT/PAT device does incidentally serve as a stateful firewall for many homes, but I think it is less important with modern OS's than one might think.

Now for the hospitals still using Windows XP...yeah you're right about that. I'd like to see regulators start fining companies for using obsolete hardware and software.


>For one thing, a /64 issued to a house is a pretty daunting search space for the scanning worms of yesterday.

Eh...

-----

We have outlined a number of techniques that scanning worms can use in an IPv6 Internet to locate potential targets. These techniques are equally applicable to the current IPv4 Internet, albeit not as efficient as random scanning. Although “conventional” address-space scanning is prohibitively expensive in that environment, we believe that the diversity of sources we discussed (which is by no means exhaustive) guarantees a rich target set for worms.

---

https://www.cs.columbia.edu/~smb/papers/v6worms.pdf

A lot of them do rely on getting that first host infected, though that's not exactly dissimilar from IPv4 networks.

>Finally, having remote desktop shouldn't be a problem if people don't know your password, no? It's not like there is a firewall stopping baddies from guessing your Gmail password.

That actually raises an interesting point. IPv4 allows services to use IP profiling to limit an attacker's attempts to brute-force / semi-brute-force a password, or other attacks like a DDoS. What would be IT/security professionals' response when an attacker can just jump to another IPv6 address and resume the attack?


I think the best practice is to rate limit by /24 in IPv4 and by /48 in IPv6. That way all the attacker's IPs are treated as a single user. These have corner cases like if the attack is coming from inside the house but they're decent defaults.
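
For illustration, a minimal sketch of collapsing addresses into those prefix-sized buckets for a rate-limit key (the /24 and /48 sizes are just the defaults suggested above):

    import ipaddress

    def rate_limit_key(addr: str) -> str:
        # Collapse an address to its /24 (IPv4) or /48 (IPv6) prefix so every
        # address an attacker rotates through lands in the same bucket.
        ip = ipaddress.ip_address(addr)
        prefix = 24 if ip.version == 4 else 48
        return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False))

    # Two rotating addresses in the same /48 share one bucket:
    assert rate_limit_key("2001:db8:1:2::abcd") == rate_limit_key("2001:db8:1:ffff::1")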


>For one thing, a /64 issued to a house is a pretty daunting search space for the scanning worms of yesterday.

/64? You should be getting at least a /56


> For one thing, a /64 issued to a house is a pretty daunting search space for the scanning worms of yesterday.

Some math to illustrate this:

* IPv4 has 2^32 addresses

* 2^32 ≈ 4 billion

* in mathematics, a^(y+z) = a^y * a^z

* so: 2^64 = 2^32 * 2^32

* therefore: 2^64 = four billion IPv4 Internets

One IPv6 subnet can fit many, many entire IPv4 Internets.
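
Or as back-of-the-envelope scanning time (the probe rate is an arbitrary assumption):

    # How long a brute-force sweep of a single /64 would take at an assumed
    # rate of one million probes per second.
    addresses = 2 ** 64
    probes_per_second = 1_000_000                 # assumed scanner speed
    years = addresses / probes_per_second / (365 * 24 * 3600)
    print(f"{years:,.0f} years")                  # roughly 585,000 years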


IPv6 not having NAT doesn’t make it incompatible with stateful firewalls. You can still have routers doing drop inbound by default.


And ISP-supplied devices generally are. I don't really know why people think this is an issue.


Might have learned something today; I always replace the stock router from ISPs.

Easy to test: can someone on a cable box try to reach an open port on their host over IPv6 vs IPv4? My belief is that a majority of setups (maybe not HN hackers') will be able to hit a host's open port on v6 and fail on v4.

NAT is definitely an added layer though.


> Might have learned something today

Yet you continue to speculate about it and spread baseless FUD.

Consumer ISPs supporting IPv6 provide routers blocking inbound access by default. The interface to open IPv6 ports is usually labelled "IPv6 Pinholes" or similar, and you'll find hundreds of web pages on ISP websites describing the functionality -- just as they have pages on IPv4 port forwarding.

The extraordinary claim that ISPs are supplying routers with such a dangerous default configuration requires evidence.


> extraordinary claim that ISPs are supplying routers with such a dangerous default configuration requires evidence

It's a legitimate expectation, and potentially the norm, that I can SSH to my desktop over IPv6 without configuring my router.

The pitfall comes as a side effect of NAT inadvertently making port access rare.

I am looking for data; inbound-blocked IPv6 seems unlikely, but I only have anecdotal evidence.


That's not even an anecdote. You are literally just assuming something is true, then arguing vocally with people giving you evidence to the contrary.


It goes beyond that. With IPv4 you have the further protection of private subnets not even routing across the public internet - it's broke by default, no configuration necessary.

Your attack surface is primarily your firewall which admittedly might be an easy target - but not as easy as an unprotected Windows box.


Private address ranges are a human convention and there have been instances in the past of upstream routers passing them on.[1] Relying on other people to do your filtering for you is a bad idea. I'm going to put the rules in my own router, whether those addresses are (potentially) globally routable or are designated as private.

The use of small private pools has even helped attackers who would inject browser scripts probing the well-known prefixes.[2]

[1] https://serverfault.com/questions/374126/private-ip-getting-...

[2] https://www.bleepingcomputer.com/news/security/new-behave-ex...


Exactly! Duplicating my point in a thread below to drive your point home:

NAT was an added layer on top of firewall rules because inbound ports had to be mapped to a particular host and port, since the router would not otherwise know which host to send traffic to. This created a closed-by-default experience: for a port on your machine to be reachable, a packet must pass the inbound rules and match a port-map table entry.


NAT was created for one reason only: because there weren't enough IPv4 addresses to go around.

Port mapping and connection tracking firewalls were invented in 1989,[1][2] while network translation was created in 1994. [3][4] The private address space was only reserved in 1996.[5] The Firewalls book was published in 1994 (which meant that it was being written in the 1992-3 timeframe).[6]

People were protecting networks before NAT.

[1] https://en.wikipedia.org/wiki/Firewall_(computing)#Connectio...

[2] https://en.wikipedia.org/wiki/Circuit-level_gateway

[3] https://www.rfc-editor.org/rfc/rfc1631

[4] https://en.wikipedia.org/wiki/Cisco_PIX

[5] https://www.rfc-editor.org/rfc/rfc1918

[6] https://en.wikipedia.org/wiki/Firewalls_and_Internet_Securit...


> it’s broke by default, no configuration necessary.

Which is why all sorts of software needs to deal with bullshit like STUN, TURN, etc, to get peer-to-peer connections working. There has to be all sorts of address discovery.

* https://en.wikipedia.org/wiki/NAT_traversal

And even that won't work once you get into CG-NAT, which tends to have two layers of NAT.

How much of the centralization of the Internet has occurred because people can't just talk to each other (by simply firewall hole punching via UPnP/PCP)?


It should be possible for every device on the internet to easily communicate with every other device. A silver lining of the poor designs of the past is that it is practically very difficult to attack services listening on private network addresses. This difficulty also makes it hard to do useful things.

IPv6 is a step in the right direction, but the resolution to security issues can't be more firewalls or more network equipment. It has to start with operating systems. It is completely ridiculous that applications have access to the network and most of the filesystem by default. Operating systems need to give limited access to the filesystem and other services so that a compromised service isn't a big deal. A successful attacker doesn't own the whole system; they own a poorly written application and the small sandbox that it's in. This is an obvious idea to most engineers, but neither Windows nor macOS gets this right out of the box. The iPhone's sandboxing model and fine-grained permissions are way ahead here, but there is still more improvement to be had.

And then there's the issue of most applications not requiring network access in the first place. There is no reason for Word, Photoshop, Blender, etc. to ever need access to the network. A firewall that only administrators can manipulate is also not a solution; that has to be in the user's hands as well. Reasoning about a global table of rules is the wrong UX.


Here are two rules for the OpenBSD packet filter, one for IPv4/NAT and one for IPv6/direct. They do the same thing.

    # IPv4: NAT outbound traffic to em2's own address
    match out on em2 inet from ! em2 to any nat-to em2
    # IPv6: block inbound connections on em2 by default
    block in on em2 inet6 from any to any
Not many people run an OpenBSD firewall, but the point is that with a stateful firewall, preventing people from opening an IPv6 connection to internal machines is no harder than allowing internal IPv4 machines a connection out.


I'm not really sure it's any more of a security hole than any other device an end user would plug into their network. You can go on Shodan and look at hundreds of people's devices exposed on the internet over IPv4. The same "block in all" could be deployed for IPv6. I don't think most residential ISPs are concerned with protecting end users' networks, as they are trying to be mostly net neutral.

On the side of hospitals, I would think most IPv6 allowances would at a minimum be managed on the edge firewalls, which would be a separate device from the ISP's hand-off; host-based firewalls aren't a requirement. An assumption on my part, but I doubt that for those transit connections the upstream is just "turning on" IPv6 without some coordination. Admittedly, I don't know a lot about how hospital networks are run, but I'd imagine some MSP involvement for smaller locations, possibly.


Having a globally unique address does not imply having no firewalls.

This paragraph from RFC7934 really sums it up much better than I can:

> Indeed, it could be argued that the main reason for deploying IPv6, instead of continuing to scale the Internet using only IPv4 and large-scale NAT44, is because doing so can provide all the hosts on the planet with end-to-end connectivity that is constrained not by accidental technical limitations, but only by intentional security policies.


Or you could just configure firewall rules on the router and only allow outgoing connections.


That's EXTREMELY naive, depending on an ISP-provided cheap router for security.

Most of those routers are very, very dumb.

Do the following exercise: add your router's external address as a gateway for your internal IP, then from the outside reach your computer. Easy, fully standards compliant. Pierce your NAT like it wasn't even there.



