Yes, I think disabling it allows for more hole-punching techniques, since you can basically maintain a TCP connection indefinitely and across network connectivity changes.
IIRC it's more that you basically need 'loose mode' because most CPEs bridge wireless and wired networks, but then it won't protect against the more serious shenanigans.
Is it common for routers to create a new mapping for TCP packets that are NOT SYN (figure 3 in this article)? Is there a legit use case for this behavior? Wouldn't it be simpler and more secure not to do that?
B̶o̶t̶h̶ (update: Only Libp2p works with TCP) uses a similar method for hole-punching. Libp2p can use this method to establish a direct connection between hosts behind different NATs.
(edited) Correct, this pickup of ports from the ACKs that the internal host keeps resending only works if the externally mapped port didn't change from the default (same as internal).
Maybe I am missing something, but while it is interesting, I don't think it has any real security impact.
The threat model is that the attacker and the victim are connected to the same router via the same Wi-Fi network and are not isolated from each other. In that case, e.g. on a PSK-protected Wi-Fi network, the attacker can already sniff everything other clients send.
Therefore, you can spoof packets by just responding to them directly. It is a lot simpler and takes a lot less time (since you just need to respond faster than the server with the right seq and port numbers).
Once you are on the same network you can do even crazier stuff like ARP spoofing, making the victim think that you are the router and convincing it to send all of its packets to you (https://en.m.wikipedia.org/wiki/ARP_spoofing)
Edit: on second thought, in a case where the victim and the attacker are on different Wi-Fi networks (or just configured to be isolated from each other), the attacker should be able to perform a denial of service against a specific ip:port by sending a RST and then an ACK with every possible source port.
Also, this only works with unencrypted connections (FTP, HTTP), which one should not be using anyway.
And like you say, on open or PSK networks you can do worse stuff (if client isolation is not enabled, ARP-spoofing the default gateway will be way worse than this).
These are not theoretical concerns though. For example, since SSH uses a trust-on-first-use model as opposed to a central trust / web-of-trust model, an attacker can direct the victim to a different SSH server if they're connecting to a new one, and intercept all traffic.
(Edit: Even with HTTPS, there are other ways to MITM a connection because each trusted CA is considered an equal. For example, instead of doing a full-on MITM on a service, which is likely to be detected[1], a better approach may be to obtain a certificate and target the specific user.)
Re SSH: sure, it can happen, but in practice I'd rate this pretty low with most SSH users...
(1) You need a new host which hasn't been TOFU'd yet, and you need it to be exposed to the internet with no VPN/mesh in between.
(2) You need password-based authentication _or_ agent forwarding, otherwise you won't be able to intercept traffic (with key-based auth, you can pretend to be the server, but that isn't likely to last long). While I would not say that everyone uses key-based SSH, they should! It's an easy way to significantly increase your SSH security level.
As for HTTPS MITM, I don't see how this applies. They would have to intercept traffic next to the server to be able to issue a certificate, and I am pretty sure no one runs servers from an untrusted coffee shop connection! And if you already have access to the server's datacenter, there is no need to exploit public Wi-Fi; you can intercept the server's traffic directly.
The HTTPS MITM case doesn't need to intercept close to the server, it just needs to know that the victim is trying to talk to a particular server. Then the attacker obtains a fraudulently issued certificate and uses the same mechanism shown here to intercept the TLS ClientHello and pretend to be the TLS server, presenting that certificate. The reason this works is that in a targeted attack, it's much less likely that the improperly issued certificate would become public and (1) be revoked, and (2) cause problems for the CA that issued it.
2: Key-based SSH won't really help and isn't the magic bullet people seem to believe, especially unencrypted SSH keys, which are what the majority uses.
Sure it is, as long as you don't have agent forwarding:
1. The SSH connection is intercepted, and the malicious server accepts an arbitrary public key from the user.
2. The malicious server tries to connect to the real server and fails: the original pubkey authentication was channel-bound and cannot be reused.
3. The malicious server now has to pretend to be the real server without knowing what the real server looks like. The user immediately detects that the shell prompt is different, the hostname is wrong, all the files are missing, etc... and disconnects.
(You can construct some sort of contrived situation, like you only ssh'd in to restart a service, _and_ you used "sudo" with a password, _and_ you ran the command via ssh so you didn't see the prompt, _and_ this was the first command you did on your new computer... but this seems pretty unlikely.)
And encrypted vs non-encrypted keys seems completely unrelated to connection interception attacks - who cares how the keys are stored on the computer? It's all the same on the wire.
You assume that the server already knows the key though?
The assumption was that it is the first time they connect, which would mean the key was configured out of band; but if we add that possibility, you can do the TOFU out of band as well.
Yes, it is more common to transfer keys out of band - but it is getting a bit contrived.
This specific attack cannot exploit that, because it requires that the TCP connection already be established. By the time the user has gotten the server's public key, you are no longer able to hijack the connection.
Sure they will: given Chrome's requirement for CT logging, any attacker-issued certificate will be logged, someone will find the suspicious record, and the method will likely be burned by all the publicity.
I have no doubt that there may be a few dozen people in the entire world who would get attacked this way, but the chances it would be me are pretty much zero.
BTW this attack actually happened with jabber.ru - "law" "enforcement" physically intercepted their Ethernet port, then they got to use their IP address and could issue new certificates.
How does one bypass certificate transparency? I tend to exclude state actors from my threat models (1. they tend to be the all-powerful boogeyman that bypasses all threat models anyways, and 2. I doubt that the NSA is going to get you by way of coffee shop wifi) but I'm actually struggling to see how even they would bypass this particular protection; if Chrome only trusts CT-logged certs, even a targeted one-off is likely to be discovered (albeit, AIUI, after the fact).
Because it's easier to MITM the server for a shorter duration to prevent discovery[1]. It's also easier if you can just strong-arm a CA into issuing a certificate or have a shady CA issue it[2][3].
Among all adjectives to describe the security issues of NAT, "few" is not one of them. _Especially_ when you remember that many are under the false impression that they are "protected" by NAT.
IMO the protection is real, and a good tradeoff for letting a research project become a mass medium (i.e. clients usually don't want incoming connections from the internet[1]).
Don't get me wrong, I like a global address space as much as the next internet native, but not everyone should have to pay the price for it, and NAT makes it easy to opt out of a large portion of the problem space.
[1]
my university used to give publicly routed v4 addresses to students connecting their WinXP laptops to the network; guess what happened
This seems to suffer from the common reasoning error of equating addressability with reachability. Your university can disable public reachability without NAT, e.g. using firewalling.
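For illustration, a minimal sketch of that on a Linux router using ip6tables (the interface names lan0/wan0 are hypothetical): hosts stay globally addressable, but unsolicited inbound connections never get forwarded:

    # default-deny forwarding, no translation anywhere
    ip6tables -P FORWARD DROP
    # allow replies to connections initiated from inside
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # allow new connections only when they originate on the LAN side
    ip6tables -A FORWARD -i lan0 -o wan0 -j ACCEPT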
Well, of the three mitigations suggested in the article:
1 - Random NAT port allocation
Seems like the answer is "not by default." According to iptables(8), SNAT defaults to using the pre-translated ports whenever possible, so it's up to the connecting client (the victim's TCP stack) to randomly select source ports. There is a "--to-ports" option that lets you limit the usable source ports, but it's not mentioned how the ports are selected in that case (see the config sketch after item 3 below for explicitly enabling randomization).
2 - Reverse Path Validation
I'm not really sure how this helps in the described situation. If the attacker and the victim are on the same wifi access point, then both of their traffic will be sourced from the same network interface, so even strict RFC 3704 validation will pass for spoofed packets from the attacker. But for completeness, you can turn that on by setting a sysctl (rp_filter = 1).
3 - TCP window checking
I _believe_ that the vanilla Linux kernel does check this by default, as the mentioned adjustments to OpenWRT seem to be removing a non-standard option that disabled TCP window checking.
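For concreteness, a rough sketch of what enabling all three mitigations could look like on a Linux/iptables NAT box (the interface name wan0 is hypothetical; --random-fully needs a reasonably recent kernel/iptables, fall back to --random otherwise):

    # 1 - randomize the externally mapped source port instead of preserving
    #     the client's port
    iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE --random-fully

    # 2 - strict reverse path validation (RFC 3704)
    sysctl -w net.ipv4.conf.all.rp_filter=1
    sysctl -w net.ipv4.conf.default.rp_filter=1

    # 3 - strict TCP window tracking: keep conntrack's "liberal" mode off and
    #     actually drop packets that conntrack marks as invalid
    sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=0
    iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP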
Ah, yes, that's the detail I missed for why #2 is relevant. So the answer to #2 is "not by default, but easy to enable." Though other comment threads here have discussed the potential for this to impact NAT hole punching.
In general there are usually rules that prevent forwarding traffic between clients, so Wi-Fi traffic is isolated per client (unless you need to share a home NAS/printer, this should remain enabled).
Most if not all modern routers have this feature, and also the ability to turn off the dumpster fire that is UPnP and flaky port-trigger mods. Additionally, some add packet-marking rules, which are often also used to enforce fair bandwidth sharing. Note, though, that this can be a nonstandard, controversial gray area.
One could also use a private certificate-based VPN + Kerberos, with pre-set client firewall rules, on any untrusted/open access point.
I would have assumed Linux conntrack does more rigorous "TCP window checking"; however, the article also notes they received a positive response from OpenWRT, who implemented a mitigation, so I am curious to see the details of that mitigation.
As user zootboy above comments, OpenWRT removed a patch from their kernels that introduced a "no window checking" option for TCP processing,[1] which I suppose was enabled by default.
Note that this means that conntrack's window tracking has to be perfectly in sync with both endpoints (assuming TCP invalids are dropped). Given endpoints tend to be updated more often than middleboxes, this isn't a great assumption to make.
TCP sequence numbers are not a security mechanism, they are a data ordering mechanism. If you want security, use encryption like IPSec or TLS. This has been the state of things since TCP was invented in 1980. Someone new discovers it all the time.
They are effectively authentication cookies, but the cookies are too short; at least they turn over frequently, and you have to send a few gigabytes of data to guess one.
Almost every internet security mechanism relies on nobody guessing secret numbers, but normally they are sized appropriately.
IEEE? You mean the IETF? And I explicitly said "many-to-one NATs" to not have to quibble about network-prefix-mapping NATs like NAT66 which are not deployed in coffee shops anyway.
NPTv6 is slightly different in that there really isn't address-port state (though an SPI firewall could track connection state); instead, the first /64 of the address is rewritten and everything else should be left alone.
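As a minimal sketch of that prefix rewrite with hypothetical prefixes (real RFC 6296 NPT additionally adjusts one 16-bit word so the translation stays checksum-neutral, which is glossed over here):

    # stateless /64 prefix rewrite: keep everything after the prefix as-is
    import ipaddress

    INSIDE  = ipaddress.ip_network("fd01:203:405:7::/64")  # hypothetical ULA prefix
    OUTSIDE = ipaddress.ip_network("2001:db8:1:7::/64")    # hypothetical provider prefix

    def translate(addr, src=INSIDE, dst=OUTSIDE):
        """Swap the prefix bits of addr from src to dst; no per-flow state."""
        host_bits = 128 - src.prefixlen
        suffix = int(ipaddress.ip_address(addr)) & ((1 << host_bits) - 1)
        return str(ipaddress.ip_address(int(dst.network_address) | suffix))

    print(translate("fd01:203:405:7::42"))                 # -> 2001:db8:1:7::42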
(Of course I often get grief in networking forums for mentioning NPTv6 because it's "only" an Experimental RFC, and not Standards track.)
The main reason to use these mechanisms is either frequent prefix changes or multi-homing, though documents have been written on how to deal with those situations without translation (often in combination with ULA).
I don't think so, because attacker-to-client communication is only used to restore the original NAT mapping.
OTOH, AP isolation isn't really a security feature; it enhances spectrum efficiency by suppressing client-to-client communication. Most Wi-Fi networks are susceptible to the 'evil twin' attack.
I do not like using Bluetooth. As soon as I figured out how to do Bluetooth hacking on my Flipper Zero, I stopped using Bluetooth - it is completely vulnerable to the same kinds of attacks BadUSB can do. Scary!
The vulnerability is in Double-Hash Port Selection (DHPS, IETF RFC 6056).
Fixed in Linux 5.17.9 and above using a variant of DHPS.
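For context, a simplified sketch of RFC 6056's Algorithm 4 (DHPS); the hash functions, secrets, table size, and the omitted retry-if-port-in-use loop are stand-ins, but the structure shows the shared perturbation table whose counters the linked paper exploits for cross-site tracking:

    import hashlib

    MIN_PORT, NUM_PORTS = 32768, 28232      # typical Linux ephemeral range
    TABLE_SIZE = 256                        # perturbation table
    table = [0] * TABLE_SIZE                # counters shared across all destinations
    K1, K2 = b"boot-secret-1", b"boot-secret-2"

    def _h(key, *fields):
        data = key + b"|".join(str(f).encode() for f in fields)
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

    def pick_port(saddr, daddr, dport):
        offset = _h(K1, saddr, daddr, dport)               # F(): per-destination offset
        index  = _h(K2, saddr, daddr, dport) % TABLE_SIZE  # G(): picks a table bucket
        port   = MIN_PORT + (offset + table[index]) % NUM_PORTS
        table[index] += 1       # the bucket counter is global state; observing how it
        return port             # advances across sites is what enables device tracking

    print(pick_port("192.0.2.10", "198.51.100.20", 443))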
Linux sysctl:
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
Also, Firefox's HTTPS-Only Mode must be enabled to prevent JavaScript from running in a hidden insecure HTTP DOM frame.
----
https://support.mozilla.org/mk/questions/1359401
https://arxiv.org/pdf/2209.12993
https://github.com/0xkol/rfc6056-device-tracker
https://www.rfc-editor.org/rfc/rfc6056.html