How does Linux NAT a ping? (devnonsense.com)
328 points by willdaly 7 months ago | 105 comments



You might be interested in https://samy.pl/pwnat/

    Specifically, when the server starts up, it begins sending fixed ICMP echo
    request packets to the fixed address 3.3.3.3. We expect that these packets
    won't be returned.

    Now, 3.3.3.3 is *not* a host we have any access to, nor will we end up spoofing
    it. Instead, when a client wants to connect, the client (which knows the server
    IP address) sends an ICMP Time Exceeded packet to the server. The ICMP packet 
    includes the "original" fixed packet that the server was sending to 3.3.3.3.
    The packet is INSIDE the computer. This hardcoded packet is built into pwnat
    and acts as an identifier for pwnat.

    Why? Well, the client is pretending to be a hop on the Internet, politely
    telling the server that its original "ICMP echo request" packet couldn't be
    delivered. Your NAT, being the gapingly open device it is, is nice enough to
    notice that the packet *inside* the ICMP time exceeded packet matches the
    packet the server sent out. Your NAT then forwards the ICMP time exceeded
    back to the server behind the NAT, *including* the full IP header from the
    client, thus allowing the server to know what the client IP address is!
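
For a concrete picture, here's a rough sketch (using scapy; the server address and the inner id/seq are placeholders, not pwnat's actual hardcoded values) of the kind of Time Exceeded packet the client crafts:

    from scapy.all import IP, ICMP, send

    SERVER = "198.51.100.10"   # placeholder for the server's public (NAT) address

    # The "original" echo request the server keeps sending to 3.3.3.3
    # (pwnat uses a fixed, hardcoded packet; id/seq here are just placeholders).
    inner = IP(src=SERVER, dst="3.3.3.3") / ICMP(type=8, id=0, seq=0)

    # Wrap it in a Time Exceeded message, as if we were a router on the path,
    # and send it straight at the server's public address (needs raw-socket privileges).
    send(IP(dst=SERVER) / ICMP(type=11, code=0) / inner)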


To save others some reading:

This trick (ping 3.3.3.3) is used to let a server behind NAT learn the IP address of a client that is also behind NAT, without requiring any non-NAT server (such as https://ifconfig.co).

The main action of this tool is to then create a UDP tunnel between the client and server.

But based on a quick reading, the tool appears to assume that the NAT does not rewrite the UDP source port, so it won't work on all routers. STUN (which is used in e.g. WebRTC) implements more sophisticated techniques, and even then there are some cases where it cannot work and the only option is to use a relay (TURN).

I'm pretty sure that the same issue applies to the ping 3.3.3.3 trick -- if the NAT rewrites the ping identifier (as described in the article), the trick would break.


pwnat seems really interesting and potentially easier than my SSH tunnels. Thanks for the link


When a ping is sent from a device on a local network to a device on the internet, the router performing NAT rewrites the source address of the ping to its public IP address and rewrites the ID field of the ICMP packet to a unique value. When the response is received, the router uses the unique ID value to forward the response to the correct device on the local network.
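
For reference, the ID (and sequence number) sit right in the ICMP echo header; here's a minimal sketch in Python of the packet layout, where `ident` is the 16-bit field the router rewrites:

    import struct

    def icmp_checksum(data: bytes) -> int:
        # Standard ones'-complement sum over 16-bit words.
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
        # type=8 (echo request), code=0; ident is the 16-bit ID a NAT may rewrite.
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
        csum = icmp_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

Note that rewriting the ID also means the NAT has to fix up the ICMP checksum on the way through.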


Or, thinking about it the proper way: how does an operating system distinguish between two different ICMP 'talks' to the same destination?

Bam, you only need one computer and wireshark/tcpdump.

Sure, the article is nice and probably enlightening for someone who has never even thought about this and doesn't have any networking background... honestly it's more about how to set up a proper network lab and dig through the sources than anything that requires deep thought.


Taking this thought just a tiny bit further, this is changing a stateless protocol to a stateful one.


NAT stands for Network Address Translation, which means a NAT device maintains a translation table of internal IPs to external, so that it can return response packets coming from Internet to a proper destination on the internal network.

By definition NAT will maintain state, which is the translation table. Now, that table can be dynamic or static, but it doesn't change the fact that there will be some state to maintain.


> By definition NAT will maintain state which is translation table.

Stateless NAT is also possible, but then it has to be 1:1. Which has its purpose, but is rarely used.

A practical example would be with IPv6 if your ISP doesn't allocate you a static prefix. Stateless NAT would allow you to use a /64 from the private range of fd00::/8 in your local network which the router would translate to your globally unique /64. No state needed, because there would be as many IPs available in your LAN prefix as in your GUA prefix. All it would do would be translating fdxx:xxxx:xxxx:xxxx:1234:1234:1234:1234 to 2yyy:yyyy:yyyy:yyyy:1234:1234:1234:1234 and vice versa.
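
A toy sketch of that 1:1 prefix swap (the prefixes below are made up for illustration, not from any real allocation):

    import ipaddress

    ULA_PREFIX = ipaddress.ip_network("fd00:1234:5678:9abc::/64")   # made-up LAN prefix
    GUA_PREFIX = ipaddress.ip_network("2001:db8:1:2::/64")          # made-up global prefix

    def translate(addr: str, dst_net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
        # Keep the low 64 bits (interface ID), replace the high 64 bits (prefix).
        iid = int(ipaddress.ip_address(addr)) & ((1 << 64) - 1)
        return ipaddress.ip_address(int(dst_net.network_address) | iid)

    # LAN -> Internet and back, with no per-connection state:
    print(translate("fd00:1234:5678:9abc::42", GUA_PREFIX))  # 2001:db8:1:2::42
    print(translate("2001:db8:1:2::42", ULA_PREFIX))         # fd00:1234:5678:9abc::42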

I've also done stateless NAT on IPv4. When you request more IPs from some cloud providers, they assign you a bunch of /32s, not a proper subnet, virtually requiring you to run a cloud router.


Any NAT that is not statically mapping IP addresses or ports 1-to-1 will require connections to be tracked and hence makes it stateful on the side after the translation (usually outside).

Hence you do need state syncing between firewalls in order for NAT connections to fail over correctly, unless it's statically mapped, one-to-one, one range onto another range, for example.


This isn't really specific to NAT either; connection tracking is required for most firewalls as well, even if NAT isn't in play, just to implement the most basic ALLOW related,established rule, even (and especially) for what would normally be connectionless protocols.


Yes, tracking the state of connections (e.g. TCP) is needed to enforce rules on OSI layers 4 - 7. That's kinda the typical scenario when we think of connection tracking and stateful enforcement of rules.

I was just pointing out when NAT also requires connection tracking (i.e. when the NAT table needs to be built dynamically, as opposed to statically mapped).


You're confusing tracking the packets with the protocol itself. It's not changing ICMP, it's tracking ICMP packets. That's a totally different thing.


It is (or was) a thing with NAT. Linux also comes with stateful modules (ip_conntrack*) to track and rewrite higher level protocols, such as FTP control connections.


Ping needs that bit of state itself anyway to match replies to requests.


Why not use the source private IP instead of the “unique value”?


One reason would be to not expose details about your private network to every hop the ICMP packet traverses. Even if knowing you have some 192.168.1.x host is not on its own very useful to an attacker, it'd be preferable to not expose that.

It's another reason WebRTC/STUN was a big issue when it first became widely available, it made it easy to leak details about your LAN to outside servers.


Besides “security”, which is a byproduct of NAT and not a goal, there’s the fact that an IP address can change. The routing tables usually go to MAC addresses, not IP addresses. So it is easier to store a unique ID that fits in that field, which then points to a MAC address, which then points to an IP address.


But is the ID in the ICMP header, or does it belong to the IP part?


It's refreshing to see a "how does" which actually drills down through layers of abstraction all the way to the source code. Nicely explained and very informative!


I came here to write this. Routing and networking is still confusing for me and all the writing about it is usually very "abstract" to me. A hands-on example like this one is really appreciated. Nice work, OP. I'll try to do it myself and follow along.

EDIT: one of the only other posts about this stuff that has made much sense to me is this one from Tailscale. It contains lots of "worked out examples" that really make it clear how everything fits together.

https://tailscale.com/blog/how-nat-traversal-works/


IME if you're digging into the finer points of netfilter, you eventually run up against the limits of published documentation and have to dig into the source code to figure some things out.


Good post.

Coincidentally, I was struggling with Netfilter this weekend to enable a transparent proxy on my OpenWRT router.

For the curious, the go-to resources for Netfilter are:

1. https://wiki.nftables.org/wiki-nftables/index.php/Main_Page

2. https://www.netfilter.org/projects/nftables/manpage.html


Since there is no port in ICMP, NAT doesn't have to deal with the problem of sending the ICMP echo reply back to the correct port.

ICMP echo requests have an ID, and that's effectively the same as a source port number.

Correct NAT handling of ICMP echo has to remap the ID in both directions, the same way that correct handling of UDP remaps the source port.

Reason being, if the machine behind NAT is being pinged at the same time by two different hosts, and they happen to use the same request numbers, then it is ambiguous.

Another possibility is not to rewrite the identifiers, but keep a list of remote machines associated with each ID. When there is a clashing ID, the list contains two or more entries (remote IP addresses). So then, when a reply is received from the machine behind the NAT gateway, the NAT chooses one of the entries in the list (say, the least recently added one) and sends the reply to that machine. Then removes the entry.
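
A toy sketch of the first (remapping) approach, just to make the bookkeeping concrete (not how the kernel actually structures it):

    import itertools

    _ext_ids = itertools.count(1)
    _out = {}   # (internal IP, internal ID) -> external ID
    _in = {}    # external ID -> (internal IP, internal ID)

    def nat_outbound(src_ip: str, icmp_id: int) -> int:
        """Rewrite the ID of an outgoing echo request; remember the mapping."""
        key = (src_ip, icmp_id)
        if key not in _out:
            ext = next(_ext_ids) & 0xFFFF    # the ID field is only 16 bits
            _out[key] = ext
            _in[ext] = key
        return _out[key]

    def nat_inbound(icmp_id: int):
        """Map the ID of an incoming echo reply back, or None if unknown."""
        return _in.get(icmp_id)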


NAT is such a trashy abstraction. IPv4 needs to die.


I have a few devices on my home internet, on a handful of 192.168 subnets

The other week I moved to a new ISP. The AS my house belonged to obviously changed to the new ISP's, and I got a new v4 IP.

All I had to do was update my WAN router to forward traffic from the new IP.

Instead with ipv6 I would have to change every node on my network, update my internal DNS.

Now in theory I could have my own /48 which I take with me. That relies on my new ISP being willing to advertise it (which my current one does) but it’s not particularly common.

However a week ago my phone line was cut. I got a 5g mifi out and moved my wan connectivity through that until the cable was fixed. Again a nice simple masquerade on that interface and all was good (well not that good - very poor signal where I live)

But the elephant in the room is of course all that ipv6 stuff aside, I still need to run a dual stack (or use trashy nat abstractions). It increases my work for no benefit.

But talking about work, how about there?

I have a fleet of vehicles on internal 172.16/12 subnets; they plug together and route to each other, and route from where they are via a variety of VPN connectivity (hoping that at least one method will work, as there's rarely a signal in the basements where these park).

If I moved them to IPv6 then again I'm back to having to move my /48s. Except these vehicles get internet from various sporting venues - most of which struggle to turn off MITM/443 or unblock UDP; that's just not going to work in a world where they turn up at 10am Saturday morning and need to be working 2 hours later.

What business benefit is there for me to double the workload and double the risk by moving to dual stack?


With IPv6 you would do stateless autoconfiguration, so there would be no manual setting of your addresses. The router would advertise the new prefix and everything would just use it.

There would be no DNS configuration at all, all local machines would use anycast DNS for the services and a well known server for Internet addresses.

One of the primary goals of IPv6 was to avoid needing manual configuration of anything on the network. It is supposed to be as automated as possible.


> There would be no DNS configuration at all, all local machines would use anycast DNS for the services and a well known server for Internet addresses.

Assumptions and dragons be here.


mDNS is specifically designed for use on a single VLAN, so it's useless here.


The solution there is DDNS, which is one of the things behind MS AD and just works, and configuring that on pure Unix infrastructure is surprisingly easy.


Layers and layers of fragile interdependencies, and it still doesn't remove the need for IPv4.


And now I can't find anything because mDNS doesn't work, half my kit won't take DNS entries, more fragility from systems which don't exist, and of course all my open sessions on local networks break as IP addresses change, not to mention all my WireGuard sessions.


> There would be no DNS configuration at all, all local machines would use anycast DNS for the services and a well known server for Internet addresses.

Ok, let's say I have a web server - www.example.com running on 192.168.0.100:80/2001:db8::100 port 80 - and a game server - game.example.com running on 192.168.0.99:27015/2001:db8::99 port 27015. The IPv4 side uses CNAME records pointing to server.example.com, which has an A record for an IPv4 address which then forwards via NAT to the above. The IPv6 AAAA records point directly to the above addresses and go via a transparent firewall (which likely does router advertisement).

How do I handle an address change when I change ISPs here?

For IPv4 it's very simple - I update the CNAME record and there's no further configuration required - my NAT works and traffic flows. Presumably I could automate this simply with a DDNS client my router likely already has built in.

For IPv6, I presume I need to look up all the machines (likely by logging into them, as DHCPv6 doesn't appear to be the norm), then go through and update all records? I understand a static suffix may help in inferring the new address, but surely I either have lots of manual updating to do now or need to run a DDNS client per machine?

I have tried running a dual stack, but every time I try it seems to be significantly more steps and complexity than IPv4, but maybe I'm missing something.


I think a DDNS client per device is the simplest solution. With https://dns.he.net, this can be an hourly cron job that fetches a URL, so no additional software is needed.

Alternatively, if your devices have a stable suffix, https://dynv6.com/ supports prefix updates across multiple records.


> I think a DDNS client per device is the simplest solution. With https://dns.he.net, this can be an hourly cron job that fetches a URL, so no additional software is needed.

Sure, but depending on what you have set up a lot of maintenance and complexity compared to NAT.

> Alternatively, if your devices have a stable suffix, https://dynv6.com/ supports prefix updates across multiple records.

That is cool and interesting, thanks! Like a lot of people self hosting, the hosted nature of it is unappealing to me, but the concept is workable with a self-hosted solution for sure.


You could be using IPv6 ULA addresses internally on your home network to have static addressing. The real solution is moving to DNS names though with your router maintaining them based on DHCP leases or just using multicast DNS (Zeroconf).

In the future you can probably go "IPv6-mostly" with a CLAT engine to ditch dual-stack: https://blog.apnic.net/2022/11/21/deploying-ipv6-mostly-acce...


You could, but now you have three addresses per node instead of one. Plus, the mechanisms for assigning those addresses are weird compared to DHCP and static assignment. I get that it facilitates packets being routed reliably, but some of us want maintainable firewall rules that don't have to deal with IP addresses changing out of the blue.


You can DHCP or static assign those addresses the same. The trick to FW rules is you don't route the local prefix out so you only need rules for anything leaving or anything staying.

If you don't need cross subnet communication of your self hosted services you can also get away with just a static link-local and a dynamic general.


> In the future you can probably go "IPv6-mostly" with a CLAT engine

...although there still isn't any kernel support for the necessary SIIT v4<->v6 translation, so to implement CLAT you end up using unmaintained (and unmergeably bad) out-of-tree kernel modules or unmaintained (and slow) userspace daemons hanging off a tuntap interface.


pf on OpenBSD does it fine.


It does yes. I should have written 'no Linux kernel support' rather than 'no kernel support', sorry. The BSDs are better off than Linux here.


In IPv6 you'd do exactly like you'd do with IPv4, by assigning ULAs (private, local addresses) to your machines from fc00::/7.

With the (IMHO) big advantage that unless some madman has configured NAT66, the traffic over ULA will *never* get out into the internet.

The fact you have GUAs allocated doesn't mean you have to necessarily use them for your internal traffic. Most of the time link local addresses (on small scale, with auto discovery via LLMNR or mDNS) or ULAs are way more convenient than GUAs or IPv4 local addresses.


I do believe there is some kind of 1:1 NAT with IPv6 these days, which is way better than 1:Many of IPv4. There are so many potentially useful applications that are DOA because of v4 NAT being everywhere.


Those applications are DOA because of firewall administrators that barely allow tcp/443 through.


Not sure IPv6 will fix this. Technically, yes it does. But major providers only assigning a /64 to a home user (and charging hefty fees for "business use" /48s) already leads to IPv6 NAT or segmenting the /64 further - which shouldn't be done.


That might be a 'your area of the world' issue. Every 'major provider' I've dealt with hands out /48s.


Most seem to have stopped doing that and are handing out /48s in my experience. Do you know of any still not doing that?


I'm with one of the biggest german internet providers (o2 Telefonia) and they are not even providing any IPv6 at all (at least not in all regions, and without calling support to enable this feature individually).


Thanks for the correction! Hopefully they get their act together soon.


Why would anyone need a /64 if not to segment it further?


So nobody ever again needs to think "what size end user subnet is in use". It's /64, it's always /64. It doesn't matter if you're embedding MAC addresses, using random assignment, using multiple assignments, have 1 device, have 1 trillion devices. It's a /64.


Ok, then why waste a /64 on that? A /96 should be enough, or even a /112.


Well ask the IETF, the RFCs say that sub-segmenting a /64 shouldn't be done. Yet people do, and the result is - well - here be dragons depending on the implementations.


You'll hate CG-NAT even more then.


The first time I encountered CGNAT was such a rude shock. I don't think it should be legal to market it as "internet" to consumers


If your ISP gives you CGNAT, then the best thing you can do is to request a static public IP address. Will probably cost a little bit more but well worth it.


heh ... It's all IPv6 ULA here with NAT66


Be mindful of the Lindy effect, the observation that the future longevity of non-perishable things like a technology or an idea is proportional to their current age: IPv4, due to its age, will likely be around for quite some time to come.

https://en.wikipedia.org/wiki/Lindy_effect


IPv6 needs to die also. It had more than enough time to become dominant and has just floundered.


https://www.google.com/intl/en/ipv6/statistics.html 45% (and growing) of all traffic to Google is IPv6. Hardly "floundered". It's just that most major ISPs in the developed world have so many IPv4 addresses they don't care that much about IPv6 yet.

Now, try starting a new ISP without CGNAT (which will lead to a garbage experience for everyone) or IPv6. You'll have to spend literal tens (if not hundreds) of millions just on IP addresses alone.


25 years and we've only got 45%. We should've been at 95% decades earlier if they'd come up with an actual transition plan.


"25 years" is not fair. There was no immediate need for IPv6 for anyone 10 years ago, so it should be no surprise that it's not at 95% currently.

Now there is.


20 years ago DJB called it [0]. The same problems exist. The only place IPv6 has gained any success is in the mobile market, since handsets tend to be homogeneous and therefore configurable, which does allow carriers to decrease the load on CGNAT. However, this success is not replicated in the broadband realm and probably never will be, for all the same reasons outlined by DJB. IPv6 is a second class network.

[0]: https://cr.yp.to/djbdns/ipv6mess.html


> IPv6 is a second class network.

Except it is not. Where it works, it works extremely well. IPv6 connections are, by default, always preferred on all modern operating systems.

I’d also take the entire article with a grain of salt, because it calls a fundamental impossibility (lack of interoperability between v6 and v4 addresses) a “mistake”. Not having interoperability was the only way, not a “mistake”.

Virtually all the pain points have been dealt with. Any further transition to IPv6 is going to happen without anyone really noticing. Except for the couple of gamers and sysadmins who were wrongly advised to "disable IPv6" to fix "connectivity problems".


Yeah, right, that's exactly why we need to kill v6 and create the next version of IP - it's obviously going to happen faster this time.


You need NAT (or something else that is worse in some respects, like port forwarding) in any situation in which your subnet is given only one address upstream, even if it is an IPv6 address.


If your ISP doesn't do PD with v6, their implementation sucks. Even my crappy 6rd setup from CenturyLink gives me iirc an entire /48.


Many ISPs suck. That’s not controversial.

We have to deal with the world we live in, not the world we’d like.


That's why variable length SLAAC has been proposed

https://datatracker.ietf.org/doc/draft-mishra-6man-variable-...


oh no


> Many ISPs suck. That’s not controversial.

> We have to deal with the world we live in, not the world we’d like.

No we don't. Some choose to just put up with shittiness, others enact change.


Priorities. I don't have to put up with PPPoE in 2023, but it's a hell of a lot less expensive than pulling munifiber to my garage (and the monthly fees for munifiber are higher too, so there's no point in time where it makes economic sense), and consistency and stable addressing is currently winning over the promise of 5g/leo satellite.


Sure, but you're choosing that prioritisation. It's not being forced on you.


I'm not even sure if you and GP agree or disagree.

As for me, I want IPv4 to stay forever. It works for me and I don't see any reason to spend time and migrate to something else.


OP here.

Your "ISP" is a sysadmin at work who gives you one address to your cube.

You otherwise like the work and the team, and the compensation is fine.

Now what?


> Now what?

You advocate for change. You make the case.

You might not win the battle, but you're by no means forced to accept the status quo. The more who fight the battle, the more win. The more win, the faster progress, which benefits us all.


I set up NAT, I move on.

If you can solve a problem technically, without involving people, that is best.


Yeah, but there's no reason to do that with IPv6


You mean the unsung hero of the Internet.


IPv6 needs to die. IPv4 using NAT ensures a moderately high level of privacy. IPv6 with privacy extensions does not.


IPv4+NAT still exposes your router's address, which is still problematic, no? If you want more privacy than that you can use a VPN, which should work on IPv6 too.


You need to use a firewall in BOTH cases. You're blaming the protocol for completely unrelated reasons.


Is there a better way to not unnecessarily leak addressing metadata to adversarial remote nodes and middle boxes?

IPv6 with assigning end users a whole /64 and end-devices continually churning through privacy addresses is a start. But even then some form of NAT is still required to nimbly use source prefixes from different horizon providers - eg to avoid spilling your geographic location or opening yourself up to low-effort legal shakedowns.

An example: on my local network I've got an everyday web browsing VM and a torrent VM. They each have static 192.168.x.x addresses, both so I can ssh in for administration and also to control their view of network services. They each see a completely different Internet horizon through the router - the web browsing goes out from a rotating datacenter IP, and the torrent one goes out from a consumer VPN. Each of those outgoing horizons uses NAT - any of my hosts using that rotating data center IP appears the same, and any of my hosts using the consumer VPN appears the same as every other customer using that same VPN node.

What is the no-NAT equivalent of this? Make that rotating data center IP and VPN external IP into subnet allocations, somehow feed that addressing information back to the hosts that are using it, and dual-home each VM with two routable addresses? For equivalent mixing on the consumer VPN there would also need to be some ARP-like protocol that let me continually rotate the address.


> What is the no-NAT equivalent of this?

At least for web-browsing and other HTTP/TCP use-cases: Cut off internet from your hosts and use centralized local proxies for all outgoing connections. Presumably you already have reverse proxies in place for the incoming. There is no need for NAT if all the traffic is taken care of in higher layers. This reduces your consideration to the internet-facing forward- and reverse-proxies only.

Sounds like you already have bittorrent figured out via VPN (Wireguard I guess? Well there we have one more UDP exit-point to consider).

BTW, I largely agree with your sentiment: Benefit of (especially migrating to) IPv6/DS for individual networks is often unclear or questionable and metadata privacy is a valid consideration where I believe correct solutions are not readily available and understood even by your well-intentioned and seasoned senior admins. Maybe globally the number of people who will get this right ranges in the 1000s? 10,000s if we're lucky? How many networks do we need to migrate again for "IPv4 to die"?

I guess the only way forward is for more people to do that migration and share their findings and solutions, though ;)


The general ignorance of the privacy benefits of NAT is what I'm reacting against too. It's certainly regrettable that end users are forced into NAT [0], but since then a shameless surveillance industry has cropped up, looking to exploit every bit of identifying information that it can. And it seems that calls for native IPv6 with everything having its own distinct address generally just ignore the practical privacy implications.

It certainly seems possible to get a NAT-equivalent privacy from properly set up SLAAC. Although a sibling comment says that the proposal for variable length prefixes was just submitted this year?!? Equivalent privacy would also require things like consumer VPN providers allowing you to request a few new addresses every few minutes, whereas NAT makes a shared uniform distribution the default.

Using a proxy instead of NAT is a good point, although there are certainly reasons I moved towards managing egress flows at the packet level with VMs rather than configuring software to play nice with proxies. And spiritually I would say that a proxy is an even more heavyweight version of NAT one layer up.

[0] Although I don't personally think the web would have developed any less centralized without NAT as many people like to imagine


Why not just use a VPN in both cases? That’s more or less what your NAT solution is doing, except without the encryption to the data center.


It is a wireguard tunnel to the data center, but my comment was focused on the addressing.


I wonder if ping could be abused to send short messages for p2p networking over UDP without a central server to handle NAT busting. Looks like someone figured the message part out:

https://stackoverflow.com/questions/31857419/how-to-send-a-m...

Unfortunately ping is handled by the OS so apps on the peer IPs wouldn't be able to read the messages.

I wonder if it's time to provide hooks to some of these services in user space to make true p2p under double-ended NAT possible. At least a readonly event stream or something. It just feels like the barriers preventing that are entirely artificial now.
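
For the message part itself, the basic idea is just stuffing your data into the echo request payload; a rough sketch with scapy (placeholder peer address, needs raw-socket privileges):

    from scapy.all import IP, ICMP, Raw, sr1

    PEER = "203.0.113.7"   # placeholder address

    # Send a short message in the echo request payload and wait for the reply;
    # a well-behaved peer echoes the payload back verbatim.
    reply = sr1(IP(dst=PEER) / ICMP(type=8, id=0x1234, seq=1) / Raw(load=b"hello"),
                timeout=2, verbose=False)
    if reply is not None and Raw in reply:
        print(reply[Raw].load)

Of course, as noted, the app on the other end normally never sees this because the OS answers the echo itself.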


Minor technical correction, but ping is ICMP rather than UDP.

But I have seen data exfiltration strategies and other communication that uses ping! Nowadays I think it would be nearly impossible for p2p because most firewall default configs will silently drop all ICMP, including pings.


Note that blanket dropping of ICMP will break Path MTU Discovery (PMTUD) so you had better not be tunneling or encapsulating TCP traffic.


Actually, ICMP-based PMTUD is almost dead in IPv4 due to this exact problem (since ICMP isn't a "protected" protocol that's required for IPv4 connectivity); most actual services tend to do the MTU discovery purely using UDP or even using TCP (https://datatracker.ietf.org/doc/html/rfc4821)


That is essentially a reaction to random middleboxes just plainly dropping ICMP traffic. If you want stuff to work you do not want to just drop ICMP. The sane policy is to just pass it through or maybe rate limit it.


Nod, I remember it not being as effective/easy to hide as exfiltration over UDP/DNS too, as there was always less background noise to hide in. That said, I found this with a quick search - https://github.com/utoni/ptunnel-ng for those who still want to do it. A number of hotels and captive portals still let pings through relatively unmolested even if they play tricks with UDP/TCP.

Any significant data over ICMP will always stick out though if anyone is doing analysis. Which isn’t often, frankly, in situations like I described, but…


Interesting idea. It would seem that 'id' is effectively equivalent of (sport, dport), but 16 bits is a much smaller space than 32.

But isn't the main problem with NAT punching that it requires activity on both ends to create a connection? Thus it always requires a coordination server to let node T (target) know that node S (source) is trying to talk to it.

You've got me thinking though. I wonder if there is a way to do this with ICMP routing messages - unreachable, TTL expired, etc. You can traceroute to some IP address, and get back packets from other arbitrary IP addresses, and this generally works through NAT. I'm envisioning a host T that wanted incoming connections to pick a random "dummy" IP address, publish (router IP, dummy IP) as its identity, and periodically send packets to the dummy IP address. Now a host S that wants to talk to T might be able to send an ICMP TTL-expired to T's router, pertaining to the dummy address. The router should see this and forward the packet to T.

Of course this is contingent on whether IP addresses in ICMP fields are ingress policed the way the addresses in the IP header have become.

(edit: hah. There is now a top-level comment pointing to an implementation of this idea)


It exists: https://samy.pl/pwnat/

(from top comment)


Thanks, ya amazingly it got posted an hour after I asked!

I wasted two years of my life back around 2005 trying to implement P2P networking over UDP through double NAT with TCP fallback through a central server tunnel. It was before promises, coroutines, green threads, software transactional memories (STMs), conflict-free replicated data types (CRDTs), Firebase, CouchDB, Redux, declarative programming, etc etc etc had really gone mainstream. So I lost all of that time reinventing the wheel, only to come up with something that wasn't deterministic, because I couldn't figure out how to fork/join conditional logic around multiple conditions and data sources. My cooperative threads would just stall waiting for some network event after a user had already shut down the game lobby, and I'd find myself trying to use signals/exceptions and polling throughout my business logic.

That experience taught me that async programming isn't the way to go for mission critical code. The way to do it today looks like Raft, where event streams are centrally handled in synchronous blocking code in either an event handler or a separate thread to create a single source of truth like an STM. So last-known global state is available to all peers, the only differences coming from latency, permissions and distance cutoff. Player input as either events or composed states is sent over the network to update each peer's view of the STM to derive the final global state. Then each physics frame, peers simulate playing forward from that state for dead reckoning and update the scene view. The next frame may have a new STM state, so inconsistencies get resolved through animation metaphors like in CSS and Apple's Core Animation. But there should never be any logic that interacts with the network directly or hand waves its way through paradoxes.

Loosely that means that when you hit another player, you may see it lose a life, but then pop back in if the STM decides that the shot was a miss. The game itself is written the standard way, as if all players are on a single server, using (at most) cooperative threads like in Unity to handle game business logic in a coroutine style, rather than using switch commands to implement a state machine around player modes like IS_SPAWNING, IS_SHOT, IS_DYING, etc like most games do. Since everything is sync blocking within each cooperative thread, determinism is guaranteed by eliminating whole classes of bugs.

It would be nice if someone would put together everything I just said in a single package that's infinitely scalable to any number of peers like BitTorrent and runs on Mac/Windows/Linux without requiring a separate process or superuser privileges. The Raft/STM interface should look like a browser's local storage and provide fully indexed distributed associative key-value pairs as a JSON interface with the same transaction limitations as Firebase to allow updating trees with something close to ACID compliance from databases.

<rant>

If I ever get free of the rat race, I'd like to implement that P2P STM, but without world peace or UBI, I'll probably spend the next 20 years making rent like everyone else. I imagine the lifetimes of knowledge and experience siloed in all of our brains that could get out if we just had an extra 40 hours per week to work on the side projects of our dreams to get real work done and it haunts me. Nearly everything I do now in middle age is a waste of time because it would be straightforward to write better tools to make my job easier, but I'll never have the time to do that. So I toil in obscurity implementing other people's dreams in the hopes that someday one of their successes might help me win the internet lottery. But hey it's a living, and after going through a healing and growth process, just this year coming out of the dark night of the soul, I'm grateful for all of the lost decades and what they taught me about the nature of suffering and why we incarnated in this reality. Maybe the next generation will achieve the revolution that eluded mine.


Not exactly what you're looking for, but your comment about abusing pings made me remember pingfs [1]. It brings an entirely new definition of cloud computing!

[1] - https://github.com/yarrick/pingfs


As IPv6 gains more and more adoption this should become less of an issue, if everyone has a publicly routable IP and can avoid NAT altogether.


ping is icmp not udp


It's super confusing because you can use udp to read icmp packets (but not send, iirc), and i might be wrong, but i remember seeing tuts that did this!!


Getting downvoted, so:

https://stackoverflow.com/questions/13087097/how-to-get-icmp...

Using a udp socket is the "classic" way of implementing ping on low privilege systems


“udp” in this context means unprivileged datagram, not UDP the protocol. For some reason Go uses the confusing “udp” name in parts of its API. The docs for this kind of socket seem to only exist on the kernel commit: https://lwn.net/Articles/420800/
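
A minimal sketch of that socket type on Linux (assuming net.ipv4.ping_group_range includes your group; the peer address is a placeholder, so expect a timeout):

    import socket, struct

    # SOCK_DGRAM + IPPROTO_ICMP is the unprivileged "ping socket" from that LWN patch.
    # The kernel assigns/rewrites the ICMP ID (it behaves like a local port) and,
    # as I understand it, fills in the checksum too.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
    s.settimeout(2)

    # Echo request header: type=8, code=0, checksum=0, id=0, seq=1, plus a payload.
    s.sendto(struct.pack("!BBHHH", 8, 0, 0, 0, 1) + b"hi", ("192.0.2.1", 0))
    print(s.recvfrom(1024))   # echo reply (ICMP message, iirc without the IP header)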


You can kindly ask the kernel networking stack to inform you of errors, but that is not the same as "using udp to read icmp packets".


It annoys me when I write blog posts like this: it's so hard to link to a specific line of code and have that link stay alive and useful/fresh over time.

I guess if it's GitHub, you can tie it to a specific (commit hash, file name, line number) tuple, but if the codebase ever changes a lot it's not super useful. I've also not had luck with other, less used git webviews (git.blender.org)


For linux kernel code, you can use elixir, so it'll at least be linked to a specific version. You can use an LTS version if you want the code to have at least some staying power.

https://elixir.bootlin.com/linux/latest/source


Tl;dr (but do read it, it's very good): there's an ID field in the ICMP packet, and netfilter is aware of ICMP packets (frames?) as a "special case".



