It is written in C++14, is platform-independent, and works within a single-threaded reactor environment. Microcontrollers are the primary target; for example, the stack does no dynamic memory allocation at all. I consider the TCP part to be "pretty complete" at this point; it even has PMTUD, which lwIP lacks. The TCP code is also well commented.
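To make "no dynamic allocation" concrete, here is a minimal sketch of the style of design that implies; the names and numbers below are made up for illustration and are not taken from aprinter:

#include <cstdint>

// Hypothetical sketch, not the actual aprinter configuration: with no dynamic
// allocation, every resource limit is a compile-time constant and the backing
// storage is a plain array baked into the firmware image.
struct StackConfig {
    static constexpr int NumTcpPcbs = 8;     // max concurrent TCP connections
    static constexpr int NumPktBufs = 16;    // packet buffers shared by rx/tx
    static constexpr int PktBufSize = 1536;  // room for one full Ethernet frame
};

struct TcpPcb { /* addresses, sequence numbers, timers, ... */ };

// Running out of a resource means refusing work (e.g. dropping a SYN),
// never calling malloc.
static TcpPcb g_tcp_pcbs[StackConfig::NumTcpPcbs];
static std::uint8_t g_pkt_bufs[StackConfig::NumPktBufs][StackConfig::PktBufSize];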
Linux-specific code is here: https://github.com/ambrop72/aprinter/tree/master/aprinter/ha...
However, note that LinuxTapEthernet does not talk to my TCP/IP stack directly; there is a driver abstraction in between, which is also the part that configures/instantiates the stack (aprinter/net/IpStackNetwork.h).
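To illustrate the shape of that separation, here is a simplified, hypothetical interface; it is not the actual code in IpStackNetwork.h:

#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the driver-abstraction idea: the stack only sees an
// interface that can send Ethernet frames and hands received frames back, so
// LinuxTapEthernet, an MCU MAC driver, etc. are interchangeable behind it.
struct EthFrame {
    const std::uint8_t *data;
    std::size_t len;
};

class EthDriver {
public:
    struct Callbacks {
        virtual void frameReceived(EthFrame frame) = 0;  // driver -> stack
        virtual void linkStateChanged(bool up) = 0;
    };

    virtual void setCallbacks(Callbacks *cb) = 0;
    virtual bool sendFrame(EthFrame frame) = 0;          // stack -> driver
    virtual ~EthDriver() {}
};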
It is possible to try it out if you have the Nix package manager (since the configuration/build system is based on that):
python -B config_system/generator/generate.py | nix-build - -o ~/aprinter-build
Network settings can then be configured with M926 commands, for example for a static IP setup:
M926 INetworkDhcpEnabled V0
M926 INetworkIpAddress V192.168.64.10
M926 INetworkIpNetmask V255.255.255.0
M926 INetworkIpGateway V192.168.64.1
What remains to be done is a minimal example of integrating the stack, along with corresponding documentation: for example, how to integrate with the reactor, which is abstract from the perspective of the stack but must follow a specific interface. The stack itself only uses timers, since other event sources would be used for I/O by e.g. interface drivers. Currently two event loops are provided: one for Linux based on epoll (aprinter/system/LinuxEventLoop.h) and one that busy-loops, meant for microcontrollers (aprinter/system/BusyEventLoop.h).
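As a rough sketch of the kind of timer interface meant here (names and layout are hypothetical, not the actual aprinter API):

#include <cstdint>

// Hypothetical sketch, not the actual aprinter interface: the stack only asks
// the reactor for "call this handler at time T"; I/O readiness belongs to the
// interface drivers, so the stack<->reactor contract stays this small.
using TimeType = std::uint32_t;  // monotonic clock ticks

struct Timer {
    TimeType deadline = 0;
    bool active = false;
    bool linked = false;                 // stays linked for its whole lifetime
    Timer *next = nullptr;               // intrusive list, no dynamic allocation
    void (*handler)(Timer *) = nullptr;  // e.g. TCP retransmit, TIME_WAIT expiry
};

class Reactor {
public:
    // The stack owns the Timer object; the reactor links it in once and after
    // that only toggles the active flag and deadline.
    void setTimer(Timer *t, TimeType deadline) {
        if (!t->linked) { t->next = m_list; m_list = t; t->linked = true; }
        t->deadline = deadline;
        t->active = true;
    }

    void unsetTimer(Timer *t) { t->active = false; }

    // Called from the event loop: on an epoll wakeup on Linux, or on each
    // iteration of the busy loop on a microcontroller. Clock wrap-around
    // handling is omitted for brevity.
    void poll(TimeType now) {
        for (Timer *t = m_list; t != nullptr; t = t->next) {
            if (t->active && now >= t->deadline) {
                t->active = false;
                t->handler(t);  // the handler may re-arm the timer
            }
        }
    }

private:
    Timer *m_list = nullptr;
};

Presumably the two provided loops differ mainly in when they call into such timers: the epoll-based one can sleep until the earliest deadline, while the busy loop just polls on every iteration.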
Minor nitpick - I wish the author would link to the previous installments in the series at the bottom of the page :)
Of course in the real world people have tried it, but it hasn't caught on. There are some niche "specialized hardware talking to specialized hardware" applications where it lives on (RDMA in HPC for example).
The limited TCP acceleration features in Ethernet cards, which are only used in amenable circumstances, have been a significant source of headaches and mysterious data-corruption bugs, but they have eventually become pretty common hardware features. I'm not sure how often they are actually used, though.
The need has also decreased in end-user hardware, since broadband has largely stopped getting faster; we don't have the 10-100 Gbit Internet connections that the curve from the early 2000s would have led to. Instead we transitioned to choppy 4G on iPhones and left broadband to stagnate...
There is a more heavy handed offload in what Windows calls "TCP Chimney Offload" (https://docs.microsoft.com/en-us/windows-hardware/drivers/ne...), but it isn't as commonly used and is generally disfavored.
1 - https://en.wikipedia.org/wiki/TCP_offload_engine
https://wiki.linuxfoundation.org/networking/toe suggests it's done mostly in closed-source firmware.
The feature is usually called a "TCP Offload Engine".
Congestion control and flow control, for instance; there are many RFCs that define them.
So yes, please roll your own. But no, please do not add to the wild zoo of errors by deploying it.
I think there's a lot to be said for testing network code against the myriad of real-world implementations out there.