In the original TCP implementations from the early 80s, performance was (understandably) not the main priority; they just wanted to get it working first. Also, nobody was sure which networking protocols would become popular (IP? OSI? XNS?), so a lot of work went into making everything as flexible as possible. This reached an apogee with AT&T's "STREAMS" subsystem (a competitor to the sockets API for writing networking code on UNIX), which was very flexible but also extremely complicated.
What Van Jacobson's work was saying is "look guys: by paying close attention to the fastpath you can saturate your 10Mbps network with TCP traffic." I'm sure everybody who has written a TCP stack has seen this email and has taken the tricks to heart.
These days not all of the tweaks are still relevant. For instance, the NIC is probably computing the checksum for you in hardware. However, the spirit of "do as little as possible in the fastpath" is certainly still followed in modern stacks.
Just as a historical note, classic Mac OS (versions 7-9) used STREAMS for TCP, in the Open Transport networking API. Sockets were provided as a wrapper around STREAMS, which worked about as well as you might expect. It had its fans but most of us were pleased to get real sockets in Mac OS X.
I wasn't aware classic Mac OS worked that way. Sockets-emulated-over-STREAMS was the standard way of doing things in SVR4-based UNIXes. In the early-to-mid 90s this gave them a poor reputation among early web admins, since they just didn't handle high rates of connections as well as BSD or even the early Linux stacks.
In the case of Solaris, this was largely fixed in the 2.6 release, where they went back to a sockets-based stack and ran the STREAMS stack in parallel. (Actually this was originally supplied as a semi-supported patch to 2.5.1, since the scalability of the network stack was becoming a critical issue at large customers.) Many of the OS-supplied services (like rpcbind, I think) still used the STREAMS API, but the 99.9% of external software that used the sockets API now had a fast native path to the network.
As far as I'm aware this is still the state there today, with STREAMS probably still existing for compatibility but basically ignored by everyone.
There were some interesting things Solaris did with STREAMS: for instance, telnetd and rlogind would just push a kernel-land module that copied data between the pty and the socket. This way you didn't have to make the kernel->user->kernel->user transition on every keystroke. In the heyday of shell accounts this was terrific. Of course, these days CPUs are so much faster and everyone uses ssh anyway, so it wouldn't be a useful optimization.
STREAMS was an interesting experiment, but I don't mourn its passing at all.