Protocol Wars (wikipedia.org)
183 points by dcminter on May 11, 2023 | 116 comments



TCP/IP is so entrenched in everything, literally, it will still be in use when we leave this planet and it gets swallowed up by the sun. The news will report the destruction of Earth and explain that the longest-lasting legacy of it and its inhabitants is TCP/IP, then report that the transition to IPv6 is going well and everyone will be on it soon...


I think the real issue is that operating systems did not properly abstract away the APIs, protocols and networks. On Plan 9 the dial string is my favorite part of networking because you optionally specify the network in addition to the address and port, all in a single string: net!address!service. To dial an ssh server on port 1234 you run "ssh user@tcp!server.net!1234". The network database (ndb) can then be set up to alias names with protocols or ports so you can omit parts of the dial string for known services, e.g. ssh defaults to tcp and port 22. The dial string neatly wraps up the entire network connection into a single string, relieving the program of having to offer command-line arguments for port numbers, which leaks protocol details into the code and makes things ugly and difficult to change.

This lets your server take a dial string and listen on any network - even if that network isn't IP! Say you wanted to bring IPX/SPX back from the dead and use it in any program: you write an IPX/SPX stack that binds over /net, then tell your server to listen on spx!*!555. Now your client also runs an IPX/SPX stack bound over /net, you hand the client program the dial string spx!server!555, and your program starts communicating over IPX/SPX to the server. Done. Want to connect to an ssh server using QUIC on Plan 9 (assuming an imaginary QUIC stack)? Just change the dial string to quic!host!port. Done.
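
As a rough sketch of the same idea on top of an ordinary OS, here is what a tiny dial-string helper might look like in Go. This is not Plan 9 code; the dialString name, the fallback default, and the example host are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// dialString is a toy version of Plan 9's net!address!service idea expressed
// with Go's net package: "tcp!server.net!1234" becomes
// net.Dial("tcp", "server.net:1234"). The defaults stand in for what ndb
// aliasing would normally supply.
func dialString(s string) (net.Conn, error) {
	parts := strings.Split(s, "!")
	switch len(parts) {
	case 3: // net!address!service
		return net.Dial(parts[0], net.JoinHostPort(parts[1], parts[2]))
	case 1: // bare address; assume tcp and ssh's port 22 as a stand-in default
		return net.Dial("tcp", net.JoinHostPort(parts[0], "22"))
	default:
		return nil, fmt.Errorf("bad dial string: %q", s)
	}
}

func main() {
	c, err := dialString("tcp!example.com!80")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer c.Close()
	fmt.Println("connected to", c.RemoteAddr())
}
```

Of course net.Dial only understands the networks the runtime already knows about, so an spx!... string would just fail - which is exactly the kind of coupling the Plan 9 approach avoids by letting you bind a new stack over /net.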

Once you use an OS with clean and simple abstractions you are left craving more of it. I wish more people took an interest in building much more radical operating systems.


TCP becomes unusable under interplanetary latencies; leaving the planet will force us to transition off of it.


Nah. The main impetus to achieve FTL communication will be because it'll be easier than shifting people off of TCP.


It’s all fun and games until you get an ACK before you sent the SYN.


That's a solved problem, just ask the thiotimoline research group.

https://en.wikipedia.org/wiki/Thiotimoline


Tardis Control Protocol.


That assumes that the speed of light is an insurmountable speed limit. We can't be 100% sure of that.

But even if it is, TCP can be used with any latency if you configure the timeouts accordingly and maintain a large enough send buffer to allow retransmissions.
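
A minimal sketch of that tuning with Go's standard socket knobs; the buffer sizes, deadline, and peer address are made-up illustrations, and the kernel's own retransmission timers would still need separate OS-level adjustment.

```go
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80") // placeholder peer
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	tcp := conn.(*net.TCPConn)
	_ = tcp.SetWriteBuffer(64 << 20) // large send buffer: unacked data must stay queued for retransmission
	_ = tcp.SetReadBuffer(64 << 20)  // matching receive-side headroom

	// Deadlines measured in hours rather than seconds, so a multi-minute RTT
	// isn't treated as a dead connection by the application.
	_ = conn.SetDeadline(time.Now().Add(2 * time.Hour))

	if _, err := conn.Write([]byte("HEAD / HTTP/1.0\r\n\r\n")); err != nil {
		log.Fatal(err)
	}
}
```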


I don't think the volume of data the average user will want to send to Mars over the next 100 years will be an important enough factor. I envision Mars comms using some kind of relay service, to which you'd communicate using TCP.


Earth-Mars will be a store and forward network not unlike the old UUCP or BBS networks like FidoNet. You will send blobs of data via something that probably looks like a replicated S3 bucket and get notified when replication of an object has occurred.

IP is usable to the Moon but a lot of protocols would need latency constants adjusted and very large packets would be desirable. Store and forward with custom replication protocols might still be more efficient for large volumes of data.


i'm not sure about that statement. See https://en.wikipedia.org/wiki/IP_over_Avian_Carriers


That's IP (and ICMP for ping), not TCP.


Yep yep yep, Long Fat Pipe problems, right.


More likely, we will end up with 15 competing standards.

https://xkcd.com/927/


Actually, even TCP is going to be replaced. Just recall Google's initial work on QUIC, which has now become a standard. It is based on UDP, re-implementing TCP's features on top of UDP together with cryptography, and allowing IP changes so that clients can move between networks.


And yet, even with QUIC and a brand new Google phone on a Google wifi network and a Google Fi cell connection, I still can't walk away from my house and have a video call migrate from wifi to 5G without the audio glitching and dropping frames.

All the mobility benefits of QUIC mean nothing when the rest of the software stack isn't designed to let it work.


that's never going to be a seamless transition. the fact the transition happens automatically is a marvel in itself


It 100% could be. Changes you'd need are:

* Android lets an application talk over both wifi and 5G at the same time.

* Android exposes information about signal strength on a per packet basis, so that the application can decide at some point that too many packets are too close to not being received, so it's time to send data over 5G in addition. It should also expose data about packet retransmissions at the physical layer (ie. collisions and backoff times due to another device using the media).

* When the 5G data flow is established and channel parameters like delay and loss are characterized, then stop sending data over wifi.

* And since this process never involved delaying any data by a roundtrip, nor any packet loss, there is no need to drop any frames.

Note that the cell network has been able to hand off voice calls from one cell tower to another without glitches since the '80s!


I think the factor you're missing is that the WiFi and 5G connections may have substantially different latencies to the other person. Going from a lower-latency to a higher-latency connection will always involve audio dropping out and video pausing. And in the opposite direction, it's preferable to skip over a few frames in order to reduce latency, rather than maintain higher latency.

I wonder if you don't see this with cell phones because the latency is generally identical, or if it's just less noticeable with audio than with video? I guess I'd also wonder if cell towers really do hand off without glitches, since there always seem to be glitches when you're driving, but you don't have the slightest idea whether they're from tall buildings or interference or Bluetooth or handoffs or what, or even on your end or the other person's end.


Video conferencing has latency in the 300ms - 1000ms range[1]. The actual network component of that is pretty small. And video conferencing software already has logic for time stretching/compression to handle variable latency - typically they'll speed up or slow down gaps between words and sentences.

[1]: https://www.mdpi.com/2076-3417/12/24/12884


Actually no -- things like FaceTime are more like 90 ms, while other common products like Zoom and Meet are around 150-200. These are minimums with somebody in your own city. This is actually from my own testing in the past, where FaceTime was the clear winner, since Apple seems to care a lot about latency and its software is custom written for its own hardware.

But networks absolutely can add major latency, have you never had a slow Zoom call? It's because of congestion building up and radio interference, not Zoom's software. That's what leads to things like 1,000 ms latency, which makes back and forth conversation very difficult. Moderate-to-major perceptible latency issues in conversation are always because of the network.

And yes some products do time stretching but that's also what people often call glitches because it's very weird.


> that's never going to be a seamless transition.

Why not? It is not obvious to me why a seamless transition would be impossible.

Isn't the whole point of TCP that individual packets can take different paths over different networks and when they reach the destination they can be sequenced? Why should changing the network disrupt the individual packets from traveling independently?


There are several things that make this difficult. Much of the difficulty relates to the device changing its network address. A seamless transition requires that the application can:

- Find the new address (i.e. cell provider vs. residential/business ISP)

- Associate the new address with the same flow

- Duplicate packets and reassemble them, or switch to the "better" path/interface


On both Android and iOS, a regular app can't choose to send packets over both 5G and wifi at the same time. That's needed to set up a new connection while still using the old one.


I was pretty sure that android and iOS both have apis for apps to choose between 'whatever is best', 'wifi preferred' and 'cellular preferred'?

I don't know the details, but some iOS network apis (Network Kit?) allow you to set a required interface type somehow. https://developer.apple.com/documentation/network/nwparamete...


Never is a long time. QUIC isn't great for this, because while it has IP address flexibility, it's not designed to have multiple paths simultaneously active.

If you design for this use case, you can make it work today; especially since the user is on a video call and only asking for the audio to be glitchless. Sending audio over both wifi and cell is possible and simple and would solve audio at the expense of double the audio bandwidth. More bandwidth efficient methods are left to the reader.
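
A minimal sketch of that "send on both" idea in Go, assuming the app already knows the local addresses each interface was given; the addresses, port, and peer below are placeholders, not any real service.

```go
package main

import (
	"log"
	"net"
)

func main() {
	remote := &net.UDPAddr{IP: net.ParseIP("203.0.113.7"), Port: 5004} // hypothetical media server

	// Bind one socket to the wifi-assigned address and one to the cellular one
	// (placeholder addresses); on most stacks that steers packets out the
	// corresponding interface, routing policy permitting.
	wifi, err := net.DialUDP("udp", &net.UDPAddr{IP: net.ParseIP("10.0.0.2")}, remote)
	if err != nil {
		log.Fatal(err)
	}
	cell, err := net.DialUDP("udp", &net.UDPAddr{IP: net.ParseIP("100.64.0.2")}, remote)
	if err != nil {
		log.Fatal(err)
	}

	frame := []byte("one 20ms audio frame") // stand-in for a real encoded frame
	for _, c := range []*net.UDPConn{wifi, cell} {
		if _, err := c.Write(frame); err != nil {
			log.Println("uplink write failed:", err) // keep the other path going
		}
	}
}
```

The receiver just de-duplicates by sequence number; as noted upthread, the real obstacle is that mobile OSes won't let an ordinary app hold both interfaces at once.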


TCP is fine and isn't going anywhere. QUIC is an overengineered contraption that only exists to serve more ads. SSH doesn't need it. Postgres doesn't need it. Nobody needs QUIC except Google.


QUIC is not without defects, I'm with you there, but almost any "web-scale" company dealing with a lot of cellular connections would disagree with your statement: Uber, Verizon, Cloudflare, Meta, Fastly, among others, some of which have reported very significant latency reductions.


> SSH doesn't need it.

SSH's "master" mode connection sharing would benefit from QUIC / HTTP/3 head-of-line blocking elimination just as much as browsers would. Run a heavy rsync over connection sharing alongside an interactive session and you'll notice the interactive session's latency suffering noticeably.

Long-lived SSH connections would benefit from QUIC / HTTP/3 session survivability.


The fact that it takes so long to transition to IPv6, like many other obstructions to technological/social progress, comes down to economic and business gain for a certain small group of people and organisations.


I'd say that it's not that there's some gain causing people/organizations to be obstructive, but rather that IPv4 and its address shortage simply isn't a problem for the influential people and organizations; they're doing just fine and for them there is nothing that needs to be fixed. IPv6 is needed by "someone else" who can be safely ignored with no noticeable effect on the bottom line.


I propose the Willis law (are you allowed to name a law after yourself?):

The spread and reach of TCP/IP will accelerate at a pace faster than that of the transition to IPv6 indefinitely, therefore we are saddled with IPv4 until the end of time.


In fact, we know that IPv4 lives in the observable universe. You can travel at the speed of light towards IPv6, but the expansion rate of space-time itself (you can think of space-time as a series of tubes) is not limited by the speed of light. Thus, IPv6 is only theorized to exist, in order to explain other physical phenomena. Worse, even if it were proven to exist by a distant observer, there's no way for them to relay that information back to us because our routers here on Earth would drop the packets.


It's just plain cost. Reworking things takes engineering time, and engineers need to be paid.

If I were to list impediments to the technical development of the Internet, local telco monopolies would be much higher on the list, but that's another kettle of fish.


The main issue with IPv6 adoption is people who didn't learn IPv4 before NAT, and want to use the hammer (IPv4 NAT) for everything.


A hammer that works fine today, and that you already own, is better than a machine tool tomorrow that you'll have to pay for.


I wish we had an idiom for this. Maybe something bird-related.


Also, developers who don't know how to get their daemons to listen on multiple interfaces at once.


That's not necessary; listening on the wildcard IPv6 address without the "IPv6 only" flag enabled on the socket is enough. An IPv4 connection arriving at it is mapped to a special IPv6 address, so the code doesn't have to care about IPv4.

Unless you were using Windows, where IIRC the IPv4 and IPv6 stacks were separate, and a single socket couldn't be used to listen to both at the same time. (This might already have been fixed in more recent Windows releases.)
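
As a concrete illustration, a minimal Go sketch of that single-socket, dual-stack listener (behaviour as on Linux; the Windows caveat above still applies, and the port is arbitrary):

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// A wildcard listen without the IPv6-only flag accepts both families on
	// one socket; Go's plain "tcp" network does this for you on Linux.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// IPv4 clients arrive as IPv4-mapped addresses (::ffff:a.b.c.d at the
		// socket level; Go prints them back in dotted form), so the accept
		// loop never has to care which family the client used.
		fmt.Println("client:", conn.RemoteAddr())
		conn.Close()
	}
}
```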


People wanted to change the version bitfield from 4 to 6 and increase the number of bytes in the IP address field. Simple, KISS.

Instead of that we got a fragile, backward-incompatible, unnecessarily complex, jack-of-all-trades protocol. And people are surprised that even decades after the release of IPv6, it is being avoided like the plague.


This comes up often. What do you think would happen if you sent an “IPv4+” packet? All the ossified equipment would drop it. So you have to replace all this equipment, ideally _once_, so we ought to make it count.


Consider that NAT64 is able to translate between IPv4 and IPv6 at the packet level. If the protocols were as different as you suggest, this wouldn't be possible.


Doesn't IPv6 remove broadcast, thus removing DHCP and ARP?

I mean, hosts need to be 'dual stack' to support IPv4 and IPv6 together - whereas TheLoafOfBread's proposal would allow support of both with one backward-compatible stack.


NAT64 is never in a position to need to translate DHCP & ARP, those live fully on one side of NAT64 ("4" side for DHCP & ARP, "6" side for SLAAC/DHCPv6 & NDP).

NAT64 is involved on the level of TCP & UDP, not lower level protocols.


DHCP and ARP use 32-bit addresses, so those protocols had to change regardless.

I would challenge you to find someone who knows the difference between Ethernet broadcast and multicast without looking up the answer. They are very similar mechanisms.


IPv6 lacks back compatibility with the current standard. It cannot succeed.


It doesn't. There are millions of IPv6-only devices that can connect to IPv4-only sites. See US mobile carriers.


So.... which will come first, "the year of IPv6" or "the year of linux desktop"? :)


I know you are joking but I don't get the longevity of this joke. I've been running Debian with XFCE for more than 12 years, on PCs custom-built and branded, and on laptops both of enterprise quality and retail junk. Heck, our entire B2B commerce business runs the same setup, with far fewer (read: zero) issues compared to when we were using Windows 98, 98SE, XP SP3, 7, Vista, and 10, at which point we declined Microsoft's telemetry and ads and forced OneDrive shenanigans and promptly switched everyone to Debian+XFCE.

99.9% of the time, modern Linux works out of the box, every time.

So I'd say it's on par with Windows versions, except I practically never need to download and install drivers, or to face ads.

Maybe I'm on a different planet on which the year of the Linux desktop has arrived for longer than a decade.


This. I guess one of the main obstacles to general Linux adoption is the lack of hardware with it preinstalled in ordinary stores.


The year of IPv6 is already here, my home is already on IPv6 as is my mobile provider. Normal ISP and mobile provider in New York, not anything particularly geeky.

From Google you can see that IPv6 usage is around 40% now and steadily increasing: https://www.google.com/intl/en/ipv6/statistics.html


IPv6 is waiting on stubborn cloud networks and CDNs. Most of the edge now has it. My guess is that it’s mostly reluctance to introduce complexity when most customers are not asking for it. The biggest holdouts seem to be connected to Microsoft, with GitHub being one of the most relevant.

The Linux desktop will arrive if we just wait for Microsoft to keep making Windows worse. Linux doesn’t have to get better. MS just needs to incorporate more ads.


>IPv6 is waiting on stubborn cloud networks and CDNs. Most of the edge now has it.

Depends on the ISP. E.g. a lot of Verizon/Frontier FiOS residential customers don't have IPv6. Google's statistics say the USA is ~47% IPv6; Frontier FiOS is lower than that.

Example recent thread: https://old.reddit.com/r/frontierfios/comments/v9azjj/june_2...

Google's statistics: https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...


I wish we could start over and redesign everything from Ethernet up to TLS with the lessons we now understand. So many layers could be merged, security could be so much easier, IP addressing hassles could be unnecessary.

But all the stuff that seems obvious now couldn't have been learned without the decades of kind of terrible hackery that is OSI and the associated awful stuff like DNS.

I'm not sure what the moral of that story is or how anything could have been done any better. Maybe the software APIs could have been less coupled to the actual protocol, they could have designed things so app code had to be agnostic to the number of bits in an IP address, etc.


I wonder if we will see more of the network layer move into the application layer, like what I think happened with QUIC.

I’m not really a network guy, but the thing I find interesting about QUIC is that it’s upgradable. Normalising regular browser updates was a godsend for the web, while TCP/IP remains hard to upgrade because you still, in 2023, have to upgrade your whole OS.

Protocols like TLS that sit outside the OS seem to have been much more dynamic, and that too is the promise of QUIC. It might be interesting to find operating system primitives that enable use of the networking hardware without having to implement the protocol itself. Although that stuff is above my pay grade.


The need to upgrade the OS is only a small part of the problem.

On the one hand, the fact that TCP is unchanging has led to network cards supporting TCP directly. You can shove a large segment of data into the network card and it will split the data up into TCP segments and transmit them. There are also big content companies that let the kernel do all TLS encryption; in some cases that can also be offloaded to hardware.

The big problem for TCP is all the boxes on the internet that understand TCP and don't understand TCP extensions. That could be home routers doing NAT, firewalls, etc. Those boxes are not only hard to upgrade, they also break things. QUIC fixes this by making sure that as much as possible is encrypted.

Some ISPs monitor TCP headers and from retransmissions they can figure out where problems are in the network. QUIC will take that ability away.

So QUIC is a mixed blessing.


I disagree.

If there’s a financial incentive (better performance or whatever) then the server side will upgrade; it’s a financial and timing decision with strong motivators.

But the client side problem is 5-6 orders of magnitude larger than the server side problem - and to make things worse, they basically don’t care.

People seem happy enough to upgrade their browser, but upgrading their OS is a big deal. Even I don’t like doing it, and I consider it important.

So I think a reasonable rule of thumb would be, the server side will take care of itself. But if you need your clients to reboot their computer to upgrade the network stack to improve your server performance, well, it ain’t gonna happen.


This is where a lot of high-performance stuff went in the last decade or so. HPC, HFT, AI training, several storage services, and clouds pretty much all do their networking in something other than a kernel. It turns out you can get huge performance gains by not using a one-size-fits-all networking system.

It only takes a few more steps for this model to trickle down to general applications, as HTTP and RPC stacks pick it up.


> I wonder if we will see more of the network layer move into the application layer, like what I think happened with QUIC.

https://lwn.net/Articles/169961/


Or the kernel becomes even less of a monolith on all fronts and more userland-like.

Both io_uring and eBPF would allow us to handle more complex tasks inside the kernel, reducing context-switch losses while still allowing the benefits of an upgradeable structure without having to upgrade the OS or even the kernel itself.


Yeah. It strikes me that the lack of a stable Linux kernel ABI is part of the reason why we can't easily upgrade bits of it. Upgrading a kernel module would be a good solution (could even be done by applications) but IIUC it's infeasible.

But still, it seems conceivable that a networking kernel module could talk exclusively to an adaptor shim that is part of the kernel.

idk, I actually haven’t so much as compiled a kernel since the late 90s, so I’m pretty much talking out the wrong end :)


> I wish we could start over and redesign everything ...

I wish we all could learn that "because it works now" is a valid reason to resist change, and that assuring back compatibility is a condicio sine qua non for every change that we want adopted by people at large.


"Because it works now" is a perfectly good reason, I was more talking about a hypothetical "wave a wand and change all devices overnight" scenario.

I think an IPv8 (apparently we skip odd numbers) could be a real practical thing one day though, because a lot of things really don't quite work that well. The classic "it's always DNS" meme seems to be very real; tying TLS to domains instead of IPs impedes anything on a LAN; and IPv6 has too much inconsistency in how many bits are allocated to customers, and would be much saner with a more structured ISP/Region/Subnet/DeviceChosenIDThatIsTheSameEvenIfYouChangeISPs scheme so every ISP got the same number of bits.

Insecure DNS doesn't need to exist outside of MDNS, and even MDNS could at least have pairing prompts. If all records are signed and everything is secured at the IP level (embedding key hashes in the IP), then certificate authorities don't need to exist either.

MACs don't really need to be a thing either outside industrial niche stuff, if we mostly just do IP traffic, that extra framing and the lookup/translation layer can go away.

Some level of onion routing could probably be built right into IP itself; there's not much reason an ISP or even the local wifi router needs to know the destination I'm sending a packet to. If there's a fixed hierarchical structure, it just needs to know the recipient's ISP, and the rest could potentially be encrypted at the cost of some negotiation setup complexity.


> I think an IPv8 (apparently we skip odd numbers)

A lucky few of us have had the pleasure of working with ST-II, AKA IPv5:

https://en.wikipedia.org/wiki/Internet_Stream_Protocol


> tying TLS to domains instead of IPs impedes anything on LAN,

TLS doesn't care what's in the certs, that's an application concern. IP addresses work fine in X.509 certs, but are discouraged because of the difficulty of validating that control of the IP will continue through the certificate validity period. But this wouldn't much help for your LAN use case, assuming you're using RFC 1918 addresses, because no one can prove control of those, so no publicly trusted CA should issue certs for them. Same thing if you use IPv6 link-local, although I don't know too many people typing those addresses; mostly people want to type in a domain name, and if you make a real one work (which is doable), you can use a real cert; if not, not.

> IPv6 has too much inconsistency in how many bits are allocated to customers, and would be much saner with a more structured ISP/Region/Subnet/DeviceChosenIDThatIsTheSameEvenIfYouChangeISPs scheme so every ISP got the same number of bits.

DeviceChosenID that is the same everywhere is kind of a huge privacy thing that we already had and mostly rejected. Giving every ISP the same number of bits is silly anyway; Comcast needs more bits than [ISP that only serves California], and hierarchical addressing is lovely for humans, but not necessarily a great way to manage networks; each regional portion of the network is going to end up doing its own BGP anyway, at which point the hierarchy doesn't make a huge difference.

> Some level of onion routing could probably just be built right into IP itself, there's not much reason an ISP or even the local wifi router needs to know the destination I'm sending a packet to, if there's a fixed heirarchal structure, it just needs to know the recipients ISP, and the rest could potentially be encrypted at the cost of some negotiation setup complexity.

Source routing exists in RFCs but was quickly dropped. There's a lot of security reasons, but also it just doesn't work that well. The destination is needed to make the best routing decisions.

Say you're in Chicago, sending to me near Seattle, and our ISPs peer in Virginia and Los Angeles (which is a bit contrived, but eh). If you send to the nearest peering, traffic will go east to Virginia, then west to Seattle. If you look up by destination, you'll more likely go west to LA, then north to Seattle and total distance will be much less.


You could prove control of an IP address if addresses were longer and the DeviceGeneratedUniquePart was a hash of the certificate.

If you need to renew, you just get a new IP and tell DNS about it, if you're using fixed IPs and can't easily renew without manual work, you're still better off than no encryption.

Instead of proving control of a domain, you'd be proving that an IP is one of the correct ones for a domain.

Tech is advanced enough now that we don't need to conserve every single bit. What you lose in efficiency, you gain in easily being able to tell what part of the IP corresponds to one customer for antiDDOS ratelimiting and the like.

Source routing would probably be best as an optional feature. But if you had a hierarchy with a part explicitly tied to the region, you could source route to the county level at least or even data center level without needing to reveal the exact destination.


you don't understand the power of the Lindy effect. All of the dated infrastructure we have now has ultimately survived the test of time.

https://en.wikipedia.org/wiki/Lindy_effect

by the way DNS is wonderful.


Lindy is pretty much just natural selection for ideas, right? It doesn't prove they are optimal.

It does seem to suggest that frequently switching protocols the way we switch web frameworks would be awful though. The benefits of having any universal standard at all can outweigh the flaws in just about anything, even IPv4 or in some cases even analog stuff.


Exactly. It's like the laryngeal nerve of the giraffe. It did the job for millions of years, but there is no good reason other than history for why it still needs to loop down around the aorta.

https://www.mcgill.ca/oss/article/student-contributors-did-y...


> Lindy is pretty much just natural selection for ideas, right? It doesn't prove they are optimal.

You missed the point. Lindy law proves that being optimal is not necessary for an idea to be successful. Your association with natural selection is valid from this point of view, too.


Then 'we' would miss out on the learning, inviting larger mistakes, more likely at a grander scale. That starting over isn't viable is powerful knowledge. The network, if you like, is a learning organism, with chaotic knowledge distributed and embedded within it.


I'm curious, what exactly would you redesign? Seems to me that it's intrinsically necessary to have DNS to translate between human-readable names and machine-efficient addresses. What am I missing?


The main thing is I would collapse most of the layers.

Something like DNS is necessary, but I'd use individually digitally signed records except for mdns, and if you couldn't reach upstream, you wouldn't ever just drop stuff out of the cache.

I'd get rid of certificates entirely though. Without any insecure DNS, you wouldn't need certificate authorities at all.

I'd also change IP addresses to be longer, maybe even 256 bits. 48 would be reserved for identifying an ISP (And large ISPs would have different codes for different regions), 16 would be reserved for whatever the ISP wanted to do with them, 32 bits would mark a customer, 16 would be for a customer subnet, and the rest would be chosen by the device, based on the hash of a public key.

All communication would be secured at the IP level, if you have the right IP, it's all good, if you want to refresh your "certificate" you just get a new IP and tell DNS about it.

Since the last section of the address is enough to uniquely identify the device, you can move between providers while still keeping your cryptographic identity.

Which also means that any kind of decentralized DHT routing can work transparently, the first half of the address is just routing info you can ignore if you have a better route, like being in the same LAN.

DNS servers could also let you look up the routing info given just the crypto ID of a node, so you don't need a true static IP.

You could also do the same lookup with a local broadcast, and then cache the result for later use, so your phone can find your IoT devices, and then access them on the go with the cached routing info, if DNS is down.

A special TLD could exist that's just the crypto ID in base64, and that doesn't require any registration. You can't spam too many of them, because you can reliably identify customer numbers from the IP, unlike with IPv6, which makes that rather hard.
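
A toy sketch of the key-derived device part of that hypothetical addressing scheme in Go; the sizes follow the proposal above (48+16+32+16 routing/customer bits out of 256), and none of this corresponds to a real protocol.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(pub)
	// A 256-bit address minus 48+16+32+16 routing/customer bits leaves 144 bits
	// (18 bytes) for the device-chosen, key-derived part, so the address itself
	// commits to the node's public key.
	deviceID := sum[:18]
	fmt.Printf("device part of address: %x\n", deviceID)
}
```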


It's really hard to prove anything about some things being different in this space, I'm certain it's not a unilaterally good thing.


This was still raging when I was a student. From my recollection I _heard_ a lot more about OSI, but everything I used had something proprietary (e.g. NetWare, Token Ring) or TCP/IP.

The people supporting TCP/IP had a head start and out-executed the OSI committees and it wasn’t even funny.


Yeah. Top-down vs. agile. Cathedral vs. bazaar. "The right thing" vs "worse is better". And, as you said, the agile/bazaar/worse-is-better camp totally out-executed the top-down/cathedral/do-it-right approach, to the point that OSI is only a theoretical model at this point, and TCP/IP is running in everything from supercomputers to washing machines.

Another one of the "well-done in theory, but left in the dust by reality" approaches: X.25.


As someone who participated at the periphery of the IETF, I resent the association with the ‘whatever goes as long as it only takes two weeks’ software planning movement. They did understand their limitations, and had a huge focus on practicality (running code), but otherwise they really did try to do the right thing. And aside from some obvious warts, I think they did an amazing job


I never said what you're resenting, nor meant to imply it.

But IETF, with their emphasis on practicality, on "rough consensus and running code", was far more on the agile end of the spectrum than OSI was. IETF would recognize a problem, have an RFC, and have working implementations before OSI had done anything. This meant that if you wanted something, your first chance to get it (often by years) was on the IETF road, not on the OSI road. Repeat that a bunch of times and the people who could benefit from new things all moved to the IETF standards.


Ha! I had worse, when I was a student (in '90s) I was taught some crazy stuff - OSI model but without a single word about the OSI protocols (like X.224) but with TCP/IP pulled over instead. So I was unironically taught that e.g. TCP is a layer 4 on the OSI model and HTTP is layer 7, etc. As if the model was still alive and relevant somehow.


They were still teaching that one decade ago, maybe never stopped.


Much the same here. I dug up this page in Wikipedia when I was reminiscing about my university's network. In my first year there was no "internet" access per se, but you could send email from the Vax VMS accounts via JANET to other sites.

Then the next (?) year there was access to the "PAD" (Packet Assembler Disassembler) to JANET, from which one could issue a command to connect to an X.25 address. I still remember the number of NSFNet relay, 00004001018057 [1], which in turn allowed you to telnet to other internet services. I think I logged into Nyx and a few other BBS-like things.

Then I did an industry year at ICL, a British computer company long-since engulfed by Fujitsu, where they were also X.25 focused for big networks and Novell Netware focused for small networks - but the web was starting to be a big deal and I couldn't get anyone to listen. I have an anecdote about downloading the SLS Linux distro via an archive re-mailer that I won't bore you with right now :)

Then, in my final year back at the university, the web seemed to explode and the X.25 stuff had more or less disappeared.

In retrospect it's amazing how quickly this changeover happened, even if it had been bubbling away in the background for a long, long time. From my perspective the WWW looked like the catalyst that pushed everything in favour of TCP/IP.

[1] If I remember rightly, the adjacent address 00004001018056 was another Vax somewhere that hosted a bunch of LaTeX stuff?


It was easy: the TCP/IP folks had BSD and it was running in the universities, and anyone who wanted to sell to them had to run BSD or else support TCP/IP too because it was the only easy way to integrate into those universities' networks.

Then people who went to school in the late 80s took that to $WORK in the early 90s, and by the mid-90s everything that wasn't TCP/IP had died. Novell's IPX, for example, was a thing still in the mid-90s, but a dying thing. Winsock for Windows 95 was the last thing that was needed to make TCP/IP rule -- after that there was just no way to do anything but TCP/IP.

It was a matter of network effects.


Same here, a high school student during the '80s/'90s; technical magazines were full of this stuff.


Not saying that it's strictly how it has to be done but historically in OT environments the choice was more between (Ethernet) CSMA/CD and token ring (not so much Netware, it wanted to do its own thing) on a segment that IP was transiting.


This explains my childhood video game woes in the 90s.

I'd have friends over, and we'd want to play DOOM multiplayer. The only problem was that as kids we had no idea how networking worked. We'd fiddle around in the Windows networking setup, and there'd be a list of protocols like IP and IPX. Words like Token Ring would appear. Just a blur that a 15-year-old kid had no chance against.

Curiously despite being a network with only two computers, you couldn't just assign any IP address you wanted. The mask was also an interesting thing, examples would have a bunch of 255s but sometimes a smaller number like 248, and then zeros. I never figured this out until I grew up.

So then we'd goof around a bit more and set it to IPX, which sounded related. Somehow we got that to work eventually.

Nowadays if you just understand IP, you're mostly fine. It's probably easier to learn as well without competing protocols with similar jargon.


I feel like late 90’s/early 00’s, on the other hand, were the ideal time to learn a little bit about networking as a kid. You only really needed to learn about IP addresses and NAT.

I vaguely remember Age of Empires had some options for more esoteric network options, but it was pretty obvious pretty quickly that they were hopeless.

Earlier, as you mention, networking was for grown-ups to figure out only. Nowadays you don’t need to learn anything about networks to play most AAA games (I bet some folks learned a lot setting up Minecraft servers though).


Yep, IPX just worked. It was sort of like how IPv6 auto-assign is now. Except IPX worked.


Doom didn’t even use the Windows networking stack. You just chose IPX from the multiplayer setup menu.


ah yes, and the null modem cable.


I recall the original Command & Conquer and dialling my friend's phone number to play 1v1 multiplayer.


I remember learning about OSI in college in ... 2000.

I didn't understand why it wasn't a mere footnote. The professor came across as a sore loser, because it was unambiguously clear that OSI was firmly dead.


… and today we are going to try to fit Ethernet, TCP/IP, and so on onto the OSI reference model.


No, we don't. The OSI reference model keeps getting referenced (hah) in the network courses but it's merely lip service; nobody's actually taking it seriously. Just look at how mobile networks bend over in ridiculous ways to support TCP/IP.


I don't know if I'd say it's universally lip service. Perhaps that's true in some areas of tech, but in my corner of the world, building an ISP, practically all of the network hardware OEMs I'm exposed to (mostly Cisco, but also some Juniper, Arista, and Zyxel) use the OSI model predominantly in their product descriptions and documentation.


Actually we do, e.g. in ISO TC22 SC31, as part of Automotive Ethernet.


IMO, the worst thing to come out of OSI was the use of numbers for referring to layers. It seems obfuscationist to say "layer 2" or "layer 3" but those seem to have lodged themselves into the industry. For those who only occasionally deal with network stacks "link layer" "internet layer" and "transport layer" are far more mnemonic.



In the mid-1980s, I took the OSI course.

That's a bunch of time I'll never get back.

That was nothing, though, compared to the X.400 course...


OSI concepts are still important, even if none of the layers map to a specific set of protocols and things are blurry near layer 1 and 2.

For example, the fact that presentation and application are separate layers, and that compression/encryption are better in their own layer (presentation) rather than totally integrated in the application.


The OSI presentation layer has nothing to do with compression nor encryption. The top three layers of OSI were much more about modelling mainframe terminal applications than anything modern.


> For example, the fact that presentation and application are separate layers, and that compression/encryption are better in their own layer (presentation) rather than totally integrated in the application.

CRIME attack enters the chat

Compression is not better in its own layer. See the CRIME attack on TLS compression.

Encryption is better at as low an end-to-end layer as possible so that we can have a hope that we can off-load the work to hardware.


We need a time machine so we can extirpate from human memory the monstrosities that are x.400 and x.500 naming.


And why is "big endian vs little endian" a thing in computers and networking? Because it's an homage to Jonathan Swift. It references an apocryphal war about which end of a hard-boiled egg to lop off; no doubt a much more vicious affair than the Haiti / Dominican spat over the pronunciation of "parsley".

Parts of the internet are still stuck in the 1980s, my pet peeve being the DNS resolution protocol which calls on recursing resolvers to try TCP after receiving a successful UDP response with TC=1. (Where did this become "only"? When did it become ok not to support TCP?)
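
For reference, the behaviour being described, retrying over TCP only when a UDP reply actually comes back with TC=1, looks roughly like the sketch below. It leans on the third-party github.com/miekg/dns package as an assumption, and the resolver address is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.com"), dns.TypeTXT)

	c := &dns.Client{Net: "udp"}
	r, _, err := c.Exchange(m, "192.0.2.53:53") // placeholder resolver
	if err != nil {
		log.Fatal(err)
	}
	if r.Truncated {
		// TC=1 arrived intact, so the spec says: retry the same question over
		// TCP. If the (possibly fragmented) UDP reply is silently dropped, this
		// branch never runs -- which is the complaint above.
		c.Net = "tcp"
		if r, _, err = c.Exchange(m, "192.0.2.53:53"); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println(r.Answer)
}
```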

Seems like IP works just fine on top of ATM. Maybe the MTU isn't the same as ethernet; too bad.

By the way, UDP frags are bad; don't do that. Guess it's not completely transparent after all, and relying on a kludge mechanism as the answer to knowing the terrain (MTU) isn't the One True Way.

TCP is a reliability mechanism built on top of IP. It has enough popularity to have its own protocol designator so that you can't say it's "built on top of UDP". Don't want that mechanism? Don't use it.

Want messaging? Use a datagram protocol; that would be UDP (unless you decide to use your own special protocol ID). It's unreliable? Do something about it. (See where I'm going with this?) Craft your messages so that they fit in a single packet. Invest in some decent networking gear that reports dropped packets and allows traffic shaping. Bonus: get one-to-many and transit-based destination editing.

If you refuse to compromise on your abstraction and are too cheap to afford SDN, create a virtualization layer for your network traffic like everybody before you... it's virtual turtles all the way down, or hasn't that occurred to you?


You may want to talk to operators of DNS servers to find out why DNS is mostly UDP. From a software point of view it hardly matters. UDP has some quirks, TCP has some issues, not a big deal.

The big deal for operators is resource consumption and latency. In the vast majority of cases where the UDP reply is not truncated, using UDP is both cheaper and faster.


I have worked for Paul Vixie. (I've met Mockapetris and Liu.) Even Vixie agrees UDP frags were a bad idea for The DNS.

Given the context of where we're discussing this, mind if I ask: Why is UDP so attractive for DNS operators? Is it that it encapsulates the message query / response paradigm so well?

Why isn't encryption of greater concern? How can DoT or DoH be "faster" than UDP? Is it really the protocol or a misattribution of causality actually based on LFNs, buffer bloat, and mis- or mal- construction of the tenet that UDP traffic can be dropped with no consequence (instead of employing traffic shaping, for instance).

Honestly, lay it on me: explain to me about messaging versus pipes using DNS as an example.


Fragmentation is a complex subject. If avoiding fragmentation was obviously good, then UDP packets would always be sent with the don't fragment (DF) bit set. In reality, that would make some situation much worse than allowing fragmentation to happen.

DoT and DoH are certainly not faster than UDP. It is only in the long tail that DoT and DoH work better.

If encryption is required for DNS, then obviously UDP and plain TCP are not suitable. But operators of authoritative DNS servers are very reluctant to support DoT. The IETF is still trying to hash out how to use DoT between recursive resolvers and authoritative servers.


> If avoiding fragmentation was obviously good

In the general case frags are always bad, and instead of blocking DNS over TCP your firewall should be alerting on them.

In the case of the DNS resolution protocol, UDP frags are double-bad. The protocol (not updated since the 1980s) specifies that TCP retry should only occur if a UDP response is received with TC=1; if the UDP response is dropped, TCP retry never occurs. In the case of a UDP frag if a portion of the original datagram is dropped the response is never reassembled, TC=1 is never observed, and TCP retry never occurs.


Thanks. Apropos "virtual turtles all the way down" I was expecting something along the lines of the well-recognized historical practice of site to site VPNS and tunnelling for DNS backhaul.

It's not fair to conflate "DNS operators" with the relationship between authoritative and recursive operators.

The (main) problem with encryption has to do with context. In order to achieve privacy, key and/or secret material needs to be negotiated between the ends and this context needs to be retrievable for reference. There is no provision in the DNS for that. There is no provision in the transport per se either. I argue that this is appropriate and correct, that generating and negotiating this material is a separate concern and developing the protocols to do so proceeds on a different track (QED).

A better term than "connection" for the concept we are about to discuss would be "circuit". Once in possession of the appropriate key material (and the out-of-band signaling this implies) it's possible for the parties to exchange individually encrypted messages in the open, but this ignores network issues such as dropped packets and fragmentation, to name a couple. This could be dealt with, but it is not a general solution.

If you're going to solve problems it's generally accepted that it's prudent to solve them generally: hence the notion of a circuit. (Frags and dropped packets are other general problems to be solved by a circuit implementation.)

It's easy to see that such a generalized circuit is a tunnel and that UDP (or TCP) packets of arbitrary size (up to architectural limits, but significantly larger than either ATM or ethernet frames) can transit such a tunnel with guarantees of no loss and no fragmentation.

DNS specifies a very lightweight convention of its own for encapsulating UDP traffic in a TCP tunnel: the payloads are prepended with a (2-byte) length word [0]. The tunnel protocol (carefully) doesn't even specify that responses via this tunnel need to preserve the ordering of requests. Bearing in mind that context needs to be preserved for each (active) request, the context required for the tunnel itself is amortized over the number of requests using the particular tunnel instance or circuit. (No need to worry about frags; all messages are preserved up to the architectural limit specified by the underlying DNS protocol.)

So, this is one example of solving the message-vs-stream dilemma in the real world. We can see that IP-over-IP is a real thing and that DNS provides a point solution for eliding frame size issues but that more general solutions also exist which also support e.g. encryption.

The key architectural consideration is mostly who maintains the circuit context and where it is maintained.

[0] Yes of course the redundant IP header is omitted for the encapsulated messages.
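
A minimal sketch of that 2-byte framing in Go (the convention RFC 1035 uses for DNS over TCP; the function and package names here are mine, not from any existing library):

```go
package dnsframe

import (
	"encoding/binary"
	"io"
)

// WriteMsg prepends the 2-byte big-endian length word that DNS-over-TCP uses
// to delimit messages on the stream.
func WriteMsg(w io.Writer, msg []byte) error {
	var hdr [2]byte
	binary.BigEndian.PutUint16(hdr[:], uint16(len(msg)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// ReadMsg reads one length-prefixed message back off the stream; note that the
// framing itself implies nothing about the ordering of responses relative to
// requests.
func ReadMsg(r io.Reader) ([]byte, error) {
	var hdr [2]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	msg := make([]byte, binary.BigEndian.Uint16(hdr[:]))
	_, err := io.ReadFull(r, msg)
	return msg, err
}
```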


I really wish we used more technologies coming out of DEC. Only now are we getting stuff like ECN. But I would have really liked to see TUBA be IPv6.


At least we have Windows NT, the spiritual successor to VMS! (Only deep inside, of course.)


if you increment each letter in VMS by one you end up with WNT


TIL:

> During a dispute within the Internet community, Vint Cerf performed a striptease in a three-piece suit at the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything"; according to Cerf, his intention was to reiterate that a goal of the Internet Architecture Board was to run IP on every underlying transmission medium.


And now we will have the @Protocol vs ActivityPub war ;P


Related Krazam video: https://youtu.be/NAkAMDeo_NM


Sounds like what's happening in blockchain scene right now.


This is like OG nostr/fediverse/bluesky battle royale



