I'm not saying that a non-hourglass approach would've been better, but I often find myself dreaming of a world that's more hospitable to alternatives to IP (in both senses of the acronym, though I'm talking about Internet Protocol here), and it's IP's position at the thin waist of the hourglass that makes it resist change.
My problem with IP is that it's not partition friendly. You have to keep your stuff powered on and connected just in case two parties want to have an instantaneous conversation right now, otherwise they can't talk at all. That's not too heavy a burden if you're an ISP, but it's sort of a problem in cases where there are fewer resources available. Like maybe the bits will get there, but they're going to have to wait for the sun to come up and shine on a certain solar panel, or they're going to hitch a ride on a certain delivery truck that's heading that way anyway.
If our software could tolerate reductions in service like this, if it had more modes than up-entirely-and-instantaneous and down-entirely, then there would be fewer incentives for misbehaving governments to attempt to shut it off every now and then, and it would be more likely to be helpful in the event of some kind of infrastructure-damaging disaster.
A big difference between now and earlier is the timeouts... they weren't there before. When I worked at a customer's place back in 1997-1998, I used a CVS server in my home country as a code repository. I stayed at the customer's place for months, updating code etc. Back then, with no timeouts, I could simply start an up- or download at any time, and because the net was pretty slow - I used a 19200 bps connection via a mobile phone - it would take its sweet time. At day's end I simply closed my laptop, shut down the phone connection, and went back to my rented apartment. The next day I came back to the office, opened the laptop, connected the phone and re-connected to my home office, and the CVS update continued as if nothing happened.
Now, of course, if anything is idle for two minutes it's basically dead, and if anything disconnects the connection itself is aborted. So now everything is indeed forced to be instantaneous. (Of course there are kind of reasons for that... with no timeouts, any malicious actor could tie up all of a server's ports, for example, but still...)
That might be fixable -- what kind of timeouts are you hitting, and for what application protocols?
There's no concept of "idle" at the networking (IP, TCP, UDP) level. There are packet response timeouts at the kernel level, but that has always been true.
Some applications (e.g. sshd) also have session idle/timeout settings, but they are just administrative hygiene decisions and usually easy to override. Definitely easy if you control the server, and the ssh client has keepalive settings (e.g. ServerAliveInterval) if you do not.
> with no timeouts any malicious actor could tie up all a server's port, for example
This might be a misunderstanding. Ports do not get tied up. A server can be overwhelmed with traffic, but that requires packet volume (e.g. DoS attack) and it doesn't matter what ports are used.
All application protocols today have very short timeouts, typically in the few tens of seconds range. Most TCP stacks have a 10 minute timeout by default, if I recall the value correctly - if a sent packet has not been ACKed within that range, the connection is aborted. DNS resolvers typically have a similar timeout.
And as for servers, while you're right of course that server ports don't get tied up, there is typically some memory cost to maintaining an application-level connection, even if idle. I don't know of any common HTTP server, for example, that, in its default settings, doesn't close connections after some short idle time, for precisely this reason. More sophisticated attacks, like Slowloris, exploit servers that don't. A server which doesn't time out idle connections can easily be DoSed by a handful of compromised clients.
That should be more common in CGNAT or mobile networks than home/office networks. Although I don't know what residential ISPs are up to these days. No doubt some sketchy practices have crept in.
For that matter, it could be a low-RAM (carrier-provided?) NAT gateway (or very high network activity) at the premises, blowing up the NAT table.
Still, could be fixable. If the application protocol has some kind of keepalive heartbeat, or if local hardware can be configured or upgraded.
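A minimal sketch of that kind of workaround, assuming Python on Linux (the TCP_KEEP* constants are Linux-specific): enable TCP keepalives so the NAT box sees periodic traffic and keeps its table entry alive.

    import socket

    def enable_keepalive(sock, idle=60, interval=15, count=4):
        # Send a probe after `idle` seconds of silence, then every `interval`
        # seconds, and give up after `count` unanswered probes.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

    sock = socket.create_connection(("example.com", 443))
    enable_keepalive(sock)

That keeps long-lived idle connections from being silently dropped by stateful middleboxes, at the cost of a little background traffic.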
IP doesn't require connection state. If you want resilience in the face of loss you need to add more, like holding the data until a transmission has been acknowledged as received. TCP does that fine for modest packet loss and short outages.
For longer timescales, I don't think you want some sort of store-and-forward baked into the normal network protocols, because meaning decays with time. Imagine I want to watch a movie, but the service is down for a week, so I request it every night. The network stores those 7 requests and delivers them when the movie service comes up again. Do I really want to get 7 copies of a movie I may have given up on? Or even one? That question isn't resolvable at the network layer; it requires application level knowledge. Better to let application logic handle it.
I agree with your concern about government control, but I don't think the IP layer is a place to address it. I think the work around ad-hoc and mesh networks is much more interesting there. That would also drive more resilient applications, but I think that can be built on top of IP just fine.
Consider a society that has left Earth. If light takes minutes to travel between locations, is a chatty protocol what you want?
Or do you need a whole host of different application-level solutions to this problem?
There's a different lens to look at this through. There's an old tech talk that got me out of the IP-shaped box a lot of us are trapped in: https://www.youtube.com/watch?v=gqGEMQveoqg
I got Gemini 1.5 Pro to summarize the transcript, which I butchered a bit to fit in a text box:
The Problem:
The Internet is treated as a binary concept: You're either connected or not, leading to issues with mobile devices and seamless connectivity.
The focus on connections (TCP/IP conversations) hinders security: Security is tied to the pipe, not the data, making it difficult to address issues like spam and content integrity.
Inefficient use of resources: Broadcasting and multi-point communication are inefficient as the network is unaware of content, leading to redundant data transmission.
The Proposed Solution:
Data-centric architecture: Data is identified by name, not location. This enables:
Trust and integrity based on data itself, not the source.
Efficient multi-point communication and broadcasting.
Seamless mobility and intermittent connectivity.
Improved security against spam, phishing, and content manipulation.
Key principles:
Data integrity and trust are derived from the data itself through cryptographic signatures and verification.
Immutable content with versioning and updates through supersession.
Data segmentation for efficient transmission and user control.
Challenges:
Designing robust incentive structures for content sharing.
Mitigating risks of malicious content and freeloaders.
This all is correct, but it's not a reason to abandon IP. It's a reason to understand its place.
Currently IP does a good job of isolating the application layer from specifics of the medium (fiber, Ethernet, WiFi, LoRa, carrier pigeons) and provides a way of addressing one or multiple recipients. It works best for links of low latency and high availability.
To my mind, other concerns belong to higher levels. Congestion control belongs to TCP (or SCTP, or some other protocol); same for store-and-forward. Encryption belongs to Wireguard, TLS, etc. Authentication and authorization belongs to levels above.
Equally, higher-level protocols could use other transport than IP. HTTP can run over plain TCP, TLS, SPDY, QUIC or whatnot. Email can use SMTP or, say, UUCP. Git can use TCP or email.
Equally, your interplanetary communication will not speak IP packets at the application level, no more than it would speak Ethernet frames. Say, a well-made IM / email protocol would be able to work seamlessly over both ultrafast immediate-delivery networks and slow store-and-forward links. It would have a message-routing mechanism that could deliver a message to the next desk in an IP packet, or to another planet via a comm satellite that rises above your horizon, picks up and stores the message, then delivers it as it flies over the other side of the planet.
The point about data being identified by name and not location is quite strong, I think. It pushes concerns that lots of applications would otherwise have to handle under high latency and intermittent connectivity down into the network itself, rather than having them solved repeatedly, with minor differences, for every application. Encryption and authentication wouldn't be pushed up to higher levels - they'd travel with the data itself - and I think that's right.
I absolutely agree it's not a replacement for IP and that IP has its place. The point rather is to shift one's perspective on what IP is, the implicit constraints it has, and how under different constraints a very different model would be useful and IP would be a lot less useful there. Applications would not be dealing with symbolic aliases for numeric network locations because that wouldn't work.
Names need namespaces, else they cannot be unique enough. IPv6 is one such namespace; DNS (on top of IP) is another; email addresses (on top of DNS) are yet another. Those are hierarchical; the namespace of torrent magnet links is flat, and still works fine on top of an ever-changing sea of IP addresses. We already have mechanisms for mapping namespaces like that, and should reuse them.
I don't think IP is going to be outright replaced by other transport-level protocols right at the Internet's "waist", but it can be complemented with other protocols there, still keeping the waist narrow.
Consider the case where you've got a computer aboard a delivery truck and it's hopping on and off home wifi networks as it moves through the neighborhood. From prior experience it knows which topics it is likely to encounter publishers for, and which it is likely to encounter subscribers for. There's a place for some logic--it's not precisely routing, but it's analogous to routing--which governs when it should forget certain items once storage is saturated for that topic.
IP is pretty much useless here because both the truck (as a carrier) and the people (as sources/sinks), end up with different addresses when they're in different places. You'd want something that addresses content and people such that the data ends up nearest the people interested in it.
It's an example of a protocol which would be in the waist, were it not so thin.
The computer aboard the delivery truck can just broadcast every time it hops on to a new access point?
It might not have the authority to broadcast via every access point, so it will likely be very circumscribed, but that’s just a question of the relative rank and authority between the truck operators and access point operators, routing layers, etc… not a question of the technology.
Since even several round trips of handshaking across the Earth take only a few hundred milliseconds, and the truck presumably spends at least a few seconds in range.
> even several round trips of handshaking across the Earth takes only a few hundred milliseconds
then I'm not sure why you'd bother using the delivery truck as data transport anyhow.
I'm more interested in the case where such infrastructure is either damaged, untrustworthy, or was never there in the first place. If there was a fallback which worked, even if it wasn't shiny, there'd be something to use if the original went bad for some reason.
I took from your description that these broadcasts were being forwarded by other parties around the planet. But in the alternate reality I'm trying to sketch, the one where something quite different from IP is in the thin waist of our protocol stack, something content-addressed and not machine-addressed, nodes only select data from their peers if they're interested in it for some reason. If you've got a network of people so motivated to share what you have to say that it shows up on the other side of the planet near instantly, with no time spent validating it for accuracy or whatever other propagation criteria are relevant for that data, then you're likely a very influential person. That's the unlikely situation I was talking about.
I think you lost me at the point where the truck was hopping on and off home wifi networks. That doesn't really match with our current notion of local networks as security contexts. I'm also not clear about exactly who the truck would be talking with here, or what it would have to say. Maybe you can expand on that part?
Well, in the world that grew up around IP addresses, in our world, you need to have security contexts like that because somebody can just reach in from anywhere and do something bad to you. But if I try to envision an alternative... one where we're not addressing machines, then I figure we're probably working in terms of users (identified by public key) and data (identified by hash).
In this world security looks a bit different. You're not open to remote action by third parties, they don't know where to reach you unless they're already in the room with you. Instead you've got to discover some peers and see if they have any data that you want. Then the game becomes configuring the machines so that data sort of... diffuses in the appropriate direction based on what users are interested in. It would be worse in many ways, but better in a few others.
So suppose the whole neighborhood, and the delivery driver, all subscribe to a topic called "business hours". One neighbor owns a business and recently decided that it's closed on Sunday. So at first there's just one machine in the neighborhood with the updated info, and everybody else has it wrong. But then the driver swoops by, and since they're subscribed to the same topic as the homeowners, their nodes run some topic-specific sync function. The new hours are signed by the business owner with a timestamp that's newer than the previous entry, so the sync function updates that entry on the driver's device. Everyone else who runs this protocol and subscribes to this topic has the same experience, with the newer more authoritative data overwriting the older staler data as the driver nears their house. But at no point does the data have a sender or a receiver which are separated by more than a single hop, and we trust or distrust data based on who signed it, not based on where we found it.
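Roughly, as a Python sketch (the names and framing are made up for illustration; signatures here use Ed25519 from the third-party cryptography package):

    from dataclasses import dataclass
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    @dataclass
    class Entry:
        key: str          # e.g. "joes-diner/business-hours"
        value: bytes      # e.g. b"Mon-Sat 9-17, closed Sunday"
        timestamp: float  # claimed by the signer
        author: bytes     # raw public key of the signer
        signature: bytes  # over (key, timestamp, value)

    def signed_payload(e: Entry) -> bytes:
        return f"{e.key}|{e.timestamp}|".encode() + e.value

    def valid(e: Entry) -> bool:
        try:
            Ed25519PublicKey.from_public_bytes(e.author).verify(e.signature, signed_payload(e))
            return True
        except (ValueError, InvalidSignature):
            return False

    def merge_topic(mine: dict, theirs: dict) -> None:
        # Last-writer-wins per key, but only for entries with a valid signature.
        # Trust comes from who signed the data, not from which peer handed it over.
        for key, incoming in theirs.items():
            if not valid(incoming):
                continue
            current = mine.get(key)
            if current is None or incoming.timestamp > current.timestamp:
                mine[key] = incoming

A real version would also need some policy about which authors are allowed to write which keys, but that's exactly the kind of topic-specific logic the sync function is for.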
I have an application in mind that I think would run well on it, but because our world has crystallized around point-to-point machine-addressed networking, like a million tiny phone calls, it feels like a pretty big lift whereas innovating at other layers in the stack feels much easier--a consequence of the thin waist.
I guess I'm not persuaded that a system like you describe wouldn't have its own lower layers that serve equivalent functions to IP. Meanwhile, what you describe sounds plausible to implement as application-layer stuff that would work on a wide variety of raw network implementations.
The magic of the existing hourglass model, and the entire premise of the end-to-end principle [1], is that you can build all those features on top of the infrastructure provided by IP. IP as the skinny neck allows service providers to focus on delivering one thing very well, and enables anyone who wants to use that infrastructure to build anything they can think of on top of it.
Couldn't this be implemented on top of IP? It seems to me this is already kind of how mail exchanges work, they hold mail until the recipient (or next relay in the chain) downloads it.
I was thinking this, as well. The brilliance of the IP hourglass lies in the fact that IP makes very few assumptions about anything above or below it, thus you can put just about anything above or below it.
IP has no notion of connection, and thus no notion of the reliability thereof. That's TCP's job (or what-have-you). I understand the top comment's complaint, but I don't think IP is the problem.
Another case: satellite internet. You can't assume a reliable connection to anything when "beam it into space" is an integral step in the chain of communication. Yet, it works! Whether or not a particular service is reliable is an issue with that service, not the addressing scheme.
IP is an envelope with a to-address and from-address. The upper layer protocols are whatever's inside the envelope - a birthday card, a bill. The lower-layer protocols are USPS, FedEx, the Royal Mail, etc. Blaming IP for partitioning problems is like blaming the envelope for not making it to its destination.
But the addresses, both from and to... they're both transient names for computers. Not for people, not for data, and they must remain unchanged for the duration of the interaction. That biases everything above that layer in significant ways.
You can't say "get me this data from wherever it might be found" because you have no means of talking about data except as anchored to some custodian machine, despite the fact that the same bits might be cached nearby (perhaps even on your device).
You also can't gossip with nearby devices about locally relevant things like restaurant menus, open/closing hours, weather, etc... you have to instead uniquely identify a machine (which may not be local) and have it re-interpret your locality and tell you who your neighbors are. You end up depending on connectivity to a server in another state maintained by people who you don't know or necessarily trust just to determine whether the grocery store down the street has what you want.
It creates unnecessary choke points, which end up failing haphazardly or being exploited by bad actors.
> But the addresses, both from and to... they're both transient names for computers. Not for people, not for data, and they must remain unchanged for the duration of the interaction.
This is true, but
1) it's an easy problem to solve in many cases (DHCP works great)
2) the exact mechanisms by which an address could be tied to a particular resource are innately dependent on the upper portions of the protocol stack, simply because the very idea of what a "resource" even is must necessarily come from there.
3) the exact mechanisms by which an address could be tied to a particular piece of hardware are necessarily dependent on the lower parts of the stack (MAC addresses, for example)
#2 and #3 illustrate that IP benefits from not solving these issues because doing so would create codependency between IP and the protocols implemented above and below it. Such a situation would defeat the entire purpose of IP, which is to be an application-independent, implementation-independent mover of bits.
The things you are complaining about are things that are handled by higher level human-scale protocols. They can and are layered on top of the existing low-level hardware-scale protocols.
You might think that those layers suck because they are layered on top of low-level protocols. If we just baked everything in from the start, then everything would work more cleanly. That is almost never the case. Those layers usually suck because it is just really hard to do human-level context-dependent whatever. To the extent that they suck for outside reasons, it is usually because the low-level protocols expose an abstraction that mismatches your desired functionality and is too high-level, not one that is too low-level. A lower-level abstraction would give you more flexibility to implement the high-level abstraction with fewer non-essential mismatches.
Baking in these high-level human-scale abstractions down at the very heart of things is how we get complex horrible nonsense like leap seconds which we then need to add even more horrible nonsense like leap second smearing on top to attempt to poorly undo it. It is how you get Microsoft Excel rewriting numbers to dates and spell correcting gene names with no way to turn it off because they baked it in all the way at the bottom.
It's true that IP itself has a significant problem with devices that roam between networks. I believe that there were some attempts to get a solution into IPv6, but they were abandoned (sadly - that could have perhaps been the killer feature that would have made adoption a much easier sell).
I don't think you're right about neighbors though. IP does support broadcast to allow you to communicate with nearby machines. Of course, in real networks, this is often disabled beyond some small perimeters because of concerns for overwhelming the bandwidth.
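For what it's worth, the local-broadcast piece is easy to demo; a sketch in Python (the port number is arbitrary):

    import socket

    def announce(payload: bytes, port: int = 50000) -> None:
        # 255.255.255.255 reaches hosts on the local segment only; routers
        # won't forward it, and many networks filter it.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", port))

    def listen(port: int = 50000) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("", port))
            data, addr = s.recvfrom(4096)
            print(f"heard {data!r} from {addr}")

Run listen() on one machine on the LAN and announce(b"hello neighbors") on another.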
I think the issue there isn't IP but TCP. IP is purely an addressing and routing solution. It doesn't care about partitions or outages or intermittent connectivity. TCP does care about that as a protocol, but you could envision - and in fact there are - other protocols that handle these things on top of IP.
Delay-Tolerant Networking (DTN) is one example: it started as a NASA project for deep-space networks where communications are intermittent, packets take minutes or hours to arrive, and bandwidth is slim. But it's seeing increasing interest for other applications as well.
Sounds like a more document centric network. Isn’t this something like a torrent where you are after a specific document and it is served by whatever has it and is online at the moment?
I have been thinking about building a distributed store-and-forward messaging system. I think you can't do normal packets over an unreliable network; you need to change the model and push the model into the applications.
Basically, modern email designed for disconnected operation. It would work normally on the Internet but keep going when disconnected. I was also thinking it would be useful for limited-bandwidth links like ham radio. I also think that model would be useful for communicating with other planets.
The problem is that it only works for some applications, and applications would need to change to use the new model and deal with the high latency. Which is a big hurdle when mobile and satellite make the Internet more available.
The way I'm imagining it, the applications can each provide a function matching some standardized interface to the network layer. So every time two nodes are near enough to synchronize, they loop through all of the applications they have in common and run the functions for those applications. Maybe some are maintaining a web of trust, others are verifying the content in some way before they decide to store it, who knows--the network layer doesn't care, it just has some node-operator-configured resources to allocate to that app: 1GB of storage and 15 seconds of sync time every 5 minutes or somesuch.
It would be up to the apps to write their sync functions such that old or irrelevant data gets forgotten, new or venerated data gets propagated, etc. Also they'd have to figure out how to make it worth everyone's while to actually dedicate those resources.
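Something like this interface, as a sketch (SyncApp, Budget, and run_encounter are names I'm making up, not an existing API):

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Budget:
        storage_bytes: int   # e.g. 1 GB allotted by the node operator
        sync_seconds: float  # e.g. 15 seconds per encounter

    class SyncApp(ABC):
        topic: str

        @abstractmethod
        def sync(self, peer, budget: Budget) -> None:
            # Exchange, validate, and forget data with a nearby peer, within
            # budget. The network layer neither knows nor cares what gets stored.
            ...

    def run_encounter(local_apps: dict, peer_topics: set, peer, budgets: dict) -> None:
        # When two nodes meet, loop over the apps they have in common and let
        # each one run its own sync logic inside its operator-configured budget.
        for topic in local_apps.keys() & peer_topics:
            local_apps[topic].sync(peer, budgets[topic])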
You probably wouldn't end up with millions of apps like this--it would only make sense if many people around you were also using the same app--but even just two or three useful ones would probably make it worthwhile to build the lower layers.
E-mail strikes me as one of the more difficult ones to do because it's sort of fundamentally point-to-point, but I could imagine a sync function which gossips about which other nodes that node typically comes across such that whenever two nodes meet in an elevator or at a stoplight or whatever there's a sort of gradient indicating which of the two is more likely to carry a message to its destination.
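The carrying decision could be as dumb as comparing encounter counts, something like this (loosely in the spirit of DTN routing schemes like PRoPHET; everything here is illustrative):

    from collections import Counter

    class Node:
        def __init__(self, node_id: str):
            self.node_id = node_id
            self.encounters = Counter()  # how often we've met each other node
            self.outbox = []             # (destination_id, message) pairs

        def record_meeting(self, other: "Node") -> None:
            self.encounters[other.node_id] += 1
            other.encounters[self.node_id] += 1

        def hand_off(self, other: "Node") -> None:
            keep = []
            for dest, msg in self.outbox:
                if other.node_id == dest:
                    print(f"{other.node_id} received: {msg!r}")  # delivered
                elif other.encounters[dest] > self.encounters[dest]:
                    other.outbox.append((dest, msg))  # they're the better carrier
                else:
                    keep.append((dest, msg))          # we keep carrying it
            self.outbox = keep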
I was thinking simple messages. The new part would be routing messages when areas are disconnected. Maybe devices would volunteer to carry messages if going the right way since storage is cheap. It is also easy to make ad-hoc networks.
Data synchronization would be a layer on top. I think it would be hard because of latency, and would need to use CRDTs or similar. My feeling is that would only work for simple databases, and complicated databases wouldn't be supported.
Have you ever looked into Usenet or bang-path email? These are obsolete technologies because we have IP, but you can consider bringing them back for specific purposes. The obvious problem with delayed store-and-forward delivery is that you may need very big buffers.
What we needed was a protocol where listeners are addressed by _name_ rather than by IP address. And actually we have that - it's called HTTP - though it's not quite at the right layer; QUIC is closer to the right layer because UDP is so thin. And a layer below that we should have had a protocol where things are addressed by {AS number, address}.
Anyways, IP is what we have. The cost of changing all of this now is astronomical, so it won't happen.
Most of my gripes with IP addresses are that they're too much like names. Wherever you have anything like a name, you need some authority on which name maps to which thing, and wherever you have that, you get either a single point of failure and subsequent fragility, or a single point of control which people will use to mistreat each other.
I think we can get pretty far on just key pairs and hashes.
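For the hash half, the core trick is just that the name of a blob is derived from its bytes, so it can be fetched from anyone and verified locally. A sketch:

    import hashlib

    def content_id(blob: bytes) -> str:
        return hashlib.sha256(blob).hexdigest()

    def fetch_and_verify(cid: str, fetch) -> bytes:
        # `fetch` is any untrusted callable claiming to return the blob for cid.
        blob = fetch(cid)
        if content_id(blob) != cid:
            raise ValueError("peer returned data that does not match its name")
        return blob

    store = {}  # stand-in for "whatever peer happens to have it"
    blob = b"restaurant hours: closed Sunday"
    store[content_id(blob)] = blob
    assert fetch_and_verify(content_id(blob), store.__getitem__) == blob

The key-pair half is the same move for identities: you talk about a signer, not a machine.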
> IP's position at the thin waist of the hourglass that makes it resist change
Believe me, this is the least change-resistant option. You don't have to change all the boxes at one end or the other or in the middle to deploy new stuff!
.. unless it's IPv6. Look how well that went. Because it's not compatible, it's effectively a separate hourglass over to the side of the first one.
> IP is that it's not partition friendly
It's simply not at the level to care about that. If there's no link, there's no internetworking. It could perhaps be more router-flap or roaming-network friendly, at some cost.
The thing you want for intermittent operation might be UUCP, or USENET, from the days when links weren't up all the time. Fidonet had similar solutions for the days of expensive telephone links.
IP's job is super simple. A) Move this packet out of the interface connected to the destination IP, or B) move this packet to a router that claims to be its next hop, or C) drop it.
Because it is so simple, we can add more interfaces and/or routers at any time and expand the network very quickly. By outsourcing responsibility for anything else to other layers, IP can focus on speed.
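That A/B/C decision is basically a longest-prefix match; a toy sketch (the table contents are invented):

    import ipaddress

    ROUTES = [
        (ipaddress.ip_network("192.168.1.0/24"), ("deliver", "eth0")),        # case A
        (ipaddress.ip_network("10.0.0.0/8"),     ("next_hop", "10.0.0.1")),   # case B
        (ipaddress.ip_network("0.0.0.0/0"),      ("next_hop", "192.168.1.1")),
    ]

    def forward(dst: str):
        addr = ipaddress.ip_address(dst)
        matches = [(net, action) for net, action in ROUTES if addr in net]
        if not matches:
            return ("drop", None)  # case C
        _, action = max(matches, key=lambda m: m[0].prefixlen)
        return action

    print(forward("192.168.1.42"))  # ('deliver', 'eth0')
    print(forward("10.20.30.40"))   # ('next_hop', '10.0.0.1')

Everything smarter - retransmission, ordering, naming - lives above or beside that.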
It'd be really nice for something above IP but below HTTP to gain non-hobbyist traction that separates addressing "who" you want to talk to (an identity) from the IP address. Then most applications could work with what they really want to work with - identities - and not network addresses. Existing solutions in typical software mostly rely on HTTP cookies, DNS, and TLS. A lot of existing work on decentralized P2P networks tries to solve this, or actually does solve it somewhat, with various cryptographic schemes and structures like DHTs.
> You have to keep your stuff powered on and connected just in case two parties want to have an instantaneous conversation right now, otherwise they can't talk at all.
I mean .. if I want to text you and your phone isn't on, you're physically not capable of receiving it. Something has to capture, save, and provide the message when the other side is available. ISP routers having that capability would be interesting, but not sure it would be a good idea.
Apart from networking, the hourglass design is common throughout technology.
Electricity is an hourglass. Coal plants, solar panels, gas turbines, wind turbines, nuclear power plants all produce electricity. This is then consumed by electric cars, computers, washing machines, etc.
LLVM IR is an hourglass. Many compilers and languages produce LLVM IR. This is then converted to many different instruction sets.
I think if you want many-to-many non-coupled relationships, you will end up with some sort of hourglass design eventually.
POSIX is also an hourglass, right? It creates expectations on the part of apps of how the OS is interfaced to and expectations for how an OS is shaped for POSIX-compliant apps to interface to it.
Details may vary, but that baseline makes it much easier to, for example, have emacs on Windows, Mac, and every flavor of Linux under the sun.
HTTP(S) has also become an hourglass, in a sense. Given the choice to implement a brand-new socket-level protocol or figure out how to shoehorn a service into RESTful APIs accessible via HTTP, there are a lot of incentives to do the latter (among others: corporate firewalls tend to block other ports outbound by default, but frequently have to leave port 80 and port 443 traffic unmolested to not seriously break everyone's user experience).
It happens anywhere you have one group A of things that are necessarily distinct (e.g. software tools) and another group B that is also necessarily distinct (e.g. buried wires and satellites), and people want to be able to use anything from category A with anything from category B.
We could create a mechanism for every single A*B combination, but instead there's gonna be pressure to minimize the effort, and that means a smaller set of intermediate stuff in the middle.
Another example might be that you have a bunch of people with different skills, and a bunch of people with different goods, and then the convergence-point involves money, rather than having a lot of distinct techniques like "how to use carpentry to acquire chickens."
I think IP has two main issues: fragmentation and TCP.
The possibility of fragmentation means that we cannot rely on "atomicity" of IP packets. We have to limit ourselves to conservative MTU sizes if we want reasonable performance on possibly unreliable networks. And there are security considerations as well. It would've been great if IP supported a reliable way to do MTU discovery (e.g. by replying with a special "error" packet containing the MTU size) or had specified a bigger guaranteed MTU size (ideally, 2^16 bytes).
Finally, ossification of the TCP protocol is a well known problem. Ideally, we would have only IP packets with included source and destination ports, while TCP would be a QUIC-like protocol opaque for "dumb pipe" network hardware.
> It would've been great if IP supported a reliable way to do MTU discovery (e.g. by replying with a special "error" packet containing MTU size)
That's literally how IP works. The problem is that some middleware boxes block the ICMP error packet that carries the MTU, so senders that set the IPv4 don't-fragment bit never see it. IPv6 gets rid of the bit since routers never fragment (but the error packet remains). To combat this, DPLPMTUD was developed for QUIC, where probe packets are used instead to discover the MTU at the cost of global efficiency.
Yes, you are right. I completely forgot about ICMP Packet Too Big messages since they are effectively not used in practice for application-level stuff. One minor difference from the behavior I'd want is that the error packet would be sent directly to an application port (one of the reasons why it would be nice for IP packets to contain source/destination ports), making it possible to conveniently process it. IIUC, even assuming PTB messages are reliably delivered, we would need some kind of OS help (and additional APIs) to process them properly.
The problem with IP is not the problem with IP, but a problem with middleware that makes deployment of new protocols above IP practically impossible. That's why QUIC rides over UDP rather than being a first class transport protocol.
You can actually do better by just having network hardware truncate and forward instead of drop when encountering a packet that is too large. Then the receiver can detect every packet that went through a route with a small MTU and know, precisely, the MTU of the route the packet went through. The receiver can then tell the sender, at either a system level or application level, of the discovered MTU via a conservatively sized message.
This allows you to easily detect MTU changes due to changed routing. You can easily determine or re-determine the correct MTU with a single large probe packet. You can feedback application-level packet loss information as part of the "packet too big" message which is useful for not screwing up your flow control. The application can bulk classify and bulk feedback the "lost" packets so you get more precise, complete, and accurate feedback that also costs less bytes. The network hardware can be dumber since it does not need to send feedback messages (for this case) which also results in the pipe being closer to a "dumb pipe" as a whole. Basically the only downside is that you forward "dead" traffic resulting in unnecessary congestion, but you wanted to send that traffic anyways (and would have succeeded if the predicted MTU was correct), so that congestion was already assumed in your traffic pattern.
I think truncating packets would make the lives of application developers significantly harder, since it breaks the nice atomic property and requires development of ad-hoc feedback protocols which account for potentially truncated packets. Smaller MTUs are also more likely on the last mile, so dropping a packet and sending an error packet back would result in less "useless" global traffic.
I guess it could be both: the IP packet header could contain a flag determining whether the data gets truncated, or dropped with an error packet sent back.
I was using "application" level to mean the consumer of the raw packet stream, which would generally be a transport protocol (i.e. TCP, UDP, QUIC, etc.), not an actual user application.
Truncation is trivial to support at the transport protocol level. UDP, TCP, and literally every other network protocol I can think of already encode (or assume) the "expected" non-truncated size, so you just compare that against the size reported by the hardware descriptors and either drop (which is what already always happens, just earlier in the chain) or the protocol can be enhanced to generate feedback. Protocols literally already do that check (for security reasons to avoid attacker-controlled lengths), so changing network hardware to truncate would likely already work with existing protocol stacks. Any enhancements for smarter behavior than just "drop if truncation detected" would only need to be at the transport protocol/endpoint level, so would be relatively easy to bolt-on without making any other changes to the intervening network hardware and while being compatible with "default drop" behavior for anybody who does not want the extra complexity.
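The receiver-side check is tiny. A sketch with a made-up 4-byte length prefix standing in for whatever header the transport already has:

    import struct

    HEADER = struct.Struct("!I")  # declared payload length, network byte order

    def build(payload: bytes) -> bytes:
        return HEADER.pack(len(payload)) + payload

    def receive(datagram: bytes):
        (declared,) = HEADER.unpack_from(datagram)
        received = len(datagram) - HEADER.size
        if received < declared:
            # Truncated somewhere en route: the usable path MTU is simply
            # however much actually arrived.
            return {"truncated": True, "path_mtu_payload": received}
        return {"truncated": False, "payload": datagram[HEADER.size:HEADER.size + declared]}

    pkt = build(b"x" * 1400)
    print(receive(pkt))          # delivered intact
    print(receive(pkt[:1000]))   # simulated truncation: path carries 996-byte payloads

Whether the receiver then drops, feeds the discovered MTU back to the sender, or both is the transport's call.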
> One of the claims of the ATM camp was that it had inherently superior performance to IP due to the small cell size (and hence would better serve video and voice). This was arguably true in 1994 but turned out to be irrelevant as faster links became available and a combination of new router designs and Moore’s law enabled high-speed routing to flourish
For those readers young enough to have never used a landline, know that this is bullshit. No VOIP (or cell) link I use today achieves latency similar to a landline call in the US in the mid 90s. Ignoring latency, peak voice quality is far higher than the 8kHz uLaw that was used then, but the 95th-percentile quality was much better back then as well.
The flip side is that VOIP is nearly free by the standards of POTS costs in the 90s. This is true for local phone service and even more true as you compare to long-distance and international. Had the Bell heads won out, POTS probably would have come down in real terms (also a good chance that the quality would have gone up as technology enabled it), but there's no way it would be as cheap as it is today.
The comments on IP winning are fine, but in terms of application experience, cable TV is far better than what we are converging on.
The internet is converging on a few major players streaming low-quality content, at a higher price, using 100x the resources of cable TV.
We were streaming 500 HDTV channels for $50/month around 2005. Now similar content would cost you $100 in internet service + $150-$200 in streaming fees. And you need much more sophisticated and numerous ICs for encode-decode.
My point is, you can't judge a mature product against an evolving product. The entire content ecosystem is much worse off. We basically spent trillions of dollars to rebuild the shitty content delivery of cable TV, with worse content.
You can embed/tunnel any network transport into another. There is nothing magical about the internet and IP. IP is actually being tunneled when you're using a cable modem. WiFi is a horrible hack that encapsulates IP in a very ugly way to get it onto its wireless tech.
You could have tunneled ATM over IP, I'm pretty sure of it. The depiction seems to me like a flattering extolment of IP.
Can people please stop posting websites that pop up a “SUBSCRIBE!!” box when you scroll and make the rest of the page unreadable? It's driving me crazy. We should have a policy to post only clean websites where we can read the content unimpeded.
Install uBlock Origin and enable its Annoyances and Cookie Banner filters. This will be most effective in Firefox, but will work in other browsers as well.
If you're unable to do that for some reason (e.g. iOS), check out the Kill Sticky bookmarklet, which at least lets you nuke these quickly and with consistent muscle memory. It also gets rid of those trashy sticky headers. https://www.smokingonabike.com/2024/01/20/take-back-your-web...
NoScript also does a good job of preventing this garbage. It can be a bit of effort to use, but I find it worth the effort.
If none of these work, then take that as a signal that the content is not worth viewing, close the tab, and move on with your life.
Thanks for this! I use u-block, but didn't realize there was an easy option for cookie banners/subscribe. I have been making custom filters this whole time.
The internet unfortunately stopped being 'clean' years ago - there's so much 'dark pattern' stuff going on that it's scary - I cannot imagine surfing without an adblocker…
In Safari I pop open Reader, it gives me the full article with no noise. This is the solution for 50% of the complaints I see these days on HN (font too small, too light, ads, popups, paywalls, what's up with web design).
From the title I was expecting this to be a charming dive into how the hourglass icon ended up as the standard cursor shown during blocking activities in visual desktops lol