Basically: yes, within a DC where traffic is heading to some kind of Traefik or HAProxy or other redirection/sharding layer, this could make sense. So... how does this 2018 approach stack up in 2021?
I don't want to overdo the curmudgeon thing: I'm really glad they've started to deploy dual stack; it's long overdue. And remember that Google has been a strong proponent of v6 in Android and in IETF standards across the board.
As for GKE, that's a different story, probably more related to configuring Calico and friends and less about limitations in the low-level network fabric.
As for the Google APIs, I read somewhere that they disabled it because the billing exemption wasn't ready.
A meta note, but it struck me that seeing "Traefik" where NGINX usually is is pretty fascinating. I rate it highly because of its support for k8s (I've written before about how wonderful I think it is) but am somewhat unaware of how widely it's known. I guess it's pretty well known at this point if people casually mention it (then again, the audience is HN, after all)!
(edit: forgot to mention IL is still usable on Plan 9)
If a link is saturated, say via TCP, then UDP data can still get dropped AFAIK.
Searching around, I found this blog post, which takes a small stab at it. It would be interesting to try other setups with more data.
Regardless, using UDP one should always be prepared to handle dropped and out-of-order packets.
Nothing is perfect, so I'll keep using UDP on a switch and TCP on the internet.
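To make the "be prepared for drops and reordering" point concrete, here's a minimal sketch (plain Python, no real sockets; the `ReorderBuffer` class and its parameters are made up for illustration) of the kind of sequence-number bookkeeping a UDP application ends up doing:

```python
class ReorderBuffer:
    """Toy re-ordering buffer for sequence-numbered UDP datagrams.

    Illustration only: a real protocol also needs sequence-number
    wraparound handling, timers, and retransmission or concealment.
    """

    def __init__(self, max_hold=8):
        self.expected = 0          # next sequence number to deliver
        self.pending = {}          # out-of-order datagrams, keyed by seq
        self.max_hold = max_hold   # give up waiting past this many buffered

    def push(self, seq, payload):
        """Accept one datagram; return the (seq, payload) pairs now in order."""
        if seq < self.expected:
            return []              # duplicate or late packet: drop it
        self.pending[seq] = payload
        if len(self.pending) > self.max_hold:
            # Waited long enough: declare the gap lost and skip ahead.
            self.expected = min(self.pending)
        out = []
        # Deliver any contiguous run starting at `expected`.
        while self.expected in self.pending:
            out.append((self.expected, self.pending.pop(self.expected)))
            self.expected += 1
        return out
```

In a real application the sender would prepend the sequence number to each datagram and the receiver would feed `socket.recvfrom()` results into something like this.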
If you send a 1 Gbit UDP stream over a switch to another machine, there will be no drops as long as both are connected at 1 Gbit; the same is true with TCP, and over a router, assuming it's capable of forwarding that amount of traffic. If a third machine sends UDP or TCP traffic toward the machine receiving the 1 Gbit UDP stream, you'll have drops on both streams. It doesn't matter what protocol you use: if you have congestion, you'll have drops. You typically use UDP if your application is real-time, or if you want to build your own reliability mechanism and/or avoid issues with devices in the middle that do things to TCP.
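The congestion point above can be seen with a toy simulation (made-up numbers and a naive tail-drop model, not any real switch): an output port that drains 1 Gbit per tick while senders offer 2 Gbit per tick must drop roughly half the offered traffic once its buffer fills, whatever the protocol:

```python
def simulate_port(offered_per_tick, drain_per_tick, buffer_cap, ticks):
    """Toy tail-drop output port: arrivals beyond buffer space are dropped."""
    queued = dropped = forwarded = 0.0
    for _ in range(ticks):
        queued += offered_per_tick              # traffic arrives
        if queued > buffer_cap:
            dropped += queued - buffer_cap      # buffer full: tail drop
            queued = buffer_cap
        sent = min(queued, drain_per_tick)      # port drains at line rate
        queued -= sent
        forwarded += sent
    return forwarded, dropped

# Two 1 Gbit/s senders into one 1 Gbit/s port: ~half the traffic is dropped
# once the (small) buffer fills, regardless of whether it was TCP or UDP.
fwd, drop = simulate_port(offered_per_tick=2.0, drain_per_tick=1.0,
                          buffer_cap=10.0, ticks=1000)
```

The difference in practice is that TCP reacts to those drops by slowing down, while a fixed-rate UDP stream just keeps losing packets.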
However, drops can still occur, maybe due to some unexpected congestion or other issues. As long as your application can deal with that, you should be good.
Like intermittent EMI causing bit flips and checksum failures, which happened to me once in an IoT application where the Ethernet cable to an outbuilding was buried next to the power line, and the network would die whenever the furnace kicked on.
Nice review by ServeTheHome here.
Edit: Apparently the LGS105/LGS108 already has 10Gb/s switching capacity, so I'm good!
The OSPF vs BGP question, and within an organisation, why not …
QUIC is basically the weird semantics of HTTP/2 transmuted to UDP with some retransmission logic. But everything is hilariously complex.
If you have fabric determinism like InfiniBand, and capacity reservation on the receiving side, you can just dispose of the connection paradigm and flow control, and thus gain a great deal of performance while simplifying everything at the same time.
I do not see much use of it though unless you are building something like an airplane.
The uncounted PhD hours spent on getting networks to work well do amount to something.
Dealing with RDMA aware networking is far beyond the ability of typical web developers.
Deterministic Ethernet switches cost a fortune, and are lagging behind the broader Ethernet standard by many years.
Making a working capacity reservation setup takes years to perfect as well.
99.9999...% of web software will most likely *lose* performance if blindly ported to an RDMA-enabled database, message queue server, or cache.
If you don't know how epoll- or io_uring-like mechanisms work, you cannot get any benefit out of RDMA whatsoever.
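For readers unfamiliar with what "epoll-like mechanisms" means here: below is a minimal readiness-based event loop sketched in Python using the stdlib `selectors` module (which wraps epoll on Linux). The `echo_once` helper and the socketpair setup are invented for illustration; the point is the pattern of waiting for events and then servicing them, which is also how you drive RDMA completion queues, rather than blocking on per-connection reads:

```python
import selectors
import socket

def echo_once():
    """Tiny readiness-based loop: wait until a socket is readable,
    then service it. epoll/kqueue sits underneath DefaultSelector."""
    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()       # stand-in for real network sockets
    a.setblocking(False)
    b.setblocking(False)
    sel.register(b, selectors.EVENT_READ)

    a.send(b"ping")                  # data becomes readable on `b`
    for key, _events in sel.select(timeout=1.0):
        data = key.fileobj.recv(4096)
        key.fileobj.send(data)       # echo it back

    sel.close()
    reply = a.recv(4096)
    a.close()
    b.close()
    return reply
```

The same structure scales to thousands of registered sockets with one thread, which is the mental model you need before event-driven RDMA verbs make any sense.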
I once worked for a subcontractor for Alibaba's first RDMA-enabled datacentre.
Homa is designed for traffic inside a DC, same as RDMA or InfiniBand. I don't think anyone is proposing to use it for normal web traffic.