> Really it's hard to point to any popular open-source tools that fully support HTTP/3: rollout has barely even started.
> This seems contradictory. What's going on?
IT administrators and DevOps engineers such as myself typically terminate HTTP/3 and SSL at the load balancer, then pass HTTP/1.1 (_maybe_ HTTP/2 if the service is gRPC or GraphQL) back to the backing service. This is way easier to administer and debug, and is supported by most reverse proxies. As such, there's not much need for HTTP/3 in server-side languages like Golang and Python, as HTTP/1.1 is almost always available (and faster and easier to debug!) in the datacenter anyways.
HTTP/3 and IPv6 are mobile centric technologies that are not well suited for the datacenter. They really shine on ephemeral spotty connections, but add a lot of overhead in a scenario where most connections between machines are static, gigabit, low-latency connections.
I'm not an expert on HTTP/3, but vehemently disagree about IPv6. It removes tons of overhead and cruft, making it delightful for datacenter work. That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
Major reason for that is BSD Sockets and their leaky abstraction that results in hardcoding protocol details in application code.
For a good decade a lot of software had to be slowly patched in every place that made a socket to add v6 support, and sometimes multiple times because getaddrinfo didn't reach everyone early enough.
> results in hardcoding protocol details in application code
Are you suggesting that this could have been implemented a different way? Example: IP could be negotiated to upgrade from v4 to v6? I am curious about your ideas.
I think that, in principle, an application didn't need to know the exact format of an IP address, even when connecting directly to an IP. A simple idea that could have made application code much more IP-agnostic would have been for `sockaddr_in` to take the IP in string format, not as a four-byte value. That way, lots of application code would not even need to be recompiled to move from a 4-byte IPv4 address to a 16-byte IPv6 address, whereas today it not only needs to be recompiled, it needs to be changed at the source level to use a new type that allows for both.
Of course, code that operates on packets, in the TCP/IP stack of the OS would have still needed to be rewritten. But that is far less code than "every application that opens a socket".
Of course, this only applies to code that uses IPs only to open connections. There's lots of application code that does more things with IPs, such as parsing, displaying, validating etc. All of this code would still need to be rewritten to accept IPv6 addresses (and its much more complex string representations), that part is inevitable.
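To make the string-first idea concrete: .NET's IPAddress type (used purely as an illustration of the shape such an API could have taken, not of what the 1980s sockets API could literally have been) lets the same application code handle both families because the address stays a string until the last moment.

    using System;
    using System.Net;
    using System.Net.Sockets;

    // Same application code regardless of family: the address travels as a
    // string and the family is discovered by the library, not hardcoded.
    foreach (var s in new[] { "192.0.2.1", "2001:db8::1" })
    {
        IPAddress addr = IPAddress.Parse(s);
        Console.WriteLine($"{s} -> {addr.AddressFamily}");  // InterNetwork / InterNetworkV6

        using var client = new TcpClient(addr.AddressFamily);
        // client.Connect(addr, 443);  // identical connect path for both families
    }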
Yeah, the big issue is that any code that took addresses from user input had to do validation to make sure addresses were valid, in allowed ranges, etc.
While the sockaddr struct allowed you to abstractly handle v4/v6 socket connections, there wasn't a clean way to do all of that additional stuff, and IP address logic leaked into all kinds of software where you wouldn't first expect it.
Something as simple as a web app that needs to inspect proxy headers would even have it.
It also didn't help that it became common practice to explicitly not trust the address resolution offered by the sockets API, because it would do unexpected things like parsing something that looked like a plain integer into a uint32 and treating it as a 4-byte v4 addr.
This is vastly oversimplifying the problem, the difference between IPv4 and IPv6 is not just the format of the address. Different protocols have different features, which is why the sockaddr_in and sockaddr_in6 types don't just differ in the address field. Plus the vast majority of network programs are using higher level abstractions, for example even in C or C++ a lot of people would be using a network library like libevent or asio to handle a lot of these details (especially if you want to write code that easily works with TLS).
There isn't much need for many applications to know or care what IP protocol they are speaking, they are all just writing bytes to a TCP stream. I think the parent is saying that existing socket abstractions meant that these applications still had to be "upgraded" to support IPv6 whereas it could/should have been handled entirely by the OS with better socket APIs.
The simplest case would have been using a variant of the Happy Eyeballs protocol.
Resolve the A and AAAA records, and try to connect to them at the same time. The first successful connection wins (maaaaybe with a slight bias for IPv6).
This would have required an API that uses the host name and folds the DNS resolution and connection into one call. Instead, the BSD socket API remained at the "network assembly" level with the `sockaddr_in/sockaddr_in6` structures used for address information.
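A rough sketch of that race, written against .NET's Dns/Socket APIs since they come up elsewhere in the thread (not a real RFC 8305 implementation: it neither staggers the attempts nor prefers IPv6, and it leaks the losing sockets):

    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    // Resolve A + AAAA, start all connection attempts, keep the first one
    // that succeeds. A real implementation would also close the losers.
    static async Task<Socket> RaceConnectAsync(string host, int port)
    {
        IPAddress[] addrs = await Dns.GetHostAddressesAsync(host);
        var attempts = addrs.Select(async addr =>
        {
            var sock = new Socket(addr.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
            await sock.ConnectAsync(addr, port);
            return sock;
        }).ToList();

        while (attempts.Count > 0)
        {
            Task<Socket> finished = await Task.WhenAny(attempts);
            attempts.Remove(finished);
            if (finished.IsCompletedSuccessfully)
                return finished.Result;
        }
        throw new SocketException((int)SocketError.HostUnreachable);
    }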
For the examples I am going to use the typical "HTTP to example.com" case.
Some OSI-focused stacks provided high level abstraction that gave you a socket already set for listening or connected to another service, based on combination of "host name", "service name", and "service type".
You'd use something like
connect("example.com", "http", SVC_STREAM_GRACEFUL_CLOSE) // using OSI-like name for the kind of service TCP provides
and as far as the application is concerned, it does not need to know whether it's ipv4, ipv6, X.25, or a direct serial connection (the OSI concept of separating "service" from "protocol" is really a great idea that got lost)
Similar approach was done in Plan 9 (and thus everyone who uses Go is going to see something similar) with the dial API:
dial("example.com!http",0,0,0)
As part of the IPv6 effort, an attempt at providing something similar with BSD Sockets was made, namely getaddrinfo, which gives back information to be fed to socket/bind/connect calls - but for a long time people still learnt from old material which had them manually fill in socket parameters without GAI, so adoption was slowed down.
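The moral equivalent in a modern runtime, for comparison (sketched with .NET's Dns/TcpClient; example.com is just the thread's running example): resolution and family selection are folded behind the call, GAI-style.

    using System.Net;
    using System.Net.Sockets;

    // Resolve once, then hand the whole answer to the connect call; the
    // application never touches a sockaddr of either family.
    IPHostEntry entry = await Dns.GetHostEntryAsync("example.com");
    using var client = new TcpClient();
    await client.ConnectAsync(entry.AddressList, 80);  // tries each returned address in turn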
No, for example on Android and iOS the APIs you use to connect to a server take the hostname as a string. This hostname can be either an ipv4 address, an ipv6 address, or a domain. The BSD sockets API on the other hand forces each application to implement this themselves, and a lot of them took the shortcut of only supporting ipv4.
It isn't about upgrading one protocol to another but about having the operating system abstract away the different protocols from the application.
Yep, it's tragic because it all stems from unforced differences vs ipv4. The design was reasonable, but with perfect hindsight, it needed to be different. They needed to keep the existing /32s and just make the address field bigger, despite the disadvantages.
"Everywhere but nowhere" is sorta how I'd describe ipv6. Most hardware and lower-level software supports it, so obviously it wasn't impossible to support a new protocol, but it's not being used.
And it would have failed for exactly the same reasons, because just changing the address field size is enough to force everyone who uses BSD Sockets to rewrite all the parts of their code that create sockets.
Especially since getaddrinfo was ported over from more streams/OSI oriented stacks pretty late, precisely because BSD Sockets required separate path for every protocol.
On the hardware side, by the mid-1990s even changing one routing-relevant field would likely have meant a new generation of ASICs with more capabilities.
Essentially, once you agree to break one field, the costs are so big why not try fixing other parts? Especially given that IETF has rejected an already implemented solution of just going with OSI for layer 3.
All that code using BSD sockets is rewritten by now to support v6, right? If so, that can't be the reason, cause v4 is still dominant.
And btw, what I suggested would actually work without userspace code changes until you want to start subdividing the /32s. Cause v4 addresses would've still been valid in v6.
The IPv6 packet format was needed either way, but only with the 32-bit address space at first (the other 96 bits set to 0). You simply tell your system to start using v6 instead, and everything else stays the same. No dual-stack.
Next step would be upgrading all those parts like DNS, DHCP, etc to accept the 128-bit addrs, which again can be done in isolation. Then finally, ISPs or home routers can start handing out longer addresses like 1.1.1.1.2.
There are two ways for me to interpret "simply tell your system to start using v6".
If it means upgrading every program, then your plan works but it's the same as how things work today. You're telling people to do a thing, and they aren't bothering. The "simple" step isn't simple at all.
If it doesn't mean upgrading every program, then your rollout fails on the last step. You start handing out longer addresses and legacy programs can't access them.
It's the second one. But legacy programs did get upgraded, so I don't see why they wouldn't under this other plan. If anything, it's easier because you're only making the address field bigger and it's not a separate case. Some routers struggled with 128-bit addrs due to memory, and could've gotten away with like 48 or 64 bits if they're using DHCP.
Lots of legacy programs, and current programs, and other things that could have been upgraded did not get upgraded. Getting to the situation where you can just flick a switch is not a realistic dream. There's not enough motivation for the average business to add support for a version that isn't in use yet.
Disconnect your phone from Wi-Fi and visit https://ifconfig.co/ . If you're a Verizon customer, it's probably going to show you an IPv6 address. It's huge, right now, today.
Fair. I bet that'll change soon though. My prediction is that it'll be a mobile-first game, like the next Pokemon Go sort of thing, that'll be IPv6-only.
Plenty of mobile users use wifi at home/work. Telling them to disable their ipv4-only wifi just to play your game is going to be a non-starter, especially when an ipv4 address adds negligible cost to the infrastructure. Is your CTO really going to massively increase user friction ("turn off your wifi to play!") just to try to save a few cents (comparatively speaking) on infra?
this isn't true. I know because at some point XFinity started dropping ipv6 connections for me and I noticed because a number of sites (forget which) were broken
What do you mean by dropping ipv6 connections, like dropping ipv6 packets? That's only an issue if you're using v6. I disabled ipv6 on my router years ago and have never had a problem just using v4.
True, but irrelevant to my point. Whether a particular ISP supports it doesn't matter: it is being widely used by the rest of the world, to the point that it's half of Google's traffic.
Vodafone's network is reported to handle around 20% of the world's traffic. It's not a random ISP. Its network does not support IPv6. It is how a big chunk of all internet users experience the internet. Claiming it doesn't matter in a discussion of IPv6 adoption rates is ludicrous.
> Yep, it's tragic because it all stems from unforced differences vs ipv4. The design was reasonable, but with perfect hindsight, it needed to be different. They needed to keep the existing /32s and just make the address field bigger, despite the disadvantages.
Exactly. I would love to have seen the world in which that happened, and where all the other parts of IPv6 were independently proposed (and likely many of them rejected as unwanted).
The main problem wasn't all the smaller features but one big one in particular that can't be split into smaller pieces, the new addressing scheme. They wanted to replace all the existing addresses, which meant replacing all the routes. Besides the difficulty of that by itself, it automatically meant that the v6 versions of DNS, DHCP, NAT, etc wouldn't support v4, rather it'd be a totally separate stack.
There were also some small things, and routers often having bad defaults for v6, which, btw, would not even be a concern if they had left the big thing alone.
> Besides the difficulty of that by itself, it automatically meant that the v6 versions of DNS, DHCP, NAT, etc wouldn't support v4, rather it'd be a totally separate stack.
Sure, "make the addresses bigger" would have required providing DHCPv6, DNS AAAA records, and various other protocol updates for protocols that embedded IP addresses. And making changes to the protocol header at the same time (e.g. removing the redundant checksum) were also a good idea.
It didn't require pushing SLAAC instead of DHCP.
It didn't require recommending (though fortunately not requiring) IPsec for all IPv6 stacks.
It didn't require changing the address syntax to use colons, causing pain for all protocols that used `IP:port` or similar.
It didn't require mandating link-local addresses for every interface.
It didn't require adding a mandatory address-collision-detection mechanism.
I wonder if something like an HTTP connection upgrade would have been possible for ipv4-to-ipv6. Imagine machine 1 with IPs 1.1.1.1 and 11::11, and machine 2 with IPs 2.2.2.2 and 22::22.
When machine 2 receives a packet from 1.1.1.1 at 2.2.2.2, it sends an ipv6 ping-like packet to the ipv4-mapped address ::ffff:1.1.1.1 saying something like "hey, you can also contact me at 22::22", and if machine 1 understands, then it can try to use the new address for the following packets.
I can see how it would be hard to secure this operation.
For those building on AWS with VPC per service and using PrivateLink for connections between services, the whole IP conflict problem just evaporates. Admittedly, you’re paying some premiums to Amazon for that convenience.
>That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
I always found that to be a desperate talking point. 'Prepare your network for the incredibly rare event where you intend to integrate directly' (didn't anyone hear of network segmentation?). It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
> It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
ISPs unilaterally change your DHCP address on IPv4 all the time. And in any situation where you would have a static address for IPv4, your ISP should have no problem giving you a static v6 prefix. This argument makes no sense at all.
IPv6 packets can still be fragmented, but only at the source. IPv4 fragmentation has only worked this way in practice for a long time.
Private addressing is still needed with IPv6, it's a crucial part of how address allocation works, and it's the only way to reliably connect to a client-like IPv6 device on the local network, since its public IP address will change all the time for privacy reasons, assuming it respects best practices.
Routing is only simpler if the ISPs actually hand out the large prefixes they are supposed to. Not all of them do.
DHCP is still required for many use cases. So now you have two solutions for handing out addresses, and you need to figure out when to use SLAAC and when to use DHCP. This is strictly more complex than IPv4, not simpler. SLAAC is mostly just unnecessary cruft, a cute little simple path for limited use cases, but it can never replace DHCPv6 for all use cases (e.g. for subnets smaller than a /64, for communicating additional information like a local DNS server or NTP server, for complex network topologies, for server machines etc).
* First three points matter more on bad connections, but are less of a problem on good ones.
* Private addressing is a feature, not a bug, in the datacenter.
* NAT is a feature, not a bug, in the datacenter.
* Simpler routing matters more on bad connections, but is less of a problem on good ones.
* DHCP is a feature, not a bug, in the datacenter.
Overall, it adds features that I don't need in my datacenter, and takes away others that I do and now need to add back. Like I said: it's great outside the datacenter, not so great inside it.
> * No more private addressing (unless you're a glutton for punishment).
The question of whether or not you use private addressing is, AFAICT, independent of the protocol. I mean, there's no material difference between private and public addressing.
> * No more NAT (see above).
Ditto. You don't have to NAT over IPv4, and you can NAT over IPv6; and - you may want to or need to, depending on restrictions on your connection.
I really have to agree with the "easier to debug" part. I one time had to debug a particularly nasty networking issue that was causing HTTP connections to just "stop" midway through sending data. Turned out to be a configuration mismatch between routers over allowed packet sizes. It would have been so much worse with a non-plaintext protocol.
Totally agree. Most of the benefit of HTTP 2/3 comes from minimizing TCP connections between app->lb. Once you are past the lb the benefits are dubious at best.
Most application frameworks that I've dealt with have limited capabilities to handle concurrent requests, so it becomes a minor issue to have 100+ connections between the app and the lb.
On the flipside, apps talking to the LB can create all sorts of headaches if they have even modest-sized pools. 20 TCP connections from 100 different apps and you are already looking at a hard-to-handle flood of TCP connections.
HTTP/3 is not a mobile-centric technology. Yes, there was a lot of discussion of packet pacing and its implications for mobile in early presentations on QUIC, but that's not the same as "centric"; that's one application of the behavior. Improved congestion control, reduced control-plane cost, and removal of head-of-line blocking behaviors have significant value in data center networks as well. How often do you work with services that have absolutely atrocious tail latencies and wide gaps between median and mean latencies? How often is that a side effect of HTTP/TCP semantics?
IPv6 is the same deal. I sort of understand where the confusion comes from around QUIC, because so much was discussed about mobile early on and it just got parroted heavily in the rumor mill, but IPv6? That long predates the mobile explosion, and again, it helps in that application, but ascribing it as the only application because of its applicability somewhere else doesn't hold up to basic scrutiny. The largest data centers these days are pushing up against a whole v4 IP class (I know, classes are dead, sorta) in hardware-addressable compute units - a trend that is not slowing.
We did this with quic data center side: https://tailscale.com/blog/living-in-the-future#the-c10k-pro... and while it might be slightly crazy in and of itself, it's far more practical with multiplexing than with a million excessively sized buffers competing over pools and so on.
There is absolutely value to quic and ipv6 in the data center, perhaps it's not so useful for traditionally shaped and sized LAMP stacks, but you can absolutely make great use of these at scale and in modern architectures, and they open a lot of doors/relax constraints in the design space. This also doesn't mean everyone needs to reach for them, but I don't think they should be discarded or ascribed limited purpose so blithely.
I will acknowledge that truly massive datacenter deployments can and do use these technologies to good effect, but I haven't worked at any of these kinds of places in the last fifteen years and I suspect many (most?) of my colleagues haven't either. Anything smaller than a /8, they usually don't add much and just get in the way more often than not.
HTTP/3 is a patch that unfucked some stupid design choices from HTTP/2[1]
However, IPv6 is perfectly suited to the datacentre. So long as you have proper infrastructure set up (i.e. properly functioning DNS), IPv6 is a godsend for simplifying medium-scale infra.
In fact, if you want to get close to a million hosts, you need ipv6.
[1] Me and HTTP/2 have beef: TCP multiplexing was always going to be a bad idea, but idealism got in the way of testing it.
Sure, but now you've lost some of the benefits of HTTP/3, such as the header compression and less head-of-line blocking. To some degree the load balancer can solve this by using multiple parallel HTTP 1.1 streams, but in practice I've seen pretty bad results in many common scenarios.
No one cares about those "benefits" _on a gigabit line_. The head of your line is not blocked at such speeds, believe you me. Same thing with compression. Like, why. Other than to make it harder to debug?
I had head-of-line-blocking issues recently on a 10 Gbps data centre link!
HTTP client packages often use a small, fixed number of connections per domain. So if you have two servers talking to each other and there's slow requests mixed in with short RPCs, the latter can sit in a queue for tens of seconds.
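To make that concrete, here's roughly how the failure mode looks with a capped pool; the handler properties are the real .NET SocketsHttpHandler ones, but the tiny limit and the URLs are made up for illustration:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    var handler = new SocketsHttpHandler
    {
        MaxConnectionsPerServer = 2,  // the "small, fixed number of connections" many clients use
        PooledConnectionLifetime = TimeSpan.FromMinutes(5),
    };
    var client = new HttpClient(handler);

    // Over HTTP/1.1 the health check queues behind the slow reports until a
    // connection frees up; over HTTP/2 or HTTP/3 the same handler can
    // multiplex all three requests onto one connection.
    var slow1 = client.GetAsync("https://internal.example/report?full=true");
    var slow2 = client.GetAsync("https://internal.example/report?full=true");
    var fast  = client.GetAsync("https://internal.example/healthz");
    await Task.WhenAll(slow1, slow2, fast);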
QUIC/HTTP3 relies on TLS. If you already have some encrypted transport, like an Istio/Envoy service mesh with mutual TLS, or a ZeroTier/Tailscale/WireGuard-style encrypted overlay network, then there are no benefits to using HTTP3. Moreover, native crypto libraries tend to do a better job handling encryption anyway, so rather than wasting cycles doing crypto in Go or Node it makes more sense to let the service mesh or the overlay handle encryption and let your app just respond to cleartext requests.
Sure I was responding to the context as I understood it here which was listening on HTTP/3 as an application rather than a service mesh layer. HTTP/3 can definitely be a choice for service mesh or some sort of overlay. Personally if I were setting up a new cloud/DC today I'd probably just use ZeroTier (or Tailscale) and let the overlay deal with encryption while I just have my sources and destinations do IP based filtering.
The protocol isn't, but its deployment is - real-world deployment is predominantly mobile. That has nothing to do with the inherent technical features of the protocol; it is a consequence of market history.
HTTP/2 is still mostly implemented only over TLS, and that can mean significant and completely useless overhead if the server-LB connection is already encrypted using some VPN solution like WireGuard.
Speaking of gRPC, it's unfortunate that they went all-in on HTTP/2. Should have made it work over HTTP/1.1. I know others made it work, but it wasn't first-party. Maybe it could've been more popular than JSON-over-HTTP by now.
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET actually looks like it has decent support for any teams that are interested[0] (side note: sad that .NET and C# are not considered "major"...). There is an open source C library that they've published that seems rather far along[1]
Support for Windows, Linux[2], and Mac[3] (the latter two with some caveats).
Overall, I think for most dev teams that are not building networking focused products/platforms, HTTP/3 is probably way down the stack of optimizations and things that they want to think about, especially if the libraries available have edge cases and are too early for production. Who wants to debug issues with low-level protocol implementations when there are features to ship, releases to stabilize, and defects to fix?
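For anyone who does want to kick the tires, opting in is roughly this much code on both ends; the port, certificate, and URL details are placeholders, and on Linux Kestrel's HTTP/3 support additionally needs libmsquic installed:

    // Server: an ASP.NET Core (Kestrel) endpoint advertising h3 alongside h1/h2.
    using Microsoft.AspNetCore.Server.Kestrel.Core;

    var builder = WebApplication.CreateBuilder(args);
    builder.WebHost.ConfigureKestrel(kestrel =>
    {
        kestrel.ListenAnyIP(8443, listen =>
        {
            listen.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
            listen.UseHttps();  // QUIC requires TLS
        });
    });
    var app = builder.Build();
    app.MapGet("/", () => "hello over h3");
    app.Run();

    // Client (separate process): ask for HTTP/3, fall back if unavailable.
    var client = new HttpClient
    {
        DefaultRequestVersion = System.Net.HttpVersion.Version30,
        DefaultVersionPolicy = System.Net.Http.HttpVersionPolicy.RequestVersionOrLower,
    };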
> side note: sad that .NET and C# are not considered "major"...
I've said it before on here, but the tech community severely underrates .NET today. It's not Windows only (and hasn't been for ~8 years) plus C# is a very nice language. F# is also an option for people who like functional languages. I'd highly recommend giving it a try if you haven't already.
.NET suffers from the long lasting reputational taint of Microsoft. It was seen as the sworn enemy of open source and Linux, and for good reason.
Today’s MS is not what it was back then. But long memories are not a bad thing, really. If .NET suffers a bit from some unfair perception, perhaps that can remind MS and others what happens when you take an aggressively adversarial approach.
VSCodium uses NetCoreDbg. There are community snippets to make the same work for Cursor, NeoVim, etc.
So far, there was little demand to write another debugger integration (because, really, the debugger core is implemented in the runtime itself - what vsdbg, NetCoreDbg and Rider all primarily do is consume the runtime API).
Yes, it is annoying. You can run VS Code on a Pi and have an amazing Rust environment, or Zig, or any language. Except for .Net. Why, MS? What is the benefit to your business of making .net suck without your closed-source bits?
I'm baffled by insistent behavior like this. I think it is just alienating people and even if they move ecosystems, the negative impression will stay.
If you engage in bad faith behavior in a technical discussion, can you be expected to conduct yourself acceptably in a professional setting? Unlikely.
This is a discussion about HTTP/3 support of all things. Why does it happen only when someone leaves a briefly positive note on C#? I don't know any other language (besides PHP, to an extent) that gets the same amount of hate.
I edited it because it's not just my own experience of dealing with this. On Twitter, I follow a couple of Japanese developers, mainly from the gamedev scene, and even they complain that they've started hearing more of "but it only works on windows" and "it's not open-source". Don't you find it strange that it should be the other way around the more years pass since .NET went OSS?
The link itself is also quite outdated and mainly consists of posts from Miguel de Icaza, who's promoting Swift, an arguably less OSS language. Take from that what you will.
You forgot to mention that Miguel de Icaza was probably the single biggest .NET fanboy for literally decades before throwing in the towel. The fact that a person like this ended up being alienated tells volumes.
I should also add that the general public only saw the tip of the iceberg in this entire episode. Miguel spent a lot of time and effort internally trying to right the .NET ship, gradually escalating through management until he finally gave up.
I don’t doubt this but the criticism has to be rooted in facts and the current state of affairs, and you have to consider conflict of interest. It’s not too different to what you can read here. No one ever talks about whether C# offers good cohesive experience when solving a specific task, or what are the pros and cons of its build system, or how a typical .NET team looks like in a particular region. No.
Instead, the complaints you will read here are about what the authors think .NET’s problems are without ever verifying if any of that is true in hopes of making swipes for god knows what reason, because posting something accurate requires knowledge on the subject and the results of a cursory search usually do not support cheap arguments.
(and I see this as an embarrassment because you can learn a lot from doing research instead of repeating the same tired phrases you heard elsewhere)
My point is that there was no "conflict of interest" when it comes to Miguel. If there was one person in the entire F/OSS ecosystem that ever wanted .NET to succeed, it was him. That is also why he spent so much time and effort trying to get Microsoft to not do things that alienate the community and harm .NET uptake.
Speaking for myself, I happen to like .NET from the technical perspective. While both the language and the stdlib have a lot of cruft from days of yore, it can mostly be ignored, and that aside it's a fast runtime with a decently expressive type system giving one considerable flexibility to pick the right tool to model different domains. It also has great tooling around it. But this all is separate from the question of how open .NET really is, and its long-term prospects in that department.
I don't think we can survive another rename haha. It doesn't seem to have helped PHP either. But we could use a newer .NET language with better lambda lowering, dependent types, HM type inference, structs as the default data type, improved lifetime analysis, and different tradeoffs now that we're going to have zero-cost-ish non-suspending async calls.
I assumed .NET is Windows-only and proprietary, too, but it has to do with me not having done enough research, so if it is not the case anymore, the blame is on me.
Hey, I just want to commend you for amending your perspective (even if a little). There's a lot of intellectual hoop jumping in this thread with respect to Microsoft.
One has to know what they don't know, or at least consider the possibility that they might not know the whole story. I did not know the whole story, but I do now, thanks to the earlier comments regarding .NET!
Why hasn't Java been tainted the same since Oracle bought Sun and now 100% controls Java's development? I am continuously surprised to see they continue to make major investments in the platform. Project Valhalla was started in 2014 and is still going strong. I keep waiting for Oracle to cancel all major Java improvements, then milk the remaining corpse.
Me too! And yet it keeps going, with a reasonably open process, and multiple OSS alternatives out there. It is possible to run a Java shop with zero Oracle-derived code, which may be why people are ok with actually using the Oracle JDK.
My bet is this is a pitched battle inside Oracle, and the forces that keep it open have been winning...so far.
In my head, when I think Microsoft I think about the stress and anger I feel using Windows, and the OS-level notification I got apropos of nothing a few minutes ago trying to sell me an Xbox Game Pass subscription. The fear of what is going to break in the next forced update. For months now I haven't been able to do a task as simple as take a screenshot on my PC because seemingly the flash effect it plays gets caught in the image and the resulting screenshot has all the colors blown out, so it's mostly white.
So yes, this does color my opinion of how many ecosystems of theirs I want to tie myself to (minimal)
Yet, somehow Azure DevOps and Visual Studio (proper) continue living as near zombies.
I would never have predicted Microsoft would still be developing a (sort of) Github competitor after acquiring Github. Why not plow all of that focus and energy into making Github the best project management system around? Project management in Github is one of the biggest gripes people have - even on Teams and Enterprise.
GitHub is behind ADO in multiple areas. Boards is the one they're furthest from ADO on; leaving that for last is letting them get the others to a state where ADO users can successfully migrate some of their work to GitHub now, even if the boards need to stay in ADO.
That said, Microsoft is never going to be able to completely kill Azure DevOps. If nothing else, they'll have to keep it alive for enterprise customers' TFVC source control history.
Surely there's a migration path to git from TFVC? Many large projects successfully moved from CVS and SVN into git.
I'm just surprised that, after 7 years of owning Github, Microsoft hasn't plowed their resources into Github's Projects. It's literally the number one complaint I see regarding project management inside Github - and would likely be an easy way to scale subscriptions for Teams and Enterprise. Heck, today it's still impossible to generate a Burn Down chart without using 3rd party "Apps" or the API.
The different branching approaches makes migrating history difficult. Microsoft's recommendation is to migrate just the tip, but will allow you to migrate up to 180 days of history for the trunk (no branches).
If you need branch history, or more than 180 days' worth of history, the only option is third-party tooling (git-tfs). It seems to work well enough for development purposes (i.e. git blame)... but I'm not sure it's good enough if we need the history for legal purposes.
VS code is mostly in house too. Sure, they don’t own Electron, but I was at MSFT when project Monaco (which became the basis for VS Code) was started and remember being very impressed by it back then
Text editors and IDEs come and go, there is very little commitment to using one.
If you write a project in C#, you've committed to it and its ecosystem. Getting out of there when MS makes a choice you don't agree with is going to be near impossible.
In the case of GitHub, yeah. I don’t think a Microsoft source control “social network” would have taken off in the same way GitHub did.
In fact when Microsoft purchased GitHub, quite a few people did leave and close their account. But GitHub already had such a monumental market lead that the departures ended up being a drop in the ocean.
To be honest, I’m still waiting for the moment when Microsoft managed to fuck it all up like they did with Skype.
Don't forget that prior to GitHub, Microsoft ran the home-grown, TFS-enabled CodePlex. It worked quite well if you were using Visual Studio but obviously, like Skype, there was no reason to run it when "something better" came along.
You might be interested in GitHub's State of the Octoverse report from 2020 that had a section dedicated to security of popular languages, active open source projects on GH with those languages, and the package managers for those platforms.
except if you want to use for example system.windows.forms, then "oh well different teams maintain that, nobody made it for linux!!!" "the core is open!!!"
they clearly WANT applications written in .net not to be cross platform
It's not about different teams, it's that System.Windows.Forms is exactly what the namespace says. It's Windows Forms. It's a fairly thin wrapper over the Windows API. It's never going to be adapted to be cross-platform and isn't really something they've put any development work into for many years at this point.
If you want a cross platform UI, use WPF with Avalonia. Or if you want something entirely from Microsoft themselves, there's MAUI as an option.
> obviously system.windows.forms could easily be implemented elsewhere
It can, but not easily. As OP has said, it is a wrapper around Win32, and not an opaque one - it literally has stuff like e.g. the Message struct with members like HWnd and LParam.
Mono did try at one point, but they kept hitting edge cases where this kind of stuff would break things. Eventually they gave up and just wrapped Wine. So, yes, if you really really want to run WinForms on Linux, Mono is where it's at. But ... why?
well, maybe because there exist countless GUI applications that use it? And furthermore, if you say it's cross-platform, you can't say that with a straight face unless you also support GUI. Fact of the matter is, Microsoft was perfectly happy to preach their .net shit as cross-platform to try to extend into non-Windows usage, except for GUI stuff, as that's "for windows"
We are talking about WinForms specifically. If you want crossplatform GUI, then don't write it in WinForms - write it in Gtk#, or Avalonia, or Uno, or ...
Note that C++, Rust, Go all don't have any kind of standard GUI support out of the box at all.
I'm amazed people are still using WinForms. When I was going around tables at a job fair in college, 13ish years ago, one of the questions for me from representatives for a large bank was about if I've used WPF. I said I don't use it, and they basically told me I should get with the times.
I haven’t developed with .NET in a dozen years, let alone since it went cross platform, but I at least know it is capable of being cross platform. It amazes me how many developers I speak to that still assume .NET is Windows only.
Because in the relevant distros it is included in the first-party feeds. Only Debian acts like a special snowflake making it needlessly complex for everyone (including Rust). I believe there is an ongoing work to modify .NET's full source build (i.e. https://github.com/dotnet/source-build) to satisfy Debian's admission requirements, but really it's a problem inflicted by Debian on themselves, not the other way around.
I’ve been a .Net developer since it launched, but recently I find myself using it less and less. I’m so much more productive with LLM assistance and they aren’t very good at C#. (Seriously, I thought AI coding was all exaggeration until I switched to Python and realized what the hype was all about, these language models are just so much more optimized for python)
Plus now Microsoft is being a bully when it comes to Cursor and the other VS Code forks, and won’t let the .net extensions work. I jumped through a lot of hoops but they keep finding ways to break it. I don’t want an adversarial relationship with my development stack.
I miss C# and I really don’t like Python as a language, but I don’t see myself doing a lot more C# in the future if these trends continue.
You can use VS Code and cursor at the same time. One to code and the other to compile the code. That's how I build for Android. I generate code in Cursor/Windsurf then I compile and deploy using Android Studio.
I work on .NET and work on Mac (hate the OS, but the hardware and battery life are way better).
Last startup, we shipped AWS t4g Arm64 and GCP x64 Linux containers. A few devs started on Windows (because of their preferred platform), but we all ended up on M1 MacBook Pros using a mix of Rider and VS Code.
Common misconception between the old .NET Framework and the new .NET (e.g. .NET 9) (MS's terrible naming). C#/.NET has been capable of cross-platform binaries for close to a decade?
"Since there's no standardized way to obtain native macOS SDK for use on Windows/Linux, or Windows SDK for use on Linux/macOS, or a Linux SDK for use on Windows/macOS, Native AOT does not support cross-OS compilation. Cross-OS compilation with Native AOT requires some form of emulation, like a virtual machine or Windows WSL."
Now, you don't have to actually use AOT, the other deployment options are actually much easier to cross-build, but true cross build AOT is still not supported.
Native AOT compilation is definitely not the same as self-contained package. With that logic, every Docker container could be considered AOT compiled static executable.
An image or a container could be, yes. It isn't, because Docker stuff is mostly distributed as Dockerfiles and docker-compose files, and those cause your system to download and install stuff, and that part is not like self-contained package.
More that there are at least three different ways to deploy things in dotnet, and only AOT is directly equivalent to Go executables. I like dotnet and use it at work but this is a nuisance limitation for us.
The original question did ask about creating executables like Go, which means a single file you can run as is, so it was fair to mention AOT. For servers etc you usually don't want the AOT version, so then it doesn't matter which platform you develop on, but it's not always just like Go when you want to ship little applications.
The context of the thread is HTTP/3 servers; would it not make sense to take the comment in that context? Original article mentions that browsers (the client side) already supports HTTP/3 with the application server ecosystem being the missing piece.
Hi, I own the Native AOT compiler and self-contained compiler for .NET.
Self-contained will work fine because we precompile the runtime and libraries for all supported platforms.
Native AOT won't, because we rely on the system linker and native libraries. This is the same situation as for C++ and Rust. Unlike Go, which doesn't use anything from the system, we try to support interop with system libraries directly, and in particular rely on the system crypto libraries by default.
Unfortunately, the consequence of relying on system libraries is that you actually have to have a copy of the system libraries to link against them, and a linker that supports that. In practice, clang is actually a fine cross-linker for all these platforms, but acquiring the system libraries is an issue. None of the major OSes provide libraries in a way that would be easy to acquire and deliver to clang, and we don't want to get into the business of building and redistributing the libcs for all platforms (and then be responsible for bugs etc).
Note that if you use cgo and call any C code from Go you will end up in the same situation even for Go -- because then you need a copy of the target system libc and a suitable system linker.
If your code does not rely on native libraries, or you're fine with shipping multiple copies for different operating systems, a single build works everywhere with dotnet installed.
Or you can cross-compile and run without having dotnet on the target system; I do it from Linux to all three platforms all the time, and it's pretty seamless. The application can be packaged into a single binary (similar to Go), or as a bunch of files which you can then package up into a zip file.
JIT deployments do not care where they get built on. AOT deployments do because they use OS-provided linker to produce the final binary, much like C++ and Rust do (unless you use PublishAotCross nuget package which uses Zig toolchain to allow you to build Linux binaries from under Windows, I'm sure if someone's interested it could be extended further)
Also, if you want to have just a single binary, you want to do 'dotnet publish /p:PublishSingleFile=true /p:PublishTrimmed=true' instead. Self-contained build means it just ships everything needed to run in a folder without merging the assembly files or without trimming unreachable code and standard library components.
I'm a dabbler in Go, far from an expert. But I'm not familiar with a capability to use, say, native MacOS platform libraries from a go app that I'm compiling in Windows/Linux without using a VM of some sort to compile it. If that's possible I'd love to learn more.
Look at this garbage API that for no reason whatsoever mirrors winapi on posix for example.
Then after you painfully wrote your linux application despite all of that, you find out that .net is not included by any linux distribution, so have fun distributing your app!
I'm not sure why you are reading the docs sideways but just in case - C# has method overloading. Process API, while dated, simply offers multiple overloads suitable for different scenarios.
Launching a new process can be as easy as `Process.Start(path, args)`.
Although if you are doing this, I can recommend using CliWrap package instead which provides nicer UX. Anything is possible the second you stop looking for a strawman.
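To be concrete (ffmpeg and the file names below are only placeholders): ProcessStartInfo.ArgumentList takes the arguments as separate strings, which map onto an argv-style array on Unix, so no manual escaping is involved:

    using System.Diagnostics;

    var psi = new ProcessStartInfo("ffmpeg")
    {
        // Each argument is its own string; spaces and quotes need no escaping.
        ArgumentList = { "-i", "input file.mkv", "-c:v", "libx264", "out.mp4" },
        RedirectStandardError = true,
    };
    using var proc = Process.Start(psi)!;
    string log = await proc.StandardError.ReadToEndAsync();
    await proc.WaitForExitAsync();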
> you find out that .net is not included by any linux distribution
Would I? If it's a CLI or a GUI application, I'd distribute it as either a native binary or as the recipe the user will be easily able to build with the .NET SDK (which is a standard approach - you'd need Rustc and Cargo in the same way).
Lastly - no one wants to put up with maintainers with such an attitude, and you know well enough that "not included in any linux distribution" is both provably false and a non-factor - in all the distributions that matter it is `sudo {package manager} install dotnet9` away :)
Yeah, I do have a life. I advise you to get one as well.
> Launching a new process can be as easy as `Process.Start(path, args);`.
-_-' Same exact problem. Letting every single process do its own escaping of the arguments. A proper portable API would take an array of strings for the arguments, to map onto execve(). That's how it works on Windows, where every program does its own escaping and lots of programs do it differently from others, but it's not how it works on POSIX.
Thanks for confirming you don't even comprehend the issue here. Crying "strawman!" won't help.
> If it's a CLI or a GUI application, I'd distribute it as either a native binary or as the recipe the user will be easily able to build with the .NET SDK (which is a standard approach - you'd need Rustc and Cargo in the same way).
You can't do any GUI in .NET outside of windows and you know it fully well.
The difference with cargo or go or pip is that all of these are found in every linux distribution, while .net is in none. Please go ahead and misunderstand this sentence on purpose like you've been doing so far.
> in all the distributions that matter it is `sudo {package manager} install dotnet9` away :)
I guess ubuntu, debian, red hat do not matter? What's left? neonsunsetimaginarylinux?
Look, it is very difficult to hold a conversation with someone who responds with "you're just a fanboy, it's 5!" to "2 + 2 equals 4".
On the off chance you are making an intentionally inflammatory reply - you could also ask normally.
Let me try one last time (and now I vaguely remember having similar conversation here before).
On Ubuntu:
sudo apt install dotnet9
On RHEL (8 or 9):
sudo dnf install dotnet-sdk-9.0
On Alpine:
sudo apk add dotnet9-sdk
Then, you can get a simple cross-platform application template like this:
dotnet new install Avalonia.Templates && \
dotnet new avalonia.app -o AvaloniaExample && \
cd AvaloniaExample && \
dotnet publish -o build -p:PublishAot=true && \
./build/AvaloniaExample
The above can target: Linux, macOS, Windows, WASM and with some caveats Android and iOS (although I would not recommend using Avalonia for mobile devices over e.g. Flutter).
The build folder will contain the native application itself and the Skia dynamically linked dependency alongside it (and possibly some symbols you can delete). This is very similar to the way Qt applications are shipped.
This is just one GUI framework. There are others: Uno Platform, Gir.Core (GTK4 + GObject libraries), SDL2 via Silk.NET, Eto. I'm sure there are more.
What is bewildering is you could argue about "first-class" or "subpar", etc. But "does not work at all" is just ridiculous and indefensible. What kind of thought process leads to this conclusion?
In any case, I doubt you're going to read this since your replies seem to indicate interest in tilting at any windmill with ".NET" label on it instead, but at least my conscience is clear that I tried to explain this to the best of my ability.
Man.. All of these words did prove you were wrong.
> The difference with cargo or go or pip is that all of these are found in every linux distribution, while .net is in none.
This statement is false. It is also in Arch. If it is not in Debian and Ubuntu, that is their fault. (But it would not surprise me; any user will soon run into the fact that even for non-obscure software there are no up-to-date packages in the main repo. The default escape hatch is to add extra repo sources, but my advice is to skip those distros altogether anyway, unless you have very specific needs.)
But then again, if you think package management is a serious topic and then dismiss .net for the python mess, come on.
> the APIs are windows oriented?
The design of this particular API might be off. I guess this dates from the pivot to cross-platform. Usually, tuning for .net performance is better on Linux. I think nobody in MS believes in Windows as a platform really, it is a sinking ship.
It's not Windows only but it is Windows-first. Every few years I take a look but it still doesn't feel like a serious ecosystem for Linux development (in the same way that e.g. Perl might have a Windows release, but it doesn't feel really first-class). I can't think of a single .NET/C# program that people would typically run on Linux to even have the runtime installed, so no wonder people don't bother investigating the languages.
I don't think I've ever said it was good for all use cases and probably said to the contrary.
I write it very explicitly here[0]:
> Should you use C# for web front-ends?
>
> We're only considering backends here; I do not think that .NET-based front-ends (e.g. Blazor) are competitive in all use cases.
We are in a thread about...backend application servers.
Most of my side projects are TS/JS and run serverless Node.js functions on the BE[1]. I don't choose favorites; I choose the right one for the job.
C# has a strong high-level feature set and lower-level tools to keep performance up. The language itself is well designed and consistent. It's able to mix functional and OO features without being dogmatic, leading to better dev-x overall.
ASP is actually very good these days and feels cleaner than Spring Boot. There's less choice, but the available choices are good. It has arguably the best gRPC implementation. It's just a nice experience overall.
Addendum: having `Expression`[0] type representing the expression tree is a really killer feature.
I've been doing a writeup comparing how ORMs work in TypeScript vs ORMs in .NET and one thing that is magical is that having the `Expression` type enables a much more functional and fluent experience with ORMs.
LINQ in EntityFramework certainly isn't perfect, but frankly it's so far ahead of anything else available in other languages. It was a brilliant idea to add an expression type into the language AND to create a standard set of interfaces that enable collection interop AND to then create universal clients for local and remote collections that use all this machinery to deliver first-class DX.
Having been working in TS with Prisma for a bit, what stands out is how a Prisma query is effectively trying to express an expression tree in structural form
The difference being that the C# code is actually passing an expression tree and the code itself is not evaluated, allowing C# to read the code and convert it to SQL.
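A tiny example of that difference (the User type is just for illustration): the two lambdas look identical, but one compiles to IL and the other becomes a data structure the provider can translate.

    using System;
    using System.Linq.Expressions;

    // A compiled delegate: opaque IL, it can only be executed.
    Func<User, bool> compiled = u => u.Age >= 18;

    // An expression tree: data describing the same lambda, which an IQueryable
    // provider (e.g. Entity Framework) can walk and turn into SQL such as
    // "WHERE age >= 18" instead of filtering rows in memory.
    Expression<Func<User, bool>> tree = u => u.Age >= 18;
    Console.WriteLine(tree.Body);  // prints "(u.Age >= 18)"

    record User(string Name, int Age);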
Proper macros operate on the AST, which is to say it is exactly like the Expression stuff in C#, except it can represent the entirety of the language instead of some subset of it (C# doesn't even fully support the entirety of System.Linq.Expressions - e.g. if you want a LoopExpression, you have to spell the tree out explicitly yourself).
Or you can do your own parsing, meaning that you can handle pretty much any syntax that can be lexed as Rust tokens. Which means that e.g. the C# LINQ syntactic sugar (from ... select ... where etc) can also be implemented as a macro. Or you could handle raw SQL and do typechecking on it.
Actually, the reason why they went with Go rather than C# is because they wanted to port the existing code as much as possible rather than rewriting from scratch. And it turns out that TS is syntactically and semantically much closer to Go, so you can often convert the former to the latter mechanically.
> And it turns out that TS is syntactically and semantically much closer to Go, so you can often convert the former to the latter mechanically.
Are there examples of this?
I ask because I've been working on a Nest.js backend in TS and it's remarkably similar to C# .NET Web APIs (controllers, classes, DI). Really curious to see the translation from TS to Go.
TSC codebase is quite unlike regular TypeScript code you will see out there. So it is about specific way TSC is written in being the easiest to port to Go rather than the TS language as a whole.
There's a screenshot comparing the two side-by-side, IIRC it was showcased on the video of the talk (if someone has a timestamp - please post).
The TL;DR version is that they don't really use classes and inheritance (which is the part that would be hard to map to Go but easy to map to C#). It's all functions dealing with struct-like objects.
The other part of it is that C# is very firmly in the nominal typing camp, while TS is entirely structurally typed. Go can do both.
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET omission notwithstanding, one of the languages in the list is not like the others: Rust has a deliberately minimal standard library and doesn't include HTTP at all. I don't follow Rust HTTP/3 efforts closely, but there are at least two actively developed libraries: quiche and quinn.
The official Python stance on using the standard library HTTP client is also "are you sure?" and their stance on using the standard library HTTP server is "don't".
There is a very simple reason for it not being on the list: it supports HTTP/3, and the purpose of the post was to show the lack of support in common languages. .NET does not fit that list well.
And it is obviously only popular in the dark matter of Enterprise Software Development.
In usage yes, in analysis yes but not in general popularity.
And to Rust ... definitely. Rust is a niche product for system development. A very good one. A very popular one. But nothing you need for your day to day microservice.
Until the dotnet packages are shipped in every major Linux distro's default repositories (it's okay if it's a 5-year-old version), we can't call C# a major language.
But apparently there aren't enough people willing to actually do the work.
> side note: sad that .NET and C# are not considered "major"...
Even Microsoft does not use C# for their new projects. See the new TypeScript compiler that is being rewritten in Go. So I think it is safe to say C# is indeed a minor language.
> So I think it is safe to say C# is indeed a minor language
That's not really the case; StackOverflow survey[0] shows C# (27.1%) right behind Java (30.3%) and well ahead of Go (13.5%), Rust (12.6%), Kotlin (9.4%), Ruby (5.2%), and Scala (2.6%). If we exclude HTML/CSS, Bash/Shell, and SQL, C# would be #5 in actual languages used over the past year by devs in this survey.
1. JS/TS (note these two are collapsed)
2. Python
3. Java
4. C#
Two completely separate sources with the same output...
> See the new TypeScript compiler that is being rewritten in Go
If they had started from scratch, Anders mentioned the considerations would be different. But because they had an existing body of code that was not class based, it would be more of a re-write (C#) versus a refactor (Go). A lot of folks read the headline without actually reading Anders' comments and reasoning.
C# is good for many things -- in particular application backends, game engines (both Godot and Unity) -- and not optimal for other things -- like serverless functions. Each language has a place and Go and Python are certainly better for CLI tools, for example.
1. JS/TS (note these two are collapsed)
2. Python
3. Java
4. C#
So now you have two data points that align and are completely independent measuring two different things (one self reported, one based on employer job postings).
I'd say it's consistent and reliable?
It's not like people use StackOverflow because it's written in C#; people use StackOverflow because Google points us there.
C# supports top-level functions as well, that's not the issue. But, just to give a simple example, in TS you can do things like:
var foo: { bar: { baz: string } }
which have no equivalent in C#, because it doesn't have anonymous struct types, and its typing system is almost entirely nominal. Go, on the other hand, can translate this directly pretty much mechanically:
var foo struct { bar struct { baz string } }
And keep in mind that they aren't completely ditching the existing implementation, either, so for a while they're going to have to e.g. fix bugs in both side by side. It helps when the code can also be mapped almost 1:1.
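Spelled out, the closest C# I can write for that TS one-liner has to name every shape up front (a quick sketch):

    // Every shape needs a declared, nominal type before it can be used:
    Bar foo = new(new Baz("hi"));
    System.Console.WriteLine(foo.bar.baz);  // vs. TS: var foo: { bar: { baz: string } }

    record Bar(Baz bar);
    record Baz(string baz);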
Considering how fast the TypeScript compiler is, the TypeGo -> Go transpilation might as well be similar (up to a constant factor) in speed to Go compilation itself.
I'd give it a try. As a highly enthusiastic Go programmer, a powerful TypeScript-like type system is something I'd welcome in Go with open arms.
I've never filled out a Stack Overflow survey. I wouldn't say Stack Overflow is statistically representative of what's being used; it's statistically representative of people who use Stack Overflow. 10 years ago SO was my go-to. Now I barely notice it; it seems very outdated in many respects.
GitHub is probably a better source. SO is self selecting for people asking questions about something, not actually using it. A “harder” thing might have more SO questions, so it isn’t representative of actual usage.
It tells me which languages have people asking questions about them. That metric is useful only if it's normalized around how many people are using that language, but we don't have that metric.
The interviews with the TypeScript dev doing the rewrite will tell you why. Switching their compiler to Go was a quick transition since Go matched their current JS build. The dev also wanted to use Go, and to use functional programming. It would have required more work to switch from the functional style to the OOP style that C# has. The dev also didn't want to learn F#. Nothing about C#, just a personal decision with the least amount of work to get to a beta.
It's pretty true in my recent experience. I've recently started rewriting a C#-based desktop/window streaming tool because of how weak the support is across the board for C#. Microsoft abandoned WinRTC, Sipsorcery is one guy and is missing VP9, HEVC, and AV1 support. And for fancier stuff like using compute shaders for color space conversion, SharpDX is constantly referenced by ChatGPT and MS docs, yet it's archived and unmaintained as well. I ended up using the media streams VideoFrame class, but it and the two other classes required to interact with it have unpreventable thread and memory leaks that were built into the WinRT implementations themselves 4+ years ago. Good times.
This is an interesting point I hadn't thought of when I saw the announcement of the new TypeScript compiler. It might be overstating the case to say that C# is indeed a minor language, but it's thought-provoking that it wasn't Microsoft's automatic choice here, the way it is for some all-Microsoft in-house IT shops.
I am not at all surprised to find that there are people in whom it does not provoke thought, but I am mildly amused that one of them would admit to it.
For me, I think the biggest issue with large scale deployment of HTTP 3 is that it increases the surface area of potentially vulnerable code that needs to be kept patched and maintained. I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Additionally, I'd posit that for most client applications, a few extra ms of latency on a request isn't really a big deal. Sure, I can imagine applications that might care, but I can't think of any applications I have (as a developer or as a user) where I'd trade to have more complexity on the networking layer for potentially saving a few ms per request, or more likely just on the first request.
A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
For all the CPU optimisations we're doing, cutting out a 50ms round trip or two (the TCP handshake, then the TLS handshake) when establishing an HTTP connection feels like a great area to optimize performance.
> A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
That's a valid concern. That's the baseline already though, so everyone is already living with that without much in the way of a concern. It's a nice-to-have.
The problem OP presents is what the tradeoffs for that nice-to-have are. Are security holes an acceptable tradeoff?
I routinely have concerns about lag on mobile. It sucks to have to wait for 10 seconds for a basic app to load. And that adds up over the many many users any given app or website has.
Making the transport layer faster makes some architectures more performant. If you can simply swap out the transport layer that's a way easier optimization than rearchitecting an app that is correct but slow.
But it doesn’t allow you to multiplex that connection (HTTP pipelining is broken and usually disabled). So depending on the app setup you could be losing quite a bit waiting on an API call while you could be loading a CSS file.
Your static and dynamic assets should be served from different domains anyway, to reduce the overhead of authentication headers / improve cache coherency. https://sstatic.net/ quotes a good explanation, apparently mirrored at https://checkmyws.github.io/yslow-rules/. (The original Yahoo Best Practices for Speeding Up Your Web Site article has been taken down.)
Consider HTTP semantics. If there are cookies in the request, and those cookies change, it has to be re-requested every time. If there are no cookies, the request can remain semantically compatible, so the browser's internal caching proxy can just return the cached version.
There are other advantages: the article elaborates.
Per the official HTTP semantics[1,2], what you say is not true: the only header that’s (effectively) always part of Vary is Authorization, the rest are at the origin server’s discretion; so don’t set Vary: Cookie on responses corresponding to static assets and you’re golden. The article only says that some (all?) browsers will disregard this and count Cookie there as well.
Even still, the question is, what’s worse: a cacheable request that has to go through DNS and a slow-start stage because it’s to an unrelated domain, or a potentially(?) noncacheable one that can reuse the existing connection? On a fast Internet connection, the answer is nonobvious to me.
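To make that concrete, a minimal sketch (Go, purely illustrative, not from the article) of serving static assets with long-lived caching and no cookie variance:

    package main

    import "net/http"

    func main() {
        fs := http.FileServer(http.Dir("./static"))
        http.Handle("/static/", http.StripPrefix("/static/",
            http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                // Long-lived and shared-cache friendly; no Set-Cookie and no
                // Vary: Cookie, so browsers and intermediaries can reuse the
                // cached copy regardless of session state.
                w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
                fs.ServeHTTP(w, r)
            })))
        http.ListenAndServe(":8080", nil)
    }

Whether that beats reusing an already-warm connection to the main domain is exactly the trade-off I mean.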
Oh, would that anyone heeded the official HTTP semantics. (Web browsers barely let you make requests other than GET and POST! It's ridiculous.)
On a fast internet connection, the answer doesn't matter because the internet connection is fast. On a slow internet connection, cacheable requests are better.
Is HTTP the issue here though? Most of the time it seems to be more to do with the server taking ages to respond to queries. E.g. Facebook is equally poor on mobile as it is on my fibre-connected desktop (which I assume is using HTTP/3 as well) so I have my doubts that swapping HTTP versions will make a difference on mobile.
I did find it amusing how the author of the linked article says the megacorps are obsessed with improving performance (including the use of HTTP/3 to apparently help improve performance). In my experience the worst performing apps are those from the megacorps! I use Microsoft apps regularly at work and the performance is woeful even on a fibre connection using a HTTP/3 capable browser and/or their very own apps on their very own OS.
Most people still use Google, and so they're living the fast HTTP/3 life, switching off that to a slower protocol only when interacting with non-Google/Amazon/MSFT properties. If your product is a competitor but slower or inaccessible, users are going to bounce off your product and not even be able to tell you why.
MSFT provides some of the slowest experiences I've had, e.g. SharePoint, Teams, etc. I'm laughing at the assumption that non-MSFT/etc properties are seen as slower when it is in fact MSFT that are the slowpokes. I haven't used Google much lately but they can be pretty poor too.
AWS is pretty good though. However, it is notable that I get good speeds and latency using S3 over HTTP/1.1 for backing up several hundred gigs of data, so I'm not sure HTTP/3 makes any difference if it is already good enough without it.
Nonsense, most of the web is not Google, Amazon, or MSFT. Many web apps already use CDNs, which will enable HTTP/3, and the browser will support it. Other parts like APIs will not benefit much if they hit the database/auth/etc. MSFT stuff is dead slow anyway, Amazon is out of date, Google is just ads (who uses their search anymore?)
The connection to your local tower can have a negligible latency. The connection all the way to the datacenter may take longer. Then, there is congestion sometimes, e.g. around large gatherings of people; it manifests as latency, too.
At a previous job I had to specifically accommodate the backend API design to the slow, high-latency 3G links which much of our East Asian audience had at the time. South Korea is one thing; Malaysia, quite another.
A lot of traffic still goes over 4G or even 3G in many countries. Most countries have only deployed 5G NSA (non-standalone), which means mobiles use both 4G and 5G at the same time. Only a few networks in a few countries have deployed 5G SA (standalone) where mobiles use 5G only -- and even those few networks only deploy 5G SA in certain places e.g. selected CBDs. I live in the largest city in my country and I only get 4G still in my suburb, and much of the rest of my city is 5G NSA, which means in most places phones still use 4G for uplink and a mix of 4G and 5G for the downlink. Hence there is still a long way to go until most traffic (in both directions -- i.e. uplink AND downlink) is over 5G.
> Isn't 5G supposed to solve the mobile latency issue?
Kinda.
So 5G is faster, but it's still wireless and shared spectrum. This means that the more people use it, or the further away they are, the more the speed and bandwidth per client get adjusted down.
(I'm not sure of the coding scheme for 5G, so take this with caution.) For mobiles that are further away, or that have a higher noise floor, the symbol rate (i.e. the number of radio-wave "bits" being sent) is reduced so that there is a high chance they will be understood at the other end (Shannon's law, or something). Like in wifi, as the signal gets weaker, the headline connection speed drops from 100Mb+ to 11.
In wifi, that tends to degrade the whole AP's performance, in 5G I'm not sure.
Either way, a bad connection will give you dropped packets.
And yet, compared to the time you're waiting for that masthead JPEG to load, plus an even bigger "react app bundle", it's also completely irrelevant.
HTTP/3 makes a meaningful difference for machines that need to work with HTTP endpoints, which is what Google needed it for: it will save them (and any other web based system similar to theirs) tons of time and bandwidth, which at their scale directly translates to dollars saved. But it makes no overall difference to individual humans who are loading a web page or web app.
There's a good argument to be made about wasting round trips and HTTP/3 adoption fixing that, but it's not grounded in the human experience, because the human experience isn't going to notice it and go "...did something change? everything feels so much faster now".
Deploying QUIC led to substantial p95 and p99 latency improvements when I did it (admittedly a long time ago) in some widely used mobile apps. At first we had to correct our analysis for connection success rate because so many previously failing connections now succeeded slowly.
It's a material benefit over networks with packet loss and/or high latency. An individual human trying to accomplish something in an elevator, parking garage, or crowded venue will care about a connection being faster with a greater likelihood of success.
Almost every optimization is irrelevant if we apply the same reasoning to everything. Add all savings together and it does make a difference to real people using the web in the real world.
Google operates at such a scale that tiny performance increases allow them to support a team of engineers and save money on the bottom line.
For example, Google hires 10 engineers, they deploy HTTP/3, it saves 0.5% CPU usage, Google saves a million dollars and covers the salaries of those 10 engineers.
For the vast majority of society, the savings don't matter. Perhaps even deploying it is a net negative with an ROI of decades. Or the incentives can be misaligned, leading to exploitation of personal information. For example, see Chrome Manifest V3.
It absolutely matters. Machines are orders of magnitude faster than they were 20 years ago; most software isn't doing much more than software did 20 years ago. And no, collaborative editing isn't the be-all and end-all, nor does it explain where all that performance is lost.
Many optimizations have bad ROI because users' lives are an externality for the industry. It's Good and Professional to save some people-weeks in development, at the cost of burning people-centuries of your users' life in aggregate. And like with pollution, you usually can't pin the problem on anyone, as it's just a sum of great many parties each doing tiny damage.
>most software isn't doing much more than software did 20 years ago
This isn't exactly true, but some of the potential reasons are pretty bad. Software turning into an ad platform or otherwise spying on users, which has made numerous corporations wealthier than the gods at the expense of the user, is probably one of the most common ones.
What a bizarre thing to say: not every optimization is imperceptible by humans (jpg, gzip, brotli, JS and CSS payload bundling and minification, etc. etc.) and not all sums of optimizations add up to "something significant in terms of human perception".
HTTP/3 is a good optimization, and you can't sell it based on "it improves things for humans" because it doesn't. It improves things for machines, and given that essentially all internet traffic these days is handled by large scale machine systems, that's a perfectly sufficient reason for adoption.
For a long time all my internet connections were bad (slow, unstable or both). Compressing HTML/CSS/JS, avoiding JS unless absolutely needed, being careful with image sizes and formats, etc, helped a lot... so I guess this makes me biased.
Today I have fibre at home, but mobile networks are still patchy. I'm talking sub-1Mbps and high ping/jitter sometimes. So you can see why I think "irrelevant" optimisations like removing 300ms from a page reload, no compression vs brotli/zstd, jpg vs avif, etc., are important for me, a human.
It's important to keep in mind that many users out there don't have fast, low-latency connections, at least not all the time. What takes 300ms to complete on our fast machine and fast WiFi at the office might take 1s on someone else's device and connection. It's harder to understand this if we only use fast connections/hardware though.
That was my point: 300ms sounds like a lot until, like me too, you're on a slow connection and those 300ms on the entire 10 second page load are utterly irrelevant. You were already expecting a several second load time, that 300ms is not something that even registers: the HTTP negotiation on a modern page is _not_ what you're noticing on a slow connection. You're noticing literally everything else taking forever instead.
3% speedup is still pretty good. (especially because with some of the awfulness, it's possible to get bottle-necked by multiple of these in which case it could be 6 or 9%)
omfg: YES! YES IT IS! But you won't notice it, and so the argument that it improves the experience is nonsense, because you as a human WON'T NOTICE THOSE 3 OR EVEN 6%
It's good because it speeds up the overall response by a measurable degree, not because it makes the experience better. That only happens in conjunction with tons of other improvements, the big ones of which are completely unrelated to the protocol itself and are instead related to how the page is programmed.
How is everyone this bad at understanding that if someone claims A cannot be justified because of B, that does not mean that A cannot be justified? It's near-trivially justified in this case. This crowd really should know better.
> But it makes no overall difference to individual humans who are loading a web page or web app.
Navigating from my phone on 4G versus my fiber connection is drastically different.
It's especially noticeable on vacation or in places with poor connections: TLS handshakes can take many, many, many seconds. After the handshake, with an established connection, it's very different.
> I'd far rather have the OS provide a verified safe socket layer
There is work going on right now[1] to implement the QUIC protocol in the linux kernel, which gets used in userspace via standard socket() APIs like you would with TCP. Of course, who knows if it’ll ultimately get merged in.
Yea, but does the kernel then also do certificate validation for you? Will you pin certs via setsockopt? I think QUIC and TLS are wide enough attack surfaces to warrant isolation from the kernel.
> but does the kernel then also do certificate validation for you
No, the asymmetric cryptography is all done in userspace. Then, post-handshake, symmetric cryptography (e.g., AES) is done in-kernel. This is the same way it works with TCP if you’re using kTLS.
The problem is that the situation where everyone rolls their own certificate stack is lunacy in this day and age. We need crypto everywhere, and it should be a lot easier to configure how you want: the kernel is a great place to surface the common interface for say "what certificates am I trusting today?"
The 10+ different ways you specify a custom CA is a problem I can't wait to see the back of.
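To illustrate, here is just one of those ways, sketched in Go (illustrative only; the CA file path is made up):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Start from the OS trust store, then graft on the extra CA.
        pool, err := x509.SystemCertPool()
        if err != nil {
            pool = x509.NewCertPool()
        }
        pem, err := os.ReadFile("/etc/myapp/corp-ca.pem") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        if !pool.AppendCertsFromPEM(pem) {
            log.Fatal("no certificates found in PEM file")
        }
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        _ = client // client.Get(...) now trusts the extra CA for this client only
    }

Every language, runtime, and proxy has its own variant of this dance, which is exactly the proliferation I'd like to see the back of.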
Putting cert parsing in (monolithic) kernels seems like a bad idea; cert parsing has a long history of security vulnerabilities, and you don't want that kind of mistake to crash your kernel, let alone lead to privilege escalation or a takeover of the kernel itself.
Regardless, your proposal suffers from the usual stuff about proliferating standards (https://xkcd.com/927/): a kernel interface will never get fully adopted by everyone, and then your "10+ ways" will become "11+ ways".
Meanwhile, all the major OSes have their own trust store, and yet some apps choose to do things in a different way. Putting this into the kernel isn't going to change that.
Experiencing the internet at 2000ms latency every month or so thanks to dead spots along train tracks, the latency improvements quickly become noticeable.
HTTP/3 is terrible for fast connections (with download speeds on gigabit fiber notably capped) and great for bad ones (where latency + three way handshakes make the web unusable).
Perhaps there should be some kind of addon/setting for the browser to detect the quality of the network (doesn't it already for some JS API?) and dynamically enable/disable HTTP/3 for the best performance. I can live with it off 99% of the time, but those rare times I'm dropped to 2G speeds, it's a night and day difference.
> I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Then you're trying to rely on the OS for this when it should actually be a dynamically linked third party library under some open source license.
Trying to get the OS to do it runs into one of two problems. Either each OS provides its own interface, and then every application has to be rewritten for each OS, which developers don't want to deal with, so they go back to using a portable library; or the OS vendors all have to get together and agree on a standard interface, but then at least Microsoft refuses to participate and that doesn't happen either.
The real problem here is that mobile platforms fail to offer a package manager with the level of dependency management that has existed on Linux for decades. The way this should work is that you open Google Play and install whatever app that requires a QUIC library, it lists the QUIC library as a dependency, so the third party open source library gets installed and dynamically linked in the background, and the Play Store then updates the library (and therefore any apps that use it) automatically.
But what happens instead is that all the apps statically link the library and then end up using old insecure versions, because the store never bothered to implement proper library dependency management.
That's not what it is, though. The graph embedded in the article shows HTTP/3 delivering content 1.5x-2x faster than HTTP/2, with differences in the hundreds of ms.
Sure, that's not latency, but consider that HTTP/3 can do fewer round-trips. RTs are often what kill you.
Whether or not this is a good trade off for the negatives you mention is still arguable, but you seem to be unfairly minimizing HTTP/3's benefits.
It's also a poor congestion control practice to begin with. The main categories of UDP traffic are DNS, VoIP and VPNs. DNS is extremely latency sensitive -- the entirety of what happens next is waiting for the response -- so dropping DNS packets is a great way to make everything suck more than necessary. VoIP often uses some error correction and can tolerate some level of packet loss, but it's still a realtime protocol and purposely degrading it is likewise foolish.
And VPNs are carrying arbitrary traffic. You don't even know what it is. Assigning this anything less than "normal" priority is ridiculous.
In general middleboxes should stop trying to be smart. They will fail, will make things worse, and should embrace being as dumb and simple as possible. Don't try to identify traffic, just forward every packet you can and drop them at random when the pipe is full. The endpoints will figure it out.
The slow adoption of QUIC is the result of OpenSSL's refusal to expose the primitives needed by QUIC implementations that already existed in the wild. Instead, they decided to build their own NIH QUIC stack, which after all these years is still not complete.
Fortunately, this recently changed and OpenSSL 3.5 will finally provide an API for third party QUIC stacks.[1] It works differently than all the other existing implementations, as it's push-based instead of pull-based. It remains to be seen what it means for the ecosystem.
I feel another way to look at it is that there is a growing divide between the "frontend/backend developer" view of an application and the "ops/networking" view; or, put differently, HTTP/2 and HTTP/3 are not really "application layer" protocols anymore, they're more on the level of TCP and TLS and are perceived as such.
As far as developers are concerned, we still live, have always lived and will always be living in a "plaintext HTTP 1.1 only" world, because those are the abstractions that browser APIs and application servers still maintain. All the crazy stuff in between - encryption, CDNs, changing network protocols - are just as abstracted away as the different hops of an IP packet and might just as well not exist from the application perspective.
I think it's more that HTTP/3 only really gives marginal gains for most people.
Just as Python 3 had almost nothing for the programmer over Python 2, apart from print needing brackets. Sure it was technically better, and allowed for future gains, but in practice, for the end user, there was no real reason to adopt it.
For devs outside of FAANG, there is no real reason to learn how to set up and implement HTTP/3.
I’d go even further and say that HTTP/3 gives almost no gains for the average person using a high speed wired or wireless internet connection at a fixed location (or changing locations infrequently).
However, for high latency mobile connections while roaming and continuously using the internet, it’s quite an optimisation.
I wouldn’t expect even the vast majority of devs in FAANG to care. It should purely be an infrastructural change that has no impact on application semantics.
It's pretty glaring that nginx still doesn't have production-ready HTTP3 support despite being a semi-commercial product backed by a multi billion dollar corporation. F5 is asleep at the wheel.
there are quite a lot of features, but it's hard to say what constitutes a new module. (well, there's "Feature: the ngx_stream_set_module." so maybe yes?)
One would probably have to go through git logs [1] so I guess I should do that after getting some food in the belly to answer my own question. It's a big log. Interesting side note, appears all commits from Maxim stopped in January 2024. Must be all F5 now.
16 commits from F5 from 2020 to 2025, nothing before that. Looks like they are bugfixes and enhancements. Perhaps someone else created the ngx_stream_set_module module prior to the acquisition.
There's some cool stuff and capabilities here. It's surprising to me that uptake has been so slow.
Node.js just posted an update on the state of QUIC, which underlies HTTP/3 and has had some work over the years. They're struggling with OpenSSL being slow to get adequate API support going. There are efforts that do have QUIC working, but the prospect of switching is somewhat onerous.
Really unfortunate; so much of this work has been done for Node & there's just no straightforward path forwards.
My observation is that anything based on public cloud providers using their load balancers is basically using HTTP/3 out of the box. This benefits people using browsers that support it (essentially all desktop and mobile browsers). And since it falls back to plain HTTP/1.1, there are no downsides for others.
Sites that use their own apache/nginx/whatever servers are not benefiting from this and need to do work. And this is of course not helped by the fact that http3 support in many servers is indeed still lacking. Which at this point should be a strong hint to maybe start considering something more modern/up to date.
HTTP clients used for API calls between servers, which may use pipelining and connection reuse, benefit less from HTTP/3. So fixing HTTP clients to support HTTP/3 is less urgent, though there probably are some good reasons to support it anyway. Likewise there is little benefit in ensuring communication between microservices in e.g. Kubernetes happens over HTTP/3.
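By connection reuse I just mean the pooling most HTTP clients already do by default; a quick Go sketch (the numbers are arbitrary):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        // A long-lived pooled transport amortizes TCP/TLS handshakes away,
        // which is most of what HTTP/3 would have saved on this path.
        client := &http.Client{
            Transport: &http.Transport{
                MaxIdleConns:        100,
                MaxIdleConnsPerHost: 100,              // keep warm connections per backend
                IdleConnTimeout:     90 * time.Second, // reuse instead of re-handshaking
                ForceAttemptHTTP2:   true,             // h2 multiplexing where the peer supports it
            },
            Timeout: 10 * time.Second,
        }
        _ = client // client.Get("http://backend.internal/...") reuses connections
    }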
I’ve been using niquests with Python. It supports HTTP/3 and a bunch of other goodies. The Python ecosystem has been kind of stuck on the requests package due to inertia, but that library is basically dead now. I’d encourage Python developers to give niquests a try. You can use it as a drop-in replacement for requests then switch to the better async API when you need to.
Traditionally these types of things are developed outside the stdlib for Python. I’m not sure why they draw the line where they do between urllib vs niquests, but it does sometimes feel like the batteries-included nature of Python is a little neglected in some areas. A good HTTP library seems like it belongs in the stdlib.
requests dead? The reason given for not including it in the stdlib was so it could evolve more rapidly. Back then the protocol layer was handled/improved by urllib3.
> Requests is in a perpetual feature freeze, only the BDFL can add or approve of new features. The maintainers believe that Requests is a feature-complete piece of software at this time.
> One of the most important skills to have while maintaining a largely-used open source project is learning the ability to say “no” to suggested changes, while keeping an open ear and mind.
> If you believe there is a feature missing, feel free to raise a feature request, but please do be aware that the overwhelming likelihood is that your feature request will not be accepted.
But it's not feature complete if it can't make modern networking requests when the whole point of a library like requests is to make networking requests.
> whole point of a library like requests is to make networking requests.
The whole point of requests is to make HTTP requests easy. HTTPLib was/is an arse to use.
As HTTP/3 is really not that widely adopted, and where it is there are fallbacks to 1.1, what's the point?
Plus the people who are keen on HTTP/3 also seem to be keen on async, https://github.com/aiortc/aioquic/tree/main/examples which, even though it isn't, seems to be overly complex and difficult to use. Contrast that to r.get(url)....
Nginx (F5) and Go (Google) are hardly scrappy open source projects with limited resources. The former is semi-commercial, you can pay for Nginx and still not have stable HTTP3 support. Google was one of the main drivers of the HTTP3 spec and has supported it both in Chromium and on their own cloud for years, but for whatever reason they haven't put the same effort into Go's stdlib.
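To be fair, HTTP/3 is usable in Go today via the third-party quic-go module rather than the stdlib. A minimal client sketch, assuming quic-go's http3 package and its RoundTripper type (the API has moved around between versions, so treat this as illustrative):

    package main

    import (
        "fmt"
        "io"
        "net/http"

        "github.com/quic-go/quic-go/http3"
    )

    func main() {
        rt := &http3.RoundTripper{} // satisfies http.RoundTripper
        defer rt.Close()

        client := &http.Client{Transport: rt}
        resp, err := client.Get("https://cloudflare-quic.com/") // any HTTP/3-capable origin
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Proto, len(body), "bytes") // expect HTTP/3.0
    }

But that's exactly the point: it's a separate module with its own release cadence, not something you get out of the box.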
The backwards compatibility guarantees are for the language and not the standard library. They won't make breaking changes willy-nilly, but it can and has happened for the std.
The comparison with IPv6 is interesting. IPv6 isn't mainly driven by open source or community. It is driven by the needs of large corporations, including both ISPs and tech companies. ISPs like T-mobile wanting to run an IPv6-only backbone network, and tech companies like Apple forcing every app in the App Store to work in IPv6-only mode (DNS64+NAT64). New operating system levels features for IPv6 are often proposed by big tech companies and then implemented eagerly by them; see for example DHCP option 108.
In a sense the need for IPv6 is driven by corporates just like that for HTTP/3.
IPv6 always seemed to me to be driven by a certain class of purist networking geeks. Then some corporations started getting on board like you said, but many couldn't care less.
The largest use of IPv6 is in mobile (cell) networks. When they effectively killed IP block mobility (provider-independent netblocks), they (the standards bodies) effectively killed its adoption everywhere else.
I work in the networking space and, outside of dealing with certain European subsidiaries, we don't use IPv6 anywhere. It's a pain to use and the IPv6 stacks on equipment (routers, firewalls, etc.) are nowhere near the quality, affordability, and reliability of their IPv4 stacks.
I've gone through dozens of applications for a PI block and all have been turned down. I've heard the same from most of the networking people I know. One even had their company become a LIR just so they could lock down a block.
Outside of Europe I don't know anyone not FAANG sized that managed to get it done in the last few years.
In my dealings with small to medium sized biz, I usually go the SDWAN route to aggregate and balance in IPv4 space instead as it is MUCH easier to get it done from an ISP.
You lost me at “multiple per-interface NAT rules […] with some load balancing trick” being “stupid easy”…
But BGP+PI is indeed how multihoming is supposed to work, both in IPv4 and IPv6. (Well, in IPv6, you can put two prefixes on-link and do poor man's multihoming that way, which you cannot reliably in IPv4.) Of course, if you define “it cannot be BGP+PI”, then indeed it's probably harder, but if you exclude the intended solution, obviously there's no intended solution.
Depends on the country. New Zealand has zero mobile IPv6 users -- not one carrier has deployed IPv6 on mobile (and none plan to do so anytime soon). This includes two carriers that have deployed IPv6 on their xDSL/Fibre networks so it's not like they don't know how. It's interesting that some countries (e.g. NZ) see IPv6 deployed mostly on fixed line (i.e. xDSL/Fibre) while others (e.g. US) are mostly mobile. Perhaps it's not the fixed/mobile layer of the network stack that influences the decision to go IPv6 or not.
The exhaustion of IPv4 address pool was easy to predict even in 2000, just by extrapolation of the growth curve.
Then came IP telephony backbone and the mobile internet, topped up with the cloud, and the need became acute. For the large corporations involved, at least.
Oh, many purist networking geeks joined large corporations, so these corporations began to push IPv6 in a direction set by the geeks. They understood that as independent geeks they have essentially no say in the evolution of IPv6. My favorite example here is Android refusing to support stateful DHCPv6; it's clear that it's being pushed by purist networking geeks inside Google.
With IPv6 RAs, there's no need for DHCPv6. I don't use it at all and I use IPv6 just fine on mobile. One place where DHCPv6 may make some sense is the router<->WAN/ISP connection. However, once your router has an IPv6 prefix, it can easily advertise it on your LAN/WLAN for devices to pick up via IPv6 RAs for their IPv6 autoconfiguration, which Just Works. Given that Android devices will attach to a WLAN router (and not directly to your WAN/ISP), it makes sense for there to be no DHCPv6, as it's not necessary for end-user devices that aren't expected to be attached directly to the WAN/ISP.
My home network doesn't use stateful DHCPv6 either. I agree there's no need.
The bigger problem is carrying this "you don't need it" attitude into a product used by billions. Thousands of network operators who do not believe in "you don't need it" are now forced to make their networks work for Android. If those purists had the guts, they would go to the IETF and formally deprecate stateful DHCPv6.
> My favorite example here is Android refusing to support stateful DHCPv6; it's clear that it's being pushed by purist networking geeks inside Google.
If you read the huge bug about it, Google's counterargument is that stateful DHCPv6 significantly complicates tethering, to the point of needing an IPv6 NAT. That's a very practical position to take, hardly "purist network objectionists".
So what? The Linux kernel already supports NAT66. Android uses the Linux kernel. I use Tailscale and when I use an IPv6-only node as the exit node, NAT66 is being used. Use `ip6tables -t nat -vnL` to check. You can also grep for `v6nat = true` in Tailscale logs.
It's those purists at Google that decide that NAT66 is evil and should not be used and therefore they have chosen not to support stateful DHCPv6.
Yes, right now every large provider does that, which is great. That was not the case when the first p2p networks were growing big (Napster, Gnutella, that kind of thing).
Ummm… Google invented QUIC and pushed it into Chrome and shuttled it through IETF to be ratified as a standard. Some of the large OSS projects are maintained by large companies (eg quiche is by Cloudflare) and Microsoft has MsQuic which you can link against directly or just use the kernel mode version built into the OS directly since Windows 11. The need for QUIC is actually even more driven by corporates since IPv6 was a very small comparative pain point compared to better reaching customers with large latency network connections.
99% of the benefit of HTTP/3 is on distributed web serving where clients are connecting to multiple remote ends on a web page (which lets be honest, is mostly used for serving ads faster).
Why would the open source community prioritize this?
Yes, but they've likely already optimized any code that's part of their ad networks to support HTTP/3 anyway. They're not necessarily going to lose sleep if other components don't support it.
> You'll start to see lack of HTTP/3 support used as a signal to trigger captchas & CDN blocks, like as TLS fingerprinting is already today. HTTP/3 support could very quickly & easily become a way to detect many non-browser clients, cutting long-tail clients off from the modern web entirely.
That explains it. I've seen this when using 3 year old browsers on retail web sites recently. A few cloud providers think I’m a bot.
I've been doing that on my hobby sites ever since all the popular browsers supported HTTP/2.0 [1]
if ($server_protocol != HTTP/2.0) { return 444; }
It knocks out a lot of bots. I am thankful that most bots are poorly maintained and most botters are just skiddies that could not maintain the code if they wanted to.
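If you're behind a Go server instead of nginx, a rough equivalent of that rule (just a sketch, adapt to taste) looks like:

    package main

    import "net/http"

    // requireHTTP2 refuses anything that didn't negotiate HTTP/2 or newer.
    func requireHTTP2(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ProtoMajor < 2 {
                // nginx's 444 just closes the connection; the closest simple
                // stdlib move is an empty 403 (or hijack and close).
                w.WriteHeader(http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.Handle("/", http.FileServer(http.Dir("./public")))
        // HTTP/2 is negotiated automatically over TLS by the stdlib server.
        http.ListenAndServeTLS(":443", "cert.pem", "key.pem", requireHTTP2(mux))
    }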
What exactly are sites supposed to do to prevent being the targets of DDoS, spam, fraud, aggressive bots, and other abuse? And it's not "locked down", it's usually just a CAPTCHA as long as you're not coming from an abusive IP range like might happen with a VPN.
Also there are a thousand other signals besides HTTP/3. It's not going to make a difference.
The normalization of CAPTCHAs for simply reading what ought to be public information strikes me as very alarming, as does characterizing essential privacy and anti-censorship measures like VPNs as "abusive".
Something like 1% of HTTP hits pose some risk of spam or fraud, those where somebody is trying to post a message or a transaction or something. The other 99% are just requesting a static HTML document or JPEG (or its moral equivalent with unwarranted and untrustworthy dynamic content injected). Static file serving is very difficult to DDoS, even without a caching CDN like Fastly. There is still a potentially large bandwidth cost, but generally the DoS actor has to pay more than the victim website, making it relatively unappealing.
Should web sites really be weighing "a thousand signals" to decide which version of the truth to present a given user with? That sounds dystopian to me.
Of course it's alarming. But what's the alternative?
> Something like 1% of HTTP hits pose some risk of spam or fraud
It doesn't matter if it's a tiny percentage of requests that are spam/fraud. The only thing that matters is the absolute amount, and that's massive.
> Static file serving is very difficult to DDoS
No it's not, and most pages aren't particularly static. They're hitting all sorts of databases and caches and stores.
> generally the DoS actor has to pay more than the victim website
No, generally the DDoS actor pays very little, because they're using bots infecting other people's devices. The bandwidth is free because it's stolen.
> be weighing "a thousand signals" to decide which version of the truth
Nobody said anything about "truth". You're either blocked or you're not. Page content isn't changing.
Yes, spam and fraud and abuse prevention does require weighing a thousand signals. It always has. It sucks, but the world is an adversarial place, and the internet wasn't designed with that in mind.
This is also my experience. If you are dealing with clients that can pay to have their content cached, and not hit the originating server, that's great. To me, that sounds like what Cloudflare offers. Unless you have the ability to push an entire website into a CDN, there are things like query (GET) parameters on a URL that will bypass the cache and hit the server. This means that only using file caching is not viable unless you are running a completely static website. Most websites allow the client to make changes to the content through some web based GUI, and you have things like pagination where the maximum page number may not be known ahead of time for your CDN.
I have dealt with DDoS attacks with and without a service like Cloudflare, and without something like Cloudflare, the costs are extremely high. These are not clients pulling in millions of dollars of sales, even per year. There are IP blocks, but sometimes those bots are running on networks shared with the client's customers. Most clients I deal with couldn't fathom dealing with a static website, even if I could give them a hidden backend that pushed all files to a CDN for caching. I have enough trouble explaining why a page isn't showing recent changes, even with a big button stating to press it if your page changes do not display (integration with the Cloudflare caching API).
Bots, human fraud, it's all a balancing act while trying to give your clients the easiest solution possible for them to run their business online. Without a massive hosting budget, it's not feasible to provide an easily maintainable client solution that is also easy on their customers and prevents abuse from bots and those who would commit fraud or cause damage.
Can you explain more specifically the threats faced by your website? Please help us understand what attacks you are currently facing. Are you currently getting DDoSed? Did the DDoSer stop DDoSing when you blocked HTTP 1.1? Did your credit card chargebacks drop by 50%?
---
According to other commenters the main use case for HTTP/3 is ads serving. Should I assume your project is an ad server? I could disable HTTP/3 in my browser to block ads. You see that this is a bit silly, right?
I'm sorry, are you really questioning whether these threats exist? Or whether not using HTTP/3 is one potential signal of being a bot (out of many), since tools like cURL don't support HTTP/3?
The two other commenters are wrong, ads are not the main use case at all. And disabling HTTP/3 won't block ads, not even the tiniest bit. It appears you are getting a lot of misinformation.
I would like you to explain specifically and concretely why you need this, without using any broad abstract ideas like "bots" or "fraud".
For example: "We were receiving 1000 spam and 100 legitimate comments per day even though we used hCaptcha on the comment form. When we disabled HTTP 1.1 on the comment endpoint, the spam stopped entirely, and we still received 95 legitimate comments per day." (in this scenario, I don't think it's necessary to further elaborate what counts as a "spam comment" if it's the usual type. If it's not the usual type then you will have to elaborate it.)
Sorry, but you seem to be continuing to misunderstand how this works. Disabling a version of HTTP on its own is not going to stop spam. You seem to be confused about how something can be one factor out of many in a statistical model.
If you don't want to talk about basic concepts like bots or fraud, and don't understand how and why detection mechanisms for them exist, I suggest you do your own research. There are lots of explanations out there. An HN comment isn't a place where I can help you with that, sorry.
It sounds like you are advocating a policy to solve hypothetical problems or problems you have vaguely heard that somebody had once, not real-life problems where you are familiar with the tradeoffs.
I would say absolutely not, for something that is actually run as static. How many websites could run as static, versus the clients that pay for those websites that believe they need to make a change at any time without going back to their web developers? In my experience, most clients rarely change their websites, and often request changes from my employer after having paid for a solution that allows them to have control should they want or need it. Due to this, I still need to run a dynamic website for truly static content.
A large part of this issue is web development being totally foreign to most users, and clients being burned by past developers that disappeared when needed the most, or hit with a large charge for an update that seems to be "simple". This pushes clients to want a solution where they can make the changes, and that leads to a dynamic website for static content. If you were to stop taking this type of business to further privacy, there are probably tens or hundreds of other companies in your own city that will gladly take on the work.
This absolutely is dystopian, but also current reality. Unless you are able to run completely privately online, the Internet as a whole has become dystopian because shutting down bad actors has been thrown into the hands of individuals instead of part of the backbone to how the Internet functions. Just my two cents, as a developer that needs to use these resources, but also runs a VPN on my phone and desktop nearly all the time for privacy (as well as some speed benefits, depending on the VPN provider).
When it comes specifically to Cloudflare, it does not have to be this way. A site operator can choose to set their own rules for triggering CAPTCHAs, it's just that most don't actually bother to learn about the product they're using.
I use Cloudflare through my employer because I deal with clients that aren't willing to spend a few hundred dollars a month on hosting. In order to keep up with sales of new websites for these clients (where the real money lies), I need to keep hosting costs down, while also providing high-availability and security. Bot traffic is a real problem, and while I would love to not require using Cloudflare in favor of other technologies to keep a site running quickly and securely, I just can't find another solution near a similar price point. I've already tweaked the CMS I use to actually run with less than the minimum recommended requirements, so would have to take a more hostile action towards my clients to keep them at the same cost (such as using a less powerful CMS, or setting expiration headers far in the future - which doesn't help with bots).
If anyone has suggestions, I'd be open to them, but working for a small business I don't have the luxury to not run with Cloudflare (or a similar service if one exists). I have worked with this CMS since 2013, and have gone into the internals of the code to try and find every way to reduce memory and CPU usage so I don't need to depend on other services, but I don't see too many returns anymore.
I am all for internet privacy, and don't like where things are going, but also do work for lots of small businesses including charities and "mom and pop" shops that can't afford extra server resources. In no way do I use Cloudflare to erode user privacy or trust, but can understand others looking at it that way. If I had the option to pick my clients and their budgets, it wouldn't be an issue.
It's not clear to me that HTTP/3 is relevant to anyone who isn't already using it. It's most useful for large-scale hosting providers and video. And these people have already adopted it, and don't necessarily use out-of-the-box web servers for their infrastructure.
A dropped packet requires a retransmit which is effectively a round trip. Introducing shared fate by bundling many requests into a single TCP connection results in more requests being delayed by round trips per dropped packet.
We switched to Caddy in multiple projects and are really happy with it... Certificate generation feels like magic and HTTP/3 works great as well. Config files are much smaller and easier to read too!
Makes sense. Nowadays, for a new project, I would like to use Caddy where I would otherwise use Apache or nginx, but I would be even more inclined to use a cloud provider's load balancer.
If you are looking for a reverse proxy with good HTTP/3 support, I recommend Envoy. Configuring it is a bit of a chore, but it feels like it was engineered from first principles to be the best possible HTTP/2 and 3 reverse proxy. The architecture for HTTP is entirely based around the h2 protocol, unlike NGINX which splits requests into various phases and struggles to support things like bidirectional streaming and upstream h2. (One down side is that it really is a reverse proxy; no static file serving or anything like that. But if you know what you need it can be great.)
I do hope to see more QUIC and HTTP/3 support, but to be honest, even h2 support in many cases sucks pretty hard. The Go HTTP interface is still pretty much HTTP/1 oriented and really needs an overhaul, and even the h2 implementation feels like it still lacks some battle testing. I think that is a damn shame.
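A small example of what I mean by HTTP/1-oriented: even serving cleartext HTTP/2 (h2c) means reaching outside net/http into golang.org/x/net. A sketch under my own assumptions (h2 over TLS, by contrast, is enabled automatically):

    package main

    import (
        "fmt"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "protocol: %s\n", r.Proto)
        })
        // Wrap the plain handler so prior-knowledge h2c requests are served as HTTP/2.
        h2s := &http2.Server{}
        http.ListenAndServe(":8080", h2c.NewHandler(handler, h2s))
    }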
Envoy is so much more. We used it for gRPC, service discovery, health checking, dynamic failover, and so much of our at-scale service topology at my last place.
I've run into production bugs with Go HTTP/2 at multiple different jobs. They are now fixed, but I'm personally not confident that those were the last ones. I could look them up and link them if you're actually curious, but I'm not sure that will be convincing, you can obviously link to bug reports for anything and claim it's unstable, I can only speak of my personal experience.
Good question! I have nothing against Traefik. I just haven't used it as much, whereas with Envoy I've both used it and contributed to it a little bit too.
A lot of the article is the same marketing spiel that Google has been using to promote QUIC (and then HTTP/3).
At best those amazeballs advantages are applicable only at Google's scale, and have very little impact anywhere else.
Worse still,
--- start quote ---
We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases.
--- end quote ---
The benefits of QUIC / HTTP/3 have been extremely well defined as:
- higher-latency connections.
- packet loss under multiplexing scenarios / suboptimal connections (e.g. mobile).
These are the situations where it shines and runs away from HTTP/2. And this has been the promised advantage from the outset, and is literally the problem it is designed to solve.
Given that the linked paper mentions the word latency once in an irrelevant context, I think that's telling. Of course there is no advantage -- and in fact is an expected disadvantage -- when your client and server are 0ms from each other with 0% packet loss. Now put them 100ms from each other with 5% packet loss/reordering/retransmission and multiplexing.
It is bad packet loss, but it is the sort of situation that people often find themselves in: congestion, bad connections (which covers a lot of mobile scenarios), satellite comms, and the like. It's a reality.
It's interesting how people often say HTTP/3 only benefits the megas like Google. A few years back I worked on a data centric system (fund administration) where we had a single centralized server cluster serving high value users across the globe. Because of the integration and real-time nature of the data, it wasn't possible to replicate to servers around the world, nor was local (relative to the user) caching of much value at all.
QUIC (which became HTTP/3) proved a significant improvement for users of the system. Users in the UK, Singapore, Australia, Germany, California, and so on, were all using a system in Toronto basically transparently, with great usability. That it was continents away suddenly didn't matter.
> These are the situations where it shines and runs away from HTTP/2. And this has been the promised advantage from the outset, and is literally the problem it is designed to solve.
And yet it's somehow being pushed as a be-all solve-all replacement despite this:
--- start quote ---
We experimentally demonstrate that QUIC’s performance degradation affects not only bulk file transfers but also other applications including video content delivery and web browsing, despite their intermittent traffic patterns. QUIC incurs a video bitrate reduction of up to 9.8% compared to HTTP/2 when delivering DASH (Sodagar, 2011) video chunks over high-speed Ethernet and 5G. Again, such QoE degradation only exhibits when the underlying bandwidth is sufficiently high. For example, the impact is hidden over 4G but unleashed over 5G. QUIC’s page load time (PLT) is 3.0% longer than HTTP/2’s, averaged across 100 representative websites, with a long tail of page load time gaps over 50%.
--- end quote ---
Latency is all good ... until latency isn't the only thing affecting the performance
> Of course there is no advantage -- and in fact is an expected disadvantage -- when your client and server are 0ms from each other with 0% packet loss. Now put them 100ms from each other with 5% packet loss/reordering/retransmission and multiplexing.
Indeed, why not claim something the article never claimed and then claim moral superiority for yourself. Nowhere in the article do the authors claim to have servers 0ms from each other with 0% packet loss.
Additionally, if your performance degrades even in these ideal conditions, what does this promise for non-ideal conditions?
>And yet it's somehow being pushed as a be-all solve-all replacement
But it isn't a be-all solve-all replacement. The whole point of HTTP/3 is that you can still use HTTP/2 all you want in your build-outs, and it gets used as appropriate. If large file, many packet, high speed sustained performance is your thing and you've got problems with HTTP/3, deploy it on an HTTP/2 server. Go nuts. Positively nothing will go awry. Everything will be fine.
You're arguing a strawman.
>Additionally, if your performance degrades even in these ideal conditions, what does this promise for non-ideal conditions?
This is an absolutely nonsensical statement. HTTP/3 is quite literally built for situations where you have many small requests, often in suboptimal situations: the average web user interacting with an average web page over something other than their local ethernet connection, exchanging tens of thousands of back-and-forths for different resources and navigations and posts. Screeching, with moral superiority I might add, that it pins the CPU on their oddball no-name server -- oh, and where they bizarrely forced the HTTP/3 server to use HTTP/2 congestion control because that made the results funner -- with a client machine whose CPU has 1/4 the performance of my smartphone, downloading a many-GB file, isn't the big win you seem to think it is.
> If large file, many packet, high speed sustained performance is your thing and you've got problems with HTTP/3, deploy it on an HTTP/2 server.
Ah yes. Basically back to some links I discussed. Oh, it's amazing, but you have to be careful what you deploy, and when, and you have to switch between HTTP/2 and HTTP/3 based on some unspecified criteria for which one may or may not be better, while the article we're in comments to decries "why oh why so few implement HTTP/3"
> HTTP/3 is quite literally built for situations where you have many small requests, often in suboptimal situations.
Google doesn't particularly care if you use HTTP/3. They don't even build it into the tools they make, like Go or Dart, at least not in a timely manner. There were a few passing technical notes around the RFCs and as they added it to Chromium, but otherwise they've been remarkably silent about it.
Yet they moved trillions of web requests to HTTP/3. Maybe they really don't know what they're doing. Cloudflare also clearly hasn't the slightest, right? Fools!
>while the article we're in comments to decries
HTTP/3 is complex to implement. Very complex. It's pretty simple to understand why it hasn't seen wide implementation in every random tool. And for many people HTTP/2 is fine, especially as you're probably just going to put Cloudflare (which has HTTP/3) with caching in front of it anyway.
Has Google said anything? Is it dependent on certain e.g. server-side factors? Did Google "get this wrong" or was this intentional? E.g. is it by far a net positive to be faster on slow internet where the difference is perceivable, than to be slower on fast internet because it's still lightning-fast even when it's slower?
I don't know what you mean. In this particular instance, the point is to speed up page loads. In this particular case, speed increases are most people's needs.
How much does http/3 help for server to server traffic? Seems like larger websites can use a CDN or load balancer to do termination and then use http 1.1 to the back end. Is that good enough with large pipes and a high number of connections?
QUIC was not designed for server-to-server. In that use case, you'll likely experience poor performance[1] due to higher CPU usage (since QUIC is a user-space protocol without TCP optimizations at the kernel/NIC level) and lower throughput.
[1] This is based on public benchmarks; try searching for `TCP vs QUIC`.
The primary benefits of QUIC apply in scenarios where you have some packet loss, and are multiplexing multiple independent "transactions" (DB queries, HTTP req's, gRPC calls etc.) over a single connection.
Multiplexing is very common, but unless you are at megacorp scale (or operating a cloud hosting platform), packet loss within your own wired network infrastructure isn't a super common issue. Compare that to, say, packet loss to mobile clients on bad networks, where QUIC can really provide a significantly improved experience.
I'm using HAProxy on my 1-machine homelab. I'm not convinced that HTTP/3 is a big improvement over HTTP/2, but for me, HTTP/3 was as easy as upgrading and adjusting the config file a little bit. From my perspective, the seeming lack of HTTP/3 support noted in this article is not a big problem. That said, I'm dying to get official 0-RTT support.
Everyone loves to tout TTFB numbers for quic, but there's very little widespread reporting on throughput, particularly analyzed for a large number / wide breadth of real world customers.
This matters because, for a lot of operating systems, UDP buffers are still tuned to nearly 1990s levels and are insufficient to overcome BDP challenges. For CDNs / edge deployment systems such as Cloudflare/Fastly this may not be "statistically relevant", in that they're close to "statistically most" of their customers; however, for users in locations where these organizations do not have anywhere near such a good presence (APAC, the islands, etc.), the experience is getting _far worse_.
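Concretely, the failure mode is a QUIC stack asking for a big socket buffer and the kernel silently clamping it to net.core.rmem_max (often only a few hundred KB by default on Linux). A rough Go-flavoured sketch of the request side; the 8 MB figure is an arbitrary example:

    package main

    import (
        "log"
        "net"
    )

    func main() {
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 8443})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Ask for 8 MB; without raising rmem_max/wmem_max via sysctl the kernel
        // quietly caps the effective size, and throughput stalls on long fat pipes.
        if err := conn.SetReadBuffer(8 << 20); err != nil {
            log.Printf("SetReadBuffer: %v", err)
        }
        if err := conn.SetWriteBuffer(8 << 20); err != nil {
            log.Printf("SetWriteBuffer: %v", err)
        }
    }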
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
I'm not sure why it would be in the Rust std, Rust std doesn't even have HTTP/1 or TLS, it's not supposed to support any network layer higher than what's provided by the OS.
It's been proven many times that in well-connected networks (e.g. datacenters) H2 is faster, often because all of the things that H3 improves on are now handled in user space, and that overhead negates the gains.
The benefits only show in poorly connected networks (public internet), so that's pretty exclusively where it should be used - anything internet-facing.
There's ongoing work exploring QUIC-in-kernel-space at https://github.com/lxin/quic, and more generally HTTP/3 will be increasingly optimized over time as it moves towards becoming the majority of HTTP traffic (a few years off, but looks likely eventually). There's no fundamental reason I'm aware of that HTTP/3 would be _inevitably_ slower than HTTP/2, it seems likely for now that it's largely implementation details.
There's plenty of internet-facing cases with average-at-best connectivity where HTTP/3 would be beneficial today, and isn't available (non-megacorp Android apps, CLI tools, IoT, desktop apps, etc). Even on the backend, it's very common to have connections between datacenters with significant latency (e.g. distributed CDN to central application server/database).
The article talks about the advantages of HTTP/3 for IoT applications. I recently did an IoT application (weather station) where I actually ended up using HTTP/1.0. I only had to send a total of around 70 bytes every two minutes. If at the end of that two minutes I had an invalid or missing response there was nothing more to do other than record the result for a possible future reset. Once there was new data available the old data was irrelevant.
So I don't see how HTTP/3 would have helped there. Most IoT applications are not very sensitive to things like latency, and what counts as reliability is very much dependent on the situation. A simple protocol is an advantage.
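For context, the whole client fits in something like the sketch below: a fire-and-forget HTTP/1.0 POST with a short timeout (the host, path and payload are made up here):

```c
/* Minimal sketch of the fire-and-forget HTTP/1.0 upload described above.
 * Host, path and payload are invented for illustration. A failed or
 * missing response is simply recorded; the stale reading is discarded. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/time.h>

static int post_reading(const char *payload) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("weather.example.com", "80", &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        if (fd >= 0) close(fd);
        return -1;
    }
    freeaddrinfo(res);

    struct timeval tv = { .tv_sec = 5 };       /* don't wait on a dead link */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char req[256];
    int n = snprintf(req, sizeof(req),
        "POST /report HTTP/1.0\r\n"
        "Host: weather.example.com\r\n"
        "Content-Type: text/plain\r\n"
        "Content-Length: %zu\r\n\r\n%s",
        strlen(payload), payload);
    if (n <= 0 || send(fd, req, n, 0) != n) { close(fd); return -1; }

    char buf[128];
    ssize_t got = recv(fd, buf, sizeof(buf) - 1, 0);  /* just the status line */
    close(fd);
    if (got <= 0) return -1;
    buf[got] = '\0';
    return (strncmp(buf, "HTTP/1.0 200", 12) == 0 ||
            strncmp(buf, "HTTP/1.1 200", 12) == 0) ? 0 : -1;
}

int main(void) {
    /* a small sensor payload; in the real device this runs every two minutes */
    return post_reading("station=42&temp_c=18.4&rh=61&pressure_hpa=1013.2") ? 1 : 0;
}
```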
In the sense that nothing is really reliable in the presence of state-level adversaries censoring traffic, malicious ISPs, and crappy hardware. In general I have been more successful with QUIC hole punching than with IPv6.
But having a standard and relatively broadly implemented way to make reliable "TCP-like" streams over UDP is a great thing regardless.
While this article is about H3, I think there is a wider issue at play: in general, the gap in available tools (libraries, etc.) between these FAANG megacorps and your average dev has grown very wide. Google's internal tooling is maybe the most famed; I'm sure some xooglers can tell more. This is very sad to see after the massive democratization of software development in the 00s-10s. I'm concerned about the impact these two classes of developers will have on the community and ecosystem. I suppose this H3 phenomenon is one consequence.
The people whose ISPs are spying on them won't be able to use HTTP/3 anyway. It's not hard to figure out whether a packet is HTTP/3, and when you drop all HTTP/3 packets, your browser will usually just fall back to something that does make it through.
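For illustration only, the heuristic involved can be as crude as the sketch below (based on the QUIC v1 long-header layout; real filters are more elaborate):

```c
/* A crude sketch of the kind of heuristic a filtering box could use:
 * a UDP datagram to port 443 whose first payload byte has the QUIC
 * long-header bit (0x80) and fixed bit (0x40) set looks like a QUIC
 * handshake packet. Drop those and the browser falls back to TCP.
 * Real middleboxes are more sophisticated; this only shows the idea. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool looks_like_quic_handshake(uint16_t udp_dst_port,
                               const uint8_t *payload, size_t len) {
    if (udp_dst_port != 443 || len < 5)
        return false;
    /* RFC 9000: long header => header-form bit and fixed bit both set */
    if ((payload[0] & 0xC0) != 0xC0)
        return false;
    /* bytes 1..4 carry the QUIC version; 0x00000001 is QUIC v1 */
    uint32_t version = (uint32_t)payload[1] << 24 | (uint32_t)payload[2] << 16 |
                       (uint32_t)payload[3] << 8  |  payload[4];
    return version != 0;   /* version 0 would be a version-negotiation packet */
}
```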
Well, HTTP/3 so far works in China and Russia, presumably because enough essential services are using it that blocking it indiscriminately is not practical.
By terminating a UDP stream and fetching the backend data over HTTP/2?
This might be enough for static web and such, but in a "new web" you'd want to use all those nice features of HTTP/3 which have no direct equivalent in HTTP/2, like multiplexing UDP streams for video and such.
And if you forward the UDP stream itself, what do you do with the certificate? Use a self-signed one with no expiration date and add a custom CA? Seems laborious.
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby. Curl recently gained support but it's experimental and disabled in most distributions.
That doesn't matter too much, since a certain "security" company is doing deep packet inspection and blocking anything that isn't Firefox, Chrome, or Safari.
My recent projects in C++ are just using cURL, but given some of the versions of cURL I have to support are 10 years old it isn't being turned on anytime soon.
Even the latest deployments on Rocky 9 are using a 4 year old version of cURL
When you're writing libraries distributed as binaries for other teams you can't just statically link whatever you want willy nilly.
I’m not OP, but at $WORK we sell a C++ library. We want to make it as easy as possible for clients to integrate it into their existing binaries. We need to be able to integrate with CMake, Meson, vcxproj, and hand-written Makefiles. We’re not the only vendor: if another vendor is using a specific cURL version, you better hope we work with it too, otherwise integration is almost impossible.
You could imagine us shipping our library as a DLL/.so and static-linking libcurl, but that comes with a bunch of its own problems.
That doesn't work if other teams want to apply their own cURL patches, or update as soon as upstream publishes new security fixes without waiting for you.
That's the point. We don't do that. You link to the system libcurl dynamically and everyone is told to do the same.
If you want to use a private curl as an implementation detail then the only safe way to do it is to ship a .so, make sure all the symbols are private and that symbol interposition is switched off.
If you ship a .a then the final link can always make symbols public again.
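For the record, with GCC/Clang that roughly translates to something like the sketch below; the library and function names are made up, and the exact flags depend on your toolchain:

```c
/* vendor_api.c -- sketch of hiding a privately linked libcurl inside a .so.
 * Build (GCC/Clang, hypothetical paths):
 *   cc -fPIC -fvisibility=hidden -shared vendor_api.c \
 *      -Wl,-Bsymbolic -Wl,--exclude-libs,ALL \
 *      /path/to/private/libcurl.a -o libvendor.so
 * -fvisibility=hidden hides everything by default, --exclude-libs,ALL keeps
 * the bundled libcurl symbols out of the dynamic symbol table, and
 * -Bsymbolic stops the host application's own libcurl from interposing ours.
 * The function below is a made-up example of the one symbol we do export. */
#include <curl/curl.h>

__attribute__((visibility("default")))
int vendor_fetch(const char *url) {
    CURL *h = curl_easy_init();
    if (!h) return -1;
    curl_easy_setopt(h, CURLOPT_URL, url);
    CURLcode rc = curl_easy_perform(h);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : -1;
}
```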
There's also a sort-of informal "standard library" of C libraries with super-stable ABIs that we can generally assume are either present on the system or easy to install. Zlib is another one that comes immediately to mind, but there are others as well.
HTTP/3 isn't really HTTP. It's a protocol designed to deliver JavaScript applications to a browser. It's not even TCP; it's UDP and then a CA-TLS-only layer of QUIC on top of that. It is entirely designed for large corporate use cases to the detriment of anything else.
Reminds me of IPv6... When I was 17 (2007) and learned about it, I was very hyped to see it become mainstream. I still don't know why we go out of our way to use only IPv4 to this day. It's even older than when I discovered what IPv6 was.
but what turned out to be a big problem (not enough IP addresses for clients) was solved by CGNAT and a simple market (for servers)
of course it's important to understand that it was cheaper to deploy thousands of CGNAT boxes than to upgrade the whole Internet (and corresponding software)
It seems cheap consumer ISP hardware works absolutely fine these days, but prosumer/small business devices sometimes still have trouble with hardware acceleration. You can sometimes get that acceleration when you flash OpenWRT so the problem seems to be a lack of effort from companies that should do better.
Also, IoT crap tends to disable IPv6 (for saving a few kilobytes of ROM I think) but that stuff is better off locked behind six levels of NAT anyway.
HTTP/3 adoption will explode as soon as it's provided in `libcurl` with default compile options when combined with OpenSSL, and not a moment before. As soon as this happens there will be a bunch of clients that speak HTTP/3 if available, and then there will be effort to build it into servers.
Right now there's no critical mass, and the most commonly used, most-referenced implementation of an HTTP client doesn't support HTTP/3 in any standardized way.
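Assuming a libcurl that was actually built with HTTP/3 support, the client-side opt-in is roughly the sketch below; how it falls back to HTTP/2 or HTTP/1.1 depends on the libcurl version and build:

```c
/* Sketch of client-side HTTP/3 opt-in with libcurl, assuming a build that
 * actually has HTTP/3 enabled (most distro packages today do not).
 * Fallback behaviour depends on the libcurl version; newer releases also
 * offer CURL_HTTP_VERSION_3ONLY for strict HTTP/3. URL is illustrative. */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    if (!h) return 1;

    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);

    CURLcode rc = curl_easy_perform(h);
    if (rc == CURLE_OK) {
        long ver = 0;
        curl_easy_getinfo(h, CURLINFO_HTTP_VERSION, &ver);
        printf("negotiated HTTP version code: %ld\n", ver);
    } else {
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(rc));
    }

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```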
As an Akamai user I already serve all my DASH traffic (video) over http3. Akamai itself return to origin only supports http 1.1 LL-HLS forces me use HTTP2.
The problem here is Akamai really in only supporting HTTP1.1 to the origin.
Cloudfare I think only supports HTTP2 to origin.
Does Fastly yet support QUIC to origin? Does Cloudfront, I could only find information about it supporting QUIC the last mile.
Maybe more CDN support will drive web server support.
I'm not an expert, so it's likely there are many things I don't get, but I wanted "UDP in the browser" for a project, and after looking at QUIC/HTTP/3/WebTransport I'm depressed by the complexity of it all.
A lot of programming interfaces don't even keep HTTP/1.1 connections alive across requests, because they have no means to manage the hidden state. Transparent and meaningful upgrades to HTTP/3 on the library side can only appear difficult.
Here's the question: are the benefits worth the increase in complexity, or rather, are they worth more than other features that might be worthwhile for libraries to support?
For the hyperscale web sure, but for the long tail web that is very unclear.
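On the keep-alive point above: with libcurl, just reusing one easy handle across requests is enough for the library to keep the underlying connection open, whatever the HTTP version. A rough sketch (URL is illustrative):

```c
/* Reusing a single easy handle lets libcurl keep the underlying connection
 * (HTTP/1.1, 2 or 3 alike) open and reuse it across transfers. APIs that
 * spin up a fresh handle per request give that up. */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    if (!h) return 1;

    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/api/item");

    for (int i = 0; i < 3; i++) {
        /* later transfers normally reuse the first connection;
         * CURLINFO_NUM_CONNECTS reports how many new ones were opened */
        if (curl_easy_perform(h) == CURLE_OK) {
            long new_conns = 0;
            curl_easy_getinfo(h, CURLINFO_NUM_CONNECTS, &new_conns);
            printf("request %d: new connections opened = %ld\n", i, new_conns);
        }
    }

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}
```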
The OpenSSL choice here reminds me of Torvalds and his "don't break user space" refrain. They may have gone with a technically cleaner solution, but it's causing chaos downstream.
Not really. The whole point of QUIC is that it is built on top of plain old stupid UDP, so it does _not_ require any OS-level cooperation. The raison d'être of QUIC was to get rid of interacting with OS developers, convincing them, and patching some old eComStations, and to just "do what we want" at the process level.
I have no specific knowledge about this case, but I would guess one major reason is that BoringSSL does not provide any API or ABI stability guarantees.
The plain truth is that the prime user of the protocol is ad serving; for the common case (API calls with some keep-alive) it's effectively a downgrade even from HTTP/1.1.
I am still super disappointed about HTTP/3 in Rust. AFAICT there was a working HTTP/3 implementation in Rust over 6 years ago, but they (quinn crate) then literally yanked their entire version history from Cargo in order to wait for h3 which proceeded to take about three years (minus three days...) to even show up. I don't know the status of h3 today but that whole thing made me incredibly upset at the time. That having happened also makes me get annoyed again every time someone points out that Rust still doesn't have a very good HTTP/3 implementation, because I get reminded of the implementation from over 6 years ago that practically got retconned out of existence for this.
(I guess though that quiche has been an option if you like to write Rust like it's C.)
I think a lot of this is missing the point. The features HTTP/3 provides are of value to datacenters and large-scale deployments. Most sites deploying off-the-shelf solutions like nginx or whatever don't care. Most behind-the-reverse-proxy services don't care. So Cloudflare cares about HTTP/3, but their customers mostly don't, except the biggest ones.
Use the tool for the job. If a single-threaded Python server running HTTP/1.1 works for your IT app, then use that. HTTP/3 has nothing to offer you.
As an indie webmaster, some percentage improvement on TTFB brings zero benefit to me and my users, but requiring or recommending it to users only helps big tech.
Because the faster response is negligible when you're an indie web host that can serve content over a single connection with reasonable speed. Where HTTP/3 really "shines" is when you connect to a web site that then has dozens of connections to other hosts (internal or external)...which is facebook/google/ad companies.
HTTP/3 speeds up that kind of content by reducing connection startup times to all of them, which can be compounded.
HTTP/3 has enhancements that speed up initial handshakes for new connections (QUIC is essentially UDP plus TLS combined with a native reliability layer, so the transport and TLS handshakes no longer need separate round trips). So when you go to a page with tons of ads/trackers/JavaScript libraries, you're not left waiting as long for the ads to serve while the browser reaches out to all those hosts and does separate TCP, TLS, and HTTP setup for each.
But users of most other websites won't see a ton of noticeable benefit unless they have facebook/netflix/google levels of traffic. And at that point, you're either highly focused on the end-user component code or you've outsourced it to a CDN that'll do the HTTP/3 for you anyway.
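To put rough, purely illustrative numbers on it: assuming a 100 ms round trip and no session resumption, TCP plus TLS 1.3 costs about two round trips (~200 ms) before the first HTTP byte can be sent, QUIC folds the transport and TLS handshakes into one (~100 ms), and 0-RTT resumption can bring that close to zero. That saving repeats for every third-party host an ad-heavy page touches; for a single-host page over an already-warm connection it mostly disappears.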
> And Fastly shared the major improvements in time-to-first-byte they're seeing in the real world:
What's time until last byte though?
Yeah, the metric that got you promoted is cool and all, but if my application works on files atomically, it's the time until the last byte that has meaning to me.
I'm also going to assume a bit more and say the last byte is also the point where the service can start processing someone else's connection. So if HTTP/3 shifts load onto the server's CPU instead of the network hardware's ASICs, it would be a net loss as soon as you look at completed requests per unit of energy instead of just single-session metrics.
I guess I should probably read the HTTP/3 spec now...
Google invents something and uses its huge market share and power to force everybody to use their thing, shaming and threatening those who don't. Amazing outcome.
> We've developed a totally new version of HTTP, and we're on track to migrate more than 1/3 of web traffic to it already! This is astonishing progress.
It's astonishing change. You could have used the same argument to show that any dismaying historical development was "astonishing progress". From my point of view HTTP/3 looks like an advantage for hyperscalers but no benefit to regular users using small-scale websites.
That's progress toward a future I don't want to arrive.