Hacker News

When HTTP/2 came out, I was wondering how long it would take for the billions of websites/APIs/webservices/... out there to migrate from HTTP/1.1

HTTP/2 came out in 2014, and since then I still haven't seen it used in production in the jobs I worked at.

Now, I wonder again, will HTTP/4 be released before we migrate to HTTP/3 ?

The real question here is, what are the arguments in favor of a migration? When starting a new web project, should you really consider anything else than HTTP/1.1?




I just recently got my company to switch over to http2 and halved our load time across the board. It might go down as the greatest improvement in customer experience for the simplest action I’ll ever do.


What caused this? I don’t know much about http


http2's biggest advantage is that it's multiplexed over a single TCP connection. So if your site has a bunch of resources on it, HTTP/2 will allow your clients to request all their required resources at once without establishing a bunch of TCP connections.

Browsers allow somewhere around 6 simultaneous TCP connections per host (which means around 6 concurrent requests with HTTP/1.1). HTTP/2 allows for HTTP requests at the speed of the client's upload connection (effectively unlimited).


I don't get this explanation.

Over one tcp connection, these "unlimited" requests are interleaved, blocking each other for periods of time.

With multiple tcp connections I can have parallelism. It seems like for larger requests, you would suffer performance loss with http2? What am I missing here. Must be something.


Well, let's start from the bottom. For most computers and systems, there is only a single network interface (and if not, there's usually just a single route from A to B). So, when you say "I can have parallelism", you have to realize that even without a multiplexed TCP connection, the operating system is playing the same game. It is interleaving and blocking bits to the NIC even with multiple TCP connections.

The parallelism you see with multiple TCP connections comes from the fact that the NIC is not (generally) saturated sending and receiving bits.

So with that out of the way, where's the benefits?

One is eliminating the TCP slow start [1]. TCP was generally designed with the notion that you'd only ever have 1 connection between 2 computers. As a result, it has congestion-sensing behavior baked into the protocol. More connections == more congestion control == slower requests. It also means that the first request of every connection is penalized while the connection warms up. A single connection pays that penalty once.

Another is the latency problem. In order for a connection to be used, you need to send the data from the client to the server and then receive a response from the server. During that time, your connection is tied up (ignore HTTP pipelining, it never worked well). You cannot push more bits down that connection; you have to wait for the server to process and send back the response. If that round trip takes 200ms, that's 200ms of sitting around doing nothing. During that time, over a 100Mbps connection, your client could have sent 2.5MB worth of requests to the server. Because HTTP requests are typically very small (usually measured in bytes), that's a lot of potential requests. This means the server is more than likely able to saturate the connection, which is what you want if you are trying to maximize performance.
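The bandwidth-delay arithmetic above is easy to verify; here's a quick sketch using the comment's figures (the 500-byte request size is an assumed illustration, not from the comment):

```go
package main

import "fmt"

func main() {
	// Figures from the comment above: a 100 Mbps link, a 200 ms round trip.
	const linkBitsPerSec = 100_000_000.0
	const rttSec = 0.2

	// Bytes the client could have pushed while waiting out one round trip.
	idleBytes := linkBitsPerSec / 8 * rttSec
	fmt.Printf("%.1f MB of capacity idle per round trip\n", idleBytes/1_000_000)

	// Assuming ~500-byte requests, that's a lot of requests left unsent.
	fmt.Printf("~%d requests of 500 bytes\n", int(idleBytes/500))
}
```

Running it prints 2.5 MB of idle capacity per round trip, i.e. roughly 5000 unsent 500-byte requests.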

> It seems like for larger requests, you would suffer performance loss with http2?

It's a little tricky; it depends on what you are measuring. A large request will block other requests going through while it is streaming. In that scenario, you'd still be saturating the network line, but the perceived concurrency would be down. So, from a requests/sec standpoint, that could end up being lower. But from a data-throughput standpoint, you'll still be better off than HTTP/1 due to the better network congestion handling of a single TCP connection. (The ole latency vs throughput problem.)

This is where we enter HTTP/3, which drops TCP as the primary request carrier in favor of UDP. HTTP/3 allows for much lower connection-establishment costs (so making multiple QUIC connections isn't as expensive), but also contains protocol changes that keep a bad packet from blocking all the other inbound packets (TCP freezes the stream while it goes back up the connection to get the correct packet).

[1] https://blog.stackpath.com/tcp-slow-start/


Very reasonable. I had forgotten to consider that this parallelism was smoke and mirrors from the ground up.

This is an excellent comment and very helpful. Thank you.


HTTP/1.1 can do this. It's called pipelining. But browsers refuse to implement it.


Would still be constrained to head-of-line blocking which HTTP/3 aims to solve.


HTTP/3 will not be adopted because it's a downgrade from HTTP/2 for CDN companies.

HTTP/2 has indeed massively benefitted CDNs by providing more effective capacity per buck, but UDP-based HTTP/3/QUIC will single-handedly destroy that advantage and bring efficiency below even HTTP/1.1.


Major CDNs seem to be adopting HTTP/3 just fine:

Cloudflare: https://blog.cloudflare.com/http3-the-past-present-and-futur...

Akamai: https://developer.akamai.com/blog/2020/04/14/quick-introduct...

Fastly: https://www.fastly.com/blog/why-fastly-loves-quic-http3

Not to mention Google, which uses it a lot for regular pages and for content (eg: YouTube video).

Adoption client-side also seems to be moving quickly too. I looked at the stats for 2 of my sites and HTTP/3 is now at 60-70%, with HTTP/2 having dropped to 25-30% already.


Good for them, when they have money to spend on servers. Budget CDNs are much less happy.


That might be the case, but HTTP/3 is supported by most major CDNs and browsers. It has been adopted already.

Also, if this blog post from 2020 is correct, performance per buck isn't that bad with some tweaks: https://www.fastly.com/blog/measuring-quic-vs-tcp-computatio...

I assume things have evolved since 2020.


Could you give an eli5 as to why that's the case? Or are you just throwing out a proclamation?


The solution to that is QUIC over IP, i.e. removing UDP from the stack.


Browsers implemented it, but had to drop support because proxies don't support this, so in practice it turned out to be a major source of bugs.


It was also a source of obscure bugs and DoS-level resource usage in servers and (non-browser) clients. Some servers managed to send multiple responses together or deadlocked the connection when the timing of requests or available responses was just right. Some servers would queue up responses in memory and so were prone to excess resource usage.

Also, if you had a sent request waiting behind a slow/large response, you had to decide whether or not to send an extra request on a separate connection to get the resource sooner. And when a server closed a connection after sending a response (the spec allows it for multiple reasons), it was hard to know whether it was that the server was just not accepting more requests on that connection or whether a particular request caused it and hence whether any of the waiting request(s) should be re-sent or treated as being in error.


It isn't multiplexed. You get the responses back in the same order that you requested them. I actually prefer this because I use pipelining outside the browser. I do not want to have to re-order responses. I have never found the lack of multiplexing to make HTTP/1.1 pipelining any less useful.^1 It is still fast. HTTP/2 and HTTP/3 appear to be designed for a web of advertising, not a web where someone is using non-browser programs to efficiently request multiple files via HTTP from a single IP address.

Some browsers implemented it, at least temporarily. However, the important point no one mentions is that servers implemented it, permanently. I have been using HTTP/1.1 pipelining continuously since the mid 2000's. I could not even name an httpd that does not implement support for HTTP/1.1 pipelining (no doubt some HN commenter will come up with one, but it is a rare exception to the general rule).

1. I do not need compressed HTTP headers either. I like ASCII headers as they are useful record separators for catenated HTML. I have the forward proxy configured to add useful headers that provide additional information about the connection.
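Pipelining outside the browser really is just a few lines of socket code. A minimal Go sketch (using a local test server as a stand-in for any keep-alive HTTP/1.1 server):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Local stand-in server; any HTTP/1.1 server with keep-alive behaves the same.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "body of %s", r.URL.Path)
	}))
	defer srv.Close()

	conn, err := net.Dial("tcp", srv.Listener.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Pipelining: both requests go out before either response is read.
	io.WriteString(conn, "GET /a HTTP/1.1\r\nHost: example\r\n\r\n")
	io.WriteString(conn, "GET /b HTTP/1.1\r\nHost: example\r\n\r\n")

	// Responses arrive strictly in request order; HTTP/1.1 has no request
	// IDs, so order is the only way to match responses to requests.
	br := bufio.NewReader(conn)
	for i := 0; i < 2; i++ {
		resp, err := http.ReadResponse(br, nil)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(string(body)) // "body of /a", then "body of /b"
	}
}
```

This also shows the ordering constraint discussed below: the client cannot learn anything about `/b` until all of `/a` has streamed back.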


If your website has a lot of small files served, H2 and H3 offer dramatic improvements: browser says "gimme those 100 files" and server sends them all over a single connection in a streaming fashion.

Whereas with H1 the browser opens up to 6 connections, downloads 1 file on each, and only once the response comes back can it say "ok, now gimme another file", etc. (another RTT until that one comes back). That's many round trips saved.

This is probably most useful when downloading many small images. It can also be useful for splitting JS into smaller bundles to favor cache hits for returning visitors. (It can also help serve ES modules with little or no bundling; in practice bundling is still useful, given that modern JS apps have 100s/1000s of files and per-file overhead still matters at such numbers.)


Just to add a few bits: I'm not an infra person so can't tell about managing a H2 deployment, but from frontend web performance perspective H2 is very interesting and somehow not used to its full potential.

H2 has built-in support for interesting concepts like prioritization and re-prioritization of requests. Unfortunately, most implementations in clients and servers are either incomplete or not very good:

I recommend googling "http/2 prioritization" and checking blogs and talks from Andy Davies and Pat Meenan. The subject is super interesting and inherently hard; browsers need to make certain assumptions whether scripts, fonts or CSS are more important and should be prioritized and it's non-obvious how to handle it best.

https://github.com/andydavies/http2-prioritization-issues

https://medium.com/dev-channel/javascript-loading-priorities...


> This is probably mostly useful when downloading many small images.

It's useful for much more than that; I'd actually argue stream multiplexing is most useful outside of the browser in bespoke HTTP APIs, because that's a place where you can just stumble on cases where high fan-out requests are useful. For the browser case you're often limited by things like pagination, while JS/CSS can be amalgamated.

Not that it isn't good for small images too! But APIs benefit a lot as well and I think that's easily forgotten.


Yes, it basically renders webpack obsolete :P I've not yet created a product used by so many people that this optimization was needed; maybe tomorrow?

> If your website has a lot of small files served

What about JSON/GraphQL APIs?


HTTP 1.1 had pipelining too, but it was easier to just airdrop in an entirely new transport.


HTTP/1.1 has _sequential_ pipelining; it's the worst.

Responses need to be sent in the order the requests were made, since there is no _request id_ for the client to know which response belongs to which request.

This is a major annoyance with HTTP/1.1 pipelining, since it then becomes the client's responsibility to send requests in an order that minimizes latency.

Say you want to retrieve 1) a huge background image of the website and 2) a small js script at page load time. If you send 1) then 2), you have to wait for the background image to come first.

Not only is this bad for client user experience, it also means that the server cannot send responses in real time; it has to buffer all subsequent responses until the previous ones are sent, creating bloat.

HTTP/1.1 is the worst.


If I'm doing any sort of streaming (video chunks, etc.), then I'd want sequential pipelining for an optimal experience.


> Re: HTTP/1.1 is the worst

Technically, this could have been handled with a single additional header.

Something like HTTP/1.2 that features a header like Transfer-Request-Id: 13337

While I agree with the sequential pipelining case, I'd argue the abuse angle from the other perspective. Clients would then have much more power to re-request things in a suboptimal order, because the server still accepts it.

There are no cryptographic signatures on buffers that could back a dictionary of what has been sent already and what has not, which would fix the issue that a client can re-prioritize and re-request resources until it drains the server.

The beauty of HTTP/1.1 (apart from slowloris) was its simplicity and cacheability for reverse proxies.

Now with H2 shit can hit the fan real quic once a new LOIC comes out because it has a whole new attack surface for DDoS scenarios, where even a single client can take down a server in those circumstances (aka when a server blindly accepts re prioritization requests).

But I would agree that HTTP/1.1 isn't good when it comes to transport hop specific headers (everything Transfer-...) because they allow crafted packets like HTTP smuggling that can trick a reverse proxy and backend because there's no way to validate the packets on a lower level when it comes to their Origin.

This is something I would've hoped SPDY/QUIC would address, because they are a real security risk embedded into the nature of the protocol.


You couldn't actually do pipelining at all on HTTP/1.1 as it existed in the wild, so yes, it was literally easier (frankly, necessary) to do an entire protocol revision.

HTTP/1.1 as documented looks like the simplest pipelining should work. But by the point browsers began trying to enable pipelining, too many HTTP/1.1 servers in the wild were getting it wrong. Users don't care that "technically your bank's web server is non-conforming and you should contact them to report the issue"; what they hear is just "don't use our broken web browser".


It doesn't take a rocket scientist to support http/1.1 pipelining on servers. It's harder to implement in proxies. Much of the creepy middleware on the Internet would break down in nearly impossible to troubleshoot ways when browsers pipelined requests. So they forked the protocol as a bypass.


It really depends on your deployment, but a lot of us have adopted HTTP/2 without lifting a finger, or with routine dependency updates. It’s possible (probable) that those migrations have not resulted in optimal usage, but neither have specially tailored migrations (see: the sad slow death of Server Push).

As far as whether you should consider migrating, it's worth doing a quick spike to determine level of effort. I expect the number of use cases which:

- can’t trivially update

- aren’t directly providing HTTP functionality to users who expect that

… is a rather small niche.


Anything using Google, AWS, or load balancers from other major cloud providers will likely end up being early adopters for this stuff. In our case, since we were targeting mobile, we actually used nginx with SPDY support before HTTP/2 was even stabilized. It wasn't that hard. We just had to get a recent build and fiddle with a few lines of configuration. That was 2014. HTTP/2 standardization came later, as I recall.

We already have an HTTP/2 and HTTP/3 capable load balancer because we host on GCP these days. Mostly this stuff just happens without intervention. HTTP/1.1 is a fallback for people with crappy network infrastructure.

If you are starting a web project, you should probably use one of the major cloud providers unless you have a good reason not to. You'll use this stuff by default mostly unless you explicitly turn this off. But why would you want to do that?

If you are managing your own load balancers; consider using something modern/up to date that would be able to support this. Also from a security point of view probably not something you should skip. And if you are using something modern, it shouldn't be that hard to turn on the feature.

If you manage some legacy stuff that just doesn't matter; don't bother of course. HTTP 1.1 will continue working just fine. If you have mobile users though, you are not doing them any favors by making things needlessly slow and this would probably be an easy upgrade.


> since then I still haven't seen it used in production in the jobs I worked at

How do you know? Most of the Alexa top 100k web sites are HTTP/2 but I suspect a great many of the people working on those web sites have no idea HTTP/2 exists, it simply has no impact on their work. It probably has no impact on your work.


Because I mostly worked with teams who built internal tools, be it monitoring solutions, or APIs consumed by BI tools, or custom CMDBs, custom CMS, etc...

Every single one of them was deploying the projects on the internal infrastructure, sometimes a small Kubernetes cluster, some other times just an EC2 instance, most of the time just one or two VMs.

Each employee was given a computer with a VPN client pre-installed, to connect to the internal infra.

The biggest company I've worked with had 10k employees, still not enough to see any impact on performance on the slowest single-threaded service we made.

When deploying, we were simply adding to the internal DNS server an A record "service.internal-domain.tld" pointing to the IP of our server. No load balancer, no reverse proxies, no HTTP/2 in the stack.

It's cheap, it works, it's certainly not something you would do if you were serving a SaaS application to millions of user on the world wide web, but it's definitely enough for internal stuff.

Even Google does not have more than 100k employees.


If you’re primarily writing internal tooling then http2 is unlikely to be particularly relevant or beneficial to you (unless using gRPC or similar).

However, turning on http2 for an org might mean nothing more than toggling some settings on a reverse proxy, no code changes required and little to no visibility or fuss for developers.


> Even Google does not have more than 100k of employees.

Alphabet has over 150k employees.

https://www.sec.gov/Archives/edgar/data/1652044/000165204422...


> When starting a new web project, should you really consider anything else than HTTP/1.1?

This makes it sound like the HTTP version you use is as major of a decision as the programming language you use, but can't you change HTTP versions basically by just flipping a switch?


If you like to ignore the security implications that come with every switch? Some services use HTTP/2 frontends for their HTTP/1 backend services. This resulted in a large number of tunneling attacks that the HTTP/2 spec predicted; it is almost as if every implementation of HTTP/2 just ignored an entire section of its spec. HTTP/3 is also already off to a good start: it requires changes in how encryption libraries interact with the protocol, and that has already resulted in people pressuring OpenSSL to adopt experimental changes (marked "do not use") from Google well before that part of the HTTP/3 spec was even done.


To be clear, I don't think HTTP version is an unimportant decision. I'm just saying it's one that's easily changeable and so doesn't need to be set in stone when you first start building a web project.


That’s because HTTP/2 broke every capability that was not used by Google.

NTLM auth… broken.

Client cert auth (mTLS)… broken.

Etc…

Monopolies are dangerous because they can accidentally lock out their competition without even trying.

They just have to focus on their own problems and ignore other people’s problems… because they can.


Mutually authenticated TLS doesn't care about which protocol is being spoken over TLS, so whether you speak HTTP/1.1, HTTP/2 or a Martian protocol you dreamed up yesterday it works fine either way.

NTLM auth is garbage. Google didn't break it, Microsoft acknowledged many years ago that this is trash and you should stop using it, so unsurprisingly they didn't put in the engineering effort to support using this broken trash over HTTP/2 on IIS or Edge.


NTLM Auth is hot garbage and shouldn't be used, but is still possible in HTTP/2 iirc (HTTP Negotiate way), just nobody implements it for a good reason.

Personally I'm not sure why current HTTP/2 stacks prevent the Negotiate mechanism, but I suspect it's because of connection reuse combined with Negotiate doing connection-oriented auth, so I suspect Microsoft temporarily banned the feature (I can't find references to other implementations downgrading, but outside of Microsoft on IIS, not many do Kerberos over HTTP).


Indeed, I was surprised when we tried to move a B2B application using mtls to http2 and realised that it simply was not implemented!

I know that's not used a lot, but it is still used sometimes when accessing critical systems.

I suppose that in the long run, mTLS will completely disappear because of that; strange for a protocol that advertises itself as more secure to remove this option.


Client cert authentication (aka mutual tls) works perfectly fine on HTTP/2 (as it happens on a completely unrelated protocol layer), so my question is just what did you deploy that broke it?


It doesn't "just work", because it's an HTTP authentication flavour that happens to occur at the transport layer.

Because it crosses layers like this, HTTP has to know about the transport.

For example, it's possible in HTTP 1.1 to have a page trigger cert auth even though the initial transport didn't use it! This is common for logging on to pages using Smart Cards but leaving an option to use alternate authentication modes.

This was just... abandoned in HTTP/2 (and HTTP/3) as "too hard".

Or more accurately, every authentication mode not used by Google was abandoned and does not work.

I've tried HTTP/2 with products from numerous vendors, and without fail anything Google doesn't use... didn't work.


Are you talking about mutual TLS, or are you talking about Negotiate? Because the only way to trigger cert auth (under mtls) if the initial transport didn't use it is to force a TLS renegotiation, as it's handled by sending a CertificateRequest message at TLS layer.

Windows Integrated Authentication however triggers authentication by sending Negotiate header and performing HTTP-level exchange.

N.B. Every example of page triggering cert login later on that I have encountered in the wild forced a fresh connection and thus mutual authentication.


If you use Go or gRPC, then you're probably using HTTP/2. I've seen both used, at least somewhat, by every company I've done work for in the last 5-6 years.


> If you use Go or gRPC

I'm not. My go-to web frameworks are Elixir/Phoenix, Python/Django, NodeJS/ApolloServer.

I have an irrational hatred for the Go language, but still use it to develop quickly some CLI tools.


I don’t know what’s common for serving BEAM web services, but unless I’m mistaken nginx is still a safe assumption for Python. If you’re using nginx for either, transitioning to HTTP/2 should be a couple small config changes away.
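For nginx specifically, the "couple small config changes" amount to a sketch like the following (hostname, certificate paths, and upstream port are placeholders; assumes nginx built with the HTTP/2 module, which standard packages include):

```nginx
server {
    # Adding "http2" to the listen directive is the whole migration for
    # nginx >= 1.9.5. (nginx 1.25+ prefers a separate "http2 on;" directive.)
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # The backend app server keeps speaking plain HTTP/1.1; clients
        # get HTTP/2 with zero application changes.
        proxy_pass http://127.0.0.1:8000;
    }
}
```

Note that HTTP/2 in browsers is effectively TLS-only, so this works on the `ssl` listener, not on port 80.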


I deploy on Kubernetes, so it will be behind an Ingress controller (NGinX, Istio, ...) if it needs to be exposed to the outer world.

If I was deploying to baremetal, I would either have a HAProxy/Keepalived reverse-proxy, or a load balancer that most probably supports it.

But HTTP/1.1 has not yet failed me, so I have not yet considered the migration.

If it's not broken, don't fix it :)


> Istio

Also looks like a trivial config change. I’m not familiar with it but I don’t see any red flags.

> If it's not broken, don't fix it :)

This seems exceedingly myopic in this context. Why migrate to broadband from 56k modem? That modem connects to the internet too! Why upgrade to a browser version which supports HSTS? You can just manually type https in the location field. Why try anything with new features at all? You can just never have new features if they work.

I’m not saying there’s never a trade-off with any of these things. But you’ve preemptively ruled out any benefit without exploring them… or whether they even have any meaningful (to you) cost. “Cutting off your nose to spite your face” is the phrase my family uses for this.


There is a difference between switching protocol, and:

  - not migrating to broadband from 56k modem
  - not upgrading to a browser version which supports HSTS
  - not trying anything with new features at all
HTTP/1.1 still serves its purpose, successfully, without any impact on performance/usability. I'm not Google; I'm not serving a webpage to 8 billion devices at the same time.

Sure, I can try HTTP/2 (and I did), and HTTP/3 (and I will). But when it comes to distributing an application/service to customers, I'll automatically choose the battle-tested, 30-year-old technology which is supported by everyone and every device.


Okay enjoy your slower than necessary web stuff for no apparent reason!


There are a few small, but important cases where h2 in the mix could possibly bite you.

In my case it's monitoring security issues like h2 to h1 header smuggling and the security implications in complex environments where front end proxies may talk to both h1 and h2 applications.


> I have an irrational hatred for the Go language

Do you mind sharing why?


My biggest annoyance is that the Go language is full of implicit conventions/behaviors:

  - interface implementation is implicit
  - "export" is based on the function/type/member name (first letter uppercase)
  - private functions/types/members are private within a package not a single file
  - ...
This is painful because when you read a single file, you need to know about the rest of the code. This makes it very hard to get started with a new codebase, especially a huge codebase.

The other annoyances I have are just petty, like "if err != nil", zero-values, or the fact that VS Code still does not support Go projects in subdirectories (I like to open my whole "devel" folder).


> interface implementation is implicit

Fun fact: this is true for python too! https://stackoverflow.com/questions/40764347/python-subclass...


> VS Code still does not support Go projects in subdirectories

This is annoying, but there is a workaround: add each folder with a go.mod file individually starting with the most nested ones first. So add each project separately, then add the root dir.


Impractical for me; my "devel" folder is organized as follows:

  |-+ orga (github, gitlab, other)
  | |-+ domain (infra, business, research, ...)
  | | |-+ repo-name
VS Code does not support hierarchies of projects in a workspace, so I would just have a huge list of `project-name`, where project-name would be manually renamed to `orga/domain/repo-name`.


Not the same person, but I share the sentiment. Go is terribly designed from a language perspective. It's weird and doesn't follow normal language conventions. The tooling and package manager are terrible.

Eventually I think it'll be seen as the next PHP. A language that people love to hate.


I think the only reason Go was adopted is because Google was behind and pushing it.

But the result is that today, the ecosystem is huge. Implementing a kubectl plugin in Go is far easier than in other languages. Many things are in fact simpler with Go thanks to its vast stdlib and ecosystem.

I hate it, and yet... It's still useful to me.


Go is terrific for multithreaded apps. Or multithreaded scripts. But if I know that I'm going to be running everything in a single thread I wouldn't even consider Go.


I would say it works as long as you don't need any major coordination outside of a few channels. As soon as you get into mutex territory you happily end up shooting yourself in the foot hoping that go test -race caught the bug someone else introduced. Or you spend unnecessary time implementing some concurrent safe type trying to hide the synchronization behind a pretty interface.

At work we have a split code base between rust and go, using go to interface with the world due to better ecosystem while using rust for the heavy lifting of business logic in an extremely concurrent implementation.

Even though the rust side is more complex it is much easier to work with and refactor leading to fewer bugs. The language simply gives so much for free, when you get over the first humps of a slow start.

The Rust cycle of 1. make it work, 2. handle errors, 3. encode invariants in the type system to prevent inadvertent refactors is simply magical.


  > The language simply gives so much for free
Are we familiar with the same Rust? Though I've only looked at it briefly about two years ago (beginning of pandemic), I don't remember getting much "free" with the language. Especially not coming from a "batteries included" Python background. I would honestly love to know what you feel that Rust provides out of the box. I'm very likely to use it in the future and such a head start may be valuable.

  > The Rust cycle of 1. make it work, 2. handle errors, 3. encode invariants in the type system to prevent inadvertent refactors is simply magical.
Isn't that the workflow of any minimum viable product development paradigm?


I think we are looking at it with two different views of "free". It is not batteries included outside of synchronization, string operations, collections and iterators. We are using the Tokio ecosystem which has some nice batteries included.

My view of "free" in the comment is correctness. Concurrent code simply keeps on working through refactors to suit the business needs. Most of the time the tests pass the first time the code compiles again after a refactor of core pieces. That is something I haven't experienced in any other language. I simply do not feel scared touching, changing or removing anything to get the most lean code base possible.

Compare this to Python, where I want to have a mental model of the entire "world" for every change I make and then simply hope that the tests will catch any errors. Go is better when kept simple, but it still has a lot of implicitness and conventions that make it easy to shoot yourself in the foot.

> Isn't that the workflow of any minimum viable product development paradigm?

Definitely, Rust simply gives more tools for it, for example sum types and traits to enforce invariants. Compare that to the awful iota hacks to get "enums" in Go, accidentally having a meaningful default value, or run-time verification because somewhere you ended up with an interface{}.

In Rust, for the "make it work" stage, simply throw .expect()s everywhere and choose to panic on errors, or just bubble them to the top without caring. It never silently "just works" unless you've explicitly chosen how to deal with the possibilities, which is very reassuring.

Go has some of that feel, but I have never seen a good paradigm for actually doing something more useful with errors than logging. errors.As/errors.Is kind of works, but still puts most of the work on the user to figure out the correct thing. Maybe I haven't dug deep enough though.

For some info on how far you can take error handling in Rust forcing correctness on the users see this blog post from Sled (an embedded KV database) http://sled.rs/errors.html


Thanks, I hadn't yet heard of Tokio; I'll save that and the Sled article in my rust.org file.

Interestingly, the example code on the Tokio GitHub page uses the antipattern described in the Sled article regarding the try operator.


> Go is terrific for multithreaded apps.

Yes, true. But so is Erlang/Elixir.


Thank you. I've never used Erlang, though I understand that its philosophy is well suited to high-uptime applications such as servers, which I do work on. I will definitely give it a test drive.


I haven’t written any Go, but having read through some tutorials I find this a little hard to believe. Besides, Node has already been the next PHP, in terms of reputation and ridicule, for almost its entire lifespan. I do expect Go will become more esoteric over time, but mostly because it’s a Google thing (with all the unpredictability that implies), and because it overlaps a couple rapidly evolving use case targets.


It's not. For the niche it has found success in (infra type backend stuff), it's by far the best option available. `if err != nil` is a godsend in that world.


What's so great about `if err != nil` ?


In theory: explicit error handling.

An error should be taken care of immediately, or propagated with some additional context. Returning the error instead of throw/catch makes this explicit and forces the developer to think about it.

In practice: a lot of boilerplate code.

Rust does this better with the Result type and the ? operator.


From a pragmatic perspective, the theoretical advantage is real (typed errors are better than not having them, just as explicit nullables are better than implicit ones). Theoretically, Result and Option are just recognized names, which I don’t mean to discount, but they’re effectively tagged unions with pattern-matching privileges.


I like the “opportunity for correction” and “code proportionality” points brought up in https://medium.com/@shazow/code-boilerplate-is-it-always-bad....


> It's weird and doesn't follow normal language conventions

Maybe this is a key ingredient in successful languages. Just warty and anachronistic enough that users learn a lot of unique patterns that keep them glued to the language.


PHP powers a vast part of the web ecosystem, is far more accessible to learn and deploy than Go, and the language is greatly improved and even loved by many.


Elixir/Phoenix used to be my go to as well. I’ve since switched to Rust with actix-web for backend web projects. I’m really happy with the transition.


The default Go http server is 1.1, no?


The default Go server supports HTTP/2.

https://pkg.go.dev/net/http

> Starting with Go 1.6, the http package has transparent support for the HTTP/2 protocol when using HTTPS.


We will switch about 6 months after adoption of IP6 is complete.


A big thing is that you may be using HTTP/2 or HTTP/3 already if you use something that terminates HTTP for you. This doesn't have to be Cloudflare or whatever; if you run Nginx in front of your Rails server, you can just have Nginx handle HTTP/2 for you. Many deployments or services like AWS, Heroku, S3, etc already enable HTTP/2 for you and it's transparent on the client end.
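For the Nginx case, enabling it is typically a one-word change on the listen directive; a sketch (certificate paths, names, and the upstream port are placeholders):

```nginx
server {
    listen 443 ssl http2;   # or a separate "http2 on;" directive in Nginx >= 1.25
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # Plain HTTP/1.1 to the upstream app server; Nginx terminates HTTP/2.
        proxy_pass http://127.0.0.1:3000;
    }
}
```

Note the proxy_pass line: the Rails (or whatever) backend keeps speaking HTTP/1.1 and never knows the difference.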

The widespread deployment of caching utilities and other frontend application middleware was, in a paradoxical way, a huge boon that led to easy success for HTTP/2 (and now /3) while making it appear a bit bland to everyone else. "Nobody uses this, right? I mean, I've never changed anything." You weren't supposed to! For many purposes this is intended: service providers, operational admins etc. can largely tweak this all independently at the caching service layer, for instance. Taken together it's mostly just "free performance" for the case of a user who has a reasonably reliable network.

Now, don't get me wrong, you need to understand caching and the HTTP connection lifecycle. There are aspects of newer HTTP versions that do impact the development experience. For example if you're sure you're using HTTP/2, you can rely on multiplexing. That means that, for instance, if your user has a dashboard with many panels, and each panel is doing independent requests to get metrics data (like Grafana, for instance), then all of those HTTP requests for data happen over 1 TCP connection. If you had to use many connections, you'd either have to throttle/serialize updates, use pipelining, or it might not work at all since the browser will refuse to open too many at once. But you can rely on this working with HTTP/2. If you rely on multiplexing, the penalty of having many small files on a single domain is much smaller. So those user avatars hosted on mybusiness-static-content.net will load much faster for all users too. That's just free performance on the table. But you have to design with it in mind.

So to answer your question: why should you migrate? Well, in general the answer is simple: better performance and better security (HTTP/2 and HTTP/3 both mandate TLS, for example), and HTTP/3 handles flaky mobile connections much better than either v1 or v2. That might really help for something like an API endpoint that mobile device clients access, for instance.

If you start a new project, should you consider anything except HTTP/1.1? Yes! Because HTTP/2 and HTTP/3 are here to stay, so it's not like you're betting the farm on something untested. You do need to consider client library availability; you can't get away with a half-baked "HTTP/2" implementation as easily as you could with HTTP/1.1 (this is the dark side of people who complain about binary-vs-text formats: it was very easy to get HTTP/1.1 very wrong). But they have tangible benefits for end users and, taken advantage of, can help you do some things you couldn't before. Ignoring any claims that it's a sign of website bloat, "download a lot of small files" is actually a legitimate, huge use case you couldn't do efficiently before, in the browser or otherwise! Some non-browser workloads I've had see incredible performance improvement just from this alone. And if you're writing server software? Well, a good reason is "because it's easy to deploy". You might only need to change a configuration in your existing terminator.

Will there be an HTTP/4? I guess nothing is impossible here but I wouldn't bet on it. HTTP/2 was the big change. HTTP/3 was a "big" change not for HTTP, but because fixing HOL blocking at the TCP layer can only be done by, well, not using TCP! It's otherwise the exact same thing you know already, just in binary form, over a different transport stream. There could of course be other things down the line, but it seems like if anything continues to evolve, it will be QUIC itself, and less-so for HTTP.


>The real question here is, what are the arguments in favor of a migration? When starting a new web project, should you really consider anything else than HTTP/1.1?

Depends on what you mean by "web project". How the server delivers data might not be of importance.



> HTTP/2 came out in 2014, and since then I still haven't seen it used in production in the jobs I worked at.

I'd already be happy if I could use IPv6.



