
In my opinion HTTP/2 is too complex.

I think the better route is to investigate how HTTP/1.1 could be layered atop multiplexing transport protocols like SCTP.
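
Something like the sketch below is what I have in mind (just a sketch, assuming Linux with lksctp-tools; the address and paths are placeholders): two ordinary HTTP/1.1 requests go out on separate SCTP streams of one association, so the transport handles the multiplexing and HTTP itself stays as it is.

    /* Hypothetical sketch: two plain HTTP/1.1 requests multiplexed onto
       separate SCTP streams of a single association. Assumes Linux with
       lksctp-tools; the server address is a placeholder. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <netinet/sctp.h>

    int main(void) {
        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

        /* Ask for two outbound streams when the association is set up. */
        struct sctp_initmsg init = { .sinit_num_ostreams = 2, .sinit_max_instreams = 2 };
        setsockopt(sd, IPPROTO_SCTP, SCTP_INITMSG, &init, sizeof(init));

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);   /* placeholder address */

        if (connect(sd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        const char *req1 = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
        const char *req2 = "GET /style.css HTTP/1.1\r\nHost: example.com\r\n\r\n";

        /* The 8th argument is the SCTP stream number: each request gets its
           own stream, so a slow response on one stream does not head-of-line
           block the other the way pipelined HTTP/1.1 over TCP does. */
        sctp_sendmsg(sd, req1, strlen(req1), NULL, 0, 0, 0, 0, 0, 0);
        sctp_sendmsg(sd, req2, strlen(req2), NULL, 0, 0, 0, 1, 0, 0);

        close(sd);
        return 0;
    }

The responses would still be plain HTTP/1.1 responses, just read per stream instead of per connection.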

It might take 10 years to reach wide deployment, but HTTP/1.1 over TCP works just fine in the meantime.




We know SCTP is less deployable. What benefit does it have?

And as the FAQ says, if you don't use header compression then response headers alone will take many RTTs to transmit due to slow start.
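
Rough numbers, under common assumptions (initial congestion window of 10 segments per RFC 6928, window roughly doubling each RTT, and made-up figures of 100 responses with ~800 bytes of uncompressed headers each; it also ignores that headers and bodies interleave, so treat it as an order-of-magnitude sanity check only):

    /* Back-of-the-envelope for the slow-start cost of uncompressed headers.
       The response count, header size and MSS are illustrative assumptions,
       not measurements. */
    #include <stdio.h>

    int main(void) {
        const long responses = 100;    /* page with many subresources (assumed)      */
        const long hdr_bytes = 800;    /* uncompressed headers per response (assumed) */
        const long mss       = 1460;   /* typical TCP segment payload                 */
        long cwnd_segments   = 10;     /* initial congestion window, RFC 6928         */

        long remaining = responses * hdr_bytes;
        int rtts = 0;
        while (remaining > 0) {
            remaining -= cwnd_segments * mss;  /* one window's worth per round trip        */
            cwnd_segments *= 2;                /* slow start roughly doubles cwnd each RTT */
            rtts++;
        }
        printf("%ld header bytes take about %d round trips under slow start\n",
               responses * hdr_bytes, rtts);
        return 0;
    }

Every window spent on header bytes is a window not spent on body bytes, which is the cost header compression is trying to avoid.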


It has the benefit of giving us separate protocols for the transport and application layers. And of already existing, though it isn't widely deployed yet.

But it would allow multiple HTTP requests to be multiplexed over a single connection.

SCTP also uses slow-start congestion control, but since it would multiplex the HTTP requests over a single connection, there would be only one initial slow-start phase for all the requests.


there would only be one initial slow start for all the requests

Which is worse than HTTP/1.1. Avoiding that problem is why HTTP/2 uses HPACK.
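
A toy version of the indexing idea behind it, just to show why repeated headers nearly vanish (this is not the real HPACK wire format: no static table, no Huffman coding, no eviction, and the header values are made up):

    /* Toy illustration of header indexing: a header seen before is replaced
       by a 1-byte index into a table both endpoints maintain. NOT real HPACK
       (no static table, no Huffman coding, no eviction). */
    #include <stdio.h>
    #include <string.h>

    #define TABLE_MAX 64

    static char table[TABLE_MAX][256];
    static int  table_len = 0;

    /* Bytes this header costs on the wire in the toy scheme. */
    static size_t encode_header(const char *header) {
        for (int i = 0; i < table_len; i++)
            if (strcmp(table[i], header) == 0)
                return 1;                        /* already indexed: one byte */
        if (table_len < TABLE_MAX)
            snprintf(table[table_len++], sizeof table[0], "%s", header);
        return strlen(header);                   /* sent literally once       */
    }

    int main(void) {
        const char *headers[] = {
            "user-agent: Mozilla/5.0",
            "cookie: session=abc123",
            "accept: text/html",
        };
        size_t literal = 0, indexed = 0;

        /* 50 multiplexed requests all carrying the same three headers. */
        for (int req = 0; req < 50; req++)
            for (int h = 0; h < 3; h++) {
                literal += strlen(headers[h]);
                indexed += encode_header(headers[h]);
            }
        printf("without indexing: %zu bytes, with indexing: %zu bytes\n",
               literal, indexed);
        return 0;
    }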


Why worse? Wouldn't multiple HTTP/1.1 requests sharing a persistent TCP connection also have only one initial slow-start phase?

Maybe SCTP's multiplexing/parallelization of the requests, and thus of the initial headers, affects this negatively, since more headers would have to be transferred during that one initial slow-start phase.

If the delay caused by slow start is a big problem, one could add header compression to HTTP/1.x and run it over SCTP.
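
For instance, just deflating the header block before it hits the transport would be the simplest bolt-on. A rough sketch, assuming zlib and made-up header values; worth noting that generic deflate over headers is what SPDY did, and it turned out to be vulnerable to the CRIME attack, which is one reason HPACK looks the way it does:

    /* Rough sketch of bolting header compression onto HTTP/1.x: deflate the
       header block before handing it to the transport. Assumes zlib (link
       with -lz). A real implementation would keep a streaming deflate
       context per connection so later requests benefit from earlier ones. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *headers =
            "GET /index.html HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "User-Agent: Mozilla/5.0\r\n"
            "Accept: text/html\r\n"
            "Cookie: session=abc123\r\n\r\n";

        unsigned char out[1024];
        uLongf out_len = sizeof(out);

        if (compress2(out, &out_len, (const unsigned char *)headers,
                      strlen(headers), Z_BEST_COMPRESSION) != Z_OK) {
            fprintf(stderr, "compress failed\n");
            return 1;
        }
        printf("headers: %zu bytes raw, %lu bytes deflated\n",
               strlen(headers), (unsigned long)out_len);
        return 0;
    }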

I understand that a framing layer designed specifically for and embedded in HTTP/2 will be more efficient than a generic transport like SCTP. But my argument is that a multiplexing transport protocol like SCTP could be good enough, and usable for more than just HTTP/2. And the focus could then be on simplifying HTTP instead.



