
For what it's worth, C++17 added [[nodiscard]] to address this issue.
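
A minimal illustration (the function name here is made up):

    #include <cstddef>

    // Marking the function [[nodiscard]] makes the compiler warn whenever
    // a caller silently drops the return value.
    [[nodiscard]] bool write_all(const char* buf, std::size_t len);

    void caller() {
      write_all("hello", 5);  // warning: ignoring return value of 'write_all'
    }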

The story here is a bit complicated. WebTransport is, in some sense, an evolution of the RTCQuicTransport API, which was originally meant to solve the issues people had with the SCTP/DTLS stack used by RTCDataChannel. At some point, the focus switched to client-server use cases, with an agreement that we could come back to the P2P scenario after solving the client-server one.


This sounds very similar to how base::WeakPtr works in Chromium [0]. It's a reasonable design, but it only works as long as the pointer is only accessed from the thread it was created on.

[0] https://chromium.googlesource.com/chromium/src/+/HEAD/base/m...
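
Roughly, the usage pattern is (simplified from memory; details may not match current Chromium exactly):

    #include "base/functional/bind.h"
    #include "base/location.h"
    #include "base/memory/weak_ptr.h"
    #include "base/task/sequenced_task_runner.h"
    #include "base/time/time.h"

    class Controller {
     public:
      void ScheduleTimeout() {
        // If |this| is destroyed before the task runs, the WeakPtr bound into
        // the callback becomes null and the callback is silently skipped --
        // but only if destruction happens on this same sequence.
        base::SequencedTaskRunner::GetCurrentDefault()->PostDelayedTask(
            FROM_HERE,
            base::BindOnce(&Controller::OnTimeout, weak_factory_.GetWeakPtr()),
            base::Seconds(5));
      }

     private:
      void OnTimeout() {}

      // Conventionally the last member, so it is destroyed first and
      // invalidates outstanding WeakPtrs before the rest of the object.
      base::WeakPtrFactory<Controller> weak_factory_{this};
    };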


The article seems to assume that the application backend is in the same datacenter as the load balancer, which is not necessarily true: people often put their load balancers at the network edge (which helps reduce latency when the response is cached), or simply outsource them to a CDN vendor.

> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don’t suffer from TCP slow start as much, and that’s assuming your operating system hasn’t been tuned to disable slow start entirely, which is very common on servers.

A single HTTP/1.1 connection can only process one request at a time (unless you attempt HTTP pipelining), so if you have N persistent TCP connections to the backend, you can only handle N concurrent requests. Since all of those connections are long-lived and are sending at the same time, if you make N very large, you will eventually run into TCP congestion control convergence issues.

Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.


I think the more common architecture is for the edge network to terminate SSL and then forward to a load balancer that actually sits in the final data center? In which case you can use HTTP/2 or HTTP/3 on both of those hops without requiring it on the application server.

That said, I still disagree with the article's conclusion: more connections mean more memory, so even within the same DC there should be benefits to HTTP/2. And if the app server supports async processing, there's value in hitting it with concurrent requests to make the most of its hardware, and HTTP/1.1 head-of-line blocking really destroys a lot of possible perf gains when the response time is variable.

I suppose I haven't done a true bake-off here, though, so it's possible the effect of HTTP/2 in the data center is more marginal than I'm imagining.


HTTP/2 isn't free, though. You don't have as many connections, but you do have to track each stream of data, making RAM a wash if TLS is nonexistent or terminated outside your application. Moreover, on top of the branching the kernel does to route traffic to the right connection, you need an extra layer of branching in your application code, and you have to apply it per frame, since request fragments can be interleaved.
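
Roughly the kind of extra layer I mean (an illustrative sketch, not real server code; Stream and the handler are made up):

    #include <cstdint>
    #include <map>
    #include <vector>

    struct Stream { std::vector<uint8_t> body; bool complete = false; };

    // Every DATA frame has to be routed to its stream before any
    // request-level work can happen; fragments of different requests
    // arrive interleaved on the same connection.
    void OnDataFrame(uint32_t stream_id, bool end_stream,
                     const std::vector<uint8_t>& payload,
                     std::map<uint32_t, Stream>& streams) {
      auto it = streams.find(stream_id);  // which request is this a piece of?
      if (it == streams.end()) return;    // unknown or already-reset stream
      auto& stream = it->second;
      stream.body.insert(stream.body.end(), payload.begin(), payload.end());
      if (end_stream) stream.complete = true;  // only now can the app see a full request
    }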


TCP slow start is not an issue for load balancers, as operating systems cache the congestion window (cwnd) on a per-host basis, even after all connections to that host have been terminated. That is, the next time a connection to the same backend host is created, the OS uses a higher initial congestion window (initcwnd) during slow start, based on the previously cached value. It does not matter whether the target backend host is in the same datacenter or not.


Isn't this the problem that JSON5 (and probably other similar projects) is supposed to solve?

Both JSON (as defined in the RFC) and JSON5 have the nice property of being well-defined, meaning that you can use different libraries in different languages on different platforms to parse them and expect the same result. "JSON, but the parser behaves reasonably (as defined by the speaker)" does not have this property.


JSON5 would be OK if that's all it did. They added so much additional, unnecessary complication that it undermines the simplicity that makes JSON good.
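
For reference, roughly what it accepts on top of plain JSON (from memory, not exhaustive):

    {
      // comments,
      unquoted: "keys without quotes",
      single: 'single-quoted strings',
      hex: 0xDECAF,
      leadingDecimal: .5,
      infinity: +Infinity,
      trailing: "commas",
    }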


http://seriot.ch/projects/parsing_json.html

"Despite the clarifications they bring, RFC 7159 and 8259 contain several approximations and leaves many details loosely specified."


Nothing will probably ever top Markdown in my mind for bullshit specifications.

And Gruber wouldn’t give Jeff Atwood (or, it seems, anybody else) permission to call his variant <something> Markdown, so we ended up with CommonMark and GFM.

JSON5 is good for JSON at rest, as others have already mentioned.


> HTTP → RFC-2616 says in section 19.3 says "we recommend that applications ... recognize a single LF as a line terminator...." In other words it is perfectly OK for an HTTP client or server to accept CR-less HTTP requests or replies. It is not a violation of the HTTP standard to do so. Therefore they should.

The most up-to-date version of the HTTP/1.1 spec is RFC 9112, which says:

> Although the line terminator for the start-line and fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.

"MAY", of course, is different from "MUST" or "SHOULD", so I feel like the author's claim that implementations rejecting bare NLs are broken is at odds with the specification.


This comes down to Postel's Law: they recommend liberally accepting what you are conservatively not allowed to send. Also from RFC 2616, but not cited by the author:

> This flexibility regarding line breaks applies only to text media in the entity-body; a bare CR or LF MUST NOT be substituted for CRLF within any of the HTTP control structures (such as header fields and multipart boundaries).

They aren't going to allow sending bare LF until at least one bump to a higher protocol version in which every server MUST accept it.


W3C generally requires Working Group participants to provide IPR licensing commitments for the spec in question [0]. As far as I understand, a higher level of specification maturity implies a stronger level of obligation, though the specifics of what exactly changes at which stage were never clear to me.

[0] https://www.w3.org/policies/patent-policy/#sec-Requirements


I'm not sure where rustls comes from -- Chrome uses BoringSSL, and last time I checked, Mozilla's implementation used NSS.


Generally, L2 networks are engineered with the assumption that they will carry TCP, and TCP performs really poorly at high loss rates (it depends on the specific congestion control used, but the boundary can be anywhere between 1% and 25%), so they try to ensure at the L2 level that losses are minimal. There are some scenarios in which a network can be engineered around high loss rates (e.g. some data center networks), but those don't use TCP, at least not with traditional loss recovery.

Error correction codes at the L4 level are generally only useful in very low latency situations, since if you can wait one RTT, you can just have the original sender retransmit the exact packets that got lost, which is inherently more efficient than any ECC.


It's possible to build something similar on top of TCP -- see Minion [0] for an example. There are multiple reasons why this is less practical than building on top of UDP, the main two being, from my perspective: (1) it requires cooperation from the OS (either in the form of an advanced API, or a privilege level high enough to write TCP manually), and (2) it falls apart in the presence of TCP middleboxes.

[0] https://dedis.cs.yale.edu/2009/tng/papers/nsdi12-abs/

