
> In Deno, sockets are still asynchronous, but receiving new data requires users to explicitly read()

Interesting. If I understand correctly, they're essentially using pull streams[0]/reactive streams[1]. I compiled a few resources on this topic when I was digging into it a while back[2]. I've found the mental model to be very elegant to work with when needing backpressure in asynchronous systems.
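To make the pull model concrete, here's a minimal sketch of an explicit-read loop, assuming Deno's Deno.listen / conn.read API roughly as documented (handle() is a hypothetical async consumer, not a real API):

    const listener = Deno.listen({ port: 8080 });
    for await (const conn of listener) {
      const buf = new Uint8Array(16 * 1024);
      // Data only arrives when we ask for it, so a slow consumer
      // naturally slows down the sender (backpressure).
      while (true) {
        const n = await conn.read(buf);
        if (n === null) break;             // connection closed (EOF)
        await handle(buf.subarray(0, n));  // finish processing before reading more
      }
      conn.close();
    }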

As for the dependencies-as-URLs, I don't mind it, and may prefer it. I've been experimenting with minimizing my dependencies lately, and vendoring them in git submodules. It's worked fairly well.

[0]: https://github.com/pull-stream/pull-stream

[1]: https://github.com/reactive-streams/reactive-streams-jvm

[2]: https://github.com/omnistreams/omnistreams-spec



I would put it as: "They're essentially using plain async versions of regular system calls and Stream APIs (as known from Java/.NET)."

All the reactive-streams stuff was still push streams, just with some backpressure bolted on. The issue was that without async/await you always end up with some kind of callback model, which in turn means a push model.

Whereas with async/await you can mostly model IO the way you would model synchronous IO.
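As a rough illustration (socket and handle() are placeholders for a readable socket and a slow async consumer):

    // Push model (Node-style): 'data' events fire whether or not the
    // handler has kept up.
    socket.on('data', (chunk) => handle(chunk));

    // Pull model via async/await: reads look like synchronous IO, and
    // the next chunk isn't consumed until the current one is done.
    for await (const chunk of socket) {  // Readable streams are async iterable
      await handle(chunk);
    }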


When describing the input stream back-pressure problem:

> To mitigate this problem, a pause() method was added. This could solve the problem, but it required extra code; and since the flooding issue only presents itself when the process is very busy, many Node programs can be flooded with data. The result is a system with bad tail latency.

Are they saying that even with correct use of pause() there are still issues?


They're saying that since the issue doesn't present itself until it's too late, most code doesn't use pause().
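Correct use looks roughly like this in Node's push model (a sketch; handle() is a hypothetical slow async consumer):

    socket.on('data', (chunk) => {
      socket.pause();                             // stop further 'data' events
      handle(chunk).then(() => socket.resume());  // ask for more once we've caught up
    });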


I'm not sure this is referencing the JS API, which will be based around promises like Node's. I believe this is referencing the Rust implementation, which is built on top of Tokio and Rust's async/await.


Is this different from how streams work in most other languages, e.g. Java, Go, Python?


Most languages block by default, so backpressure is much easier to model: just don't read until you need more data and the sender will block.

But in JS the receiving end will just keep firing off data events until you pause it, or use something like reactive streams to request data as you're ready for it.

That's my understanding of the situation at least.
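For concreteness, the "request data as you're ready for it" idea looks roughly like this, transcribed into TypeScript from the reactive-streams interfaces linked above (oneAtATime and handle are illustrative, not from any real library):

    interface Subscription { request(n: number): void; cancel(): void; }
    interface Subscriber<T> {
      onSubscribe(s: Subscription): void;
      onNext(item: T): void;
      onComplete(): void;
      onError(err: unknown): void;
    }

    // A subscriber that signals demand for one chunk at a time.
    function oneAtATime(handle: (chunk: Uint8Array) => Promise<void>): Subscriber<Uint8Array> {
      let sub: Subscription | null = null;
      return {
        onSubscribe(s) { sub = s; s.request(1); },                    // initial demand
        onNext(chunk) { handle(chunk).then(() => sub?.request(1)); }, // more only when done
        onComplete() {},
        onError(err) { console.error(err); },
      };
    }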



