The generational garbage collectors in all (or the vast majority of) JavaScript implementations are specifically tuned to make this a non-issue for the vast majority of code.
It's the same argument people make against many React patterns: that inline objects or inline functions generate additional garbage. But when tested, the time spent cleaning up that garbage turns out to be insignificant (and in the React case, workarounds often end up causing performance issues at boot time by moving that garbage-generating code out of the render function, where it was a non-issue in the first place!)
Without measurements you are just guessing, and JITs don't always work the way most will intuitively assume they work. I've tested the overhead of creating and throwing away objects on each iteration of a loop, and the reality is that it just doesn't impact the performance of the code in any meaningful way. The GC happily cleans it up extremely fast, and in some cases the JIT will even compile the code in a way to avoid generating the garbage at all.
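As a rough illustration of the kind of test described above (a sketch, not a rigorous benchmark; the iteration count and the workload are arbitrary choices here), you can compare a loop that allocates a throwaway object per iteration against one that reuses a single object:

```javascript
// Rough micro-benchmark sketch: per-iteration allocation vs. reuse.
// Timings vary by engine and are only indicative; modern JITs may
// eliminate the allocation entirely (escape analysis / scalar replacement).
function sumWithFreshObjects(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    const point = { x: i, y: i * 2 }; // new garbage every iteration
    total += point.x + point.y;
  }
  return total;
}

function sumWithReusedObject(n) {
  let total = 0;
  const point = { x: 0, y: 0 }; // allocated once, mutated in place
  for (let i = 0; i < n; i++) {
    point.x = i;
    point.y = i * 2;
    total += point.x + point.y;
  }
  return total;
}

const n = 1e6;
console.time('fresh');
const a = sumWithFreshObjects(n);
console.timeEnd('fresh');
console.time('reused');
const b = sumWithReusedObject(n);
console.timeEnd('reused');
console.log(a === b); // true: same result either way
```

On typical runs the difference between the two is in the noise, which is the point: measure before restructuring code to avoid allocations.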
Agreed on all points, but I'd just like to point out that the main reason inline objects and inline functions are a perf antipattern in React is not garbage collection pressure, but rather the fact that they invalidate perf optimizations like PureComponent and React.memo, because the objects/functions will have new references on each render. If you use those two together, you're incurring the cost of the individual prop comparisons done by PureComponent/React.memo without reaping the benefit of being able to skip the render entirely when no references have changed.
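The reference-identity point can be seen in plain JavaScript, independent of React (a minimal sketch; `render` and the shallow comparison here just mimic what a memoized component's prop check does):

```javascript
// Each evaluation of an object or arrow-function literal produces a
// brand-new reference, so a shallow prop comparison always sees a change.
function render() {
  return { style: { color: 'red' }, onClick: () => {} }; // fresh refs each call
}

const prev = render();
const next = render();

// Shallow comparison in the style of React.memo's default: compares references.
function shallowEqual(a, b) {
  const keys = Object.keys(a);
  return keys.length === Object.keys(b).length &&
         keys.every(k => a[k] === b[k]);
}

console.log(shallowEqual(prev, next)); // false: inline literals defeat memoization
```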
They are in that one case, but the vast majority of components aren't PureComponent, and therefore it's not a universal "perf antipattern".
It's also one of those things that people tend to extrapolate out. They hear that the function changing every render causes perf issues, and then just assume it's true and apply that thinking everywhere. It's so common that the Hooks FAQ actually has a section dedicated to talking about this specifically [1]!
We are getting off topic now, but this was one of the areas of React's design that I believe the React team wasn't happy about. You had/have to treat components differently depending on whether they are pure, whether they implement `shouldComponentUpdate`, and how they implement it.
It's also one of the things that the new Hooks API solves. With hooks like `useMemo` and `useCallback` you can cache those values and functions that you pass around.
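A rough sketch of the caching idea behind those hooks (this is not React's actual implementation; React stores this state per hook call site, and `createMemo` here is just an illustrative stand-in):

```javascript
// Sketch of the dependency-based caching that useMemo/useCallback provide:
// the factory only re-runs when a dependency's reference changes.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return (factory, deps) => {
    const changed = lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => d !== lastDeps[i]);
    if (changed) {
      lastValue = factory();
      lastDeps = deps;
    }
    return lastValue; // same reference as long as deps are unchanged
  };
}

const memo = createMemo();
const f1 = memo(() => () => console.log('click'), [1]);
const f2 = memo(() => () => console.log('click'), [1]);
const f3 = memo(() => () => console.log('click'), [2]);
console.log(f1 === f2); // true: deps unchanged, cached function reused
console.log(f1 === f3); // false: deps changed, new function created
```

Because the returned reference is stable while the deps are stable, a memoized child receiving it can skip re-rendering.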
> Without measurements you are just guessing, and JITs don't always work the way most will intuitively assume they work. I've tested the overhead of creating and throwing away objects on each iteration of a loop, and the reality is that it just doesn't impact the performance of the code in any meaningful way. The GC happily cleans it up extremely fast, and in some cases the JIT will even compile the code in a way to avoid generating the garbage at all.
This is why I get so disheartened when writing "benchmarks" for code. I don't know much about what gets transformed to what under the hood, and I think you can only write confident benchmarks if you know these things. I'm sure I've spent hours writing completely pointless benchmarks.
That would turn Symbol.iteratorDone into a dangerous value, that can't safely be yielded from a generator, or included in an array, or passed to a variadic function.
I don't know if that would be much of a problem in practice but it feels wrong instinctively.
Exactly. `Symbol.iteratorDone` would be a kind of "null" value. The existing approach is equivalent to returning Either<a,b> in a language with labelled sums; it carefully distinguishes between the envelope and the payload, if you like.
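The envelope/payload distinction shows up concretely in a hand-written iterator (a minimal sketch): the `{ value, done }` wrapper lets `undefined` itself be a perfectly valid yielded value, which a sentinel like `Symbol.iteratorDone` could not cleanly distinguish.

```javascript
// A hand-rolled iterator that yields `undefined` as a legitimate value.
// The { value, done } envelope distinguishes "yielded undefined"
// from "iteration finished" without reserving any sentinel value.
function makeIterator(items) {
  let i = 0;
  return {
    next() {
      return i < items.length
        ? { value: items[i++], done: false }
        : { value: undefined, done: true };
    },
    [Symbol.iterator]() { return this; }
  };
}

const it = makeIterator([undefined, 42]);
console.log(it.next()); // { value: undefined, done: false } -- a real value
console.log(it.next()); // { value: 42, done: false }
console.log(it.next()); // { value: undefined, done: true } -- the end
```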
Python's solution is to throw a StopIteration exception when the iterator runs out. That works out ok as long as you don't manually throw it in weird places, but a naive implementation could have high overhead because exceptions are expensive.
PHP's iterators have separate methods for advancing to the next element, checking if there's a current element, and getting the current element. That's cumbersome, and requires keeping extra state, but it's abstracted over so it doesn't typically matter. I think it mimics a bizarre old way of iterating through arrays.
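For comparison, here is that three-method shape sketched in JavaScript (an assumption on my part: this mirrors PHP's `Iterator` interface of `rewind`/`valid`/`current`/`next`, not any real JS API); note the extra cursor state the object has to carry:

```javascript
// PHP-style iteration translated to JavaScript: separate methods for
// advancing, checking validity, and reading the current element.
class PhpStyleIterator {
  constructor(items) { this.items = items; this.pos = 0; }
  rewind()  { this.pos = 0; }                       // reset the cursor
  valid()   { return this.pos < this.items.length; } // is there a current element?
  current() { return this.items[this.pos]; }         // read without advancing
  next()    { this.pos++; }                          // advance without reading
}

const letters = new PhpStyleIterator(['a', 'b']);
const out = [];
for (letters.rewind(); letters.valid(); letters.next()) {
  out.push(letters.current());
}
console.log(out); // ['a', 'b']
```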
There are multiple ways to signal the end. Javascript's is pretty clean.
Oof, I'd definitely take JavaScript's wrapper objects over an exception. Iteration complete isn't something that strikes me as an error, it's just the end of the finite list you're iterating over! Does the program expect the list to never end or something? Errors should be reserved for the unexpected. A list ending is very expected, unless you're dealing with an infinite generator.
It seems like that would be low-hanging fruit for a JIT. The JS layer may present it as an object but it would be easy to compile it into something else under the hood. Especially if you never touch the "done" value inside the loop, which seems like a pretty common use case.
Iterators and generators are powerful and can make code much cleaner/more declarative. Especially when used with async/await.
If anyone is interested, I recently released a real-time WebSocket library (Node.js and front end) which only uses Async Iterators to stream data (no event listener callbacks):
As a cool example of the practicality of generators, my son and I wrote a generator in PICO-8 (Lua calls them coroutines) to animate moving the tiles in a 2048 game we made. We used a simple for-loop to animate the tile's pixel coordinates, but yielded at the end of the loop body so that _update() and _draw() could continue. This allowed the program to keep responding to key presses, which would "skip" to the end of an animation by fast-forwarding the coroutine until it was dead, and then create and start the next animation coroutine. I think when he saw that it was just a for-loop that kept "pausing" (yielding) at the end of each iteration, he finally understood generators.
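The same pattern translates to a JavaScript sketch (names like `slideTile` and the step size are illustrative, not from the actual game): the generator yields once per loop iteration, the game loop resumes it each frame, and a key press can fast-forward it to completion.

```javascript
// A generator animates a tile by yielding after each movement step,
// so the surrounding update/draw loop keeps running between steps.
function* slideTile(tile, targetX, step = 4) {
  while (tile.x !== targetX) {
    tile.x += Math.sign(targetX - tile.x) *
              Math.min(step, Math.abs(targetX - tile.x));
    yield; // pause here; _update()/_draw() can run and read key presses
  }
}

const tile = { x: 0 };
const anim = slideTile(tile, 10);

// Each "frame" advances the animation by one step.
anim.next(); // tile.x === 4
anim.next(); // tile.x === 8

// A key press can "skip" by fast-forwarding until the generator is dead:
while (!anim.next().done) {}
console.log(tile.x); // 10
```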
The next step I would have shown, if I had written this article, would have been to tell the reader that JS generators as they are now do not work with async / await.
For a simple Fibonacci interview question they look nice to show off, but when you want to use them for something real, like returning paginated data from a server as an ongoing iterator, current JS generators cannot be used.
This makes JS generators limited for any real usage, given that JS I/O is async.
Both async generator functions and async iterators have been in the pipeline for a while. The proposed, and currently implemented syntax in Firefox and Chrome is `for await ... of`[0]. Creating an async generator function is as easy as `async function* functionName`.
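A minimal sketch of both pieces together, using the paginated-fetch shape mentioned upthread (`fetchPage` is a fake stand-in for a real network call):

```javascript
// Async generator yielding "pages" as they resolve; fetchPage simulates
// a paginated API that returns null when there are no more pages.
async function fetchPage(n) {
  return n < 3 ? { items: [`item${n}a`, `item${n}b`], next: n + 1 } : null;
}

async function* pages() {
  let page = await fetchPage(0);
  while (page) {
    yield page.items;                 // consumer gets each page as it arrives
    page = await fetchPage(page.next);
  }
}

async function main() {
  const all = [];
  for await (const items of pages()) { // the new `for await ... of` syntax
    all.push(...items);
  }
  return all;
}

main().then(all => console.log(all.length)); // 6
```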
You might be interested in a slightly unusual UI framework I wrote that heavily focuses on async generators to define the UI - https://github.com/ajnsit/concur-js.
The idea is that a UI "Widget" is a sequence of UIs (which themselves can be Widgets) generated asynchronously (say when a UI event fires).
A sample widget -
async function*() {
  // Show a clickable button.
  // All processing is async; this will conceptually block until the button is clicked.
  yield* <button onClick>Click me</button>
  // Button was clicked, show text
  yield* <div>Button was clicked!</div>
}
It uses React and VDOM to diff the UIs generated and update the page slightly more efficiently. As you can see, it also provides JSX support (by overriding createElement calls).
The Widgets themselves can be easily composed, and the entire thing is designed to be extremely easy to get started with. The README on github has more information.
I have another article in mind to approach this topic. Also event emitters and streams might be other useful tools of the trade when you have to deal with asynchronous stuff.
I did consider using the Node.js streams at one point but they support a lot of legacy edge cases and are more relevant for I/O than just a simple event stream (which was my use case).
My stream library is less than 100 lines of code and only supports the asyncIterator interface. It also supports multiple concurrent consumers, each consuming at its own pace.
It's actually preferred over async/await in the frontend world in libraries such as redux-saga and mobx flow. One advantage is that you can cancel the iteration early.
If anybody is interested in coding in a way that is closer to requirements, generators are really helpful to switch towards a paradigm called Behavioral Programming: https://lmatteis.github.io/react-behavioral/
Forgive me if I'm missing something but this looks needlessly complex. Could you explain how this could be helpful, especially for developing React applications? The page doesn't really explain it besides insisting that this is closer to how we think, but I don't agree at all.
One of the things it helps with is the idea that we can modify the behavior without having to actually see how old code has been implemented. All one needs is a trace of events (a particular behavior) and can decide with newly added code how to modify that trace to fit the new requirements (by blocking specific traces and have others happen instead).
This is an extremely powerful way of programming because it more easily fits how we develop apps: requirements constantly change. Currently when behavior needs to be modified we are stuck with having to modify and understand old code which can be very tedious (in my opinion it's a crucial pain-point in software development).
Behavioral Programming makes it easier to modify a system without having to understand how it was built, but by observing a particular behavior. Once observed, the behavior can be modified, removed or completely changed incrementally, without having to go back and refactor old code.
I referenced my framework further down the thread as well, but repeating it here since you specifically ask for ease-of-use and React. Please try out Concur-JS which is my async-gen based UI framework that is React based, and designed to be extremely easy to get started with. https://github.com/ajnsit/concur-js
> Consecutive calls to next() will always produce { done: true }.
Not necessarily. You can make recursive generators which never "terminate". For example, here's a codegolfed version of a recursive generator which represents an n + 1 sequence:
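A simple, un-golfed sketch of such a recursive generator (each value delegating to itself for the rest of the sequence) might look like:

```javascript
// A recursive generator for the sequence n, n+1, n+2, ...
// It never produces { done: true }: every call to next() yields a value.
function* from(n) {
  yield n;
  yield* from(n + 1); // delegate to "the rest of the sequence"
}

const seq = from(5);
console.log(seq.next()); // { value: 5, done: false }
console.log(seq.next()); // { value: 6, done: false }
console.log(seq.next()); // { value: 7, done: false }
// ...and so on; note that the delegation depth grows with each value taken.
```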