I feel like the toy examples in the article might only make sense to people who already know why they want this feature. The examples don't make it at all obvious to me why I would want this feature. The new version in the examples have more code, more indirection, and more magic (in the sense that it relies directly on a specific property of the runtime). Could anyone here help me understand a more robust example where `try...finally` just won't work (or at least would be patently less readable/etc)?
The verbosity comes because they are demonstrating both how to write the library code to support the feature, and how to consume it. But in reality a lot of the time someone else will have written the library code for you.
In this (still contrived) example, we end up having to do nested try/finally blocks.
Before:
let totalSize = 0;
let fileListHandle;
try {
  fileListHandle = await open("file-list.txt", "r");
  for await (const line of fileListHandle.readLines()) {
    let lineFileHandle;
    try {
      lineFileHandle = await open(line, "r");
      totalSize += (await lineFileHandle.read()).bytesRead;
    } finally {
      await lineFileHandle?.close();
    }
  }
} finally {
  await fileListHandle?.close();
}
console.log(totalSize);
After:
let totalSize = 0;
{
  await using fileListHandle = getFileHandle("file-list.txt", "r");
  for await (const line of fileListHandle.readLines()) {
    await using lineFileHandle = getFileHandle(line, "r");
    totalSize += (await lineFileHandle.read()).bytesRead;
  }
}
console.log(totalSize);
Thank you for this example; it wasn't clear to me reading the article, but this is the main problem I was hoping would be solved. Will make writing tests much smoother.
As a consumer of the library, how will I know which calls need to have the using keyword? Is it a case of having to rely on the documentation stating so, or is there some other tell that something contains a Symbol.dispose function?
Using TypeScript, you can imagine a little squiggle in VS Code under an `open` call that isn't paired with `using`; TypeScript can know that the object has a disposable symbol.
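Beyond tooling, there is also a runtime tell: a disposable resource exposes a function under the well-known symbol `Symbol.dispose` (or `Symbol.asyncDispose` for async disposal). A small sketch of checking for the protocol; the `??=` polyfill lines mirror what TypeScript's emit does on runtimes that predate these symbols:

```javascript
// Polyfill the well-known symbols on older runtimes (no-op where they exist).
Symbol.dispose ??= Symbol("Symbol.dispose");
Symbol.asyncDispose ??= Symbol("Symbol.asyncDispose");

function isDisposable(value) {
  return value != null && typeof value[Symbol.dispose] === "function";
}

function isAsyncDisposable(value) {
  return value != null && typeof value[Symbol.asyncDispose] === "function";
}

// A hypothetical handle that opts in to the protocol:
const handle = {
  closed: false,
  [Symbol.dispose]() { this.closed = true; },
};

console.log(isDisposable(handle));         // true
console.log(isDisposable({ close() {} })); // false -- a close() method alone is not enough
```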
I wonder now if React / Vue / Svelte / SolidJS (and all the others) could use this to clean things up, as a finer-grained way of handling unmounting of nodes, for example.
Not likely. This is essentially syntactic sugar for `try ... finally`, the resource disposal is scope based. Node unmounting is linked to a different (longer) lifetime system than plain javascript scope.
It’s possible that there are UI systems where this would work but not the ones you listed above.
Yeah, the scope-based disposal jumped out at me from the examples. I suppose that means that anywhere you still have a reference to the resource alive, referenced by an object in the main node loop, that thing would never be automatically destroyed, right? You'd have to manually release it anyway. On the other hand, does this trigger if anything returned from that function goes out of scope? Or not until everything returned from it does?
I think you’re confusing scope and reachability. Maintaining a reference to an object has nothing to do with whether or when it’s disposed in this TC39 language enhancement. Such a system _does_ exist in object finalizers, but it’s hard to use correctly, especially in a language where it’s very easy to inadvertently retain references via closures. Resource disposal of this type needs to be much more predictable and can’t be left to the whims of the runtime. The docs on finalizers and WeakRefs are full of warnings not to expect them to be predictable or reliable.
With this new using syntax, resources are disposed of when the object they are tied to goes out of _lexical scope_, which doesn’t need to worry about the runtime or object lifetimes at all. This example from the TC39 proposal makes it pretty clear:
function* g() {
  using handle = acquireFileHandle(); // block-scoped critical resource
} // cleanup

{
  using obj = g(); // block-scoped declaration
  const r = obj.next();
} // calls finally blocks in `g`
Maybe this is a dumb question, but for something to be reachable - i.e. not marked / swept by a garbage collector - doesn't it need a reference in the active scope? Weak references exist specifically to allow event handlers to be dereferenced at lazy intervals, but that's not what I'm talking about. What I mean is, if the above function returned a database connection to the Nodejs main loop, which stuffed it into a pool array of connections, wouldn't that still remain in scope for the remainder of the program unless it were explicitly deleted?
> if the above function returned a database connection to the Nodejs main loop, which stuffed it into a pool array of connections, wouldn't that still remain in scope for the remainder of the program unless it were explicitly deleted?
No, since these are block-scoped the original variable goes out of scope when the block it was declared in ends. The underlying _value_ that the variable is a reference to certainly can escape the block in a number of ways (assignments to existing variables or properties, closures), but this system doesn’t care about any of that, it’s directly equivalent to using try and finally, and finally blocks execute when you would expect them to.
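The desugaring makes this concrete: a `using` declaration is roughly a try/finally around the rest of the block, so the value can escape the block (here via a push into an outer array) and disposal still runs at block exit. A sketch with the desugaring written out by hand:

```javascript
// No-op where Symbol.dispose already exists.
Symbol.dispose ??= Symbol("Symbol.dispose");

const escaped = [];

function makeResource() {
  return {
    disposed: false,
    [Symbol.dispose]() { this.disposed = true; },
  };
}

// `using res = makeResource();` inside a block desugars to roughly this:
{
  const res = makeResource();
  try {
    escaped.push(res);     // the value escapes the block...
  } finally {
    res[Symbol.dispose](); // ...but disposal still runs when the block ends
  }
}

console.log(escaped[0].disposed); // true -- escaping did not prevent disposal
```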
By abstracting away the countless ways a function can handle cleanup, you provide a single, uniform interface that anyone can implement. This is akin to `.then`: an agreed-upon interface that lets people compose asynchronous work. The syntax sugar of async/await relies on the existence of a `then` method, demonstrating a similar concept.
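The `then` analogy is easy to demonstrate: `await` adopts any object with a `then` method (a "thenable"), whether or not it is a real Promise:

```javascript
// A plain object with a `then` method is treated like a promise by `await`.
const thenable = {
  then(resolve) {
    resolve(42);
  },
};

(async () => {
  const value = await thenable;
  console.log(value); // 42
})();
```

`using` standardizes cleanup the same way: instead of every library inventing its own `close`/`destroy`/`release`, the runtime looks for one agreed-upon method, `[Symbol.dispose]`.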
In the context of the parent question then it's the consistent standard syntax that makes things easier to read - something that indeed is impossible to see with an isolated example that, if anything, may look less clear than the syntax you're used to.
Scoped resources are essential for avoiding global singletons. The JS/Node ecosystem acts like (and often believes) globally-scoped singletons are a recommended pattern for things like database connections.
The reality is, there’s no better option at the moment.
Virtually every other ecosystem has concluded globals are not the best practice. (At least until we return to dependency injection containers where they are suddenly cool again but I digress.)
It can be useful to take a step back and think about where best practices come from. What problems do they solve?
Singletons are bad in complicated, long-running processes, because you're in trouble if you want to have more than one of something, and cleanup can be a problem. A one-to-one relationship with the running process is problematic.
But JavaScript often runs in a disposable runtime environment that forces cleanup when terminated. For example, a web page or a web worker. Memory leaks usually aren't a problem and you can just treat it like arena allocation.
If you want more than one web page, it's very easy to do.
Similarly, if you're writing scripts using disposable Unix processes then a memory leak in a command isn't all that big a deal; you can sometimes get away with never freeing anything because the OS will do it.
NodeJS, however, is not running in a disposable runtime environment. It is also the location where things that need cleanup are likely to occur (database connections, for example).
For the browser context, I don’t have a good use case for ‘using’ (except for browser devs themselves where some code could be in JS now but impossible before).
Then again, people always surprise you with new use cases!
Singletons are considered 'bad' under general advice because they can make testing hard.
As with everything, there is a time and a place, but if you understand the tradeoffs to recognize that time and place you won't be soliciting random advice from the internet, and thus won't hear the 'good'.
I get the impression that the JavaScript world largely doesn't care much for testing, though.
When using singletons for DB connections in TS, I will generally have a singleton function that returns the resource. So getDatabaseConnection will return a databaseConnection.
It’s pretty simple, but it has solved the testing problem for me because within that getResource function I can check if the environment being ran in is a test environment. If so, return a mocked instance. If not, return the real instance.
It’s pretty rudimentary, but it’s solved our issues.
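A minimal sketch of that accessor pattern, assuming `NODE_ENV` is how the test environment is detected (names like `getDatabaseConnection` and `createRealConnection` are illustrative, not a real API):

```javascript
let connection = null;

function createRealConnection() {
  // Placeholder for the real driver call, e.g. new pg.Pool(...).
  throw new Error("not wired up in this sketch");
}

function getDatabaseConnection() {
  if (connection === null) {
    connection = process.env.NODE_ENV === "test"
      ? { query: async () => [] } // mocked instance for tests
      : createRealConnection();   // real instance otherwise
  }
  return connection;
}

process.env.NODE_ENV = "test";
const db = getDatabaseConnection();
console.log(db === getDatabaseConnection()); // true -- same instance both times
```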
Note, you can also place said singleton as a single export in a module, and test anything using that module with a mock on the loader, which most of the newer testing options for JS support pretty easily.
I think from a threading perspective that’s a reasonable statement. But it’s also about creating manageable abstractions, very limited singleton usage ok, but it can easily get out of hand and lead to hard to reason about code.
I rarely write Javascript, so I'm not in tune with the community, but when I have I have had no trouble passing database handles and such things around like I'm used to in every other language I work with more frequently.
It appears all this does is avoid needing to manually call close (or the equivalent)? While that is a nice addition, helping to avoid the situation where you forget, why do globals become the alternative? Isn't simply calling close manually the best option at the moment?
I've only had one scenario where I actually needed something like this (and I don't know if the `using` proposal helps). I have a library (Prolog interpreter) that uses a Wasm module originally written in C, so I have to manage the memory manually. If users iterate through or close a query (e.g. with `for await`) it's fine and cleans itself up. But, it's possible to create an AsyncGenerator, never call .return() on it (which won't run the `finally` block), and have that reference garbage collected. I used a finalizer on a local variable in the async generator to work around this :)
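That workaround might look roughly like this sketch: the `finally` block handles the normal path, and a `FinalizationRegistry` catches the generator being dropped without `.return()`. The `freeNativeMemory` callback stands in for freeing the Wasm allocation, and the idempotent `free` guard prevents a double-free when both paths run. Note that `FinalizationRegistry` is best-effort; timing, and even whether the callback runs at all, is not guaranteed:

```javascript
// Best-effort fallback cleanup for generators dropped without .return().
const registry = new FinalizationRegistry((free) => free());

function makeQuery(freeNativeMemory) {
  let freedAlready = false;
  const free = () => {
    if (!freedAlready) {
      freedAlready = true;
      freeNativeMemory();
    }
  };
  const gen = (async function* () {
    try {
      yield "row1";
      yield "row2";
    } finally {
      free(); // normal path: `for await` completion or .return() lands here
    }
  })();
  registry.register(gen, free); // fallback if gen is GC'd without .return()
  return gen;
}
```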
The proposal[1] explains several cases where try…finally is verbose or even can result in subtle errors. The upshot for me is that this feature adds RAII to JavaScript. It makes resource management convenient and more maintainable. Seems like a no brainer to me.
Based on my reading of the spec, you can't actually return a resource, since the disposal semantics need to map exactly to calling [Symbol.dispose] in a finally block at the end of the current block. Also, unlike C#, you can have multiple using statements in the same block, which will be disposed of in LIFO order.
It's essentially the exact same thing as the using statement in C#, so if you search for that you should be able to find more info.
But as to your question "Could anyone here help me understand a more robust example where `try...finally` just won't work": `using` is basically just syntactic sugar for try/finally, but I'm very much in favor of syntax that gets rid of verbose boilerplate that can obscure the real purpose of your code.
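The LIFO ordering follows directly from that desugaring: each `using` declaration opens its own try/finally, and later declarations nest inside earlier ones. Written out by hand (a sketch without the new syntax):

```javascript
// No-op where Symbol.dispose already exists.
Symbol.dispose ??= Symbol("Symbol.dispose");

const order = [];
const tracked = (name) => ({ [Symbol.dispose]: () => order.push(name) });

// `using a = tracked("a"); using b = tracked("b");` in one block
// desugars to nested try/finally, so disposal runs in LIFO order:
{
  const a = tracked("a");
  try {
    const b = tracked("b");
    try {
      // ... use a and b ...
    } finally {
      b[Symbol.dispose]();
    }
  } finally {
    a[Symbol.dispose]();
  }
}

console.log(order); // order is now ["b", "a"]
```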
When writing a game in JS, it’s very important to minimize garbage collection - too much will cause periodic stuttering and freezes. To avoid GC you want to minimize runtime allocations and mostly allocate upfront and reuse that memory.
One way is to use object pooling, but doing so in JS can be brittle because you have to remember to manually call `Pool.release(obj)`, `obj.free()` or whatever method you’ve chosen to return an object to the pool.
If a developer forgets to do this, you could exhaust the object pool, or if it’s growable, cause a memory leak! In a game’s update loop that could happen very quickly.
With this new feature, you could grab a short-lived object from the pool and automatically return it to the pool at the end of the method or loop.
Example - imagine this is inside an update method called 60 times per second:
for (const enemy of enemies) {
  using pos = Pool.getVec3();
  // do stuff with pos
  enemy.setPosition(pos);
} // pos is returned to pool automatically
You asked about try/catch/finally. The downsides for this use-case are:
* Big performance hit when you use it in a hot loop like this - the disposal could be happening ~10,000 times per second.
* Harder to remember to fill all your loops with try…finally, ugly to have double braces anytime you’re using a pooled object.
* It’s an abuse of syntax if you’re not actually catching any error.
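A hypothetical pool that cooperates with `using` just needs its handles to implement `Symbol.dispose`. This sketch invokes the dispose method manually where `using` would do it implicitly at block exit (all names here are illustrative):

```javascript
// No-op where Symbol.dispose already exists.
Symbol.dispose ??= Symbol("Symbol.dispose");

// Hypothetical object pool whose handles return themselves on dispose,
// so `using pos = pool.get()` would recycle automatically at block exit.
class Pool {
  constructor(factory, size) {
    this.free = Array.from({ length: size }, factory);
  }
  get() {
    const obj = this.free.pop();
    if (!obj) throw new Error("pool exhausted");
    obj[Symbol.dispose] = () => this.free.push(obj); // called by `using`
    return obj;
  }
}

const pool = new Pool(() => ({ x: 0, y: 0, z: 0 }), 2);
{
  const pos = pool.get(); // with `using pos = pool.get()` ...
  pos.x = 1;
  pos[Symbol.dispose](); // ... this call would be implicit at block exit
}
console.log(pool.free.length); // 2 -- the vector went back to the pool
```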
Consider the case where you want to iterate over an input file, query data from a database with those inputs, and write the outputs as lines to an output file. The function can fail anywhere: between opening the input file, connecting to the database, or opening the output file. With try/finally, you need to keep the variables outside the scope of your try block and check whether each one has been set in the finally (and clean them up). I.e., if the database connection fails, you don't want to try closing the output file because you haven't opened it yet. The resulting code is sloppy.
The "best" way to do this in JS today is to have one function that opens and cleans up the input file (with a try/finally). That function calls another that opens and cleans up the db connection in the same way, and so on. That's verbose and makes your code nonlinear.
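That nested-helper pattern can be sketched like this, with stub resources in place of real files and connections, and synchronous helpers for brevity:

```javascript
// Sketch of the "one function per resource" pattern: each helper owns
// exactly one resource and its cleanup, and the helpers nest.
const log = [];

function makeResource(name) {
  log.push(`open ${name}`);
  return { name, close: () => log.push(`close ${name}`) };
}

function withResource(name, fn) {
  const res = makeResource(name);
  try {
    return fn(res); // cleanup is guaranteed even if fn throws
  } finally {
    res.close();
  }
}

withResource("input", (input) =>
  withResource("db", (db) =>
    withResource("output", (output) => {
      // ... read lines from input, query db, write rows to output ...
    })));

// log: open input, open db, open output, close output, close db, close input
```

Correct, but every resource costs you a level of nesting and a callback, which is exactly the nonlinearity the comment is complaining about.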
The new keyword brings Go's (and other languages') equivalent of `defer` to JS. You don't need to worry about cleaning up; the `using` keyword means it happens for you.
JavaScript used to be a little elegant in its simplicity. JavaScript now is getting complicated. Syntactic sugar, I think, is only there to build a moat around a tech to justify a salary premium and reduce the number of newcomers. You end up with a dozen arcane ways to do the same thing, and you have to be fluent with all of them in order to read your teammates' code. It makes the language harder to use, while adding no new functionality.
> Syntactic sugar, I think, is only there to build a moat around a tech to justify a salary premium and reduce the number of newcomers.
Ascribing that kind of motive to something as innocuous as syntactic sugar is silly at best. Most syntactic sugar I've seen is pretty clearly an attempt to make users' lives easier (whether successful or not is irrelevant).
I love how this all went full circle. You know, just a few years ago, there still was the narrative of how bad JS is. Especially on HN. "Designed in only ten days", people made fun of it for being such a half-assed language. Stuff like the "Wat" talk, making fun of left-pad, etc.
Now it's "JavaScript used to be a little elegant". Fantastic.
More seriously:
1. This is typescript. Feel free to use js without it. (EDIT: This is wrong :)
2. This is a relatively easy feature, and there are equivalent features in some other mainstream languages.
You don't have to like it, I'm not sure I do, but this feels so dramatic.
What? I'm the biggest TypeScript apologist today, and even I think that JavaScript used to be terrible! Remember `function`? Remember `var that = this`? Heck, remember `var`? Remember `.bind()` and having to bind all class methods? Remember `parseInt("08")`?
This is really true. I feel somehow the problem came when the functional programming gang hijacked it and forced ES6 on us. Promises were so counterintuitive they had to create async/await shortly after...
Don't get me wrong, there are a lot of great things in ES6, but it was not quite the same language after...
How would you solve the async/await issue then? I think JS handles asynchronicity very elegantly. You just need to create a proper mental model around it. I don't want to go back to the old times of callback hell, that's for sure.