
Making classes (or equivalent) in the way you propose - i.e. "make use of closures for private instance fields" - does not work well. It requires allocating a new copy of each of the class's methods every time the class is instantiated, working around the prototype system.
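For illustration, a hypothetical `Counter` written both ways (the names are made up):

// Closure version: `count` is private, but every call allocates
// fresh copies of `increment` and `get`:
function makeCounter() {
    var count = 0;
    return {
        increment: function() { count++; },
        get: function() { return count; }
    };
}

// Prototype version: one shared copy of each method across all
// instances, but `count` is a plain (public) property:
function Counter() {
    this.count = 0;
}
Counter.prototype.increment = function() { this.count++; };
Counter.prototype.get = function() { return this.count; };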

If we're arguing about crappy misuse/abuse of JS and "doing it wrong", surely this is far worse?




It would be interesting to see exactly how much stuff is copied by the various implementations in such cases.

There are obviously some stack or other data frames that have to be captured somehow, but the code itself should in principle be something the VM makes a single copy of, with each object that uses it as a property holding a reference to it. (Internally, within a Function-type reference, separate trackers for the instructions vs the bound data.)


Normally the captured environment could be effectively optimised away and just treated as an extra argument.

But it's also required by the spec that `foo().method !== foo().method` when returning new functions from a closure in `foo`, so the function has to be wrapped and a new structure allocated each time to differentiate.
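A quick sketch of that requirement:

function foo() {
    return {
        method: function() {} // a fresh function object on every call
    };
}

console.log(foo().method === foo().method); // false, per spec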


Thanks. I suspect the internals of `===` in this case could look for the presence of a closure data pointer on a Function or some such hack.

Anyway, I'd probably use an actual prototype on something with "methods" and hundreds of instances, but otherwise I'm not too worried about just using closures and object literals.


What I was demonstrating by showing that design pattern is that there is already an easier, cleaner way to do private variables and scopes.

BUT if you want to talk about its speed and efficiency, here we go:

It very intentionally trades memory space for faster variable resolution.

Please read this article - https://www.toptal.com/javascript/javascript-prototypes-scop...

He has some test code at the end that compares the speed of resolving function/variable names in the local scope vs going up the prototype chain. In his case it's about 8 times faster in the local scope, but it does in fact require a local copy of the variable.
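Roughly the kind of comparison he runs (a sketch with my own made-up names, not his actual test code):

// Closure version: `inc` resolves `count` in its own closure scope.
function makeClosureCounter() {
    var count = 0;
    return { inc: function() { count++; } };
}

// Prototype version: `inc` is found by walking the prototype chain.
function ProtoCounter() { this.count = 0; }
ProtoCounter.prototype.inc = function() { this.count++; };

var a = makeClosureCounter();
var b = new ProtoCounter();

console.time('closure');
for (var i = 0; i < 1e7; i++) a.inc();
console.timeEnd('closure');

console.time('prototype');
for (var i = 0; i < 1e7; i++) b.inc();
console.timeEnd('prototype');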

I initially had to learn and understand JS variable resolution when I was writing a server to process Google Analytics data from our customers' accounts to get some valuable business insights from the data.

The beauty of a self-invoking function is that it creates a scope that is not even related to the global scope: if the variable name is not found in that scope, it does not start going up scopes, and the scope itself is very clean and not polluted (unless you pollute it yourself), so it's faster to resolve your variable names vs an object with a long prototype chain. Keep in mind that in the article above, the prototype chain gets slower and slower the more methods and variables you add.

Just using a self-invoking function as a wrapper for my array addition sped up my code 7 times. Along with a bunch of other variable-lookup optimizations, I was able to process a GB of data per second on my t2.micro AWS server with 1 GB of memory.

In the modern era of computing I am also very confident making the trade-off of using more memory by copying that function so it sits closer in memory to where I need it.

Also keep in mind that low-level cache hardware (like your CPU cache) is going to take advantage of the function actually being close in memory to the object, as well as it probably being accessed around the same time the variable is accessed (temporal and spatial locality). When the object you are trying to use gets loaded into memory, the function is likely to be loaded into the cache with it, and then you don't have to wait on all the cache misses as lookup traverses up the prototype chain.

OF COURSE there will be some case somewhere where that object's function for some reason takes up a huge amount of memory and it's more efficient to store it in the prototype, but that will be highly unlikely.

But this kind of design pattern:

var collection = (function() {
    // private members
    var objects = [];

    // public members
    return {
        addObject: function(object) {
            objects.push(object);
        },
        removeObject: function(object) {
            var index = objects.indexOf(object);
            if (index >= 0) {
                objects.splice(index, 1);
            }
        },
        getObjects: function() {
            return JSON.parse(JSON.stringify(objects));
        }
    };
})();

is called the module pattern and is used A LOT... like really A LOT in JavaScript. Most npm packages are wrapped like this, for example, because it gives them a blank scope and they don't have to worry about what lives outside that function.
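E.g. a simplified sketch of the kind of wrapper you'll see around bundled code (assuming a browser `window` global; the names are made up):

(function() {
    'use strict';
    var registry = []; // private: invisible outside this scope

    function register(name) { registry.push(name); }

    // expose only what we choose
    window.myLib = { register: register };
})();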

https://medium.com/@tkssharma/javascript-module-pattern-b4b5... -- just read through this

This is a tried and tested pattern. I didn't make this up myself lol

That's why I initially suggested that the developer is probably younger and less experienced; if you've worked with JS for the last 5 years, you are almost guaranteed to have run into this.


Also to clarify something, it technically wouldn't be a closure since -

A closure is an inner scope that has a reference to its outer scope.

It happens any time a function is created (technically, even at the global level). That actually even goes for IIFEs (Immediately Invoked Function Expressions), since the function is created first (gaining references to its outer scope) and THEN is called (two separate "events" in JS). The call is not part of the function declaration/expression and therefore comes next. Between the two there's a gap, and that's where you'd say there's a closure. After that call, the reference to the function ceases to exist, and garbage collection tears down the function, and with that the closure too.
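A trivial sketch of those two steps:

var x = 1;
(function() {       // 1. the function expression is created here, capturing
    console.log(x); //    a reference to its outer scope (which is how it sees `x`)
})();               // 2. only THEN is it called; once the call returns, nothing
                    //    references the function, so it and its closure can be GC'd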


Bear in mind, in the cases you refer to, the inner functions should only be allocated once - when the file is loaded.

The issue is when you use this pattern for initialising what are effectively classes. Then every time you use `new`, the functions have to be wrapped again.
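For example (a hypothetical `Thing`; the names are made up):

// Every `new Thing()` allocates a fresh function object for the
// closure-based "method":
function Thing() {
    var secret = 42; // private via closure
    this.getSecret = function() { return secret; };
}

var a = new Thing();
var b = new Thing();
console.log(a.getSecret === b.getSecret); // false - two separate allocations

With `Thing.prototype.getSecret` there would be exactly one function object, but it couldn't see `secret`.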


Again, you are saying the inner functions "should only be allocated once", but there is no rule or book in CS that states that as the absolute truth. It is, however, a concept that would be taught in a typical OO programming class.

Idk where exactly you were taught this, or if you just did not read my whole spiel about that being a memory/CPU tradeoff.

I know I put up a wall of text, but if you have time to read one article on JS out of all of this, it's this one - https://medium.com/javascript-scene/common-misconceptions-ab...



