fsnarskiy's comments (Hacker News)

Please, for the love of god, learn functional programming. This is a prime example of being stuck thinking there is a RIGHT way to do things according to object-oriented design practices.

Firstly, objects in JavaScript are not classes; they are just your good old dictionary/value store. To a JavaScript object it doesn't matter if you put a function or a value under a property; under the hood it's just a pointer to a memory location. In fact, you can change it dynamically!
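For example (a small sketch; `box` is just an illustrative name), the same property slot can hold a number one moment and a function the next:

```javascript
// A plain object: properties are just slots, whether they hold data or code.
var box = { width: 2, height: 3 };

// Store a plain value under a property...
box.area = box.width * box.height;
console.log(box.area); // 6

// ...then swap the very same property for a function at runtime.
box.area = function () { return this.width * this.height; };
console.log(typeof box.area); // "function"
console.log(box.area());      // 6
```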

Also, if this code is running in the browser you get ZERO security benefit: I can still read your values through the debugger.

But it does impose a very significant slowdown, especially if you are calling the getter a LOT.

With box.area, the engine just has to resolve the box variable name and follow one pointer. With box.getArea(), it has to create a new scope and stack frame for the function call, and there is SO MUCH MORE going on behind the scenes.
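A rough way to see the two paths side by side (a micro-benchmark sketch only; modern JITs often inline trivial getters, so the gap varies a lot by engine):

```javascript
var box = {
  width: 2,
  height: 3,
  area: 6, // kept in sync whenever width/height change
  getArea: function () { return this.width * this.height; }
};

// Time ten million property reads vs ten million method calls.
function time(label, fn) {
  var t0 = Date.now();
  var sum = 0;
  for (var i = 0; i < 1e7; i++) sum += fn();
  console.log(label, (Date.now() - t0) + " ms");
  return sum;
}

time("box.area      ->", function () { return box.area; });
time("box.getArea() ->", function () { return box.getArea(); });
```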

The right solution here is absolutely to update your area as the width and height change.

Please learn functional, event-driven programming if you are going to give advice on JavaScript. Most of the stuff you learned and got tested on in your high school and college courses about how to properly write object-oriented code does not apply to JavaScript, ESPECIALLY if performance is vital, which it often is since you interact with user input.


Let's not stoop to the "learn2code" condescension on a forum where we're talking to fellow developers. The person you replied to had a perfectly reasonable point.

I use strict FP languages every day and could not figure out what your point was, and your condescending intro and outro make it hard to continue the discussion with you.


Who are you talking to here?


Why?

I feel like the person who proposed this is relatively young and newer to coding. The reason I say this is that the functionality he is proposing can already be achieved using one of the myriad JavaScript design patterns - https://addyosmani.com/resources/essentialjsdesignpatterns/b...

It can already be done in a clean and easy way with an anonymous self-executing function: (function(){ /* EVERYTHING HERE IS PRIVATE SCOPE */ })();
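For instance (a minimal sketch; `counter` is an illustrative name), anything declared inside the wrapper is invisible outside it:

```javascript
var counter = (function () {
  var count = 0; // private: only reachable through the returned functions

  return {
    increment: function () { return ++count; },
    current:   function () { return count; }
  };
})();

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(typeof count);      // "undefined": nothing leaked out
```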

This also has massive added benefits for execution time and variable name resolution.

Learn more about variable scopes and how powerful JavaScript is.

Also recognize that JavaScript is much closer to a functional programming language than to a typical object-oriented language. If you try to make JavaScript objects behave like the classes you know and love from Java, you are gonna have a BAD time: it's going to take way longer to code, and it's going to be hard to maintain and reason about. Try instead to write asynchronous, event-driven code where functions pass data around rather than data passing functions around.

Also, it is really dumb to try to enforce private/public fields in JavaScript code that runs in the browser; I can still easily access all the variables through a debugger or by running in a custom environment (like Selenium).

And if your concern is keeping that data safe in a server environment... well, then you really shouldn't rely on private/public to keep you safe; put in actual user role management and access control at the application level. One of the best ways to do that is to use a graph database to store and resolve complex user roles and permissions in your application.


Making classes (or equivalent) in the way you propose - i.e. "make use of closures for private instance fields" - does not work well. It requires allocating a new copy of each of the class's methods every time the class is instantiated, working around the prototype system.
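To illustrate the allocation difference (sketch code; `makePoint` and `Point` are hypothetical names):

```javascript
// Closure-based "class": every call builds brand-new function objects.
function makePoint(x) {
  return {
    getX: function () { return x; } // fresh allocation per instance
  };
}

// Prototype-based class: one shared method for all instances.
function Point(x) { this.x = x; }
Point.prototype.getX = function () { return this.x; };

var a = makePoint(1), b = makePoint(1);
console.log(a.getX === b.getX); // false: two distinct function objects

var c = new Point(1), d = new Point(1);
console.log(c.getX === d.getX); // true: both reference the prototype method
```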

If we're arguing about crappy misuse/abuse of JS and "doing it wrong", surely this is far worse?


It would be interesting to see exactly how much stuff is copied by the various implementations in such cases.

There are obviously some stack or other data frames that have to be captured somehow, but the code itself should in principle be something the VM makes a single copy of, with references to it in each object using it as a property. (Internally, within a Function-type reference, separate trackers for the instructions vs. the bound data.)


Normally it could be effectively optimised away and just treated as an extra argument.

But it's also required by the spec that `foo().method !== foo().method` when returning new functions from a closure in `foo`, so the function has to be wrapped and a new structure allocated each time to differentiate.


Thanks. I suspect the internals of `===` in this case could look for the presence of a closure data pointer on a Function or some such hack.

Anyway, I’d probably use an actual prototype on something (with “methods” and) hundreds of instances, but otherwise, I’m not too worried about just using closures and object literals.


What I was demonstrating by showing that design pattern is that there is already an easier, cleaner way to do private variables and scopes.

BUT if you want to talk about its speed and efficiency, here we go:

It very intentionally trades memory space for faster variable resolution.

Please read this article - https://www.toptal.com/javascript/javascript-prototypes-scop...

He has some test code at the end that compares the speed of resolving function/variable names in the local scope versus going up the prototype chain. In his case it's about 8 times faster in the local scope, but it does in fact require a local copy of the variable.

I initially had to learn and understand JS variable resolution when I was writing a server to process Google Analytics data from our customers' accounts to get some valuable business insights from the data.

The beauty of a self-executing function is that it creates a scope that is not even related to the global scope: if a variable name is not found in that scope, it does not start going up scopes, and the scope itself is very clean and unpolluted (unless you pollute it yourself), so it's faster to resolve your variable names than in an object with a long prototype chain. Keep in mind that in the article above, the prototype chain gets slower and slower the more methods and variables you add.
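A sketch of the two lookup paths being compared (a hypothetical `A`/`B`/`C` chain, not the article's actual benchmark):

```javascript
// Build a three-link prototype chain; `deep` lives at the far end.
function A() {}
A.prototype.deep = 42;
function B() {}
B.prototype = new A();
function C() {}
C.prototype = new B();

var obj = new C();
console.log(obj.deep); // 42, found after walking up three prototype links

// The IIFE alternative: copy the value once into a tight local scope,
// so later reads resolve immediately instead of walking the chain.
var readDeep = (function () {
  var deep = obj.deep; // local copy: the memory-for-speed tradeoff
  return function () { return deep; };
})();
console.log(readDeep()); // 42
```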

Just using a self-invoking function as a wrapper for my array addition sped up my code 7 times. Along with a bunch of other variable-lookup optimizations, I was able to process a GB of data per second on my T2 micro AWS server with 1 GB of memory.

In the modern era of computing I am also very comfortable making that trade-off: using more memory by copying the function so it is closer in memory when I need it.

Also keep in mind that low-level cache hardware (like your CPU cache) will take advantage of the function actually being close in memory to the object, as well as probably being accessed around the same time as the object (temporal and spatial locality). When the object you are trying to use gets loaded into memory, the function is likely to be loaded into the cache with it, and then you don't have to wait on all the cache misses as the lookup traverses up the prototype tree.

OF COURSE there will be some case somewhere where that object's function takes up a huge amount of memory and it's more efficient to store it in the prototype, but that will be highly unlikely.

But this kind of design pattern:

    var collection = (function() {
        // private members
        var objects = [];

        // public members
        return {
            addObject: function(object) {
                objects.push(object);
            },
            removeObject: function(object) {
                var index = objects.indexOf(object);
                if (index >= 0) {
                    objects.splice(index, 1);
                }
            },
            getObjects: function() {
                return JSON.parse(JSON.stringify(objects));
            }
        };
    })();

is called the module pattern and is used A LOT... like, really A LOT in JavaScript. Most npm packages are wrapped like this, for example, because it gives them a blank scope and they don't have to worry about what lives outside that function.

https://medium.com/@tkssharma/javascript-module-pattern-b4b5... -- just read through this

This is a tried and tested pattern. I didn't make this up myself lol

That's why I initially suggested that the developer is probably younger and less experienced; if you have worked with JS over the last 5 years, you are almost guaranteed to have run into this.


Also to clarify something, it technically wouldn't be a closure since -

A closure is an inner scope that has references to its outer scope.

It happens any time a function is created (technically, even at the global level). That actually even goes for "IIFEs" (Immediately Invoked Function Expressions), since the function is created first (gaining references to its outer scope) and THEN is called (two separate "events" in JS). The call is not part of the function declaration/expression and therefore comes next. Between the two there's a gap, and that's where you'd say there's a closure. After that call, the reference to the function ceases to exist and garbage collection tears down the function, and with it the closure too.
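A small sketch of that lifetime difference (illustrative names only):

```javascript
// Case 1: the IIFE keeps no reference alive, so once the call returns,
// the function and its closure are both eligible for garbage collection.
(function () {
  var secret = 1; // unreachable after this call
})();

// Case 2: the IIFE returns an inner function, which keeps the closure alive.
var keep = (function () {
  var secret = 2;
  return function () { return secret; };
})();

console.log(keep()); // 2: the closure over `secret` outlived the call
```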


Bear in mind, in the cases you refer to, the inner functions should only be allocated once - when the file is loaded.

The issue is when you use this pattern for initialising what are effectively classes. Then every time you use `new`, the functions have to be wrapped again.


Again, you are saying the inner functions "should only be allocated once"; there is no rule or book in CS that states that as absolute truth. It is, however, a concept that would be taught in a typical OO programming class.

I don't know where exactly you were taught this, or whether you just did not read my whole spiel about that being a memory/CPU tradeoff.

I know I put up a wall of text, but if you have time to read one article on JS out of all of this, it is this one - https://medium.com/javascript-scene/common-misconceptions-ab...


This is actually very interesting.

Low-grade inflammation is often linked to aging and cancer development; however, it is often unclear whether this is cause and effect or mere correlation - https://academic.oup.com/biomedgerontology/article/69/Suppl_...

But the mechanisms of inflammation are many and still pretty elusive; this is a great step towards understanding aging mechanisms in general.


OK, threads aren't "BAD" or "GOOD"; threads are a tool to be used correctly.

Threads are like a data superhighway, and all the incorrect uses of them arise from using them for way too little data: akin to building a 5-lane highway for 5 cars to pass.

A thread has some amazing properties: it can switch execution very fast (built in at the CPU level) and has memory caching/storage advantages. In other words, a thread is meant for a compute-heavy task like rendering something or running a decode in the background, mainly heavy math. Threads provide great things, but at a cost: just like a highway, they cost a lot (a lot of memory in your RAM) and require maintenance and management (locking mechanisms).

The problems with threads arise when people think it's OK to use them everywhere, for every parallel or async task.

Example: Apache used to start a thread for each connection to the server, which at that time took 40 MB and ~0.5 sec, and this allowed a myriad of attacks, one of them being Slowloris.

In JavaScript, if you start a new Web Worker thread, that's actually a new V8 instance and again costs you a lot in memory and startup time.

This "start a thread for everything" approach was definitely the prevalent thinking in the 2000s, and people were not really thinking about the hidden costs.

Along comes Ryan Dahl with Node.js in 2009: "OMG, everyone forgot there are such things as event loops."

An event loop is basically a much cheaper, single-threaded, async way of processing events in an event queue. The big idea here was that in most other languages, threads waited on any time-consuming I/O to the network or hard disk while other threads ran in the meantime.

Ryan combined the async nature of event loops with async I/O... rightly a very clever move. (Also, waiting on I/O is what often causes thread locks in multi-threaded environments.)

This allowed the single-threaded event loop to never really lock up on any time-consuming but non-CPU-bound task, freeing the CPU to constantly process the event queue, in a way emulating multi-threading on a single thread.

Going back to the highway metaphor, this would be more like an elevated city bike path: it can't take heavy trucks (heavy CPU loads), but it can take a huge number of light processing requests and never lock up, freeing your city streets from bikers and leaving them free for the heavy trucks.

This is how Node.js can handle 600k concurrent connections - https://blog.jayway.com/2015/04/13/600k-concurrent-websocket...

Something you would never be able to achieve if you started a thread for each one.

Basically, this is akin to building one dense bike path for 600k bikers versus building 600k five-lane highways, down each of which only one biker would go.

Where Node.js falls short is heavy math tasks: give it one and the event loop will lock up.

So in my analytics processing server I had a Node.js main loop with a bunch of worker thread pools to do the heavy math and statistics, while the main thread just routed requests and served cached data.

Another consideration, however, is memory leaks. Threaded environments tend to clean up well after themselves, because if there is a leak in a thread, it gets wiped when the thread dies. But a long-lived Node.js process is very susceptible to memory leaks.

All these things are just tools; you have to learn to use the right tool for the right job.

But I think there are many more pitfalls in building threaded environments than in using event loops. I got Node.js concepts within a week or two; however, I still struggle with some thread-locking concepts even after taking classes, and that stuff is way harder to debug properly too. It's that highly abstract level of thinking that I have a hard time visualizing in my head, and I am never sure that I thought EVERY scenario through.


>Ryan combined the async nature of event loops with async I/O... rightfully a very clever move.

How is it clever? You cannot have async I/O without an event loop. Async I/O was a pretty mature technology long before Node.js came out; Netty did this back in 2004. The only special thing about Node.js is that its culture is to be async by default.


A much better article that talks about the same problem today - https://thetechsolo.wordpress.com/2016/02/29/scalable-io-eve...

