Hacker News
Closures for class/module privacy considered harmful (sidekicksrc.com)
38 points by campbellmorgan on July 29, 2013 | 58 comments

"Considered harmful" is code for passive-aggressively presenting an opinion as truth. Whenever I see it, I mentally say: fuck you, just state your opinion as opinion and don't try to twist it around.

Back to the blog. Contrary to its claim, closure-based class/module privacy is great because it separates the inner implementation from the public interface. I really don't want any outside code touching the private parts, because tomorrow I might change the inner implementation, alter a function signature, split a function in two, or proxy-call another library, and I want assurance that the change won't break any calling code.

A public function is like a global variable: anyone can access it anywhere, anytime. You don't know how, where, or when people call it, so if you change its signature you pretty much have to hunt down every call site and retrofit it. Private code has no such issue. It's best to limit the scope of code that's intended to be private.
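The pattern under discussion can be sketched with the usual IIFE module (names here are illustrative, not from the article):

```javascript
// A module whose internals are free to change: callers only ever
// see the returned interface, never the closure-scoped helpers.
var counter = (function () {
  var count = 0;                // private state

  function clamp(n) {          // private helper: free to be renamed,
    return Math.max(0, n);     // split, or deleted tomorrow
  }

  return {
    increment: function () { count = clamp(count + 1); return count; },
    reset: function () { count = 0; }
  };
})();

counter.increment();   // returns 1
counter.clamp;         // undefined: the helper is unreachable
```

Renaming or removing `clamp` can never break a caller, because no caller can reach it.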

Testing private code is a non-issue. Nobody outside cares about your private functions; they care about the public functions that form your interface. You test your public functions the way your users would call them, and those public functions in turn exercise the private ones.

If you really want to test the private functions, write private tests in the same scope as the private functions to call them.
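One way the parent's suggestion might look (a sketch; the self-test style and names are assumptions):

```javascript
// Private helpers and their tests share one closure, so the tests
// can call the privates directly without widening the public API.
var mathModule = (function () {
  function square(n) { return n * n; }   // private

  // in-scope self-test: a build step could strip these
  // asserts from the production bundle
  console.assert(square(3) === 9, 'square is broken');

  return {
    sumOfSquares: function (a, b) { return square(a) + square(b); }
  };
})();
```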

Why hide it completely? What about exposing your interface by convention, as in Python? I've never heard of anyone suffering from the inability to make methods and fields private in Python.

Because it's a hack in Python.

Edit: Why not make all variables global? Why not just use a naming convention to label a globally scoped variable as "global" or "local"? You could make that work as a convention. Now apply it to the whole language and to every developer who will ever use it.

Well, that depends on your starting point. If you're a private-state proponent, then it may be a hack for you. I'm more of a control freak on the "everything is public, let's code to an interface" side, so for me private state is a bit of a hack that some languages have because other languages have it, which in turn have it because it was introduced in languages before them. My point was that the lack of private state in Python didn't make it impossible to separate inner implementation from public interface.

Why not make everything global? For obvious reasons: modularity, memory consumption, and the inconvenience of lengthy variable names.

If you are a control freak writing a library, you would want to control how and where people can call your code. Most languages support private functions because it has been found to be a good idea, even if most of them implement it as a hack.

It is exactly the point where our interests collide: when I'm using your library, I want to have control over your library, because I'm the one using it. I don't want your library to control me and the way I use it. It's the tail wagging the dog.

For example, I use your library and everything goes great until I'm stuck because I need some information hidden in a private field. In Python I would just access it and add a note (fixme, todo, whatever) that this is a hack and things may break in the future, but my code would work right now, and I could fork your library later and submit changes. In a language with real private support, the only option is to use hacks (reflection), which is the same as using the field directly as in Python, just with additional PITA; or to fork your library and maintain the fork until you merge my changes, and if you don't, I have to maintain it forever.

This. And if you want to extend, encapsulate.

I've heard it said in Java that if your private methods need testing, it's a sign that those methods should be public methods of another class.

I can't see why this wouldn't also apply to Javascript, other than that Javascript is rarely written in a very pure "OO" way.

For what it's worth, I find _private methods a horrible hack. I'd much rather have true private methods and force the next developer to think about whether they truly want to make a method public, in the hope they'd consider restructuring it.

If your testing requires you to change your public API - especially, making its scope larger and more complex - the problem is not with your API, it's with your testing tools.

(Yes, this is a deficiency in Java amongst other languages. You don't want to be e.g. wading through the lower details of font kerning when trying to use an edit box widget, even when those sub-components need individual testing.)

I don't think anything he said means changing the public API. Moving the methods to another class and making them public there does not mean the other class has to be visible to people using the API.

Most of the value of testing comes from giving yourself a use case for the APIs you're building, forcing you to think about what the interface should be in a real way. The actual testing of behaviour is almost a side effect.

(It's certainly not a tooling deficiency as there are tools that allow testing private methods in Java. IME the best developers make an explicit choice not to use them)

Testing at the API level is effectively integration testing for any kind of non-trivial API.

Unit testing has particular value because it avoids m*n problems. But that doesn't necessarily mean that m and n need to be separately addressable entities in your API. But if you unit test at the API level, then you're forced to do that.

For example, consider a web browser widget that's part of a UI framework. A web browser is an enormously complex piece of software with hundreds of individual modules doing various things. But the API at the UI toolkit level is much, much simpler: a URL and a handful of properties and methods relating to history navigation. Trying to unit-test the web browser at the API level would be ridiculous; no less ridiculous than having your testing constraints drive enormous complexity into the developer-facing API.

There are layers of API. The appropriate test for a web browser widget in a UI framework is probably: render this page in the web browser widget, render it in an actual web browser, and check that the two look the same.

Inside the web browser code there will be more detailed things that should be tested - internal APIs. But the point is that these are APIs and should be built as such; they will likely need to be used by other people working on the same code. Even if not, you need this level of structure so that you can build comprehensible software. If every class can monkey around in the internals of every other one, you get an unmaintainable mess very quickly.

The argument seems to be "you shouldn't do this because it accomplishes the goal it was intended to". Saying don't use this pattern to hide internals because it achieves the goal of hiding internals seems dubious at best.

That would be why the bulk of the article is about how you shouldn't actually want to completely hide your internals, and it's not a completely terrible argument.

It does ignore the other benefit of module-closures, though (avoiding namespace collisions). Explicitly exposing your "private" functions (prefixed with _) in the module object would be a better way of achieving what the author wants.
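What the parent describes might look like this (a sketch; the underscore prefix is only a convention and the names are illustrative):

```javascript
var myModule = (function () {
  function _format(s) { return '[' + s + ']'; }  // "private" by convention only

  return {
    greet: function (name) { return _format('hi ' + name); },
    _format: _format   // exposed for tests; the underscore warns callers off
  };
})();
```

Note that `greet` still calls the closure-captured `_format`, so reassigning `myModule._format` from outside does not affect it.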

I had the same impression: the core arguments presented in the article were really a rejection of OOP information hiding, not of the specific technique of using closures for information hiding.

Come on, "considered harmful" is really overused now. I usually don't even read articles with "considered harmful" in the title anymore, because it's usually someone whining about a thing he hasn't properly understood.

Overused now? It was overused 11 years ago, and it still is.


"Considered harmful" considered harmful, then?

I've always wanted to know why anyone would want to hide their code at all. If you're making a library, you never know how people are going to use it. Why not expose the public interface using naming conventions and give people the freedom to break it when they really want to? I sometimes find myself using methods and variables that were intended to be private, because I couldn't achieve the needed functionality without touching them. Just imagine if Backbone kept all of a model's attributes private: that would practically mean you couldn't extend a model or inspect its attributes in the console. Of course you can say that an adequate programmer would never do that, but that's exactly my point: you have to rely on the programmer's ability to write decent code, while if everything were public you would also rely on your own ability to hack things. Yes, this can be messy and you can shoot yourself in the foot; we all know that, but we're all grown-ups and we know what we're doing.

When I read about symbols in the upcoming ES6, I can't get rid of a sense of anxiety, because I fear things will get worse. The only hope is that browser vendors will implement some means to inspect the private state of objects that use Symbols.

By the way, this was one of the reasons Guido van Rossum chose not to add private state to Python: https://plus.google.com/115212051037621986145/posts/7wpbQTPR...

Because you'll lose the ability to refactor the internal structure of the library without breaking people's code.

Well, if they use your private members, that's entirely their responsibility. What if they just want to make the thing work, ship the product, whatever, and don't care about new versions?

This depends on programming culture. For example, at Google, if you commit a CL to a shared library that breaks another team, they will tell you to roll it back and fix the callers first, and you will. It's doable but it slows you down. Doesn't matter whose fault it is; they're long gone anyway.

In other situations you don't have to listen, but if a popular library doesn't pay attention to backward compatibility then they'll end up in a situation where major customers keep using an older version of their library and nobody wants to upgrade. And unless you want to keep maintaining multiple versions, you want them to upgrade.

If your language allows you to encapsulate state properly even within an inheritance hierarchy then that makes it a lot more possible to let users upgrade safely. Anyone who's had to upgrade a large codebase using Rails or Django knows how much trouble it can be; contrast that with upgrading Wicket which is a wonderfully smooth experience.

Think about the Google Analytics tracking script.

The article is considerably better than the comments here imply. In my view, the OP is showing intelligent intuition about software.

I've encountered the exact same issues that they have. It's frustrating when code wrapped up in a function prevents you from testing or experimenting in a REPL. It's infuriating when a framework wraps your code in one of these mega-lambdas so you can't get at it. There are many programming techniques that require the ability to breach encapsulation when appropriate.

Meanwhile, the virtues of enforced privacy are overrated. There are countless ways to screw up software; trying to lock things down so the next guy can't screw them up (preventing access, etc.) is a classic mistake. Modularity is extremely important in systems design, but not as an enforcement mechanism. It's an organizational tool. It's about decomposition and factoring, not hiding things so that only you can control them.

It's worth noting that Smalltalk doesn't have private methods [1], and that this was regarded as a feature not a bug. That alone shows that there's no particular link between enforced-privacy and OO, or even enforced-privacy and encapsulation.

To some extent this is a matter of taste, but I come down on the side that sees little value (and considerable drawbacks) in the privacy fetish. It's one of those things that people believe in because it sounds like things ought to work that way, when in reality they do not.

[1] http://stackoverflow.com/questions/7399340/smalltalk-public-...

Enforced privacy is not as common as people think either, so you don't really even need to pull out the Smalltalk argument. In Java, C#, PHP etc it is not possible for the developer to actually enforce privacy like you can in JS because the privates are readily accessible through reflection if needed.

Another point is that the Javascript enforced-privacy pattern only supports "private", so two internal classes cannot get at each other even if they are maintained together as a package. That encourages creating god objects, which, not surprisingly, is what I commonly see in projects that use enforced privacy in Javascript.

Can't you expose the internal methods individually via the module for testing purposes without making them vulnerable to manipulation as they are used by the public methods?

Just tried this in the developer console:

    // module 'm' exposes 'p', which uses 'f' internally, and also
    // exposes 'f' as 'internalF' for testing purposes.
    var m = (function() {
        var f = function() { return 'a'; };
        var p = function() { return f(); };
        return {'p': p, 'internalF': f};
    })();

    m.p();          // 'a'
    m.internalF();  // 'a'

    m.internalF = function() { return 'haxor'; };
    m.internalF();  // 'haxor'

    m.p();          // 'a' -- still calls the closure-captured 'f'
"Considered harmful" is a little excessive considering his analysis is that it's hard to test. "Hard to read" is very subjective (I have no issues with it).

As for extension, it's a pattern that is useful in some circumstances. It's not really something you would choose to use if you needed extension. I was going to say it's like calling a Singleton harmful because you later change your mind and decide you want multiple instances, but I'm pretty sure I'd risk being called out for hyperbole.

Generally, though, I find the notion of struggling to wedge privacy into JS code a waste of effort; there are much cleaner patterns. You are always relying on personal/group discipline and good habits when coding in JS anyway, so why start by disadvantaging yourself?

This article is myopically targeted at js. Hiding implementation details behind abstraction is a staple of good programming. It helps to decouple classes so you don't have to rewrite your calling code when the internals of the called class are changed.

> You cannot know ahead of time how someone else will want to modify your code

And this is exactly why I make implementation private. Otherwise my implementation becomes my public interface and I can't change it without breaking somebody else's patch.

> If you’ve got a tricky interaction between a few private functions it’s very helpful to be able to test them!

Then refactor those tricky functions into another class with testable interface.

    var SomeModule = (function(trickyFunctionality) {

      function publicFunction() {
        // delegate to the injected, separately testable helper
        // (method name is illustrative)
        return trickyFunctionality.run();
      }

      return {
        publicFunction: publicFunction
      };

    })(new Tricky());

A slightly better reason: creating closures is (comparatively) slow. I recently switched from a closure model to a prototype model in my main js lib; a quick series of tests in Chromium gave me a 13% speed increase in my core functions using the prototype approach.

This roughly matches what I read elsewhere, e.g. http://macwright.org/2013/01/22/javascript-module-pattern-me...
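The two models being compared look roughly like this (a sketch; the 13% figure above is from the parent's own benchmark, not reproduced here):

```javascript
// Closure model: each call allocates fresh function objects
// per instance, which is what makes construction slower.
function makePointClosure(x, y) {
  return {
    getX: function () { return x; },
    getY: function () { return y; }
  };
}

// Prototype model: one shared copy of each method; fields are
// plain (public) properties, so the privacy is gone.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.getX = function () { return this.x; };
Point.prototype.getY = function () { return this.y; };
```

The trade is explicit: the prototype version is cheaper to instantiate but exposes `x` and `y` to anyone.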

If you need extension, you wouldn't use this pattern.

If you need to avoid this technique entirely because you're so frequently needing to unit test private methods, you have a design problem, not a testing problem.

Readability is subjective. Personally I tend to find this style a bit more readable than trying to use a naming convention to indicate privacy, as suggested later in the article.

Tooling -- any real world examples of this? It seems to me tooling would only need to worry about the public interface, not the private details.

"True privacy is a bad idea in OO" - You didn't limit this statement to javascript, so I just want to say this is ludicrous. Access control is an extremely valuable tool when you're building a large system where development may be spread amongst many teams. If you just meant client side JS, I would say your statement is debatable.

When you utilize true privacy, you're taking a very disciplined approach to building software. You prevent monkey patching or tinkering with internal state by components that should not be concerned with that object's internals. When you need changes, it forces you to think about how your changes impact the overall structure of your application, and whether you need to restructure aspects of your application in response. Without true privacy, sure, you could just patch some "private" method and move on with your life. As you utilize this technique more and more, however, and your application grows, it has the potential to turn into a big headache.

There isn't true privacy by default in Java or C#, so how do they get by? Are they relying on programmers not using reflection to access privates? Does the client run the program with the security manager on? What if the application cannot afford to run with the security manager on?

I don't disagree with anything you said, which makes me think I worded my response incorrectly. I was using the author's term "true privacy", but what I meant is traditional privacy that, when not explicitly attempting to circumvent it, will generate some negative action, such as a warning or a compilation error. My problem was the author's blanket assertion that this type of privacy is bad and should be avoided, even if it is available.

What is meant by true/enforced privacy is that the only way to access something is to modify the original source. This is not the default anywhere else, as you know, but is encouraged by some (Crockford for example) in Javascript.

Enforced/true privacy is very bad, and the badness is multiplied by the fact that in Javascript it cannot be fine-tuned: when you make something private to random application code, you are also making it private to other internal classes.

Those warnings are not part of the language, and can be done with underscore prefixed Javascript too in your build step, a fine example of programming into one's language.

Every so often I hear a frustrated programmer complain about not being able to access private or otherwise inaccessible methods.

Could that code be useful? Sure, but that's not the point.

Information hiding is one of the core concepts in OOP.


It's not even constrained to OOP. Take C, for instance: why is it that not all functions are exported from a shared library? Are the authors nefariously hiding code they don't want you to see? No, they are minimizing the interface between components, a concept as old as modular programming itself.

It makes change easier to contain and allows one to reason about large systems.

This is a good thing. Hiding implementation details is a good thing. Using closures to implement private functions in javascript is not bad; it's advocated as a pattern in "JavaScript: The Good Parts".

Should one try to write javascript as if it were Java? No, and overuse of this pattern leads to a bunch of boilerplate. One usually tries to remain stateless and write deterministic, side-effect-free functions that do not depend on closed-over variables.

Is it a handy tool in some cases to implement DRY? Yes. Should it be considered harmful? No.

The author mentions SICP, but apparently fails to remember the numerous instances where the SICP authors use closures to hide implementation details. I remain unconvinced that exposing implementation details for the sake of testability is a good thing.

Hint: use something like

  /** @define {boolean} */
  var DEV = true;

  var module = (function() {
     var priv = function() {};
     if (DEV) {
       this.priv = priv;   // exposed only in dev/test builds
     }
     return this;
  }).call({});

and then use Closure Compiler with the option --define DEV=false, which compiles the exposure out as dead code.

Meh. The testing issue is easy to handle.

Just create a second entry point that exports everything you might want to test. "Secure it" (there is no real security here) to be only callable if a read-only global flag was set when the object was created.

In your private unit tests you set that flag, create the object, and get access to privates. In your normal code, there is no way to access those functions.
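A minimal sketch of that approach (the `__TESTING__` flag and the names are illustrative, not from the parent):

```javascript
// The test harness would set this global before any modules load;
// production code never defines it.
// global.__TESTING__ = true;

var mod = (function () {
  function helper() { return 42; }                   // private

  var api = { run: function () { return helper(); } };

  // Second entry point, populated only if the flag was set when
  // the module was created. Not security, just a deliberate hurdle.
  if (typeof __TESTING__ !== 'undefined' && __TESTING__) {
    api.__privates = { helper: helper };
  }
  return api;
})();
```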

Yes, it is a hack. But I firmly believe that it is acceptable to have hacks to enable unit testing.

Whether or not any type of privacy enforcement is a good idea is a debate that I leave to others.

Or another way without the global:

    function myModuleReally() {
        var useMe  = function() { /* the public behaviour */ };
        var testMe = function() { /* private, but reachable here */ };
        return [useMe, testMe];
    }
    function myModule() { return myModuleReally()[0]; }

It'd be nice if the programming environment gave us a "can opener" (not globally available) to open up closures we've made -- like http://gbracha.blogspot.com/2012/07/seeking-closure-in-mirro... -- but we can do plenty without it.

Why drag closure into this? Closure is just the ability for code executing in a function to see variables above its immediate scope (interesting/non-intuitive effects when combined with asynchronous logic notwithstanding). The thrust of the article is really about hiding data from access by external code. Closure just happens to be a mechanism JS devs typically exploit to accomplish that.

It's not very proper to call these closures, is it? Their purpose is not to close around external variables available in the lexical scope.

And I'm ready to upvote the first blog post entitled "the 'considered harmful' meme considered harmful".

They look like closures to me. In javascript, functions are first class entities, so when he writes

  function privateFunction() {
    // ...
  }

that is sugar for a variable named privateFunction that points to a function. So the object that is eventually returned closes over these external variables.

But if they're really closures, how can we explain this behaviour?


Because you aren't really showing the pattern that is being discussed. In that example, the two incr variables are in the same lexical scope, and you are calling your closure in that same scope. So when your inner function calls incr, it calls the first one it sees, moving outwards through nested scopes. Later, when you overwrite incr, the new one gets called. You've really effectively only closed over the variable x with respect to the lexical scope that incr is in.
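Since the original snippet isn't shown, here is a reconstruction of the kind of example being discussed (names are illustrative):

```javascript
var incr = function (n) { return n + 1; };

var x = 0;
// bump closes over BOTH x and the *variable* incr in this scope
var bump = function () { x = incr(x); return x; };

bump();                                   // 1: uses the first incr
incr = function (n) { return n + 10; };
bump();                                   // 11: the closure sees the reassignment

// Contrast: an IIFE gives the helper its own enclosing scope,
// so outside code has no name to overwrite.
var bump2 = (function () {
  var incr2 = function (n) { return n + 1; };
  var y = 0;
  return function () { y = incr2(y); return y; };
})();
```

The closure captures the variable `incr`, not the function it happened to hold at definition time, which is exactly the lexical-scope point made above.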

Here are four variations that hopefully show the difference.


edit: bad link

Indeed, you're right. It still feels quite strange to me to use the fact that a function (used as behaviour and not data) is, from the language features, technically a variable to call that kind of construct a closure. But thanks for the effort and occasion to think about it.

Wait, so closures for encapsulation are bad because they allow encapsulation? You're complaining that you can't get access to private member functions this way? Isn't that what encapsulation is for??

I really wish there were a good alternative to this, because I sometimes find myself wanting to write tests for some of my private functions, which don't really have any need to be exported.

*edit: I should add that I've been working around this using the fact that a lot of my modules are event emitters. So even if I don't expose the function as such, I trigger the event rather than using the function directly.

The article lays out the testability issue but offers no alternative... since, as far as I know, there are none.

Also, the article is kinda crap, as it fails to understand the reason they are truly necessary. But upvoting to generate discussion here.

The reason what are truly necessary? Private methods? The author doesn't seem to think they're worth bothering with given the trade-offs. Out of interest what makes you think they're necessary?

The reason they are used is to specifically hide implementation details from a consuming class or chunk of code. The reason we do that is so the caller cannot make assumptions based on internal details so in the future we can modify it radically and not impact the caller. This is abstraction and it's OOP 101.

There are several systems that must run untrusted code, such as advertisements.

We actually depend on real private methods.

just because what you do every day makes it useless, it's presumptuous to assume it's the same for everyone.

Your code on the client side is merely a convenience for malicious advertisement code if it's accessible. How your client-side code interacts with the server is what matters, and the malicious code can do whatever your code can do, with the same privileges, if you include it on your page.

If you are running untrusted code in JS then real private methods have only given you a very false sense of security. You should have run it in a sandboxed process instead.

just because what you do every day makes it useless, it's presumptuous to assume it's the same for everyone.

Exactly why I asked...

Fair. But it is still presumptuous to write an article on the subject without even considering that someone else has a use for it.

Here is a link to an article that discusses one possible way to test private functions: http://philipwalton.com/articles/how-to-unit-test-private-fu...

He basically exposes them in a dev environment, and then uses a build tool to make them actually private. I agree with weego's comment, though; it's something that isn't usually worth the tradeoffs, as long as your team has discipline.

And what year was this article written?
