
Enforced privacy is rude: advise instead - timruffles
http://sidekicksrc.com/post/enforced-privacy-is-rude/
======
btilly
The problem with this argument is that no matter how much you make the
developers aware that they are doing something wrong, there will be developers
who do it when under pressure. (Meaning to go back and clean it up, but we
know how that one goes.) And then when you break stuff, people _OTHER_ than
that developer will not be aware of how well you warned the developer.

In that situation, you'll be the one blamed. It is unfair, but that is what
will happen.

For a case in point, Microsoft encountered this one repeatedly in the 80s and
90s. (It didn't help that sometimes it probably was their fault... but most
of the time it wasn't. It really, really wasn't.)

------
drone
I'm in nearly complete disagreement with the author here. I guess it helps
that I've been burned more than once when using code from two third parties
where party A reached into the "private areas" of party B's code. Rather than
working with party B to resolve the incompleteness of the solution, they
worked around it and then pushed the workaround to everyone. Later, when
party B makes a change ("Hey, it's private, and the public interface doesn't
change!") and suddenly breaks A's code, there are a bunch of unrelated other
parties who now have a nightmare on their hands.

Case in point (of which there are many, I'm sure): QExtSerialPort. The
author needed access to underlying Windows functionality that Qt didn't
publicly provide; however, there was this nice private header file lying
around they could use. The Qt team later decided they wanted to remove the
contents of that file, because no one should ever have been using it. Anyone
who wanted to build QExtSerialPort then had to go and grab the original file
and put it into the correct location. If the author had instead submitted a
patch to Qt to fix the problem, many hours would have been saved.

The author might get more points with me if they added "keep private usage
private," but instead they are advocating accessing private internals of
third-party tools in new open-source projects, which prevents the original
developer from making changes without impacting users of those tools.
Privacy is important. If you want to go around it, fine, but you have to
expect the price for you and your users; to hand-wave around that is naive
at best.

~~~
timruffles
"advocating accessing private internals of 3rd party tools in new open-source
projects": no - I didn't advocate that as it's obviously insane :)

I said advise your users and trust they won't do insane things. This allows
_your_ users to patch/hack in their _application_ code, as a temporary or
exploratory thing. I didn't suggest releasing libraries that themselves
monkey-patch dependencies (ugh).
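
For illustration, here's a minimal sketch of the kind of temporary,
application-level patch being described (the library object and names here
are made up):

```javascript
// Hypothetical library object: `_render` is advisory-private (underscore).
const widget = {
  _render(text) { return "<div>" + text + "</div>"; },
  show(text) { return this._render(text); },
};

// Application code patches the advisory-private method as a temporary,
// at-your-own-risk measure; a library upgrade may break this.
const originalRender = widget._render.bind(widget);
widget._render = (text) => originalRender(text.toUpperCase());

console.log(widget.show("hi")); // "<div>HI</div>"
```

The point is that the patch lives in the application, not in a published
library, so only the person who wrote it pays the price when it breaks.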

You're straw-manning.

~~~
Anderkent
Not really; it's the same thing, just seen from the other point of view. If
you allow people to patch/hack in their application code, then someone will
patch/hack in their application code without telling you (because it works
and they're under time pressure), and once you change something, their code
will break.

~~~
drone
Exactly this. Whatever you allow people to do, they will do, at some point.
At some point monkey-patched code gets used in someone's library, at some
point it goes up on GitHub, and at some point it gets forked.

The OP is literally suggesting replacing "private" with either "public" or
"protected" in C++ code, and then expecting comments/documentation to
communicate what was already communicated previously by "private".

------
sz4kerto
It seems that the post argues that enforced privacy is too theoretical and
that, in practice, advice is better than enforcement.

I'd argue that the post itself is too theoretical: in practice, enforced
privacy works much better. People will do stuff against your advice. That's
OK, you say - they'll get in trouble eventually, and it was their decision.
However, it affects you, the library (app, etc.) developer, as your clients
might turn out to be more powerful than you.

Think of Linus' rant on not breaking userspace. He's right, I believe. In
general, you are not allowed to break client code, even if the client did
something they were discouraged from doing.

Interfaces are contracts. The ultimate documentation is the code, not the
comment. You are saying that in the following case,

    // do not access
    public int getSize();

the comment has precedence over the visibility modifier. Well, no.

~~~
esailija
In practice, enforced privacy is rarely used in languages other than
JavaScript. In Java, C#, Ruby, Python, and PHP, advisory privacy is the
default.

You would enforce privacy when you need to run untrusted code in the same
process (like the Servlets model), which is not the case in any JavaScript
application.

Enforced privacy in JavaScript is also very inflexible, disabling many forms
of extension and reuse and encouraging god objects, because you cannot
fine-tune the access - other internal "classes" are just as unprivileged as
some random application code.
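
A minimal sketch of that contrast (with made-up names): closure-based
enforcement hides state from everyone, including cooperating internal code,
while the underscore convention leaves it reachable:

```javascript
// Enforced privacy via a closure: nothing outside can reach `count`,
// not even other "classes" in the same library.
function makeCounter() {
  let count = 0;
  return { increment() { return ++count; } };
}

// Advisory privacy via naming convention: `_count` is reachable, so
// internal collaborators (or a debugger) can still fine-tune access.
class Counter {
  constructor() { this._count = 0; }
  increment() { return ++this._count; }
}

const enforced = makeCounter();
enforced.increment();
console.log(enforced.count); // undefined - the state is sealed away

const advisory = new Counter();
advisory.increment();
console.log(advisory._count); // 1 - visible, but marked "internal"
```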

So in my eyes there are plenty of downsides for no visible upside - in all
other languages the developers seem to play well and not write code to access
privates. Maybe that's due to incompetence though.

------
jeremysmyth
This is quite naive.

When I learned to drive, my tester required that I demonstrate competence with
the controls, using them at the appropriate times, in the appropriate ways.

If my car exposed all of its private internal operations, I would have
needed to know how to use them, and to demonstrate that I knew them. I
haven't a clue
about fuel flow and gear ratios, or air/fuel mixture or how the thermostat
affects how the engine operates. I'm quite happy that I didn't have to
demonstrate all of that too.

What the article ignores is that a good API provides everything the consumer
_needs_, while keeping the API small and easily comprehensible. A driver who
has to keep track of 5 details is more likely to learn to use his car more
quickly, and less likely to crash than one who has to keep track of 200
details and make decisions about each one.

~~~
tomp
You give a great analogy, but IMO draw the wrong conclusions. The car
_exactly_ follows the philosophy outlined in this article: the driver/user
doesn't _need_ to know the internals, but you can always open up the hood
and adjust/fix/modify the inner workings.

~~~
sz4kerto
There's one problem with the analogy - exactly the point you're trying to
use: a car is almost never updated. There's no patch for the turbocharger,
the piston design stays the same after manufacturing, etc. So it's OK to
retrofit a custom turbocharger.

~~~
spacelizard
Not true. Every now and then you'll see a car recalled because of
manufacturing or design defects.

------
ryanpetrich
Enforced privacy is advisement. It's a signal that if you want access to a
library's internal behaviours or state you should either communicate your use
cases to the library's maintainer or fork it and manually integrate upstream
changes. This leaves the maintainer free to make changes to internal
behaviours and state without breaking an implicit API contract they didn't
realize they had made.

~~~
nathan_long
"Advisory privacy" allows all the options you listed. It also allows me to
write the lightweight patch that I need and use it - immediately, temporarily,
and at my own risk.

------
tbrownaw
Unenforced privacy is a Bad Idea for (shared) libraries that may be upgraded
independently of whatever uses them. It means that any internal change
becomes a breaking change, and when some application stops working after a
library upgrade, _it is your fault_, regardless of the fact that you told
the application developers "don't do that" (the users don't know or care
that you said that; they only know that upgrading your library broke their
stuff).

------
kijin
There's a middle ground between public and private, and some languages call it
"protected".

In FOSS libraries that I maintain, methods and properties that I don't want to
expose are prefixed with an underscore and designated as protected. Protected
members are not directly accessible, but anyone who wishes to play with them
can create a subclass to access them. They also don't need to make any further
changes other than subclassing, whereas private members might need to be
overridden or (even worse) reimplemented depending on the language. So I think
"protected" hits a nice balance between simplicity, openness, and
maintainability.

The requirement to create a subclass to access protected members might come
across as an inconvenience, but it sends the same message as the article's
"dodgy" JS syntax: _Here be dragons; tread carefully and don't blame me if
your app breaks._ It would be very nice if users understood that the leading
underscore is meant to send the same message, but since they're apparently
not getting it, a little more inconvenience might be needed.
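
JavaScript has no `protected` keyword, so a sketch of this pattern (with
hypothetical names) has to lean on the underscore convention alone; in PHP
or TypeScript the same member could actually be declared `protected`, making
the subclass the only way in:

```javascript
// `_fetchRaw` is advisory-protected: underscore-prefixed, meant only for
// subclasses (in PHP this would be a real `protected` method).
class Feed {
  _fetchRaw() { return ["a", "b"]; }
  items() { return this._fetchRaw().length; }
}

// Anyone who wants the internals subclasses instead of reaching in from
// outside; no overriding or reimplementation needed, just the subclass.
class DebugFeed extends Feed {
  rawForInspection() { return this._fetchRaw(); }
}

console.log(new DebugFeed().rawForInspection()); // ["a", "b"]
```

The subclass is the small, deliberate hoop: enough friction to signal "here
be dragons" without locking the internals away entirely.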

------
brnstz
The two footnotes are good counter-arguments. The only time I can imagine
making a class final (can't extend) is when it's a matter of security (e.g.,
Java's String class).

Then there is the matter of "well, it's not MY fault if you didn't use the
public API and your code is now broken." I recall even Steve Jobs chastising
developers for doing this.

Building something with a sensible yet strict privacy model takes a lot of
upfront design. Makes sense for code that will be used by the masses, but
maybe not for a small project.

