
Law of Demeter - cardamomo
https://en.wikipedia.org/wiki/Law_of_Demeter
======
dwheeler
This is a weird deja vu. I created that page on Wikipedia back in 2003 so that
people could find out what it was. At the time there were a huge host of
things that Wikipedia didn't cover at all; it's remarkable how much Wikipedia
has grown since then.

As noted elsewhere, this is not really a "law" - it's a design guideline that
many find (usually) helpful. However, it was named a "law" and the name has
stuck.

~~~
dragonwriter
> As noted elsewhere, this is not really a "law"

It is a law in the regulatory rather than scientific sense, in that it was
adopted as a coding rule in the Demeter project.

------
nabla9
Heh. I didn't know this was called the Law of Demeter. I learned this
principle as a joke:

A good object should be like a stereotypical reserved, introverted Finn:
respect the privacy of others, don't talk to strangers, and keep only a few
close friends.

~~~
jonny_eh
> stereotypical reserved introvert Finn

Who? Huck Finn? or a cartoon character?

~~~
dredmorbius
Nationality.

~~~
stcredzero
The cartoon Finn is very extroverted.

------
dang
A thread from 2016:
[https://news.ycombinator.com/item?id=12611651](https://news.ycombinator.com/item?id=12611651)

C2 page: [http://wiki.c2.com/?LawOfDemeter](http://wiki.c2.com/?LawOfDemeter)

------
inlined
The Law of Demeter is sometimes a pain to work with, but it makes your
codebases easier to refactor and test when there are no calls to
foo.bar.qux.makeWidget() inside a WidgetFactory class. Instead, WidgetFactory
is constructed with multiple single-purpose objects that are easily mocked or
refactored.

This, however, adds complication to constructors in OOP, and I assume it has
always been the reason for the rise of Dependency Injection.
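A minimal Python sketch of that contrast (the `WidgetFactory` and
`make_widget` names are hypothetical, echoing the comment above):

```python
# Train wreck: the factory reaches through foo.bar.qux, so any test
# has to build (or mock) that entire chain of objects.
class WidgetFactoryBad:
    def __init__(self, foo):
        self.foo = foo

    def make(self):
        return self.foo.bar.qux.make_widget()


# Demeter-friendly: the single-purpose collaborator is injected
# directly, so a test can pass in any object with make_widget().
class WidgetFactory:
    def __init__(self, widget_maker):
        self.widget_maker = widget_maker

    def make(self):
        return self.widget_maker.make_widget()
```

Swapping `widget_maker` for a fake in tests is then a one-liner, with no
`bar`/`qux` scaffolding.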

~~~
gowld
Please don't mock objects if possible. Prefer real, then fake, then mock - the
last only for testing error conditions that "cannot happen" during normal
operation and that you can't force or fake in tests.

[https://medium.com/@CDehning/when-to-use-fakes-instead-of-mocks-c80188b9a3f1](https://medium.com/@CDehning/when-to-use-fakes-instead-of-mocks-c80188b9a3f1)

------
wongarsu
In my opinion this is a corollary of Conway's law [1]: "organizations which
design systems are constrained to produce designs which are copies of the
communication structures of these organizations".

Code that demands more communication than what people are comfortable with is
bound to fail (become unmaintainable and produce many bugs), so the law of
demeter codifies communication structures as a law about code dependencies and
architecture.

It follows that for small teams and personal projects the law of Demeter might
not be worth following since they don't suffer from these communication
problems. Also, microservices are an application of the law of Demeter to
larger chunks of code instead of just classes.

[https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law)

~~~
hyperpape
I think your comment is really about rejecting the law. You're describing a
kind of encapsulation between different modules, or packages (or some other
name for large collections of code). The Law of Demeter is really supposed to
apply at the class level, which is much more fine-grained than any kind of
organizational boundary.

~~~
wongarsu
I think in a monolithic Java or C++ application the organisational boundaries
often do flow along class boundaries (and actually the boundaries between two
programmers can be nearly as high as those between two teams).

In object oriented programming classes are basically black boxes of code with
well defined interfaces. In that sense modules, packages and microservices are
just new types of "classes" we invented since the law of Demeter was
formulated in '87. The purpose of all of those constructs is just to give a
clear interface to a black box of code, just that those black boxes get
successively larger.

~~~
hyperpape
The Law of Demeter is more fine-grained, though. If I write classes A, B and
C, I'm not supposed to call A.getB().getC().doThings(), even if I'm the author
of A, B and C. So the Law of Demeter is potentially more fine-grained than
anything you might infer based on organizational structure.

~~~
TeMPOraL
That granularity makes it pretty much ridiculous. What if A's main purpose is
to give you Bs, and B's main purpose is to give you Cs that do things?

(I'm probably rehashing some trivial objection, but throughout all my years
doing OOP, that one "law" never made much sense to me.)

~~~
slowmovintarget
The point of the law is that if your conditions are true, you are designing
the code incorrectly.

The point is to avoid making what should be transitive dependencies into
direct dependencies. The reason for this is that you may terminate transitive
dependencies with interfaces, but only if you don't write train wrecks like
a.getB().getC().doTheThing(). You should instead compose the interfaces so
that you are expressing a.doTheThing().

But sometimes ya gotta do what ya gotta do, which is why this was later
comically referred to as "The Jolly Good Principle of Demeter" instead of
"Law".

~~~
wongarsu
>The Jolly Good Principle of Demeter

Or as Jesus said: The Law of Demeter was made for man, not man for the Law of
Demeter (Mark 2:27).

------
galaxyLogic
Think about it this way. Your code says a.b().c().d(). But then you discover
that 'a' actually has method d() whose implementation is "return
this.b().c().d()". In other words


    a.d() <=> a.b().c().d()

Which call should you make?

Obviously a.d(), not only because it is shorter but because it makes your code
independent of whether a.b() returns something that understands c(), which in
turn returns something that understands d(), whose result is what you want.

a.d() is better code than a.b().c().d() because it makes fewer assumptions
about the rest of the system.

Now assume such an a.d() does NOT exist. Would it make sense to create it?
Yes, because THEN you will be able to write better code by simply writing
a.d().

Now there is a cost to writing a new method a.d(). But typically that cost is
not much, because its implementation is simply "return this.b().c().d()".

But then you realize b().c().d() should be replaced by b.d() to follow the Law
of Demeter again. So you actually must pay the price of writing two new
methods if you want to follow Demeter. Is it worth it? It depends on how much
you value maintainability vs. having to write less code right now.

My guess is that following Demeter is almost always better.
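The whole argument, spelled out in a small Python sketch (the classes are
hypothetical stand-ins for a, b, and c):

```python
class C:
    def d(self):
        return "result"


class B:
    def c(self):
        return C()

    def d(self):              # second new method: delegates to C
        return self.c().d()


class A:
    def b(self):
        return B()

    def d(self):              # first new method: one dot, not three
        return self.b().d()


a = A()
# The two calls are equivalent, but a.d() assumes nothing about B or C:
assert a.d() == a.b().c().d()
```

If C.d() later moves or changes its path, only the delegating methods need
updating; every caller of a.d() is untouched.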

------
robbrit
I've never found that sticking strictly to the Law of Demeter makes your life
that much easier for the reasons under "Disadvantages" in the article.

I prefer a more relaxed form of the law: you can do something like a.b().c(),
where b is a method that returns an interface. I think of it as a "composite
interface". You can repeat the nesting as much as you like; however, it gets a
little hard to read after a while.
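One way to express that relaxed form in Python, with `typing.Protocol`
standing in for the interface (all names here are hypothetical):

```python
from typing import Protocol


class Greeter(Protocol):            # the "composite interface" b() promises
    def greet(self) -> str: ...


class EnglishGreeter:
    def greet(self) -> str:
        return "hello"


class App:
    def greeter(self) -> Greeter:   # b(): typed against the interface
        return EnglishGreeter()


# a.b().c() is tolerable here: the caller depends only on Greeter,
# not on which concrete class App happens to return.
message = App().greeter().greet()
```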

~~~
johnsimer
If you're trying to do b.c(), you should inject b directly into your
constructor.

------
octosphere
Also similar:
[https://en.wikipedia.org/wiki/Principle_of_least_privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)

------
pontifier
That's not the way I think, so that's not the way I program.

When I'm thinking about an algorithm for solving a problem I sometimes need
extra info from many other places. I sometimes need to use globals to get
around all this message passing nonsense that would otherwise clutter my
function calls with 50 parameters.

I'd use GoTos if they were better supported.

I don't usually program for others, and I don't call myself a good programmer
though. I usually just need to solve my own problems, then move on to the
next.

~~~
inlined
Some tips:

If your function call needs 50 parameters (I assume this is hyperbole) then
you need some encapsulation. Many codebases with complex functions create a
struct for the function call, which also adds forward compatibility if new
struct members are optional.

If you use globals to pass state then you’re probably not thread safe and
you’re certainly going to have problems testing because you’ll need to “prep”
global state before every test.
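That parameter-struct idea might look like this in Python, using a dataclass
(the field names are made up):

```python
from dataclasses import dataclass


@dataclass
class RenderOptions:
    width: int
    height: int
    dpi: int = 96            # new members with defaults keep existing
    antialias: bool = True   # call sites working unchanged


def render(options: RenderOptions) -> str:
    return f"{options.width}x{options.height} at {options.dpi} dpi"


# Callers name what they pass instead of threading 50 positionals:
out = render(RenderOptions(width=800, height=600))
```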

~~~
stcredzero
_If your function call needs 50 parameters_

There's an object asking to be born!

------
hellofunk
Whenever I see things like this "law," or the ridiculous "DRY" principle, I'm
glad I don't program deep in the OO paradigm. There have been so many code
sins committed in the name of DRY that I think it is a good interview question
to ask why it is useful to _not_ follow some of these guidelines.

~~~
twh270
I'm curious, what code sins do you see committed in the name of DRY? I ask
because in my org there's rarely even an attempt made to DRY anything, so I'm
not really aware of how people screw it up.

~~~
jrochkind1
Basically, over-engineering.

One example, sometimes DRY'ing up things that just happen to use the same
logic now, but don't existentially/necessarily use the same logic. When they
stop using the same logic, you add extra 'magic flags', or you've got to undo
the architecture.

Sandi Metz says "prefer duplication over the wrong abstraction".
[https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction](https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction)
She also writes about how devs are reluctant to change existing abstractions
(for both rational and irrational reasons), making the wrong abstraction even
more expensive.

Which isn't to say that lots of duplication all over the place is a good sign
either, obviously not.
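A small illustration of that "magic flags" failure mode in Python (a made-up
pricing example):

```python
# Two features happened to share "double the amount" at first, got
# DRY'd into one function, and then diverged -- hence the flags:
def compute_total(amount, is_invoice=False, is_refund=False):
    total = amount * 2            # the once-shared logic
    if is_invoice:                # magic flag #1
        total += 1                # invoices later grew a surcharge
    if is_refund:                 # magic flag #2
        total = -total            # refunds later needed negation
    return total


# Undoing the "abstraction": some duplication, but each feature reads clearly.
def invoice_total(amount):
    return amount * 2 + 1


def refund_total(amount):
    return -(amount * 2)
```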

~~~
BeetleB
>One example, sometimes DRY'ing up things that just happen to use the same
logic now, but don't existentially/necessarily use the same logic. When they
stop using the same logic, you add extra 'magic flags', or you've got to undo
the architecture.

The DRY principle was created by the authors of _The Pragmatic Programmer_,
and as they formulated it, your example is _not_ DRY.

DRY is about _requirements_, not code or logic. The specification of a
requirement should exist in only one place in the code. It aims to reduce
instances of "The requirement changed, and I thought I fixed all the places in
the code, but I forgot one and it became a bug." If two pieces of code have
very similar logic/code structure, but are tied to different _features_, then
the DRY principle absolutely does not suggest you should refactor to make that
logic appear in only one place.

Pretty much every time I've seen people complaining about DRY, they're
complaining about something unrelated to the original DRY principle.
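A sketch in Python of DRY as the authors meant it (a hypothetical shipping
rule): the requirement lives in exactly one place, even though two
unrelated-looking pieces of code use it:

```python
# The requirement "orders of 100 or more ship free" exists exactly once:
FREE_SHIPPING_THRESHOLD = 100


def shipping_cost(order_total):
    return 0 if order_total >= FREE_SHIPPING_THRESHOLD else 5


def shipping_banner(order_total):
    remaining = FREE_SHIPPING_THRESHOLD - order_total
    if remaining <= 0:
        return "You qualify for free shipping!"
    return f"Spend {remaining} more for free shipping"


# If the threshold changes, one edit fixes both the cost and the banner.
```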

~~~
jrochkind1
Sure, and everyone complaining about OO isn't doing OO right, and everyone
complaining about scrum or agile hasn't actually experienced scrum and agile
correctly.

"No true Scotsman" is a very popular way to argue about programming on the
internet.

But the original question was "what code sins do you see committed in the name
of DRY?", right?

I think in practice it can be hard for many people to tell the difference
between "the specification of the requirement being in more than one place"
and other kinds of apparent code duplication. Because it's not always entirely
obvious what "the specification of the requirement" is, or what it looks like
implemented in code.

The fact that it's so easy to do DRY "wrong", or to think you know what it
means but be incorrect (in general or in a specific situation), is exactly why
one should be careful about it. Just because something appears to you (or a
colleague) to be about "DRYing up the code" doesn't necessarily mean it's a
good idea.

It doesn't mean there's nothing of value in the principle, just that one
should be careful with it.

There are few principles of software engineering that don't have some truth,
and also few that aren't misused or abused.

Certainly that particular example of "things that just happen to use the same
logic now, but don't existentially/necessarily use the same logic" is exactly
a description of when _not_ to de-duplicate code, that's why I mentioned it!
It is a "sin committed in the name of DRY", one reason to identify it is to
hopefully help others avoid it. The point is not that people doing that are
getting DRY right, it's exactly that they're getting it wrong, but in the
actual world of actual code, it is gotten wrong with some frequency.

If you aren't sure whether two passages of code that seem similar are really
sharing a "specification of a requirement in code" or not, those of us who
have experienced (and written!) lots of code that errs in acting as if they
were might urge you to err on the side of acting as if they were not. Or,
again, as Sandi Metz says, duplication can be better than the wrong
abstraction.

~~~
BeetleB
>Sure, and everyone complaining about OO isn't doing OO right, and everyone
complaining about scrum or agile hasn't actually experienced scrum and agile
correctly.

There's a huge difference between OO/scrum/agile and the DRY principle. OO
does not have a well defined origin and definition. Agile does, and is
_intentionally_ not well defined. Scrum does have an origin, but I bet if you
read the original author's description of it, you'll find it also is not well
specified. Pick any "No True Scotsman" argument and the root cause of all of
them is the lack of a clear definition. That's simply not the case with DRY.

Furthermore, all these three are a lot more complicated than the DRY
principle.

The DRY principle has an origin, and a well established meaning. If you go to
its Wiki page, it's nice, concise, and precise. There's no real dispute about
it. Generally, those accused of misusing it have never even read the
definition, and are taking a concept they heard from an Nth degree source and
are interpreting it by its name.

There are likely more people in the world who misuse the principles of quantum
physics than those who use it and understand it properly. We don't criticize
quantum physics for it. It is precise, and is not the cause of all its
misuses.

The other problem I have about people criticizing the "overuse" of DRY: Had
the DRY principle never been formulated, _you would see no less of it_.
Coupling different requirements via common code merely because the logic is
identical is probably as old as programming itself. I myself was guilty of
this (and bitten by it) long before I had heard of the DRY principle. Someone
formulating the DRY principle did not exacerbate the problem. People creating
these problems are mislabeling them because they've heard of the principle's
name. My challenge to you: Whenever someone couples code in the name of DRY,
ask them where they learned that they should do this. I doubt any of them will
list any reputable book, for example.

Which again contrasts this with OO/scrum/agile, of which there are several
books with quite different interpretations, and where one of the key
annoyances is that everyone _can_ back their stance using well known books.

Getting DRY right can be quite challenging in some projects - I'm not trying
to claim otherwise. In fact, I believe the original authors were not speaking
only of code, but also of documentation, databases, etc. One representation
that will cover all of the above. It is clearly challenging to take it to that
level. I'm all for criticizing it on those grounds. My complaint is that every
criticism I've seen of it wasn't about how difficult it is to follow
correctly, but about someone coupling two _clearly different_ requirements
with common code.

------
rthomas6
This is as good a thread as any to say that I don't like object oriented
programming. When I have had the occasion to use some open source library to
do something (mostly in Python, occasionally in JavaScript), it's been rare
that I haven't had to dig into the objects to figure out what the hell they
were actually doing, and the assumptions the creator made (without realizing
it) about how the object was supposed to be used.

I get the point of OO and see how it's supposed to be used, and how handy it
would be in large systems, but in my world the information hiding has been
more of a hindrance than a help. Maybe what I really want to say is that it's
harder than people think to do OO _well_, and when it's not done well, it just
makes everything more buggy and complicated.

------
ralusek
Roughly analogous to programming concepts of "encapsulation" and "decoupling."

An exercise I often use is to pretend like whatever I'm writing needs to be
usable on multiple projects, so it should be able to be a package that anybody
could find useful. Then you work backwards from the interface you'd like a
tool like this to have, and hide everything other than the interface.
Obviously you're not doing this for most business/controller logic, but for
most other things it goes a long way towards avoiding the kind of spaghetti
code the Law of Demeter seeks to weed out.

------
Lapsa
Phil Haack once wrote a nice article on this:
[https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/](https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/)

------
Animats
That way lies "objects".

------
utopian3
I wonder why it’s both a “law” and a “principle”? Did someone else change it?

~~~
inopinatus
Jim Weirich once called it the “Very Good Suggestion of Demeter”.

~~~
hyperpape
In the article linked above, Martin Fowler is quoted with an even weaker
formulation: the "Occasionally Useful Suggestion of Demeter".

~~~
inopinatus
They were very likely aware of each other. Jim Weirich said it in his talk
_The Grand Unified Theory of Software Development_ at Acts as Conference 2009,
just a few weeks prior to the Fowler tweet.

