

Rocket Science and the Law of Demeter - sandal
http://practicingruby.com/articles/shared/gulrqynwlywm

======
ggchappell
This is interesting stuff, but I'm having difficulty wrapping my head around
some of it. Here are some thoughts.

(1) Prohibiting function return values sounds bizarre at first. It's like
anti-functional programming -- _insisting_ on side effects. But eventually I
understood the reason: a returned object might have methods called on it. This
is bad because such objects are "strangers" (in the LoD sense); we should not
talk to them.

I guess the point is that traditional OO and functional style do not mix well.
You can do state-modifying OO stuff, or you can work entirely with return
values; you mix the two at your peril.

Or maybe the real problem is the everything-is-a-mutable-object idea. Applying
these ideas to programming in C++, we could return an int without fear, since
there are no methods to be called on it. We could also return a const object
(I think ...). In Python, we could return anything immutable: integer, string,
tuple, frozenset, etc.
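The same idea carries over to Ruby via `freeze` -- a minimal sketch (mine, not from the article) showing that a frozen return value, like a C++ const object or a Python tuple, can be handed out without fear of mutation:

```ruby
# Sketch (not from the article): freezing a return value gives it the
# "safe to return" property discussed above -- callers can read it,
# but any mutating call raises FrozenError (Ruby 2.5+).
def config
  { "retries" => 3 }.freeze
end

c = config
mutation_succeeded =
  begin
    c["retries"] = 5  # attempt to mutate the returned hash
    true
  rescue FrozenError
    false
  end
```

Here `mutation_succeeded` ends up `false` and the original value is untouched; the caller can use the return value but cannot talk the object into changing.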

(2) Concerning error handling, I see that prohibiting return values (which
also forbids raising exceptions, I guess) makes it difficult for a function
to signal errors to the caller. However, I do not agree that such signalling
becomes _impossible_: a method is allowed to modify its arguments, so it can
signal an error that way.
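A minimal sketch of that mutable-argument idea (the class and method names here are hypothetical, not from the article): the method returns nothing useful, and failure is reported by mutating a status object the caller passed in.

```ruby
# Hypothetical sketch: failure is signalled by mutating an argument,
# never through a return value.
class Status
  attr_reader :errors

  def initialize
    @errors = []
  end

  def record_error(message)
    @errors << message
  end

  def ok?
    @errors.empty?
  end
end

class Parser
  # Returns nothing the caller may use; errors go into `status`.
  def parse(input, status)
    status.record_error("input is empty") if input.strip.empty?
    # ... do the actual parsing work as a side effect ...
  end
end

status = Status.new
Parser.new.parse("   ", status)
```

After the call, `status.ok?` is false and `status.errors` explains why, without the caller ever inspecting a return value.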

But I'm a bit leery of the idea that requiring every function to handle its
own errors will result in more robust code. The argument seems to be that,
since you have to write the error handling at the same time you're writing the
code that produces the error, you're more likely to get it done. I'm not sure
of that. I think the main trouble with error handling is that it requires
work, and most of us don't really want to do that work. If a certain
discipline helps motivate one person to write proper error handling, well,
good, but it may not effectively motivate another person.

To put it another way: these ideas make some kinds of error handling more
difficult. Does making something more difficult help it to get done?

But I haven't really thought this particular issue through. Prohibiting error
signalling is going to have significant effects on software architecture at
all levels. Some abstractions will simply be impossible, since they do not
have all the information necessary for the handling of their own errors. I'm
not at all sure what these effects might look like.

(3) Then there are the timing issues.

> There cannot be tight synchronization, as the sender cannot tell if the
> message is acted on or not within any "small" period of time ....

I don't get this at all.

~~~
sandal
Hi, I'm the author of the article, but keep in mind that the work is based on
David Smyth's ideas, and I haven't tracked him down to verify my own
interpretation of things. Instead, I was just hoping to start a conversation,
and I'm glad to see you've responded here. I'll do the best I can to answer
your questions...

1) Yes, exactly my first thought as well! I've personally been gravitating to
a more functional-programming inspired form of OO, and so at first these ideas
bothered me. But then I thought about it a bit, and it seemed like these ideas
actually push things in the direction of Alan Kay's vision of objects: fairly
autonomous entities that interact solely through message passing.

In such a system, the interesting stuff gets pushed up into the messages
passing between objects rather than the state and functionality within the
objects. This makes it conceptually quite different from functional
programming, and probably has very different pros and cons.

2) Prohibiting return values does make a big difference to the way we need to
think about error handling. I think raising exceptions might be acceptable for
validating inputs, but for other kinds of failures, you would need to use a
callback mechanism to allow the client to handle problems. This could be an
argument you pass in, as you mentioned, or there could be a mechanism by which
either the caller or some other object could subscribe to failure
notifications. In any case, the object that caused the failure would be
responsible for how to notify its caller, if at all.
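Here's a rough sketch of the subscription mechanism I have in mind (the names are hypothetical): the failing object decides how to notify interested parties, via subscribed failure handlers, rather than through a return value or a raised exception.

```ruby
# Hypothetical sketch: callers (or any other object) subscribe to
# failure notifications; the failing object owns the notification.
class FileFetcher
  def initialize
    @failure_handlers = []
  end

  # Subscribe a block to be called whenever a fetch fails.
  def on_failure(&handler)
    @failure_handlers << handler
  end

  def fetch(path)
    unless File.exist?(path)
      notify_failure("no such file: #{path}")
      return
    end
    # ... read and process the file as a side effect ...
  end

  private

  def notify_failure(message)
    @failure_handlers.each { |h| h.call(message) }
  end
end

messages = []
fetcher = FileFetcher.new
fetcher.on_failure { |msg| messages << msg }
fetcher.fetch("/no/such/file")
```

The caller learns about the failure only because it chose to subscribe; another client could subscribe a logger instead, or nothing at all.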

I am not sure that the argument for doing things this way is at all related to
trying to make things harder or to encourage you to write error handling at
the same time you write a function (although those may be side effects).
Instead, I think that the point is to _centralize_ failure handling so that
the responsible object takes care of things rather than expecting every caller
to know all the possible ways things can go wrong. For the sake of brevity, I
cut an example from Smyth's correspondence that perhaps I should have left
intact. Hopefully it will help shed some light on this issue for you:

    
    
        >4. Since there are no return values, the objects need to be
        >   "responsible" objects: they need to handle both nominal, and
        >   foreseeable off-nominal cases. This has the wonderful effect of
        >   localizing failure handling within the object which has the
        >   best visibility, and understanding, of whatever went wrong.
        >   It also dramatically reduces the complexity of protocols, and
        >   clients.
        >
        >   For example, with POSIX file system calls, one needs hundreds
        >   of lines of code to fully handle all possible failures of the
        >   open() call. This code needs to be at every open() call in all
        >   clients, because the POSIX file system is not responsible.
        >
        >   A responsible object would deal with the foreseeable off-nominal
        >   cases. Then all clients benefit.
        >
        >   Clearly, something outside of the responsible object may need to
        >   act if the responsible object cannot resolve a problem with local
        >   action. OK - then a wider scope object needs to be involved: the
        >   responsible object sends the problem message "up" to some object
        >   with broader visibility. True, this might be, in a trivial system,
        >   the client -- but the problem is reported as a different message,
        >   not a return value from the original message (don't think
        >   continuations, because we are talking distributed here).
    

SOURCE:
<http://www.ccs.neu.edu/research/demeter/demeter-method/LawOfDemeter/Smyth/LoD-revisited2>
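To make Smyth's open() example concrete, here is a hypothetical sketch (the names are mine, not his) of a "responsible" file object: foreseeable open failures are handled once, inside the object, and anything it cannot resolve locally is escalated "up" as a new message rather than returned.

```ruby
# Hypothetical sketch of a "responsible object" for file access:
# foreseeable off-nominal cases are handled here, once, instead of
# at every call site.
require "tmpdir"

class Supervisor
  # An object with broader visibility that receives escalated problems.
  attr_reader :problems

  def initialize
    @problems = []
  end

  def problem_reported(problem)
    @problems << problem
  end
end

class ResponsibleFile
  def initialize(path, supervisor)
    @path = path
    @supervisor = supervisor
  end

  def open_and_process(&block)
    attempts = 0
    begin
      File.open(@path, &block)
    rescue Errno::ENOENT
      attempts += 1
      if attempts == 1
        File.write(@path, "")  # foreseeable case: create the file, retry
        retry
      end
      escalate("could not create #{@path}")
    rescue Errno::EACCES
      escalate("permission denied for #{@path}")  # cannot fix locally
    end
  end

  private

  # Report the problem "up" as a new message, not as a return value.
  def escalate(problem)
    @supervisor.problem_reported(problem)
  end
end
```

A client that opens a missing file never sees `Errno::ENOENT`; the responsible object creates the file and retries, and only unresolvable problems reach the supervisor.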

3) I stared at this "tight synchronization" statement for a long time too, and
it wasn't clear to me until I realized that Smyth's primary interest was in
developing guidelines that facilitate robust programming in distributed
real-time environments. The point he was making here, I think, is roughly
this: if
you are not expecting return values from your functions, you can treat them as
if they were all concurrent calls which immediately run in the background. You
will never need to _wait_ for a response, because you won't be calling
functions on return values if you follow his variant of the LoD. Instead,
results will get pushed to you via callback mechanisms only, which means that
an arbitrary amount of time can pass between the time you call a function and
the time that you know it has finished, and you won't need to worry about your
functions blocking execution. The Ruby example I show in this article does
demonstrate these ideas through the use of threading. In a language with
concurrent objects, you would get this effect "for free".
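A small sketch of that fire-and-forget reading (the class names are mine, not from the article): `compute` returns nothing the caller may use, and the result arrives later, pushed through a callback on a listener object.

```ruby
# Sketch: every send is treated as an asynchronous message; results
# come back only via callbacks, never as return values.
class Worker
  def initialize
    @queue = Queue.new
    # A background thread acts on messages whenever it gets to them.
    @thread = Thread.new { loop { @queue.pop.call } }
  end

  # The caller never waits: it enqueues the message and moves on.
  def compute(n, listener)
    @queue << -> { listener.result_ready(n * n) }
  end
end

class Listener
  def initialize
    @results = Queue.new
  end

  # Callback invoked by the worker when a result is ready.
  def result_ready(value)
    @results << value
  end

  def take
    @results.pop  # blocks until some result has been pushed
  end
end

listener = Listener.new
Worker.new.compute(7, listener)
```

An arbitrary amount of time may pass between `compute` being sent and `result_ready` firing; the sender never blocks on the call itself.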

If you enjoyed this article, you might also enjoy another issue of Practicing
Ruby that is on a related topic. Please check out "Responsibility-centric vs.
Data-centric design", by Greg Moeck:
<http://practicingruby.com/articles/shared/lrwkumltjnxr>

Hope that helps! I'll try to keep an eye on the conversation here, but may not
have time to continue discussing things until next week sometime. Still,
thanks for sharing your thoughts.

