Tell, don't ask (thoughtbot.com)
216 points by djacobs on July 19, 2012 | 86 comments

Example 3 adds a nonsense method, EmailUser#send_to_feed. What does that mean? Email users don't have feeds. If we're going to evangelize OO purity, let's do it right.

    class Post
      def created
        user.post_created(self)   # tell the user; assumes Post knows its user
      end

      def send_to_feed(feed)
        feed << self
      end
    end

    class TwitterUser
      def post_created(post)
        post.send_to_feed(feed)   # only TwitterUser knows about feeds
      end
    end

    class EmailUser
      def post_created(post)
        # no-op: email users don't have feeds
      end
    end
The post merely tells its user that it was created. Then the user can decide to do something else. All users know about posts, but only TwitterUser knows about feeds.

Honestly though, if I saw the not-so-good code, I'd leave it alone. It takes a pretty big justification to double the code for the same features.

This is all nonsense anyway. There's no such thing as a TwitterUser or an EmailUser. You just have users, some of whom use Twitter, some use email (and may have multiple addresses), and some use both, and they can edit their settings to add and remove accounts and change notification preferences. So it's really has-a rather than is-a.

Adding unnecessary inheritance is far worse than the original problem.

I suspect the author is using trivial examples to highlight the pattern. As with everything, it's about judgement. If you're going to inherit from another type, be sure to observe the Liskov Substitution Principle; otherwise, in avoiding conditionals you're introducing another approach that will lead you towards entropy. That is, after all, what many of these principles are for, isn't it? Reducing entropy. Writing the code is easy; maintaining it and adding crazy new features for the pesky business is where it gets expensive. Entropy kills apps.

In my experience, observing (pragmatically, never dogmatically) OO principles like SOLID reduces entropy. Thus they are worth applying.

Personally I prefer polymorphism and null object pattern over conditionals to deal with edge cases, though if you have a leaky abstraction in the first place no principle or pattern is going to save you!
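To make that concrete, here is a minimal null object sketch in Ruby. The class and message names are invented for illustration, loosely echoing the article's address example:

```ruby
# A NullAddress stands in for a missing address, so callers never
# have to branch on nil.
class Address
  attr_reader :street_name

  def initialize(street_name)
    @street_name = street_name
  end
end

class NullAddress
  def street_name
    "No street name on file"
  end
end

class User
  attr_reader :address

  def initialize(address = nil)
    @address = address || NullAddress.new  # substitute the null object once
  end
end

User.new(Address.new("Main St")).address.street_name # => "Main St"
User.new.address.street_name                         # => "No street name on file"
```

The conditional lives in exactly one place (the constructor) instead of at every call site.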

I completely agree with you. I actually find the 'cleaner version' much much harder to read.

Every time I look at the code I have to 'recompile' it in my brain, just because it's doing such a simple thing in such a darn complex way.

The better code results in far fewer headaches down the line. Having had to refactor entire apps that were broken by incrementally added crap, I vote for good longer code over sloppy shorter code. Of course, this is highly dependent on the situation and application.

Carrying on: why make all user models define a no-op?

   class User
     def post_created(post); end   # default: do nothing
   end

   class TwitterUser < User
     def post_created(post)
       # override with Twitter-specific behaviour
     end
   end

   class EmailUser < User
     # inherits the no-op
   end

A better solution would be to just use Observers. One for email, one for twitter. The user shouldn't care about how to talk with these services.

Observers are one of the worst possible solutions because they lie outside the purview of, well, everything in the system. You don't ever see them in the code. You don't know they are there. They are pieces of unicorn code that have side effects that you won't know about or see because they aren't "in the code". Horrible solution.

Code should be simple and easy to understand. Observers add significant complexity and make your code vulnerable to unnecessary bugs because the code that acts on your objects is invisible to the normal control flow of the program and those who write or maintain it.

They add (at least in Ruby) a few lines of code to a program; and take things like Email out of the User model and put them in a more appropriate place.
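For reference, the stdlib version really is only a few lines; a minimal sketch (class names invented). Note the coupling is visible only at the add_observer call:

```ruby
require 'observer'

class Post
  include Observable  # from the stdlib 'observer' library

  def create
    changed                 # mark this object as changed...
    notify_observers(self)  # ...then call #update on every registered observer
  end
end

class EmailNotifier
  attr_reader :delivered

  def initialize
    @delivered = []
  end

  def update(post)
    @delivered << post  # stand-in for actually sending an email
  end
end

post = Post.new
notifier = EmailNotifier.new
post.add_observer(notifier)  # the only place the coupling is visible
post.create
notifier.delivered.size # => 1
```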

And you'd never know that they existed if you look at the user model...

My stance is that things like sending email should be explicit calls so they are obvious. Sending email when registering a new user, for example, should be in the if user.save branch in your controllers or, if you have a more SOAish app, in the service that creates users.

def create
  user = User.new(params[:user])   # new, not create: create would already save

  if user.save
    render 'welcome'
  else
    render 'oh shit'
  end
end

Or, refactor that out a bit (ONLY IF NECESSARY!)

def create
  user = UserService.create_user_from(params[:user])

  if user
    render 'welcome'
  else
    render 'oh shit'
  end
end

Inheritance is not always the right solution.

Especially when the examples are written in a language which has mixins as core functionality.

Mixins are inheritance. When people say "prefer composition over inheritance," they don't mean mixins.
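Ruby makes this point easy to verify: an included module is inserted into the class's ancestor chain, so method lookup treats it exactly like a superclass (module and class names invented for illustration):

```ruby
module Notifiable
  def notify
    "notified"
  end
end

class User
  include Notifiable  # the module joins the inheritance chain
end

User.ancestors.include?(Notifiable) # => true
User.new.notify                     # => "notified"
```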

For me, this technique is bigger than objects or encapsulation. It's about reusing existing "branch points" that a language gives you (whether it is polymorphism, method dispatch, namespacing) instead of explicit conditionals at a level higher than the language. My general take is that explicit conditionals in a high-level language are a smell. Sometimes they're necessary, but if you tell yourself that they mostly aren't, you tend to end up with cleaner code.

I'm not saying it's possible to avoid conditionals completely, but this article gives several good examples of where it's possible. Someone should put together a similar set of examples in a functional language.

Personally, I think this is what 'encapsulation' is about. People have generally thought about 'encapsulation' as guarding the data but I feel that it is about both data + behavior.

A quick tip: Any time you check an object's state to decide which method to call on it, you are breaking encapsulation. Call the method on the object and let it figure out what to do based on the state it is in.

> A quick tip: Any time you check an object's state to decide which method to call on it, you are breaking encapsulation.

Unless that object is self. But a good tip nonetheless.
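The "unless it's self" caveat is exactly where the article's overheating example lands: the only state check lives inside the object itself. A sketch (the injected Alarm collaborator and its sound method are assumptions, not from the article):

```ruby
# Asking (breaks encapsulation):
#   alarm.sound if monitor.temperature > 100
#
# Telling (the monitor decides based on its own state):
class SystemMonitor
  attr_accessor :temperature

  def initialize(alarm)
    @alarm = alarm
    @temperature = 0
  end

  def check_for_overheating
    @alarm.sound if temperature > 100  # checks its own state, not another's
  end
end

class Alarm
  attr_reader :sounded

  def initialize
    @sounded = false
  end

  def sound
    @sounded = true
  end
end
```

Callers just send check_for_overheating; they never inspect temperature themselves.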

It's a poor example of a reasonable technique. Reusing branch points is great. Adding an expensive branch point (inheritance to the User class) to replace a cheap one (if statement) is not a win, particularly when it screws up the model.

Edit: to clarify, by "expensive" I mean expensive in terms of human hours to understand the code, not computer performance. Class hierarchies are much harder to understand than if statements.

That depends on what you're optimizing for. If you care about making your code smaller (which reduces the occurrence of bugs), making it more readable, etc., and performance isn't much of an issue, then it probably is a win. If you care about getting maximum performance... well, then, you need to get a good knowledge of the target system's performance characteristics and how language features are implemented in order to decide if it's a win or not (or, y'know, just profile the two options, I guess); it's entirely possible that a string of pointer dereferences might turn out to actually be faster than a conditional branch anyway.

> If you care about making your code smaller (which reduces the occurrence of bugs), making it more readable, etc., and performance isn't much of an issue, then it probably is a win.

Maintainability over the life of the code is more important than optimizing its present state to the most elegant solution. The code is going to grow, requirements will change, cases will be added not present in the current system.

The example was small, but this technique can be used to great effect and great ease of reading when you have larger objects and sets of differentiation that you're working with.

if statements are nearly always bad, because they don't communicate anything and they make the execution flow opaque. Will this method do what it says it will? Who knows! Will this statement be executed? Well, unless some condition is met along the way! Sometimes they are necessary, as in guard clauses, but I find avoiding them as much as possible leads to far more maintainable code.

If, and only if, my code is too slow is your concern relevant. That happens so rarely I find it not worth spending time on.

(Edited original post to clarify that I was not talking about performance.)

If the inheritance already exists and make sense, adding another overridden method may be a win. The thing I'm arguing against is introducing a new class hierarchy to remove an if statement. Inheritance is a big gun and you shouldn't use it unless it significantly cleans up the code.

In this particular case, extending User to get TwitterUser is a particularly bad example because it's adding an is-a relationship when has-a is better. After all, users may have both twitter accounts and email addresses, and these can change dynamically (they can edit their notification settings). Unthinking use of inheritance is far worse than extra if statements.
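The has-a version described here might look like this sketch (all names invented): the user holds a list of notification channels, each responding to the same message, and channels can be added or removed at runtime.

```ruby
class TwitterChannel
  attr_reader :feed

  def initialize
    @feed = []
  end

  def post_created(post)
    @feed << post  # only the Twitter channel knows about feeds
  end
end

class EmailChannel
  def post_created(post)
    # stand-in for sending an email notification
  end
end

class User
  attr_reader :channels

  def initialize(channels = [])
    @channels = channels  # editable at runtime: has-a, not is-a
  end

  def post_created(post)
    channels.each { |channel| channel.post_created(post) }
  end
end

twitter = TwitterChannel.new
user = User.new([twitter, EmailChannel.new])  # Twitter and email at once
user.post_created(:a_post)
twitter.feed # => [:a_post]
```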

To a certain level this is true. However if you hide all branching logic inside OO techniques, following the flow of logic and decisions becomes harder as they are hidden in deeply nested class hierarchies and overridden functions.

There's a saying about good OO code: everything happens somewhere else.

Is that supposed to be a good thing? I always found that style extremely hard to follow.

You have to change how you read code. Stop worrying about implementation details and look at the object's API, and stop digging into every method; you don't need to see the implementation all the time. Step back, look at the classes and the messages between them, and ignore the implementation whenever possible. When you understand how the parts work together, you tend to know which part is broken for any given bug, and you know you can ignore most of the other parts entirely.

Yes it's a good thing, because such programs are simple and pluggable allowing you to add features by adding new classes rather than modifying and potentially breaking existing ones.

But they're not simple. The complexity is still there, it's just distributed and difficult to trace.

I much prefer the functional way of doing things. The complexity is still minimized, but I can see where things are coming from, and how data is composed.

While it's harder to add "cases" to types in the functional style, I find myself wanting to add functions over types far more often, and therefore, I find that it works far better for me.

Of course do what works for you. But distributed is the wrong word in a sense, the complexity is broken down into simpler parts that aren't complex, that's the point. If you're unwilling to adapt your reading style, then you won't see the benefits because your thought process isn't congruent with the style.

You clearly prioritize data over behavior, so naturally functional code fits your thought process better, but your thought process is one among many. OO works well when your thoughts are behavior-centric rather than data-centric.

Functional and OO actually go very well together.

What I prioritize is the ability to quickly and easily answer the question "Where is the bogus value coming from?". When the inputs building up the value are spread all over the place, it's annoying to trace.

When you know how the parts work, you don't have to trace it. 99% of the time, I see a bug, I know what's broke, because I understand the program at a high level and know X just can't happen anywhere but Y. Object oriented programs are not a sequence of data transforms, if you insist on thinking of them that way then of course you're going to dislike OO. I've never had any of the problems you're relating to me, because I embraced a change in how I think when I changed to OO.

If you insist on thinking functionally or procedurally, well, then OO won't agree with you because you won't let it. You want to see everything in one place so you can see the big picture all at once; if that works well for you great.

But there's another way that we like: rather than seeing everything crammed into one spot in a complex way, we break it down into many small parts that are each individually stupidly simple. Each part is assigned a responsibility in completing the overall task, and each part is pluggable with any other part having the same interface. The big picture is in the message names between the parts and the part names themselves; the actual code implementing those messages is basically irrelevant. We find this simpler precisely because we can ignore everything but the one part we're working on, which is stupidly simple. And we can easily extend the system by subclassing any part to change the behaviour of the program without touching the existing parts, but simply by adding new ones.

Quite simply, we don't want everything in one place, it's inflexible, brittle, not pluggable, and not simple in the way we define simple.

You're talking about rows versus columns in the expression problem. You prefer making it easy to add columns, I prefer making it easy to add rows.
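The expression problem in miniature (a sketch, not from the article): in the OO layout, adding a new shape is one new class but adding an operation touches every class; in the case-based layout it's the reverse.

```ruby
# OO style: easy to add types (rows), costly to add operations (columns).
class Circle
  attr_reader :radius

  def initialize(radius)
    @radius = radius
  end

  def area
    3.14159 * radius * radius
  end
end

class Square
  attr_reader :side

  def initialize(side)
    @side = side
  end

  def area
    side * side
  end
end

# Case style: easy to add operations, costly to add types.
def perimeter(shape)
  case shape
  when Circle then 2 * 3.14159 * shape.radius
  when Square then 4 * shape.side
  end
end

Square.new(2).area       # => 4
perimeter(Square.new(2)) # => 8
```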

If that's what you need to tell yourself. You seem like the typical OO hater, uninterested in grokking it, just interested in justifying your dislike with a bunch of hand waving about how you think it's complicated without presenting an alternative that has the same flexibility. As you've not added anything interesting to the conversation, I don't care to continue it; good day.

I think I've been trolled.

Everything in code is somewhere else, and you get there with a method call


It's a slippery slope, though.

When you have a chunk of 50 consecutive lines with branches, then yes, that might be worth an abstraction.

Sadly too often, in ruby, I see people turning 8 consecutive lines into 4 layers of indirection...

The slope flows the other way IMHO. Most of the time, most programs I've seen aren't abstracting enough, and the code is far too sequential and unnecessarily complicated because of it. Excessive ifs and switches seem far more common to me than excessive use of polymorphism.

Two sides of the same coin. Your example is newbie programmers, mine is the same newbies on their second project. ;)

As usual it's all about striking the balance. I wonder if one day we'll come up with a programming language that can enforce these things in a meaningful way.

My example is not newbie programmers, rather, much code written by experienced OO programmers who don't have a Smalltalk background. I can't say what it looks like now, but I recall Rails active record implementation being heavily procedural in nature when I first looked at it way back and DHH is hardly a newbie. Decompile most classes in the dot net framework, and you'll find a fuck ton of procedural code even though it presents an OO API. 500 line methods are not uncommon.

When you learn OO in a procedural language like ruby or java, you tend to have a slightly odd idea of OO. Just because a language supports objects doesn't make it object oriented, it just allows object orientation.

I thought I knew OO until I learned Smalltalk, and discovered just how deep the rabbit hole can go. If's, foreach, while, unless, switches, these are all procedural constructs. You don't truly grok OO if you can't write a program without using procedural constructs (as an exercise).

And who knows, maybe someone will invent a language that nails the balance, I won't bet on it soon though.

Well, this is leading a bit astray.

In principle I agree with you, but in general procedural constructs are not harmful and should be used where appropriate.

500 lines is indeed a bit much, but I've seen through 100 line methods without an urge to refactor.

The best programs are those that have both; good use of patterns and the odd suspiciously long method if appropriate.

The worst programs are not only the classic spaghetti, but also those that dogmatically stick to a pattern even where it makes no sense. Java is notorious for the latter, but I also often see Ruby programs where the author religiously clings to the belief that no method can be allowed to exceed 5 lines of code. That, in combination with misunderstood unit-testing (foo.MUST_RECEIVE :bar), often leads to ridiculously tight coupling and effectively a monolithic brick that is resistant to change.

I call these programs Gnocchi-code. A close relative of spaghetti, just higher density...

Of course, that's where taste comes in. And I did say "as an exercise", I wasn't suggesting not using them ever, but as a kata to see how much you can do with only objects.

What's so "procedural" about 500 line routines (procedures, methods, functions)?

That's just crappy coding in any language. And I see it all too often, alas.

Having missed out on Smalltalk back in the 80s, I will venture to ask how one does conditional execution or terminates recursion without an "if", though? Even Lisp has its "(COND ...)" expression (yes, I know that's not OOP).

Sequential thinking. Long methods tend to come from thinking only about accomplishing a task step by step rather than building a machine that can solve many problems (an object model).

As for Smalltalk's conditionals: of course it has them, but they're not procedural constructs of the language. Smalltalk implements its conditional behavior with an object model, of course: the abstract class Boolean has two subclasses, True and False. The keywords true and false reference the single instance of each of those classes. Each class implements a set of methods like ifTrue:ifFalse: which take blocks (closures in modern Smalltalks). True implements ifTrue: by evaluating the block; False implements ifTrue: with a no-op, an empty method. Bam: Boolean logic implemented with objects and polymorphism.

Thus in Smalltalk, conditionals are method calls on booleans and come after the comparison rather than before.

  1 = 2 ifTrue: [ 'boom' out ]
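The same trick transliterated into Ruby (class names invented; Ruby's real true and false are not implemented this way): each boolean class implements the conditional message itself, so no if statement appears.

```ruby
class SmalltalkTrue
  def if_true
    yield  # True evaluates the block
  end
end

class SmalltalkFalse
  def if_true
    # False ignores the block: a no-op, like Smalltalk's False>>ifTrue:
  end
end

T = SmalltalkTrue.new
F = SmalltalkFalse.new

T.if_true { "boom" } # => "boom"
F.if_true { "boom" } # => nil
```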

excellent explanation of OO. thanks.

> I'm not saying it's possible to avoid conditionals completely

But it is. Smalltalk has no conditional statement, conditionals are implemented via polymorphism on the subclasses True and False (ignoring compiler optimizations).

Yeah, but even in Smalltalk, lots of #ifTrue: messages are a code smell.

Of course, you can write C in any language if you try hard enough.

That smells of a parlor trick -- passing one or two blocks / objects / functions to a boolean object to execute or not.

Having said that, I like how many OOP languages implement loops as a special case of "visitor", though.

It's not a parlor trick, rather it's what it means to be object oriented. Traditional language constructs are built with object and methods as library rather than special magic keywords. Whatever the problem, Smalltalk builds the solution using objects, thus it is oriented towards objects.

It doesn't just have objects, it's built out of them, Smalltalk "is" objects; library and language are the same thing. Your custom constructs are syntactically identical to core language constructs because it's all just library.

All of those examples look horrible, make debugging harder, and imply strong code coupling.

Example 1: we're mixing data with UI labels. How do you handle localization? By coupling your localization code with your user data/behavior?

Example 2: you're simply coupling the system_monitor to the alarm, while in the worst case the alarm should be linked to the system_monitor. Now if you want to add a "report_to_government_agency" method, you'd add it inside the code of every single one of your monitors (knowing that you don't want to report a broken light, but it might be a good idea for a melting nuclear core)? Note that I'm not saying the first code is good; it's just as bad... Also, the method becomes very poorly named (I want to know if the sensor went back to normal, but every time I query "check_for_overheating", it just rings the alarm and doesn't give me any info back?).

Example 3 is just poor usage of pseudo-inheritance (and an abuse of duck typing). You create a new type of user for every messaging service? And if a user uses more than one messaging service, you just create a type for every combination? Not very scalable nor readable, IMO.

The last example is just as bad: useless inheritance, a senseless object. The definition of street_name is wrong; the doc will read something like "street_name returns a street name, or an error message if there's no street name defined". How do you know if there's no street name, now?

All those examples basically make extension harder. They're everything that's wrong with OO. An object is not one structure that contains data and does everything that can be done with it. An object is about giving organized access to pertinent data and/or pertinent behavior and (potentially) allowing you to change them.

This is a great little refresher. I recently inherited a codebase of about 3500 lines, 2500 of which are in one class. Not OO.

Incidentally, if this article blew your mind or if you're interested in Smalltalk, I can't recommend this book enough:


The author has a radical take on OO in Smalltalk and shows that going to absurd lengths to eliminate if-statements (well, ifTrue: et al., because it's Smalltalk) can lead to both better readability and better performance.

Going down this path seems to put your code at risk for dangerous mixing of concerns, in some cases.

Unit-testing the check_for_overheating inside SystemMonitor looks complicated... The "sound_alarms" call inside probably needs to be a reference to a "Speaker.sound_alarms", right? Why should the SystemMonitor be locked to the API of a Speaker? etc.

The 'sound_alarms' call looks more like a local method to me, which could invoke the speaker in a loosely coupled fashion.

The point is more about putting the behavior itself in the object. For unit testing, you'd want to move the individual tests into their own classes, letting the SystemMonitor be the glue that calls them and routes responses to the appropriate system.

Your second sentence pretty much hit the nail on the head, as I was reading your first: I'd create a different "controller" object that glues the SystemMonitor and the Speakers together. Then you'd have the neat little "tell, don't ask" in THAT object, instead. (Pretty much what you suggested)

Disclaimer: not a Ruby user. I'm confused by example 4. Why would you return a message from either of them? Shouldn't messages in general belong to the view?

    {{ user.address || "No address on file" }}
The "not so good" code is essentially this, but inside a wrapper in view code. If you need different markup for a missing address, you either stuff it into a method or change the method's return value to nil... and what if only street_name is missing, not the whole address? It looks like a big mess to me.

Your example is not functionally equivalent as address is a model instance with many address-related properties. Though I agree with your general premise. This, to me, seems like the Tell, Don't Ask solution:

    class User
      delegate :street_name, to: :address, prefix: true, allow_nil: true
    end

    <%= user.address_street_name || "No street name on file" %>

Makes some good points. I can't help but notice, though, that the "good" examples are all longer.

It makes me want to design a programming language that makes good code shorter than bad code...

With the exception of example 3 (which is possibly a bad example, see the discussion above), this is mostly down to how much of the surrounding code is shown, rather than an actual increase in code size.

Example 1: 5->1: It did get shorter.

Example 2: 5->8: One of the extra lines shows the call to the check_for_overheating method, which wasn't shown in the first version. The other two declare the SystemMonitor class, which was declared "off-screen" before.

Example 4: 7->13: Five lines for declaring and defining User.address, which was off-screen before. So it did get a line longer.

Ick, breaks command/query separation, the other OO rule that along with tell, don't ask I find to be a hallmark of well coded OO systems.

    class SystemMonitor
      def check_for_overheating
        if temperature > 100
          sound_alarms
        end
      end
    end

This has long been one of my favorite methods of using OO code to my advantage; and is one of the main reasons that my code is OO in the first place.

There are many cases where it's easier/lazier to have if-statements.

Going further, you can use the things learned in this article to make your general-purpose code potentially faster as well. For example, say you have a set of things that must be performed in order, and sometimes some of those steps are missing. Instead of performing a null-check, you can have a default no-op case; that way, instead of:

   if(firstAction != null) firstAction();
   if(secondAction != null) secondAction();
   if(thirdAction != null) thirdAction();
in your inner loop, you can have:

   firstAction();
   secondAction();
   thirdAction();
Then, whenever firstAction, secondAction, or thirdAction are set, you can say:

   firstAction = newFirstAction ?? noop;
   // etc.
All in all, I'm glad this sort of knowledge is getting out there. I'm just grateful I had really, really good CS teachers back in high school (I don't remember my college talking about OO as a way of removing if-statements).

* Users of Java, of course, will just need to write a NoOp instance of their interface to take advantage of this.

I'm not clear on how that makes your code more maintainable, though. The first case makes it explicit that any of the actions can fail to occur, whereas the second one, on first glance, seems to have all the actions occurring. I, as a newcomer to this code, will almost definitely make that mistake, which will make debugging or maintenance harder.

The only way the second is even -as good- is if I'm constantly holding in my mind the various idioms you've used in the code, in this case that your function pointers are never null but will instead refer to an action that might do nothing. As far as I can tell, this is more work for me, for no gain.

I would love to hear your reasoning why this way of doing it is -better-, as opposed to just -more object oriented- (or -marginally less typing-).

I may expand on this later; but whenever you come to a new code base, you're going to have to learn the various idioms at work in that code base. This is especially true when you're working with some more complicated languages where no one uses the whole set of the language (see: C++).

When I'm writing my code, I personally find that being able to trust what my code is doing makes it more readable. In the case I wrote above, more than likely I would have arrived at that point by first writing whatever the first action was, then coming to know that there could be two different actions (causing a method call/inheritance to occur), and then realizing that sometimes nothing might happen. Now we have a nothing case. The nothing case did not negatively affect my code flow. I am not 100% sure I would write code like the above in the first place; it would grow to that state organically. But the advantage of trust later on was worth noting.

I originally heard this concept a long, long, long time ago, and one of the interesting selling points from the person who told me about it was that code could run faster with no if-statements: a mispredicted branch forces the processor to rewind, whereas a guaranteed jump is potentially less expensive, especially if the code is in the cache. Of course, this would be a premature optimization, but if the code occurred in the innermost loop, there may be some gains to be had that otherwise wouldn't be.

I suppose, for me, it looks cleaner when you're dealing with larger projects. That said, as I contemplate it further, I could certainly see where it would slip up some people, especially newcomers to my code. This sort of creativity would probably primarily spring up in organic/fluid code where OO paradigms are already in place.

Thanks for making me think on this further :)

It's an interesting point about the speed. In this case, since it's just a function pointer, there's no polymorphic overhead, so avoiding branches is a no-brainer.

On the other hand, if you were actually extending a class to make a DoNothingClass version, then the overhead of dynamic binding plus the function call would make it somewhat slower (branch prediction on a NULL comparison will cost at most 5 clock cycles in a single-thread pipeline, or none if you predict right) on those checks where the DoNothingClass is the one you find. For instance, if you had a sparse array of Class* and wanted to iterate over them, the NULL check would probably be more efficient than pointers to a singleton NullClass, especially since branch prediction will start correctly predicting NULLs more often.

So, you know, trade-offs.

"will cost at most 5 clock cycles in a single-thread pipeline"

Being a little pedantic here: you can't safely say this part without knowing which kind of branch predictions the processor uses; and, more importantly, how deep the branch prediction can go. There are some processors that will branch predict once ... and then again, and again and again. The first one they get wrong, they have to roll back to that first one. But then, you're right. The more times that single method is called, depending on the implementation of branch prediction, the more often it's going to be right, and so as long as those values aren't changing often (I don't see why they would in the case we're trying to suggest), it may eventually fade into nothing.

Needs moar testing!

I liked the first example, but then it felt too preachy; OOP isn't necessarily the correct way to do things.

I really like the clean design of the site though, very fresh imo.

Awesome! This is the kind of article I love to see on HN.

Refactoring: http://www.amazon.com/Refactoring-Improving-Design-Existing-...

Read it. Learn it. Love it. Make it a habit that is so ingrained you do it automatically.

Extract method: do it reflexively.

I'm not trying to be elitist or superior, but I can't understand why this has spent such a long time on the front page. Shouldn't the content of the article be considered pretty basic knowledge at this stage of OO? Deferring implementation specifics using polymorphism was one of the first things I learned about OO when I was introduced to it (via C++) around 1992. It was definitely 'aha' worthy then, when the vast majority of commercial development was procedural, but these days? Shouldn't it be considered a basic tool in most devs' repertoire?

It should be, but I can assure you that it's not. Also, I think this has much less to do with OO than with conditionals at large.

If you're interested in this kind of thing I highly recommend Avdi's Objects on Rails book (it's free) and Destroy All Software's screencast collection.

Cool, I haven't heard of this before; I will think about it. Presumably, an exception would be 'view' type objects in an MVC setup?

Yes and no, the approach I've used in MVC (.Net) is passing View Models to the View (rather than pure models). That means you can push some of the "tell" into the view model rather than having that logic in the view itself.

I haven't done any RoR but I imagine a similar approach would work there.

I apply this principle to view objects as well. Can you give an example of something you think should be excepted?

I'm still thinking this through, here's what I came up with:

If there's an on/off button to show additional information on the screen, it would seem weird to me to have the button (view) try and display the additional settings. The view has no idea what kind of environment it is in, how does it know if it can display the information without moving other things around. But a controller object managing all the views on the screen does. This also avoids subclassing or adding methods to the view.

The first example isn't a good one. You don't want to have the user class responsible for returning the welcome message.

Excellent. As a self-taught hacker, I always felt like I was missing something about OO, because it's introduced with an analogy to real world objects, while glossing over the more subtle contracts that programming objects have with each other. Seems like we need a more developed vocabulary to express these things.

I first heard "Tell don't ask" in relation to Smalltalk.

If you want a developed vocabulary, Smalltalk is where you find it (and it's been there for thirty-odd years).

What would you recommend? Smalltalk or Pharo?

I'd recommend "The Art and Science of Smalltalk" by Simon Lewis.

Because it's so old, Smalltalk does things very differently to other development environments.

The GUI is strange by today's standards (Smalltalk was why Xerox developed the GUI - you can see how much extra work Apple did to make the windows/icons/menus that we use today).

And it uses an image (which is sort of like developing in a VM, and then copying that VM to another machine to deploy).

So diving straight into a Smalltalk may leave you a bit lost - whereas the book is quite a good primer on OOP in general and where those OO patterns came from (whether Squeak, Pharo, GNU or one of the big expensive implementations).

Pharo is a Smalltalk, and yes, Pharo.

I think the vocabulary has been there, it's just very poorly taught/communicated. The c2 wiki was where I first encountered it. Old OOPSLA program proceedings, if you can find them, have some jewels among the cruft. And there is always Smalltalk Best Practice Patterns...

Relatedly, a long, long time ago I found Allan Holub's PPT Everything You Know is Wrong quite good: http://ebookbrowse.com/everything-you-know-is-wrong-pdf-d281...

Wow, short article, but huge impact... I'm gonna start doing this... Makes so much more sense. Thank you.

The example is too simple to describe this design.

The examples could be much better...

This principle applies much more broadly than just to programming with objects, for example it's an excellent way to deal with bureaucracies or managers who don't always read their email.

