Rails – The Missing Parts (joingrouper.com)
205 points by tomblomfield 1150 days ago | 167 comments



The proof is always in the pudding. While there are good and reasonable times to introduce "interactors", this particular example is poor. The tests presented are anemic, and the code is absolutely not any clearer by being extracted and wrapped. I would indeed have kept all this in the controller.

The key point for an "interactor" extraction is imo when you have multiple models being created in symphony, like a Signup model. Or if you for some reason need to reuse the behavior.

But if all your controllers look like this, with one "interactor" model per action, you're doing it wrong.

Whatever floats your boat, though. If this is what you prefer, great. But please hold the "beginner's version" crap. Plenty of large apps are built with vanilla Rails. Basecamp is one.


I rewrote the code in this example to use the "Beginner's Version" of Rails (sigh). You judge which you like better: https://gist.github.com/dhh/9333694


Here's another version that doesn't even use private methods in the controller and uses a PORO for the email grouping: https://gist.github.com/dhh/9333991


Thanks for writing this. While I often look for ways to take external-interaction-responsibility away from Rails objects, the OP didn't really make sense to me, at least in terms of saving time and making things more logical.

However, there's a distinction that has to be made between your example and the OP's. In the OP, a failure anywhere in the Interactor, including the delivery of emails, would cause the controller to enter the "failure" branch:

    if interactor.success?
      redirect_to home_path
    else
      flash[:error] = interactor.message
      render :new
    end

Whereas in your example, the controller would take the successful branch if the data model was saved, regardless of whether email delivery failed:

    if @grouper.save
      ConfirmedGrouperEmails.new(@grouper).deliver
      AssignBarForGrouper.enqueue(@grouper.id)
 
      redirect_to home_path
    else
      render :new
    end


So we're not comparing apples to apples here. However, as a layperson, I'd have to agree with you: Why should the error be raised to the grouper-creating user, when it should be going to the part of the system that handles mailing? But maybe the actual details are more complicated than that...


Why would the delivery of the emails fail? Because your SMTP server is down? That's an exceptional state, handle it with exceptions -- not with conditions.

Or maybe because the email addresses are invalid? Handle that when they are captured. It's way too late for that here.
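The exceptions-not-conditions distinction might be sketched like this in plain Ruby (all class and method names here are hypothetical, invented for illustration):

```ruby
# Hypothetical sketch: treat a down SMTP server as an exceptional state.
# Infrastructure failures raise; they are not threaded through
# success/failure flags on every send.
class SmtpDown < StandardError; end

class Mailer
  def initialize(smtp_up:)
    @smtp_up = smtp_up
  end

  # Raises on infrastructure failure instead of returning false.
  def deliver(to)
    raise SmtpDown, "SMTP server unreachable" unless @smtp_up
    "delivered to #{to}"
  end
end

# The caller handles the exceptional state once, at the boundary,
# rather than checking a condition after every delivery.
def send_confirmation(mailer)
  mailer.deliver("user@example.com")
rescue SmtpDown
  :retry_later # e.g. hand off to a retry queue
end
```

The point is that the happy path reads straight through, and the rare failure is handled in one place instead of branching every caller.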


The failure of delivery of emails should be handled gracefully such as within a message queue that gets retried until it succeeds or dies.
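The retry-until-it-succeeds-or-dies semantics described here could look roughly like this plain-Ruby sketch (a stand-in for a real queue such as Sidekiq or Resque; names are hypothetical):

```ruby
# Hypothetical sketch of a queue that retries a job until it succeeds
# or, after max_attempts, parks it in a dead-letter list for inspection
# instead of surfacing the error to the end user.
class RetryingQueue
  DeadLetter = Struct.new(:job, :error)

  attr_reader :dead

  def initialize(max_attempts: 3)
    @max_attempts = max_attempts
    @dead = []
  end

  def run(job)
    attempts = 0
    begin
      attempts += 1
      job.call
    rescue => e
      retry if attempts < @max_attempts
      @dead << DeadLetter.new(job, e) # job "dies" gracefully
      nil
    end
  end
end
```

A transiently failing job (say, a flaky SMTP connection) succeeds on a later attempt; a permanently failing one ends up in `dead` rather than in the user's face.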


I think a lot of blog posts use trivial and contrived examples for the sake of explaining the concept. There aren't a lot of people out there who are good at explaining complicated refactorings. Katrina Owen comes to mind as one of the few who is good at explaining such refactorings.

Perhaps if someone made a post like this with starting code that was more complex it would be a better example, and also harder for you to counter it with a couple gists ;)


Isn't ConfirmedGrouperEmails here just an Interactor / Service Object anyway, but one that is derived from ActionMailer?

To my mind, this code is much clearer, which proves the OP's point.


I can't seem to find it on Google. What's a PORO?


Plain Old Ruby Object, as opposed to an ActiveRecord model.
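For illustration, a PORO is just an ordinary class with no framework ancestry (this example is invented, not from the gists):

```ruby
# A PORO: no ActiveRecord, no framework base class, just Ruby.
class EmailGrouping
  attr_reader :addresses

  def initialize(raw)
    # Split a comma-separated string into trimmed, non-empty addresses.
    @addresses = raw.split(",").map(&:strip).reject(&:empty?)
  end

  def count
    addresses.size
  end
end
```

Because it touches no database and inherits from nothing, it can be unit-tested with no Rails environment loaded at all.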


Never seen the term used before, but by analogy to POJO and POCO it would have to be a "Plain Old Ruby Object".


Plain Old Ruby Object


I've spent the last few months re-writing a medium-sized code-base (several hundred thousand LoC) that looks like your version into code that looks like the blog version. Test suite run time has dramatically decreased, code is more loosely coupled and we see far fewer bugs.

> You obviously need to choose the patterns that fit the problem you’re trying to solve – it’s rarely one-size-fits-all, and some of these principles may be overkill in a very simple 15-minute blog application. Our objection is that Rails feels like it’s a Beginners’ Version, often suggesting that one size does fit all

This quote from the post (minus the "Beginners" jibe - sorry!) is my core objection. Your example works fine if you're working with a simple application. But the "Rails Way" feels like it starts to fall over when you get to medium or large codebases.

I feel like Rails could benefit hugely from a few core "Advanced" features that help you grow a small application into a larger, functioning business. Sure, you can add them yourself, but then you lose the advantage of shared standards and strong architectural conventions. Every new developer we hire has to learn what exactly we mean by "Interactors" or "Decorators" or "Policies".


Yes, rewriting a system after it's already settled and designed can indeed bring about a cleaner code base. But don't confuse that with "better architecture".

Don't even get me started on the hand-wavy "loosely coupled".

If this is a poor example, pick a good example. I'll be happy to code ping pong you whatever example you choose.

One way to spend hundreds of thousands of LOC on an application is to stuff it with needless abstractions. That doesn't make it "advanced", and it's not Rails that's falling over, it's probably just some shitty code.

Maybe that's a clue that your chosen paradigms weren't that helpful. Or rather, that they convoluted the code base instead of clarifying it. It's certainly a red flag that you need to teach your abstractions to new developers and that doing so is an endeavor.

POROs are great, though. Our app/models is full of them. We even added app/services too. The trouble I have is with people who, like you, fall in love with the flawed notion that their application is such a special snowflake that it deserves "advanced techniques". Bullshit. There are good techniques and bad techniques. Size of the application is rarely a factor, except in ballooning the ego of the programmer.

Anyway, the invitation stands. Present any piece of real code and we can ping pong on it. I enjoy talking about specifics and improving real code. I detest "oh that was just a poor example, but the general principle is..." kind of debates. If you can't produce a good example, you don't have a general principle.


> POROs are great, though. Our app/models is full of them. We even added app/services too. The trouble I have is with people who, like you, fall in love with the flawed notion that their application is such a special snowflake that it deserves "advanced techniques".

The main difference I see between the Rails philosophy and most of these "Use SRP and Interactors and things!" blog posts is that Rails is way more interested in using the correct tool for the job, and most of these blog posts are of the "use this for everything and every one!" variety.

I have one project right now where I decided I wanted a Command for one action. I'm accepting input from a webhook that I can perform asynchronously, it interacts with like 5 different models, and I wanted to beat up on the tests pretty thoroughly. So I did that.

Elsewhere I'm using concerns to make like 4 different models easily sortable.

And other places I'm using Module#concerning to keep models organized.

ONE TRUE WAY is overrated. I'd rather use whatever works given the context.


"ONE TRUE WAY is overrated. I'd rather use whatever works given the context." That's it.


This. This. This.


I hope I am not sticking my head into the lion's mouth here, but here (https://gist.github.com/BiggerNoise/9334673) is a link to a gist of some code in one of our apps.

Cases have Tasks and Activities. Activities are always associated with a case, and may be associated with a task.

In the controller, when assigning a case, one would type: AssignCaseCommand.new(params).execute!

The command class implements the Command pattern (the pretty well established name for interactors) and is a Composite as well.

BTW - the same command that assigns the tasks of a case is also used to merely assign a task. And, if we're just recording an activity, we would just use the create activity command.

I think it would be insanity to try and capture the rules associated with re-assigning tasks in the controller. (I am pretty sure that you would agree.)

--Minor Edit for clarity--
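The shape described above, a command that is also a composite, might look roughly like this (the class names follow the comment; everything else is invented for illustration):

```ruby
# Hypothetical sketch: AssignCaseCommand is a Composite Command that
# delegates to one AssignTaskCommand per task on the case.
class AssignTaskCommand
  def initialize(task, assignee)
    @task, @assignee = task, assignee
  end

  def execute!
    @task[:assignee] = @assignee
  end
end

class AssignCaseCommand
  def initialize(kase, assignee)
    @kase, @assignee = kase, assignee
    # Build one sub-command per task; the same sub-command class is
    # usable on its own to merely assign a single task.
    @sub_commands = kase[:tasks].map { |t| AssignTaskCommand.new(t, assignee) }
  end

  def execute!
    @kase[:assignee] = @assignee
    @sub_commands.each(&:execute!)
  end
end
```

This is what makes the "same command also used to merely assign a task" reuse possible: the composite and the leaf share the `execute!` interface.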


I asked for specific code, you gave me specific code, so here's how I would have structured that without the (imo, needless) command pattern: https://gist.github.com/dhh/9348053

It includes a lot of more context than your example, so I was guessing liberally at the missing pieces. But hopefully this is still easily digestible.


I think it's great that you stick your head into the lion's mouth, even if he chewed it clean off it's a great contribution :)

I think the main problem with your command pattern approach is that you didn't implement the full command pattern. At least, I assume you didn't; you didn't actually show the controller. The command pattern has one extra class, the invoker. The invoker takes commands and invokes them. A great example of this is the worker queue: it takes generic worker objects that have an 'execute' method.

It is never the class that creates the worker objects that executes them. Yet that seems to be the way you're going to implement the controllers that use these commands.

What I think you've built instead is an adapter pattern. You have abstracted a class of problems in a way that they have a common interface, so that in your controllers you have a uniform way of invoking them.

This is where I think dhh sees the muddiness in your code. Why are you abstracting your interaction away from the controller, which is supposed to be the exact thing that controls the interaction?

The only reason for that would be that the same interaction is happening in different controllers in exactly the same way. And that's something that dhh claims does not happen in real applications, or at least he demands an example.
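The invoker/creator split described above can be sketched in plain Ruby (names hypothetical; a real worker queue would serialize jobs, but the interface is the point):

```ruby
# Hypothetical sketch of the "missing" invoker: it executes commands
# it did not create, e.g. a worker queue draining enqueued jobs.
class WorkerQueue
  def initialize
    @queue = []
  end

  # Creators enqueue; the queue, not the creator, invokes.
  def enqueue(command)
    @queue << command
  end

  def drain
    @queue.shift.execute until @queue.empty?
  end
end

# Any object responding to #execute is a command to this invoker.
class RecordActivity
  def initialize(log, message)
    @log, @message = log, message
  end

  def execute
    @log << @message
  end
end
```

If the controller both builds the command and immediately calls `execute!` on it, this separation never materializes, which is the criticism being made here.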


Honestly, I was disappointed in his response. With all his harrumphing about real examples, I would have expected him to make an effort to understand a real example. But, as I said elsewhere, the downside of real examples is that they require quite a bit more effort to grasp than contrived ones.

To address your comments:

While it is true that one of the great features (and frequent motivations) for using the Command pattern is the separation of the creation and invocation, it is not the only motivation. The two big payoffs that we are getting are:

- Separation of invocation from the knowledge of how it is carried out

- Composition of complex commands from smaller, simpler commands

There's no requirement that the invoker is never the same as the creator. It's just that most often it isn't, particularly (as you noted) in a worker queue system. That's really just a feature of the implementation that the command readily enables rather than a fundamental property of the pattern. For us, sometimes it is, sometimes it isn't; that's not the primary payoff that we're going for.

I can't support the characterization of this as an Adapter. This is fundamentally behavioral. I have a class that represents an action that I want to have happen. All of the places that want this action do not need to know what is involved, they only need to create the command and hand it off/invoke it.

FWIW - I have five different places in my software that create an AssignCaseCommand:

- One controller

- Three commands that can create this as a sub-command

- A utility job

Plus that simple concept is available to any custom implementation code that we write.


The Command pattern always reminds me of free monads. By building a new free monad you can encode the structure and logic of a computation statically and then evaluate it repeatedly. One of the things that I think is missing with the command pattern is that it's probably not as easy to decide to, for instance, execute your commands in a pure environment.

This is, of course, a primary advantage of having a static representation of the domain-level demands of your code—you can choose many ways and many environments to execute it within. It's a function I use in real code all the time.


If reassigning a case is part of the domain, it belongs in the model.

I don't know how to put this gently, but this concoction looks like a hot mess with no separation between modeling and controlling.

These commands have a ton of domain logic all tangled up with activity reporting, formatting, and parameter parsing. It looks like a big ball of mud to me :(


> It looks like a big ball of mud to me :(

That is of course the most popular of all architectural patterns. :)


Most popular by what metric?


Practiced.


Source?

Edit: Curious why the downvote; I asked a legitimate question, requesting that the parent provide a source for the claim that this is the most utilized architecture in the industry, so please don't downvote without an explanation.


Not the command pattern. The "Big Ball of Mud" architecture.


Well, that's what happens I guess when you spend all day working on one problem and stop to check HN and randomly comment...guess I was too tired to realize what the parent comment actually said!


Perhaps that's why the OP used a simple example. Complex examples are pretty hard to get your head around at first glance; even sane systems can be perceived as a 'hot mess' upon a quick read without any context.

I disagree that re-assigning a case belongs in the case model. It is a complex interaction between several models. Extracting it to a command object allows all of the rules to be easily understood.


I feel like showing off some tests would help when the example is this complex. That would give a bit more context in terms of what you're trying to accomplish here.


True, the tests do demonstrate what can be accomplished, but that kind of gets back to my point: Any example complex enough to unambiguously warrant the use of a command pattern is going to involve non-trivial effort to learn the context. Asking that effort of someone reading a blog post is a bit much.

Unfortunately, it's easy to dismiss simple examples as not being worth the complexity and demand real examples which can be dismissed because they cannot be easily understood.


It's a model, all right—but it may not be part of the “Case” model. It might be part of a CaseAssignment model that is (for lack of a better word) a virtual model. Or it's part of the controller that does the changes. It mostly depends on how your code gets used and where it gets used from.

When I did something similar[1], I implemented it as a “virtual” model and applied validations to make sure that the PlanChange was properly constructed, and then performed the change in a transaction. I was able to test the hell out of it both with and without actual database interaction. But I was able to use it in three different locations and removed lots of special case code that had built up over two years of development.

You probably don't need Commands. You probably need something to orchestrate manipulating multiple ActiveRecord objects simultaneously in a transaction. It's still a model, but it may not be the model that you thought it was.

[1] Account from Plan [a] to Plan [b] that has a different quota, affecting Device assignments, possibly including moving some devices to Plan [b], some devices to Plan [z] (a default plan), and disabling some devices entirely because there's no quota available anymore.
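The "virtual model" described above might look roughly like this plain-Ruby sketch (no ActiveModel here; the class name follows the comment, but the quota logic and all method names are invented for illustration):

```ruby
# Hypothetical sketch of a "virtual" model: validated like a model, but
# orchestrating a change across several records as one all-or-nothing step.
class PlanChange
  attr_reader :errors

  def initialize(account, new_plan)
    @account, @new_plan = account, new_plan
    @errors = []
  end

  # Validation runs before anything is touched, so an invalid change
  # can be reported back to the user just like an invalid AR record.
  def valid?
    @errors.clear
    @errors << "quota exceeded" if @account[:devices].size > @new_plan[:quota]
    @errors.empty?
  end

  # In a real app this body would run inside a database transaction.
  def save
    return false unless valid?
    @account[:plan] = @new_plan[:name]
    @account[:devices].each { |d| d[:plan] = @new_plan[:name] }
    true
  end
end
```

Because it mimics the familiar `valid?`/`save`/`errors` shape, it can be exercised thoroughly in tests without touching the database at all.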


Although I would never call that a model, I think that we're doing similar things for similar reasons.

Command, Virtual Model, Verb Model, Interaction. If I were working on your team I am sure we could agree that one of those would do for a name.


My contention is that there often are times when the complexity of the business requires a fairly extensive set of operations to occur in concert, including the creation of several models, email-sending etc.

In this case, the Right Place (tm) to put these is in POROs / Interactors that can be tested in isolation and re-used across the code-base if necessary. This code should not live in controllers and certainly not in ActiveRecord models.

It sounds like you agree with that, since you're using them in your own codebase?

In that case, is there not an advantage to standardising the API for these POROs in Rails? Simple methods like fail!, success?, failure? and message.

Another way of asking the question - why do so many Rails developers fail to realise that they can use these concepts? Why are so many Rails codebases packed full of enormous ActiveRecord models that are crippled with numerous before_save callbacks?

Sure, shitty coders write shitty code. But can't you lend them a hand?
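The kind of standardised API being proposed might look like this minimal sketch (a rough approximation in plain Ruby; the base class and the example subclass are both hypothetical):

```ruby
# Hypothetical base class giving every interactor the same tiny surface:
# fail!, success?, failure? and message.
class Interactor
  attr_reader :message

  def initialize
    @failed = false
    @message = nil
  end

  def success?
    !@failed
  end

  def failure?
    @failed
  end

  private

  def fail!(message)
    @failed = true
    @message = message
  end
end

class CreateGrouper < Interactor
  # Returns self so callers can ask success?/failure?/message,
  # matching the controller branch shown earlier in the thread.
  def call(leader)
    fail!("Leader is required") if leader.nil?
    self
  end
end
```

With a shared convention like this, every controller can use the same `if interactor.success?` branch regardless of which interactor it invoked.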


I don't get why POROs would need some special API. I also don't get why you're pulling code out into things like ConfirmGrouper that still depend on the DB. Why not a true PORO and something like ConfirmGrouper.for(grouper)? Why not just write methods that return bools?

Though I don't know if I agree that the actual sending of emails belongs in a PORO either and not a controller. It seems like determining whether or not emails need to be sent is the kind of logic I would put in a PORO and do the actual email sending from a controller method as in DHH's example.

Edit: Just saw DHH's second example, that's essentially how I would write it.


> In this case, the Right Place (tm) to put these is in POROs / Interactors that can be tested in isolation and re-used across the code-base if necessary.

Key phrase: "if necessary".

It sounds like you're advocating for building abstractions before you actually have a reason to use them. Your reason seems to be "we may need to reuse them in the future" or, "one can see where we might move in a direction where we'd want to re-use this code", or worse yet, "we're definitely going to build some functionality after this release when we'll want to re-use this stuff."

I've been there, really, I have. Build the abstraction when you actually need it, not when you think you're going to need it, or pretty close to needing it, etc. Because 98% of the time, YAGNI.

> In that case, is there not an advantage to standardising the API for these POROs in Rails?

Not a big enough one that I've come across to warrant building domain-specific abstractions and special rules that, as @dhh mentioned, will serve only to raise the barrier to entry to your codebase.


> It sounds like you're advocating for building abstractions before you actually have a reason to use them.

If it sounds that way, I've over-simplified the examples in the blog post in the interests of clarity. I'll take more care in future! The specific interactor in question is used in 4 separate places.

From the post:

> You obviously need to choose the patterns that fit the problem you’re trying to solve – it’s rarely one-size-fits-all, and some of these principles may be overkill in a very simple 15-minute blog application.


"Sure, shitty coders write shitty code. But can't you lend them a hand?"

It's not clear to me that adding more APIs and conventions that said shitty devs won't read about, understand, or even be aware of is actually useful.

If shitty devs are your problem, don't fix the dev tools - FIRE the tools pretending to be devs...


There are a lot of brilliant developers giving talks at every RubyConf advocating these very techniques...


The standard API for POROs is Object (and Class, etc.). Standardizing a new API necessarily makes them no longer "plain".


The current app I work on has a lot of warts, but only five `before_save` callbacks (there's maybe five times that many total callbacks, although there are lots of validations).

If you really want to standardize the interface, use the tools that already exist: ActiveModel and validations. Your “Interactors” are models; they just happen to be coordinating models that affect multiple ActiveRecord models simultaneously. (I did this for an account plan change system that affected billing and device assignment to a plan. It could easily affect a dozen or more records, all in a single transaction, but was itself easily testable.)


> "I detest "oh that was just a poor example, but the general principle is..." kind of debates. If you can't produce a good example, you don't have a general principle." - DHH, March 3rd 2014

That's going in my quote book. Thank you DHH.


I think that labelling the use of POROs as an "advanced technique" is simply wrong. How could it be? In fact it's simpler than implementing an AR callback (with more complexity behind the scenes than anything!)

You are using app/services and POROs in your own app? So are we. But I guarantee they don't look like yours because there is no endorsed convention.

Why not? Because Rails is still clinging to such an outdated idea as MVC? Pure MVC at a large scale is bullshit.

"Size of the application is rarely a factor". I don't agree. Size of the application is always a factor. A big factor.

What I'd like to see is the community embrace conventions outside the fabled "MVC" paradise. Rails is "convention over configuration", and is content to hold your hand for your first thousand lines of code... but then what?


sometimes more abstraction makes for simpler code; it's just that there's now simple presentation code for the frontend and also simple code for persistence.

avoiding a new layer of abstraction because it can just about be done in vanilla rails isn't a solution, it's just throwing more lines of code at the problem.

recently i've found that everything i do at the frontend intermixed with the controller, models and helpers could be made simpler with two-way databinding.

it's also simpler: if someone wants to see if it looks right you can plug in some fixtures, and tweaking css can be done independently instead of needing to write something in ruby to affect how something in html looks.

maybe the answer for a project with a large codebase is to use lighter frameworks such as backbone rather than having to do full rewrites in emberjs or angularjs, and let the code rewrite itself, i.e. just clean up the portions of code that were only there for presentation.


You can't possibly think that there's never a use case for the Interactor pattern. Please watch Uncle Bob's talk: http://www.confreaks.com/videos/759-rubymidwest2011-keynote-...


Let's just say I have philosophical differences with the views presented in that presentation.

Also, I never said never. I said this particular example was a poor example and the general principle derived from it was equally poor. Garbage in, garbage out.


I can imagine people thinking that this is a strength, but my frustration with that talk is that it isn't presented with much to do with Rails.


I can possibly think that there's never a reason to reach for a pattern when it's better to let a pattern emerge from your code and then refine it. As far as the video is concerned I personally won't bother to watch it—Bob Martin sells snake oil as “clean code”.


I agree 100 percent with the statements in the first two paragraphs of "Part 1: Interactors". I have first-hand experience with every statement you made, from testing to managing the callback dependency mayhem. This post seems interesting to me mainly for the test suite benefits alone; being able to maintain independence in the test suite would definitely save lots of bloated stubbing and/or db hits. I'm wondering what kind of time improvements the suite got?


Say you wanted to do the same thing from a Rake task or the console. Wouldn't an interactor be a nice thing to have then?


I've only been involved in a couple of Rails projects (at least since the wedding list management app I wrote for my own wedding back in the v1 days), but I'm in complete agreement with David.

And honestly, Rails already has something that works beautifully for an “interactor” extraction. It's called ActiveModel. Add a bit of other code (include ActiveModel::ForbiddenAttributesProtection, ActiveModel::Validations::Callbacks, and ActiveModel::SerializerSupport; extend ActiveModel::Naming) and you've got an object that acts like an ActiveRecord object but orchestrates changes across multiple models in a clean, constructive way—without introducing an entirely different way of working with your code.

I've done this at both places I've been doing Rails and it works at both cleaning up the models it orchestrates and keeping the business logic in the places where it belongs. I'm working actively to get rid of some of the Rails Fads that have come through and wreaked havoc on the current codebase (we've got Presenters and Commands and all sorts of other stuff that is just making the code harder to understand with little value).

There's a million ways to skin this particular cat, and claiming that “adult” Rails needs this particular gem or that particular way of writing code says a lot more about the person claiming it than the people who aren't following that particular fad.


> And honestly, Rails already has something that works beautifully for an “interactor” extraction. It's called ActiveModel.

The thing on which Rails and DHH offer no guidance whatsoever is the separation of persistence from the domain model. The Active Record pattern is itself a conflation of those two things, so this is an opinionated choice, and I respect it as a sane default for a wide variety of applications. It works perfectly fine up to a certain scale, but when the business logic reaches a certain complexity and the persistence logic reaches a certain complexity, then it makes sense to separate them. A lot of proposed solutions might be overkill, but keep in mind that DHH and the company formerly known as 37signals specialize in minimalist software, so they combat this complexity from the UX down rather than conceiving architectures to support it. I agree with this philosophy inasmuch as, all else being equal, simpler is better, but the problem is that some applications are necessarily more complex than Basecamp, and sometimes we need more than what Rails provides out of the box. Convincing DHH of this is pointless because he doesn't have to deal with it and he has no incentive to understand anyone else's pain in this regard.


Here's the thing, though: Rails purposefully doesn't provide more than the minimum. I disagree with DHH on many things (ActiveRecord’s persistence model being based on MySQL misfeatures has been one of my biggest frustrations with Rails since he introduced it at a RubyConf where we both presented), but he's right that this sort of thing doesn't belong in Rails.

Everyone[1] needs a persistence layer and domain objects. That ActiveRecord is both ActiveModel and persistence is neither here nor there—and Rails 3 did a lot to making it possible to separate ActiveModel (domain object behaviour) and ActiveRecord (persistence layer behaviour) in a logical perspective so you can choose to persist differently than ActiveRecord does. Everyone needs some kind of controller. Everyone needs some kind of presentation layer. Aside from providing sane defaults for all of this, Rails doesn't try to dictate how your application is built.

The “more complex” architecture required for your application is unlikely to be a good fit for my application’s “more complex” architecture. For Rails to anoint one particular “more complex” architecture…is to fail anyone else who doesn’t need that architecture or would need something different. If you want to promote a particular type of architecture, there are ways: write an Engine; include generators. Give people guidance. Tell people why they want to use your particular architecture, but also tell them why they shouldn’t use it. If a lot of people need it, they will adopt it. Like Rspec. Like cucumber. Like ActiveModel::Serializers.

And, honestly, consider whether you really need your “more complex“ architecture or whether you can combat it the same way that Basecamp does. We're doing that with our product team, trying to find features we can remove and reimplement later when we really need them. Ours is a fairly mature product, so it's hard—but it's happening.

[1] For some large subset of everyone. Anyone who is reaching for something like Rails, Django, etc. needs some sort of persistence layer. Anyone who needs to store data needs that. Everyone needs domain objects.


Where did I say Rails should provide this? My point was primarily that attempting to convince DHH of your architectural choices is sisyphean and ultimately pointless since he has crafted a nice bubble for himself where he doesn't have to be exposed to architectures or complexities that he finds distasteful.


> The thing which Rails and DHH offer no guidance whatsoever is the separation of persistence from the domain model.

This seems to suggest you were looking for guidance or sane defaults for this. If that's not what you intended it to mean, I'm sorry for misreading it.


Your interpretation is not what I intended but neither is it completely unfair.

I am looking for guidance (or rather good discussion and ideas) about how to address this issue for larger Rails apps, just not from Rails or DHH. I don't think it's Rails' place to do this since I don't really think a one-size-fits-all approach is right and it sort of tramples over the core philosophy of ActiveRecord which I believe is a useful default. On the other hand I don't buy into DHH's philosophy that its all astronaut architecture either.


I agree with you - Interactors are particularly useful when you're creating multiple models, and the example could have been better.

But they're useful for dealing with side-effects of a particular operation too (sending multiple emails, notifying admins). Much better than ActiveRecord callbacks.

If you put this logic in the controller, what happens when you want a separate API controller that does the same thing? Or some admin functionality elsewhere in the codebase? There's zero possibility of code re-use.


Coulda, woulda, damn shoulda. You're future-coding with your "what ifs". Most controller actions are not reused. The time to extract for reuse is when you need to reuse, not when you're crystal-balling that you probably will in the future.

Because when the reuse case actually arrives, you might find that you need to reuse some, but not all of the action. If you're literally doing the exact same thing for both web and api, why are you using a separate API controller? Just use respond_to with formats.

Second, yes, you shouldn't have side effects like sending email in your models. But you don't need to! Just stick that logic in your controllers. That's what they're there for -- to render the views (of which emails are one -- see http://david.heinemeierhansson.com/2012/emails-are-views.htm...).

The AR callbacks are wonderful for coordinating the domain model. Often you'll want to create auxiliary objects when something else is created. That's what they're there for. Or to otherwise keep the integrity of the domain model in place.
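The kind of coordination meant here could be sketched in plain Ruby, without ActiveRecord (the class, the callback name, and the auxiliary record are all hypothetical stand-ins):

```ruby
# Hypothetical plain-Ruby stand-in for an AR after_create callback:
# creating one record automatically creates its auxiliary record,
# keeping the domain model internally consistent.
class Grouper
  attr_reader :confirmation

  def save
    # In Rails this hook would be declared as: after_create :build_confirmation
    build_confirmation
    true
  end

  private

  def build_confirmation
    @confirmation = { status: "pending" }
  end
end
```

The caller never has to remember to create the confirmation by hand; the model guarantees its own integrity, which is the use of callbacks being defended here.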


Sure, I think we're saying the same thing here.

This example is from a real-life codebase, and the interactor was extracted from 3 or 4 (bloated, repetitive) controllers. Perhaps this could have been made clearer in the post.

I'm not arguing at all that every controller needs an Interactor, or all Rails codebases should start out with Interactors present.

I am saying that Interactors are very useful concepts for re-using code, and that many medium and large Rails codebases could benefit from them.


Please do share the 3 or 4 controllers all sharing this logic. I'd be happy to play code pong with them.


Regardless of reuse, controller actions can get awfully large if unchecked. What would you say is the maximum LOCs for a public controller method?

Callbacks in Controllers and AR::Models tend to make things worse IMO for anything other than authentication; and for intricate actions interactors seem like the best defence.

I have had the (mis)fortune to jump into a number of large Rails codebases and I can say with hand-on-heart that the only ones that made any sense off-the-bat were ones using a this-or-similar pattern.

The community is embracing policies, interactors and decorators for a reason. Real developers are having real problems when Rails apps get a certain size, and there is no endorsed method on how to handle these problems.

Surely a couple of asides in the Rails docs and some official endorsement could help point new developers toward a possible solution? A solution the industry already seems to be adopting, regardless of whether 37Signals deems it fit for their particular domain.


> What would you say is the maximum LOCs for a public controller method?

As small as it needs to be in order to get the job done, and as large as it must be to get it done clearly. Sometimes that's zero lines of code; sometimes it's 500. You can usually factor down a 500 line method, but it may be worth asking what the breadth cost is should the code really be single-use.

If your metric is anything else, then you're playing Stupid Metrics Games, like that espoused by Code Climate.


You're really against Code Climate huh? Just what did it do to you to garner such hatred? I'm not personally a fan but I can see how it may help large companies in certain situations.

Anyways, when I asked about a "maximum" I was not asking in absolute terms. What do you see as a reasonable max for the majority of use cases?

At which point do you start feeling bad about your method?


I'm against Code Climate because it promulgates nonsensical metrics in ways that are actively harmful to good engineering. Just like Bob Martin and his concept of “clean code”. All of these things are at best sigil markers. They aren't solutions, and they can't provide any answers.

I don't have a max length to a method, because I'm not a prescriptivist. What I have is a point where I stop understanding what a method is doing, or I can't explain it cleanly. Sometimes I keep the method as long, but better comment it. Sometimes I factor it out. The point at which I do so varies based on the code, time, and many other factors.

I've worked on a C++ codebase that had a 3,000+ line function that was nearly impossible to factor out because of a non-trivial amount of state that would have either:

a. required passing much of that state as parameters to the factored out functions/methods (even if you put it in a wrapper object/struct); or

b. extracting all of the (working) behaviour into one or more objects whose sole purpose is to encapsulate the behaviour of this function.

I absolutely hated working on that function, but it was essentially the main loop of the program. You could step through those 3k lines and get a fairly decent feel for what the program was going to do, when, and how often it would repeat. The hardest part was where people before had extracted code out…that was called exactly once. We tried three times to extract the code—and failed three times, ending up leaving the code the way it was because our extractions were more complex and less understandable than the existing crappy code.

My point is that there's no such thing as a reasonable max for the majority of use cases. It depends on what you're doing and in what code.


"had a 3,000+ line function that was nearly impossible to factor out"

Sounds like the only reason it was nearly impossible to factor out was because someone didn't stop at line 500 and say "hey this is getting out of hand".


I hope I wasn't a contributor to the 3,000+ line function you're describing.


No, that's an earlier company I worked at. It existed before I got there, and I suspect it's still there now.


Do you have an example of a public controller method larger than even 25 lines that does its job clearly?


Controller method that I can share? No.

This method in mime-types is reported by Code Climate as a code smell. They're wrong: it's the smallest it can possibly be while still correctly performing the necessary goal; any smaller, and you have to break it into multiple smaller methods that provide no value except keeping Code Climate happy. https://github.com/halostatue/mime-types/blob/master/lib/mim... The method is ~35 lines long. There are other cases I can provide from open source work (https://github.com/halostatue/mime-types/blob/master/lib/mim... is a good example: deprecated code, parser for a file type where splitting into multiple methods only complicates the logic flow).

It is true that the larger a method gets the less readable, understandable, and clear it gets—but that should never translate into the sort of nonsensical request you made, asking about maximum LOC. Asking that paints you as a prescriptivist who doesn't bother understanding the context the code requires.

I've been writing software for a long time, and while I try to write methods as short, clear, composable “paragraphs”, I sometimes will write something much longer than is readable because I can't figure out a meaningful way to break it down. That comes over time and reading and interaction with the code.

The best developers in my experience are pragmatic. They are aware of design patterns, but don't treat them like sewing patterns. They are aware of development practices, but don't treat them like holy writ. The GoF design patterns book is an excellent descriptivist treatise, but then people started treating it like a cookbook and looked for reasons to implement Patterns everywhere, rather than extracting the patterns from their code.

Blog posts like this one (that tell me I should use an Interactor pattern) are actively dangerous, because they provide dicta without properly explaining the pain that the pattern evolved to solve, or the proper evolution of the pattern.

The current code base I work on shows a lot of evidence of Rails Fads just like this one, and I'm trying to get my team to step back and ask why we do things certain ways and to go back to first principles. Don't just reach for a Presenter because it's what was done before. Don't even reach for an ActiveModel::Serializer because it's what we're preferring now for API representations. Figure out your problem. State it clearly. Write the solution clearly. Find code that works similarly and figure out (a) if and (b) how they can be extracted into common code.

There is a cost to adopting things like Presenters and making smaller methods: your interface becomes larger. You can complain all you like of large files and functions, but code bases that have large numbers of classes whose purpose aren't clear…are harder to navigate and understand. (I have, in the current code base, unextracted code from external classes when that external class is used in one place and it makes the behaviour more understandable. It also provides a better place to understand where similar behaviour may appear later so that we can properly extract code if and when it is necessary later.)


> They're wrong: it's the smallest it can possibly be while still correctly performing the necessary goal; any smaller, and you have to break it into multiple smaller methods that provide no value except keeping Code Climate happy.

Wrong? Smallest it can possibly be? Those are strong claims.

I definitely disagree with Code Climate from time to time. But I try to keep an open mind.

I've found that trying to follow dumb rules can be enlightening, even when my first reaction is that I disagree. This goes for Code Climate as well as arbitrary rules, such as Sandi Metz': https://gist.github.com/henrik/4509394

I looked at your `priority_compare`, for example. As someone who did not write that code, I find it a little hard to follow. It has conditionals four levels deep (counting the ternaries). Your `MIME::Type` class is quite long, as is its test.

I would let the Code Climate feedback inspire me to attempt to extract a method object in this case, along these lines: https://gist.github.com/henrik/9355101

That would result in more code overall, but smaller objects on average, and to my eyes readability is improved because the complexity is a little abstracted. Do you truly feel it provides no value?


Your extraction attempt actually proves my point: your code is more complex, is less readable, is demonstrably wrong, and destroys performance in at least two ways (I’d benchmark my “ugly” code against yours every time and win).

More complex and less readable: you’ve taken the comparison out of context of the class which is being used for comparison and put it in a different class and file that has to be opened for understanding a priority comparison. Worse, that separate class and file isn’t going to be reusable for anything else, because it depends heavily on the (admittedly large, but justifiably so) public interface of MIME::Type. So, readability is made worse for the sake of a dubious improvement in testability and no value in reuse.

If your editor supports code folding (it should, and you should be using it), then you don’t even have to see the implementation of `priority_compare` if you don’t need it.

Demonstrably wrong: you’re calling `#simplified_value` twice. You could make that not as ugly by caching, but that’s already known to be a waste of time because your extraction…

Destroys performance: `priority_compare` is used for sorting (as the code comment states outright). Creating a new object for each and every comparison is going to send sort performance to hell in a handbasket with rockets on it. It’s the same problem that plagues people who use decoration heavily: performance sucks and memory use gets ugly quick. The calling of `#simplified_value` twice doesn’t help here, and caching the value only makes memory use worse.

The code comment on `priority_compare` is clear: it’s a specialized implementation of `<=>` used for sorting from MIME::Types#[]. That should immediately halt any consideration of extracting the code into an object that needs creation, and continuing on with that is a failure of good software development in favour of purity suggested by stupid rules. Even extracting it into a class method is of highly questionable value, because now you have to (as you do with your extraction) know the left and right comparison objects, rather than (safely) assuming that the undecorated version is the object itself. It’s rather the difference between operators overloading in C++ where you need to provide both left and right values vs only providing the right value because the left value is the object itself.

I considered separating the if/elsif conditions to separate methods, but you still need the if/elsif conditions, and all you’re doing is pushing code around in ways that, once again, will not be used outside of `priority_compare`.

I had considered implementing `priority_compare` at the call-site, but it’s used at two places in MIME::Types. So, the `priority_compare` is reused, but the component parts aren’t, really.

I will admit to laying a trap with `priority_compare`: I’ve worked on this code off and on for ten years. Code Climate, being a naïve program, cannot understand how the code is going to be used, but only understands how it’s defined and measures it against some fairly simplistic and stupid metrics. They can be interesting when you are first approaching a program to understand hotspots, but their value drops to zero very quickly compared to the understanding you have from working with the code regularly.

Where I think Code Climate (and similar tools; they just happen to be fairly high-profile in the Ruby community) goes wrong is that a lot of inexperienced and naïve developers tend to treat it as Truth that Must Be Followed. Many of these folks are also fans of so-called "Clean Code" techniques, otherwise lack the experience necessary to know when not to follow these sorts of rules, and assume that Code Climate is good because it simplifies the score to a GPA/grade.


Wrong: I'm sure it is. I did it without tests and didn't go through the logic carefully since that wasn't at all the point. The design was.

Performance: could be. If your context is one where performance is paramount and trumps readability/maintainability, that may certainly be easier to achieve with less factored code. I spent zero time trying to optimize the code.

I suspect this design doesn't make that as hard as you make it sound, though. My team writes code in the style of my example, and we don't have problems optimizing when it's called for. Though we avoid optimizing at the expense of readability unless we see a real problem.

I find it interesting that you think my example is clearly "more complex" and "less readable". I, of course, feel it is the other way around.

The fact that you've worked (mostly on your own, going by the contributor stats) on this library for ten years will of course mean that you know your way around it.

As a visitor to the codebase, I see big classes and big chunks of code that require me to load a lot into my head and digest it before I can make sense of it. Breaking that up a little means I can make sense of a smaller piece of more self-documenting logic, then dig down to lower abstractions as needed.

If you're saying that you find your way around your code better than my refactored code, I believe you.

If you're saying you think the refactored code would be harder to understand for the average developer, I think you're dead wrong.


Re: wrong: it's not only wrong, but your admission that you didn't go through the logic carefully (that is, you didn't pay attention to the comments, you didn't pay attention to the spots where it was called, etc.) says that you didn't actually pay attention to the design of the system. You refactored for a nonsensical version of “object purity” (trying to satisfy for those metrics) instead of correctness. You claim to have tried to make a point about the design and failed, because your refactoring at best did not help readability, didn't actually improve any of the metrics that Code Climate reports about, and objectively made the code perform worse in the context of how it's called.

This is what happens when you refactor in a vacuum, and the danger of following tools like CC or even the “advice” given in the article at the top of this chain. I would never let a developer who works with me get away with the sort of refactoring attempt you did.

As far as your code being more complex and less readable, you have traded minor depth complexity for greater breadth complexity without simplifying the main comparison at all, and (as I keep pointing out) made the performance worse, to boot. Your code is less readable because now I've got some kind of comparison object that I have to instantiate—and it takes two objects called “one” and “two”. Now, when looking at your priority comparison object, I have to jump between mime/type.rb and mime/type/priority_comparison.rb to understand the data values that are being compared.

The reality is that the MIME::Type class is dead simple: it's a validated data object with the self-documenting descriptive properties. MIME::Types is even simpler: it's a container that knows how to search for its contained class. I've done some fairly aggressive refactoring (it just used to be two files) to extract parsing and file interaction from the main classes, and what's left is just what's required to interact with the data provided by a MIME::Type and to work with a MIME::Type collection (although some of it is going away because the previous on-disk data representation was too anemic, and the current on-disk data representation provides just what's needed in a fairly efficient manner).

That you look at these and say “big classes!” says much more about your own biases than about the code. mime-types provides very little deep functionality, but a lot of documented and self-documented code. It just happens that the data included has a lot of attributes, and I'm a big believer in backwards compatibility. Next year, when I release mime-types 3.0, I'll be removing the deprecated functionality—and it'll cut the code in MIME::Type by about 35-40%, but it will still leave things that will trip up naïve complexity advice and refactorers.

I misspoke, though, about improvements to priority_compare: when I feel comfortable making mime-types use refinements, then I would probably make an in-place refinement of Nil to support #<=>. That probably won't happen until 2015 or 2016, though—I have to keep supporting versions of Ruby that don't support refinements until after the interpreter version EOLs.


Also glanced at the load_from_v1 code.

I can't argue with the "deprecated code" bit – if you never need to understand or edit that code again, by all means leave it. There will be no payoff for spending time reducing the complexity.

But "splitting into multiple methods only complicates the logic flow" sounds less like Code Climate being wrong and more like you not having found good refactoring techniques, maybe.

I see several things that I think have a good chance of helping readability: extracting the whole thing to an object, then replacing the inline error handling with an error handling method ("begin" is a smell), extracting (and naming) whatever happens on each iteration of the "data" and so on.


"The AR callbacks are wonderful for coordinating the domain model. Often times you'll want to create auxiliary objects when something else is created. That's what's it's there for. Or to otherwise keep the integrity of the domain model in place."

No, they're not. They're great for coordinating activities around your database. That's not a domain model, unless your domain is databases. Conflating domain with database through the use of active record doesn't mean that structural models are domain driven, and events based around database interactions aren't going to be any more domain driven than the active record objects themselves.


Please take a second to think about who you're saying "No, they're not" to. You'd think he knows why he put AR callbacks into AR, regardless of what you might think they're there for.

The whole idea of ActiveRecord is that you make your models with a clean interface to persistence. Among the features of that interface are the callbacks. They are there for your domain model to make use of; that's the whole story. If persistence is not part of your domain, why are you extending ActiveRecord?
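The lifecycle-hook idea can be sketched in a few lines of plain Ruby (an emulation, not Rails' actual implementation; Record, Signup, and the audit entry are invented): persistence lives in the superclass, and the domain subclass registers callbacks against it to keep auxiliary objects in sync.

```ruby
# Minimal persistence superclass with one lifecycle hook.
class Record
  def self.after_create_callbacks
    @after_create_callbacks ||= []
  end

  def self.after_create(method_name)
    after_create_callbacks << method_name
  end

  def save
    persist
    # fire the hooks registered on the concrete class
    self.class.after_create_callbacks.each { |m| send(m) }
    true
  end

  private

  def persist
    # stand-in for the actual INSERT
  end
end

# The domain subclass uses the hook to keep an auxiliary object in sync.
class Signup < Record
  attr_reader :audit_entries

  after_create :create_audit_entry

  def initialize
    @audit_entries = []
  end

  private

  def create_audit_entry
    @audit_entries << "signup created"
  end
end
```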


Please. I'm well aware of who the parent was. I'm well aware of what his intent was. And I'm well aware not only of the strengths of the active record pattern (which I implemented in at least two different languages five years before he put Rails out there—as did many, many others) but also its inherent weaknesses.

You keep using the term "domain model" to refer to active record objects, which makes it clear that you've bought into the Rails appropriation of the term. AR objects are not domain models. If they were, the persistence would be handled elsewhere. Compared to a real domain model, AR objects are just a short step above using a naked database driver itself (and not necessarily an improvement thereof.) So to answer your question ("why are you extending activerecord?") its because you're taking a shortcut to dumping data into your database, and failing to create a real domain model while you're at it. That's a path that works ok for plenty of apps—but you want to know why rails has a reputation as being great at prototypes and shitty at scaling complexity? Because ActiveRecord is shitty at scaling complexity, and that's a hell of a lot more relevant to the vast majority of applications than scaling performance.


Oh I insist.

"If they were, the persistence would be handled elsewhere."

Persistence is handled elsewhere; it's handled in the superclass. If you think that's not a good idea, you shouldn't be using ActiveRecord, because that's the way it works.

You could argue there's not enough abstraction from the DB in AR, but that's a different discussion. Here we discussed whether the persistence callbacks were an appropriate tool in the domain model, and I reckon they are.

Also, any fool knows that putting lots of code in a single file is not a good way of scaling complexity, so yeah at some point you're going to need to break your code out in concerns and what not. ActiveRecord most certainly does not stand in the way of that.


Jesus christ. You aren't seriously arguing that handling it in a parent class removes it from the AR object's responsibilities, are you? In point of fact, that inheritance pattern is a bad idea, and I don't use it when I'm implementing an actual domain model, but that's seriously irrelevant.

1) I did argue exactly that; 2) it follows from (1) that an evented mechanism based on the database lifecycle of such objects is about the persistence of said objects, not about the domain.

Lastly, if you think code complexity is about how fat your models are, I urge you to get out of "the rails way" for a while and do some heavy reading. Moving code into concerns is just geography, and having lots of code in a single class is merely the most visible symptom of complexity.


No. I was not arguing anything about it except that it is a proper abstraction.

The persistence is part of the domain. You need to know when your objects are persisted and when they are not; this can be an intrinsic part of some of your business logic. Thus the events are relevant to your domain.

Of course not all your business logic needs to be aware of persistence, and nowhere do you see me arguing that all business logic should be in concerns included into the AR class.

Don't put words in my mouth, and don't assume I am stupid. You are arguing in a rather ad-hominem way, and it diminishes your arguments.


Fair enough—I was being a bit of an asshole. However, when you start your contribution to the thread by insinuating DHH is above reproach, I'm not filled with sympathy. Especially when his architectural hipsterism is IMHO responsible for ruining a couple microgenerations of developers who learned to code through rails.

I can't be the first guy who's put his head through a wall upon hearing you say persistence is part of the domain. The only time persistence is domain is when your domain is the data store. "But AR is the database"—totally correct: the domain of AR is the database. But that is almost never the same as your business domain, and trying to mix the two creates a great deal of incidental complexity.


To be fair, a great many web apps can justifiably say "my domain is the data store." In fact, whenever I use Basecamp, that's exactly what strikes me almost immediately. Stock Rails tends to be a great fit in those cases; your business objects are bags of data. Of course, those projects don't happen to be all that interesting to work on technologically, but I wouldn't want to work on MS Access "apps," either.


That's incidental isomorphism between the two, and it's different from your domain being your data store. You're right, there's a great many apps that can get away with it for a while. And for those apps that fit in that mold and just kind of stall out or never grow, more power to them. Any app that continues to grow in terms of features or scale WILL break out of that mold—unless they're tethered so tightly to it that it becomes an anchor (which, let's face it, is most Rails apps in the wild.)


AR is not just about persistence. AR is an object-relational mapping which maps an object in your code to a row in a relational database table. It is an adapter between two different ways of representing data. But don't forget that objects are not just data; they are also behavior. They are your domain. The 'object' comes first and the 'relational' comes second. AR encapsulates your data and makes database access transparent, but it still represents your domain.

Martin Fowler's Catalog of Patterns: http://www.martinfowler.com/eaaCatalog/activeRecord.html


Yup. Hence "services", as soon as your domain starts to get complicated. You need an abstraction layer between your application flow control (controllers) and your persistence layer (AR, or whatever). At least, that's what I'm given to understand about this. Does that sound right?


Concerns are just a complicated way of putting lots of code in a single class. The interface is the same if you put everything in the same file or if you break it out into a dozen mixins/concerns.


>> If you put this logic in the controller, what happens when you want a separate API controller that does the same thing?

You refactor.

In my experience, anytime you're writing something for the sake of "possible code re-use", you're wasting time. Code should and does get refactored often. By adding levels of indirection from the outset, you add a barrier to refactoring and likely additional unnecessary code.


I bet dhh is going to say something like "then refactor your controller into a service/interactor/etc., but not before".

YAGNI, and all that.


The example is poor, but I think you're missing the core point here -- slow tests. I don't think this really improves readability or organization much -- but my real-world experience with a giant, slow Rails test suite gives me 100% confidence that patterns like this are the only way to avoid a completely intractable giant test suite.


Definitely agree. Would love to see DHH commenting on the slow tests problem.


How long does the Basecamp test suite (excluding any integration tests) take to run?


Large - yes, but with an extremely simple domain model (and better for it).


Perhaps it's because the book is written with examples in Java, but I often feel like no one in the Rails community has ever read Eric Evans' Domain-Driven Design[1]. It's far and away the best material I've ever seen on how to organise large code bases. It covers pretty much every suggestion I've seen from the Rails community. Sometimes the Rails community can feel like the fitness industry: everybody just rebranding things that have been done before.

[1] http://www.amazon.com/Domain-Driven-Design-Tackling-Complexi...


I would argue that both the Rails community and the fitness industry may operate that way, but for good reasons. Any time you have an ongoing and significant influx of beginners looking to hit the ground running, bad practices will be everywhere. A handful of established "bibles" or textbooks isn't going to be enough to get the message out to the masses. It may not be appealing to those already "in the know", but good-practice messages need to be repeated and communicated in new ways. You also see the same dynamic at the gym, with the experienced gym-rat lamenting the things that personal trainers get paid to teach newbies.

In that same vein, this may be a repeat message of a concept that is not novel, but as an experienced non-web developer who is fairly new to Rails, I learned something from reading this.

I also appreciate you pointing out what sounds like a good development resource (Domain Driven Design).


Sure - I don't think anyone in the Rails world is claiming to have invented these principles.

The problem is that Rails ships with a very limited set of core architectural concepts, and many inexperienced Rails developers feel like they've got to cram all of their code into a Model, View or Controller.

Once your codebase reaches a certain complexity, principles from other programming paradigms are extremely useful.


> "The problem is that Rails ships with a very limited set of core architectural concepts, and many inexperienced Rails developers feel like they've got to cram all of their code into a Model, View or Controller."

The problem is that people think Rails is an architecture to begin with. Rails is just a framework that uses the MVC pattern (mangled slightly to fit the realm of HTTP). In the end, MVC is nothing but a directory structure for our files. What you put in those files and directories is up to you. Uncle Bob gave a great keynote on this called Architecture: The Lost Years. Here's a link: http://www.confreaks.com/videos/759-rubymidwest2011-keynote-...


You have to realize: Rails is incredibly populist in that it's "good enough" architecture for many devs. It's not terrible, but it's not at all the same as learning basic architectural principles for building apps. Devs don't stray outside of it much, they just deal with it when it gets to be a problem.

In this way, we've successfully commoditized another differentiating factor of developers so we can ship on Internet Time(tm).


Yes, that's a wonderful talk


Interactors, domain objects, service objects, etc. are still models; people just don't generally believe that their models are allowed to inherit from things other than ActiveRecord::Base.


They're not models; they're objects. A model is an object, but not all objects are models. The difference is what sort of role they play in your app.

A Service Object could just echo "hello world" every 15 seconds and still be called a Service Object, while a model in your application doing the same thing could no longer be called a model.


I keep looking at interactors and thinking "I'd make this a command object and then hide it in the model" - so I'd be doing something like (inside the 'member' class)

    method confirm_grouper () {
      MyApp::Action::ConfirmGrouper->new(leader => $self)
                                   ->run
    }
and then the controller would simply do -

    my $result = $member->confirm_grouper;
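For readers more at home in Ruby (the snippet above is Perl), the same command-object-hidden-behind-the-model idea might be sketched like this, with all names hypothetical:

```ruby
# The command object encapsulates the action...
class ConfirmGrouperAction
  def initialize(leader:)
    @leader = leader
  end

  def run
    "confirmed #{@leader}" # stand-in for the real work
  end
end

# ...and the model hides it; callers never see the command object.
class Member
  def initialize(name)
    @name = name
  end

  def confirm_grouper
    ConfirmGrouperAction.new(leader: @name).run
  end
end
```

The controller then just calls `member.confirm_grouper`.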


The problem with your assumption is that inexperienced developers do not understand architectural concepts.

So you could cram every design pattern known to man into the framework and they would still put everything in their controllers.

APIs are hard to discover, and it takes time, through trial and error or being taught, to learn the way things are done.


To be fair, Rails does have concerns which let you easily compose your models out of modules, rather than having big god-object models, and it is easy to load other arbitrary collections of code too. So it is not really limited to MVC.

For beginners, I'm not sure it would be helpful to introduce a whole load of named patterns, as it just leads to cargo-culting and overuse of patterns without understanding whether they even apply.

It was interesting to read about a different approach though - thanks for the article.


The problem with concerns is that they don't actually break down the god classes, they just separate the pieces of the god class into different files and combine them at runtime.

The important metric here isn't "wc -l app/models/god.rb", it's "God.public_instance_methods.length".
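That metric is easy to check: splitting a class across modules shrinks each file, but the public interface stays just as wide. A tiny illustration in plain Ruby (God and the mixed-in modules are made up):

```ruby
# Two "concerns" (plain modules standing in for ActiveSupport::Concern):
module Billable
  def invoice; end
  def refund; end
end

module Notifiable
  def notify; end
end

# The god class's own file now looks tiny...
class God
  include Billable
  include Notifiable
end

# ...but every mixed-in method is still part of its public interface:
puts (God.instance_methods - Object.instance_methods).length # => 3
```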


You don't have to use them for that; you can use them for composing models out of shared bits of functionality. For example, if 5 of your models have a status, put all the code about statuses in one place in a concern and include Status in your models (and possibly do similar for controllers) - this cuts code duplication.

There is also an argument for making models smaller by splitting some of their functionality into modules too (what you're talking about), but I think that's a weaker case - better to use concerns for shared code.
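The shared-status example can be sketched with a plain Ruby module (ActiveSupport::Concern adds `included`/`class_methods` conveniences on top of this, but the composition principle is the same; Statusable and its statuses are invented):

```ruby
# Shared status behaviour lives in one module...
module Statusable
  STATUSES = %w[draft published archived].freeze

  def status
    @status ||= "draft"
  end

  def publish!
    @status = "published"
  end

  def published?
    status == "published"
  end
end

# ...and each model that needs statuses includes it instead of
# duplicating the code:
class Post
  include Statusable
end

class Event
  include Statusable
end
```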



In my first Rails job, I was handed a copy of this book to read. I would agree with your comments, although I cannot speak for the Ruby Community as such. Well worth reading.


There's also a condensed version for free over at InfoQ [1] (requires registration, but definitely worth it).

[1] http://www.infoq.com/minibooks/domain-driven-design-quickly


Probably because in most cases the domain model doesn't get that complex.


I've spent more time thinking about clean architecture and design than most and the conclusion that I've come to is while Ruby gives you all the tools to write clean code and great architecture, it doesn't offer the best tools to do that job.

Ruby and Rails feel best as a prototyping platform for getting something out the door fast, proving a concept, and not worrying so much about correctness or maintenance.

I don't think if you are doing a lot of TDD and larger systems that piling on more Ruby and Rails is the right answer. I think once you know what you are working with, a well designed language with a compiler is a huge help and would remove a ton of useless tests and stuff that you end up writing in Ruby by hand.

This very likely leads to an API driven system with an API written in a strongly typed, compiled language like C#, Java, Scala, Haskell, or Go and writing your front end in whatever makes the most sense for your team.

At that point you get the benefits of a nice rapid development platform like Rails for your front end, and a fast, testable, API written in something else using all the clean code architecture you want.

The trick is, you do everything ugly in Rails or PHP or whatever in your initial prototype and you might not even write tests. You just ship working code until your prototype has proven a business case. Then, you move it towards something with lower TCO over time. Make the investment in your code when it makes sense to invest in it and use the best tool for the job at each step.

You probably never need to leave the standard Rails MVC stuff on most projects unless they are wildly successful and/your long term needs change. Even then, you can probably keep the rails front end stuff and just talk to an API and be very happy.


I agree completely that the Interactor pattern makes for cleaner Models and Controllers in a Rails codebase. I've used both the Interactor gem and PORO (in a directory outside of /app or /lib).

Having worked with the Interactor gem for a little while once you break things down into small interactors that can be used by the Organizers, I have two main complaints.

1) Inputs are unclear. When calling new or some custom class method, you can use descriptive variable names to say what the object expects to be present to do its job. With the Interactor gem you end up adding comments describing what keys need to be present in the context and what keys will be added to it, so the next programmer can use the interactor you've created without having to go through and grok everything.

2) You end up having to (re)create a failure protocol to communicate with the controller and display errors to the user. We take the AR errors functionality for granted in our controllers/views; with interactors you have to come up with a similar system.

2.5) As a result, you end up writing a lot of boilerplate fail! and rollback code.

2.5.2) And a non-atomic operation, like notifying the payment gateway, can break the whole model of rolling back, so you end up having to raise so your user doesn't land in an invalid state or get charged twice.
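The failure protocol in (2) can be seen in a dependency-free sketch; this PORO only mimics the shape of the interactor gem's context/fail!/success? API (class names and error strings below are invented):

```ruby
# A hand-rolled "context" object, standing in for what the interactor gem provides.
class Context
  attr_reader :error

  def initialize(attrs = {})
    @attrs = attrs
    @failure = false
  end

  def [](key)
    @attrs[key]
  end

  def []=(key, value)
    @attrs[key] = value
  end

  def fail!(error)
    @failure = true
    @error = error
  end

  def success?
    !@failure
  end
end

class ChargeCard
  def self.call(context)
    # Inputs are implicit: nothing in the signature says :amount is
    # required, which is complaint (1) above.
    amount = context[:amount]
    if amount.nil? || amount <= 0
      context.fail!("amount must be positive")
    else
      context[:receipt] = "receipt-for-#{amount}"
    end
    context
  end
end

ctx = ChargeCard.call(Context.new(amount: 25))
puts ctx.success?   # => true
puts ctx[:receipt]  # => "receipt-for-25"

bad = ChargeCard.call(Context.new)
puts bad.success?   # => false
puts bad.error      # => "amount must be positive"
```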


Agreed, especially on (1). I don't love the lack of explicit inputs.

We've got a convention that all Interactors must be commented to specify what inputs they take.


Along the same line but heavier weight than 'Interactor' is 'ActiveInteraction'[1].

It responds in much the same way ActiveRecord does, allowing validations, errors, forms built from interaction objects, etc.

[1] http://github.com/orgsync/active_interaction


We tried using DHH's concerns in place for interactors, but ditched them for PORO/service objects because they can be tested outside Rails.


Yeah - I feel like concerns are a bit of an anti-pattern, especially as they're normally used in Rails.

You've got a God class that exposes 500+ public methods, so you split it into concerns. But now you've still got a God class with 500 methods that's split across 10 files. Good luck understanding that.

Concerns are useful when they're genuinely sharing functionality between classes - not just for splitting up "Big Bags O' Methods".


> Concerns are useful when they're genuinely sharing functionality between classes - not just for splitting up "Big Bags O' Methods".

Hmmm. I'd say it depends.

Of course having concerns (a fancy word for modules, IMHO) shared between classes is great. But I've also found cases where separating a god class' behaviour into different concerns helped a lot. The important thing in those cases was recognizing and avoiding dependencies between different concerns, so instead of ending up with a god class split across many files of interdependent code (i.e. worse than before), you end up with a "god" class split into its minimal "core" part and a couple of concerns that only depend on that core part, not on each other. Yes, you could still call that a god class, as it will have a very fat public API, but at least you can reason about what it does in terms of different concerns, so you gain something :)


> Concerns are useful when they're genuinely sharing functionality between classes

I've found them genuinely useful for that - there is a lot of code for things like urls, publish status, ratings, roles, permissions etc that can be shared between models if they have similar functionality.

They're just a recognition that composition via modules is often better than inheritance.


I don't like Concerns because usually they are just hiding behaviors in another file (I do like routing concerns though!). You can test Concerns outside of Rails though, all you need is ActiveSupport which actually loads quite fast.

    require 'active_support/concern'
    require 'my_cool_concern'

    class DummyClass
      include MyCoolConcern
    end


This assumes that MyCoolConcern doesn't depend on too many things in your main model class or in (god forbid) other concerns that you've included in that model.

Or, perhaps, to turn this argument around, it's better to say it this way: A Concern is poorly-constructed if it cannot be tested in isolation, so therefore something like MyCoolConcern should be written with its testability (inside of DummyClass) in mind.


One thing I really enjoy with this post is the conclusion. It doesn't try to tell you it's the only way, just that it's the way they found to fix their problem with the constraints they had, as it should be.

Rails is easy to extend, people often forget that. Great post.


We've also come across the limitations of Rails as a one-size-fits-all solution for larger codebases. I don't think this is a fault of Rails, but more of the developers using the framework. Software engineering in general has more focus on a wide variety of design patterns and doesn't limit itself to one framework. In my opinion, Rails is the best framework to create wonderful web applications. Modeling business logic requires different kind of architectures than the simple MVC-ish structure Rails provides.

At MoneyBird we are using the Mutations gem to represent something like the interactors mentioned in this article. One major advantage of Mutations is that it also does input filtering.

https://github.com/cypriss/mutations
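The input-filtering idea can be sketched without the gem (the real Mutations declares filters in a `required do ... end` block; the command class and fields below are invented for illustration):

```ruby
# Hand-rolled input filtering in the spirit of a Mutations command:
# only whitelisted, type-checked inputs reach the business logic.
class SignupUser
  REQUIRED = { email: String, name: String }.freeze

  def self.run(inputs)
    errors = REQUIRED.each_with_object([]) do |(key, type), errs|
      errs << "#{key} is missing or not a #{type}" unless inputs[key].is_a?(type)
    end
    return { success: false, errors: errors } if errors.any?

    { success: true, result: "created #{inputs[:name]} <#{inputs[:email]}>" }
  end
end

puts SignupUser.run(email: "ada@example.com", name: "Ada")[:success]  # => true
puts SignupUser.run(email: 42, name: "Ada")[:errors].inspect
```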


I've been looking into incorporating this pattern into my larger Rails apps as well. Another benefit of interactors is DRYing up your code for use in APIs. Some of the more popular interactor gems:

https://github.com/orgsync/active_interaction

https://github.com/cypriss/mutations


If you need a gem to implement some of these complementary patterns, you're doing it wrong. Draper, Naught, DisplayCase and all of the other gems of patterns are over abstraction 99% of the time. Implement the pattern yourself. Most of these Presenter and Service object patterns are different variations of the Decorator pattern. Ruby happens to ship with 3 great ways to implement decorators: SimpleDelegator (my personal favorite)[0], Delegator[1] and Forwardable[2].

0 - http://www.ruby-doc.org/stdlib-1.9.3/libdoc/delegate/rdoc/Si...

1 - http://www.ruby-doc.org/stdlib-1.9.3/libdoc/delegate/rdoc/De...

2 - http://www.ruby-doc.org/stdlib-2.0/libdoc/forwardable/rdoc/F...
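For instance, a stdlib-only decorator with SimpleDelegator might look like this (the model and presenter names are invented):

```ruby
require "delegate"

# A stand-in model; in a real app this would be an ActiveRecord object.
class Post
  attr_reader :title, :votes

  def initialize(title, votes)
    @title = title
    @votes = votes
  end
end

# Presentation logic lives in the decorator; every other call is
# transparently delegated to the wrapped Post.
class PostPresenter < SimpleDelegator
  def headline
    votes > 100 ? "[HOT] #{title}" : title
  end
end

post = PostPresenter.new(Post.new("Rails - The Missing Parts", 205))
puts post.headline  # => "[HOT] Rails - The Missing Parts"
puts post.votes     # => 205 (delegated to the wrapped Post)
```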


The biggest pain points that gems like Draper solve are allowing view helpers in your decorators (which is by no means straightforward when you get into routing helpers), and dealing with some of the strangeness that you get into by wrapping ActiveRecord models (JSON serialization was a biggie for us). It's less about what Ruby gives you, and more about navigating around some of the darker, messier corners of Rails.


They just wrapped the view_context with a method_missing proxy (another form of a Decorator) that the controller instance gives you. You can accomplish the same very easily using dependency injection in your controller action:

    MassivelyUpVotedPostPresenter.new(@post, self.view_context)

Or just inject the whole controller instance and you have your router helpers too!

    MassivelyUpVotedPostPresenter.new(@post, self)


Thanks for the shout out! I'm one of the developers of ActiveInteraction. Since most of the interactor gems are pretty similar, here's why you might prefer ActiveInteraction over Mutations, Interactor, et al.:

- Built-in ActiveModel validations.

- Designed to be used with form objects.

- Easier and more flexible composition.

- Translatable with I18n.

- Quacks like an ActiveModel.

That being said, Mutations is a great library for PORO interactors and ActiveInteraction wouldn't exist without it.


Mutations is a great gem. I've used it in my latest project and it has been an exceptionally useful tool for managing complexity.


I've found a happy balance developing with rails by adding two strategies:

1) Use presenters for display-only logic to keep controllers concerned with managing requests only.

2) Use service objects / custom classes in /lib for actions on models, as well as for abstracting any common related functionality: Creators, Updaters, Processors, Orchestrators, etc. Keep your models concerned only with data, not data transformation, and keep your controllers from doing it as well.


I have documented my strategy for moving from a simple MVP to a maintainable and complex Rails app. Basically I look at factors like how many objects are affected by an operation, and how complex the additional processing is. TL;DR: Use Rails magic when possible, and Object Oriented best practices when necessary.

I created a flowchart that illustrates this strategy: http://rails-recipes.clearcove.ca/pages/how_to_change_object...


Yes, definitely.

One of the future posts (coming soon!) will be on Presenters / Decorators.

I would argue these custom classes should still live in /app, rather than /lib, but I may be wrong.


This is great, and doesn't only apply to Rails. In many other frameworks it's common to lump business logic into controllers instead of models. Either way you end up with the same thing: eventually you'll need to add a 4th (at least) type onto MVC.

> Here at Grouper, we’re long-time users of Ruby on Rails – along with other New York startups like RapGenius, Etsy and Kickstarter.

Etsy doesn't use Rails though, it uses PHP.


Fixed, thanks


Can anyone recommend a github repo of a Rails app implementing interactors or any of the similar concepts listed in the comments?



How about just "features"?

Maybe there are some "user" features (login, logout, change name) and "cart" features (add/remove item in cart).

These features would all live horizontally, be able to call each other (design your object graph wisely!) and would never be able to inspect or elaborate on implementation details!

These features may or may not talk to some database for its own persistence, but that's up to the functions.

And the features are pure functions in your language of choice. They have no knowledge of the web, or requests, they're perfectly transient. They act on a "state" object that you pass around to them, which in tests is just a blob of memory and in production is a real database or wherever else you store state.

It's simple, yet so powerful. This is how we do it, and it's scaling very nicely.
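A rough plain-Ruby sketch of that style, with the state as an immutable hash (all names are invented; a production version would back the same interface with a real datastore):

```ruby
# "Cart" features as pure functions: no knowledge of the web or requests,
# each takes a state object and returns the next state.
module CartFeatures
  module_function

  def add_item(state, user_id, item)
    carts = state.fetch(:carts, {})
    cart  = carts.fetch(user_id, [])
    state.merge(carts: carts.merge(user_id => cart + [item]))
  end

  def remove_item(state, user_id, item)
    carts = state.fetch(:carts, {})
    cart  = carts.fetch(user_id, [])
    state.merge(carts: carts.merge(user_id => cart - [item]))
  end
end

# In tests, state is just a blob of memory:
state = {}
state = CartFeatures.add_item(state, 1, "book")
state = CartFeatures.add_item(state, 1, "pen")
puts state[:carts][1].inspect  # => ["book", "pen"]

state = CartFeatures.remove_item(state, 1, "book")
puts state[:carts][1].inspect  # => ["pen"]
```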


As always, the question is whether this should be in Rails or not.

I think the answer is clear: this should not be part of Rails, since it doesn't hold true for all apps.

I think that if you want to have something real quick, you don't need an interactor/service class.

When you have a bigger app, you definitely need it, or you will get to a point where your code is split across models/observers/callbacks/lib/app/concerns and you can't find anything.

Rails generators are pretty easy to extend, this way, when you generate a new model, you can easily create the service class for it.

You can also EASILY create new generators that will generate the service classes for you with your defaults and templates and what not.

Not sure Rails is missing that, however, it is definitely a best practice that people with bigger apps should use TODAY.


Mailers, concerns, helpers, observers and a lot of other things in Rails aren't universally used by all apps. Even something as simple as an empty directory and some generators in a stock Rails app would go a long way in educating people that these are patterns that you can use / provide a common language for where these sorts of things should go and what we should call them.


I recommend this article from the guys from Code Climate, it is really mind-opening: http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decomp...

The author describes different ways of refactoring fat ActiveRecord models, but most of the ideas can be used outside ActiveRecord, as "best practices".

I have been using some of the patterns described in it and I am really happy with the results. The code becomes more clearly decoupled, is easy to test and results in clean, slim models.


When reading DHH's comments it's important to remember that the domain model for Basecamp is extremely simple. It's a brilliant piece of software that's been well thought out, but the underlying domain concepts are extremely basic.

I wonder if he's ever worked with an app that has a complicated domain.

His comments taken from this perspective make a lot of sense. Rails was written for Basecamp.

For large apps with real complexity, the rails way falls short.


"The nail in the coffin is ActiveRecord callbacks; before_save hooks on one class that modify the state of other objects."

before_save can modify the state of other objects. It doesn't have to. One might use it to modify only the state of the current model. You are reaching a bit, blaming the callback, when what you are really concerned with is an awkward and potentially dangerous use of the callback.
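A plain-Ruby sketch of the benign case, where the callback only touches the record's own state (in a real model this would be a `before_save :set_slug` declaration on an ActiveRecord class; the Article class here is invented):

```ruby
class Article
  attr_accessor :title, :slug

  def save
    set_slug  # the "before_save" step
    true      # then persist (elided here)
  end

  private

  # Touches only self -- no other objects change as a side effect.
  def set_slug
    self.slug = title.to_s.downcase.strip.gsub(/\s+/, "-")
  end
end

article = Article.new
article.title = "Rails The Missing Parts"
article.save
puts article.slug  # => "rails-the-missing-parts"
```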


What's the difference between Interactor and Wisper? Wisper does a better job in your example.

Interactor: https://github.com/collectiveidea/interactor

Wisper: https://github.com/krisleech/wisper


A good litmus test of an experienced Rails developer is how large their /lib directories are in relation to project sizes.


By this I hope you mean that the smaller it is, the better.

/lib is intended for non-app specific code. The halfway house before code is extracted into either a Rails PR or a separate gem.


Even better is realizing that you can put whatever you want into /app, and having /app/presenters, /app/decorators, and /app/services directories in your application.


Is a large /lib a sign of experience or inexperience? I try to keep as much as possible out of /lib and moved into gems.


Inexperience. lib/ quickly turns into a dumping ground unless you are careful to keep it clear of business logic.


I wrote a blog post along similar lines [1]. The benefits are clear to me: nearly 200 tests run in about a second. For that reason alone I wouldn't consider coupling domain logic with ActiveRecord.

1. http://www.smashingboxes.com/domain-logic-in-rails/


One thing I can't figure out is whether to place my service objects under `/app/models` or under `/lib`. There doesn't seem to be a clear consensus on this? I tend more towards the former, because autoload works better during development. Also, I consider them part of my domain model. What do you do?


There's nothing stopping you from creating extra folders under /app to contain various types of service objects. Personally I have:

  /app/factories - for classes which create active record objects
  /app/services - for classes which do something I generally don't care about the return value of. Often perfect for background jobs.
  /app/decorators - most of these are Draper classes
  /app/forms - form objects
and a few others from various other gems.


  /app/interactors
It's a trivial thing to add new folders like this to autoloading:

  # add to application.rb
  config.autoload_paths += %W(#{config.root}/app/interactors)



I use /app/interactors or /use_cases (per some Uncle Bob talk about architecture). I don't consider them models in the 'Data Model' sense so I don't put them in /app/models


Anything that is application-specific, we put in /app

Anything that's generic and re-usable, we put in /lib. This code is normally a prime candidate for gems - only very simple code stays in /lib for very long.


I have app/services and just add that in application.rb to the load path.


You don't even need to add that to the load path as of Rails 4 - we have app/validators, app/services, app/forms, app/inputs etc., all autoloaded with no config.


If anyone's interested in a more real-life example, then this post presents a Rails controller refactoring from the Redmine project.

http://blog.arkency.com/2014/02/rails-refactoring-the-aha-mo...


The first missing part: A language with a sane syntax. For some examples of the multitude of ways in which Ruby syntax is awful, see [1].

The second missing part: Ruby is Rails. Trying to learn Ruby at the same time as a modern web framework with all its moving parts? Forget it.

The third missing part: Tutorials. Okay, I haven't done a rant on this. Google rails tutorial. The first helpful result I get [2], I scroll down. The first meat is "Setting the application home page". I see this:

    Blog::Application.routes.draw do
      get "welcome/index"
    end
Huh? "This is your application's routing file which holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions." So it's not even Ruby or some recognized Web glue language like HTML or CSS; it's some domain-specific language? Okay, I totally don't get it, so as suggested, for more details, I should refer to "Rails Routing from the Outside In" [3].

This is even more confusing. It says you should do something like this:

    get '/patients/:id', to: 'patients#show'
Huh? What's the deal with the colons and octothorpe? I can guess colon is the signifier for an ID in the URL, but why the pound sign for #show?

    get '/patients/:id', to: 'patients#show', as: 'patient'
And what's the deal with the colons?

I could go on and on. I'm sure that with a few hours of pain and frustration, I would be able to figure out exactly all the oddities of Ruby syntax, or the template language, or the DSL whatever, and understand this example well enough to extend it.

But that's not the point. The point is that, because I have to spend those hours, it means that Ruby/Rails is poorly designed. In Django, by contrast, things are almost always simple and obvious.

[1] https://news.ycombinator.com/item?id=5872899

[2] http://guides.rubyonrails.org/getting_started.html

[3] http://guides.rubyonrails.org/routing.html


Ruby has a sane syntax. It probably seems a little foreign if you're new.

'#' is the standard notation used throughout Ruby documentation for instance methods (e.g. Array#map).

Think of it like this:

  Blog::Application.routes.draw do |implicit_self|
    implicit_self.get( '/patients/:id', { to: 'patients#show', as: 'patient' } )
  end
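And the octothorpe is just a conventional separator: conceptually, the router splits 'patients#show' into a controller and an action, something like:

```ruby
# What 'patients#show' means to the router, conceptually:
controller, action = "patients#show".split("#")
puts controller  # => "patients"
puts action      # => "show"
# i.e. dispatch the request to PatientsController#show. The ':id' in the
# path marks a URL segment captured as params[:id], and the other colons
# (to:, as:) are ordinary Ruby keyword-style hash syntax.
```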


I like the interactor gem but don't understand why the method activating it is "perform".

Why isn't it "call" so it can be swapped out with a lambda, or method reference without hassle?


As in this article. http://blog.arkency.com/2014/02/rails-refactoring-the-aha-mo...

Services should be run with call.


It sounds similar to what Lotus is trying to do.

http://lotusrb.org/


How do you even know about this? It's so little-known. Thanks :)


I thought Etsy uses PHP. Could be wrong.


I can confirm that Etsy.com is primarily PHP. We have no externally facing Rails projects. We have some internal tools on Rails.

-- spike@etsy.com


Sorry, fixed!


I would love to take a glimpse on the Basecamp code, dhh ;)



