Hacker News
IF-less programming (github.com)
64 points by alisnic 1757 days ago | 114 comments



This is getting ridiculous

" ifs are a smelly thing in a Object Oriented language."

What's smelly is such an absurd assumption. Yes, let's replace every small decision in our program with inheritance and another level of abstraction; that will work out swell.

Please go read an Assembly manual, because you have obviously forgotten what a computer is and does.

And then we end up with unmaintainable systems with a class hierarchy higher than the Empire State.

Of course, as mentioned, replacing an if with a "software structure" (like pattern matching in a functional language) is beneficial, but to call it a code smell is ridiculous.


> And then we end up with unmaintainable systems with a class hierarchy higher than the Empire State.

Perhaps, but I would rather have testable code than untestable code that is nothing more than a series of unnecessary, nested if blocks. We have all seen code that has a structure like

    if (depObj1.isSomething()) {
        if (depObj1.getAttr() == CONST1) {
            // Do something
        } else if (depObj1.getAttr() == CONST2) {
            // Do something different
        }
    } else if ... {
        // repeat with minor differences
    }
Most of the time the above structure can be abstracted. Abstracting it also has the benefit of making this code more testable: you don't have to set up many different objects, just one.
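
For illustration, here is a minimal, hedged sketch (class and constant names are hypothetical, written in Ruby rather than the Java-style pseudocode above) of how such nested conditionals might be abstracted so each branch is testable in isolation:

```ruby
# Each branch of the nested if/else becomes a small object with one job.
class Const1Handler
  def handle
    :did_something
  end
end

class Const2Handler
  def handle
    :did_something_different
  end
end

# A lookup replaces the if-chain; adding a case means adding a class,
# and each handler can be unit-tested without the full depObj1 graph.
HANDLERS = { const1: Const1Handler.new, const2: Const2Handler.new }

def process(attr)
  HANDLERS.fetch(attr).handle
end
```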

The OP is correct: in OO languages if statements are a code smell. Like all smells, though, they are not hard and fast rules, but rather indicators that things could probably be done better. No one, certainly not OP, is recommending abandoning if statements. What he is recommending, and I agree, is that they can and often should be avoided, and certainly not turned to as the primary tool in the toolbelt.


As long as there is a mechanical transformation from one style to the other, this is not abstracting; it is merely shifting the syntax.

You do introduce more named test points, but you risk creating too many tests at too small a granularity, which will only slow you down for no good reason. Consider using a code coverage tool to help you navigate all the branches without explicitly naming them.

Keep readability in mind. Jumping around source files is costly for the reader; don't break the flow if you don't have to.


It's irrelevant whether you're using ifs or different objects: the number of conditions/tests needed for what you are testing is the same.

Now, you could make depObj1 of type Obj1 or Obj2 to replace the if, which may seem to simplify testing because the method is "simpler" in Obj1/Obj2/ObjBase, but in the end it doesn't!

Why? Because your tests of Obj1/2 have to take care of their dependency on ObjBase. It's usually a dependency hell, needing lots of workarounds to test properly.


I might agree if that type of if statement only appeared at one point in the code, but there are normally several scattered around, which all have to be updated when obj3 is introduced. Also, introducing a class hierarchy often makes it easier to move responsibilities between objects as the code evolves.

For example, I recently did some work on a graphics library which had a lot of ifs to do with line and fill styles. By changing those into an inheritance hierarchy I found a couple of places where clauses had been missed, and so fixed some bugs. More importantly, when we needed to add new fill styles which were more complicated to draw, it became very easy to turn the fill style objects into fillers which had all the responsibility for filling paths, not simply setting the graphics state.
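
A hedged Ruby sketch of the kind of refactor described (class names are hypothetical, not from the actual library): each fill style becomes an object that owns the whole filling responsibility.

```ruby
class SolidFill
  def initialize(color)
    @color = color
  end

  # The filler owns the operation, not just the graphics state.
  def fill(path)
    "#{path}: solid #{@color}"
  end
end

class GradientFill
  def initialize(from, to)
    @from, @to = from, to
  end

  def fill(path)
    "#{path}: gradient #{@from}->#{@to}"
  end
end

# Callers no longer switch on a style flag, so a missed clause in some
# far-away if-chain is impossible; a new style is just a new class.
styles = [SolidFill.new(:red), GradientFill.new(:red, :blue)]
styles.map { |s| s.fill("path1") }
```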


Yes, there are situations where changing the structure of the code is better than having an 'if' like the example you gave.

But this doesn't mean that it's better to completely eliminate ifs.


> "ifs are a smelly thing in a Object Oriented language."

It is partially my fault because I failed to express what I really meant. English is not my native language, so sorry.

What I really meant in this sentence is "why using (a lot of) ifs can be a bad practice in an Object Oriented language".


Ok, with that I can agree.

Or maybe better: how ifs can be replaced (sometimes) using OO principles.


My take on why ifs are "bad" is that they are statements, just like for, while, switch and others. Being statements, they can't be composed the way expressions can.


That's not universally true; there are languages where if is an expression rather than a statement (e.g., Ruby).


Most languages have a ternary if operator like (?:). I don't see how this is such a big issue.


To be honest, most of the ifs I write are usually error handling conditions so I sort of agree with the principle.

If you've been writing OO code for 10-15 years, you evolve into a model which is pretty much if- and block-free. It just sort of happens one day. It isn't decomposed into millions of classes either - just a data model and some kind of effector such as specification/Builder/visitor. SRP really only needs to go as far as the method level as well.

I don't use functional pattern matching either.

If your code reads like math, I expect that it's probably done right.


I know that this isn't probably the case you speak of when you say "If your code reads like math, I expect that it's probably done right.", but I have a huge tendency to minimize code size by converting everything to "math". I converted a few hundred lines of C# code full of if statements made by a junior dev into this (more or less):

  var minutesToAddToAlert = (timeToAddInMinutes) + (900 * ((timeToAddInMinutes + 
                            (currentTimeOfDayInMinutes - 510)) / 540)) +
                            (2880*(((currentTimeOfDayInMinutes + (1440*dayOfTheWeek)) + 
                            ((timeToAddInMinutes) + (900 * ((timeToAddInMinutes + 
                            (currentTimeOfDayInMinutes - 510)) / 540))) ) / 7200));

  var nextAlertDate = currentDate.AddMinutes(minutesToAddToAlert);
There is definitely a lot more to it than just this, but I don't like blocks of code that don't fit on my vertical 24 inch screens. So I always end up converting "logic" (if's and else's) to math. It's a pet peeve of mine (as is overusing ternary operators as my own personal way of doing haiku), and while I certainly decrease the vertical size of the code itself, it makes it a hell of a lot harder to understand for everyone else. I then have to write comments explaining what stuff like this does, and those comments can get rather obtuse depending on the time of day.

Everyone winces when they see my commits with this message: "Math. Converted X and Y to M-A-T-H. Math. :D"


I'd probably actually break it into TimeSpan arithmetic for the sake of clarity and the fact that it won't require comments or so many brackets:

  var accumulator = TimeSpan.Zero;
  // TimeSpan.Add returns a new TimeSpan rather than mutating, so reassign
  accumulator = accumulator.Add(TimeSpan.FromMinutes(currentTimeOfDayInMinutes));
  accumulator = accumulator.Add(TimeSpan.FromDays(dayOfTheWeek));
  // ... etc - can't be bothered to do the rest...
  var nextAlertDate = currentDate.Add(accumulator);
No magic numbers, no arithmetic and clear to anyone who understands the TimeSpan class (which is the all powerful lord of the fourth dimension in .Net).

Oh and I work on a 1280x800 screen on an old ThinkPad T61. If the function doesn't fit on that, I'm concerned :)


Oh, I would definitely agree with you just based on the clarity it provides (not even taking into account code size or elegance), but my coworker couldn't use TimeSpan because of certain domain specific criteria, so I took his word for it and didn't even give it any thought. I just asked him what the problem/challenge was, grabbed a piece of paper, did it with math, and then just copied it into the code. In my defense, while testing both methods my solution worked reliably, while his had some pretty weird results on some extreme edge cases. With the amount of nested If/Else's he probably missed a couple of things, as has been discussed on this thread previously.

Also, moving the logic to a pure arithmetic solution resulted in a procedure that takes a bit less than one third of my screen, while the old one was at least 6 or 7 pages of scrolling through nested conditionals.


Agreed, I wouldn't call ifs a code smell but rather OOP itself a code smell.

In any decent language, if is an expression, not a statement. There is no difference between if and a function.


Oh yes, we all live in the land of the ideal, where everything is functional and has no side effects, and we're blessed with parentheses.

There is a huge difference between an if and a function. Consider SICP where lazy evaluation is introduced (read it if you haven't). An if statement encompasses lazy evaluation as a native concept, whereas a function's arguments are always evaluated (an assumption for now - please carry on reading).

So consider an if statement as a control flow structure which automatically supports lazy evaluation and leaves the return semantics up to the user. This abstraction is clean from the highest level, right down to the CPU which uses conditional branching to perform the if operation. There is a 1:1 match all the way down.

The function on the other hand does not natively support lazy evaluation and enforces the return semantics. You then have to apply a lazy evaluation mechanism over the top (another layer of abstraction!). This layer of abstraction conveniently ends up using conditional branching to perform the if operation when it gets to CPU level. There is not a 1:1 match all the way down.

Big hint there: your functional if at the end of the day is just a compiler abstraction over a state machine which uses ifs.

As for OO, I think the industry has decided who won that battle.

But alas, no religious war is required. Use what tools work for you.


But for functional programming there's the lambda calculus. OTOH, there's no "OOP calculus".

"your functional if at the end of the day is just a compiler abstraction over a state machine which uses ifs"

Yes, that's the 'sad' part of it. In the end there's just the Intel/AMD/ARM chip and nothing more.


In what languages is if a function?


In Haskell/Ruby/CoffeeScript: a = if b then c else d

In Lisp: (setq a (if b c d))

Only Lisp implements it as a function (more precisely, a macro), but to the programmer there's no difference between using if and calling a function.

Of course, they don't compile into function calls in assembly code :)


It is called a "special form".

http://www.lispworks.com/documentation/HyperSpec/Body/03_aba...

IF can't be implemented as a function because of evaluation rules.


Ah, that is correct. You can't use if as a first-class function in Lisp.

I still thought you could pass macros around like normal functions in Common Lisp. The Lisp interpreter I wrote for fun could do that, since the function value is resolved before arguments are evaluated, allowing late macro expansion.


Also R. And I think we're mostly in agreement here. However, I'm of the opinion that more than just these languages (languages with lazy evaluation or macros) are decent, and that in Ruby, for example, there are differences between if and a function, as this example shows (my first lines of Ruby):

    def iff(cond, x, y)
      if cond then
        x
      else
        y
      end
    end

    puts iff(true, 1, puts('2'))
echoes

    2
    1
if may behave as a function, but you can't write a function that acts like if.

edit: R does have real lazy evaluation, e.g.

    > iff <- function(cond, x, y) { if (cond) x else y }
    > iff(TRUE, 1, cat('foo'))
    [1] 1
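
One hedged caveat to "you can't write a function that acts like if": in Ruby you can recover the short-circuit behaviour by passing lambdas, at the cost of an uglier call site (this iff is illustrative, not a real library function):

```ruby
# Delaying evaluation with lambdas: only the chosen branch ever runs,
# unlike the plain iff(cond, x, y) above, which evaluates both arguments.
def iff(cond, x, y)
  cond ? x.call : y.call
end

iff(true, -> { 1 }, -> { puts '2' })  # prints nothing, returns 1
```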


There is a difference to the programmer, and it's pretty important as far as "if" goes: if "if" were a function, its arguments would need to be evaluated before they were passed in, which defeats the purpose of "if", which is to conditionally execute only one branch.


You did say that only Lisp implements "if" as a function, but I thought I'd share that in Haskell "if" is syntactic sugar for a case expression. That is:

  if x then y else z
is sugar for:

  case x of
    True -> y
    False -> z
There have been suggestions to replace this sugaring with a proper "if" function, e.g.:

  if :: Bool -> a -> a -> a
  if True x _ = x
  if False _ y = y
to remove arguably redundant syntax, but it seems unlikely that this change would happen. See more here: http://www.haskell.org/haskellwiki/If-then-else


If is an expression in a lot of languages. The syntax you describe is identical to the "ternary statement" in C-style languages.


The ternary is an expression, not a statement :)

You can write: a = b ? c : d; in C/C++, you can't write a = if(b) { c; } else { d; }.


Oh, yeah. My mistake. Although what you describe is a lot more functional in style. You could probably emulate it in a language with higher-order functions.


LISP maybe?


Right, object oriented is not everything, sometimes ifs or switches are just fine.

I would suggest that decision tables, rule engines, workflows, DSLs, and even finite automata and other abstractions are better than having many, many objects just to avoid ifs :)


That's not really fair. The whole point of OO (at least dynamic dispatch) is that the dispatch system itself can evaluate a type condition, so you can rest a lot of weight on that implicit condition rather than having to write your own. If you have too many conditionals, maybe you aren't using that enough. Likewise, if you have list comprehensions or at least higher-order functions like map, reduce, and filter, having too many loops is a little questionable.

If we wanted to program in assembly, we would be programming in assembly.


Did you watch the linked talk by Misko?

This is not about getting rid of "if" (as he mentions in the talk), this is about structuring large software systems for maintainability, extensibility, testability, etc.


Functional pattern matching (more rigorous IFs) is open for functions but closed for extension. You can write as many functions as you like, but you need to modify all of them if you add a new data type.

OO polymorphism is closed for functions but open for extension. You have a fixed number of functions (methods), but you don't need to update all the call sites if you add a new data type - merely implement all the methods.

The requirement for what needs to be open and what can be left closed is what determines which choice is better.
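
The trade-off can be sketched in Ruby (names are hypothetical): a case statement is open to new functions but must be edited for every new type, while polymorphism is the reverse.

```ruby
# Closed for extension: adding a Triangle means editing this method
# (and every other case statement like it across the codebase).
def area_of(shape)
  kind, dim = shape
  case kind
  when :circle then Math::PI * dim**2
  when :square then dim**2
  end
end

# Open for extension: a new shape is just a new class implementing
# #area; existing call sites are untouched.
class Circle
  def initialize(r)
    @r = r
  end

  def area
    Math::PI * @r**2
  end
end

class Square
  def initialize(s)
    @s = s
  end

  def area
    @s**2
  end
end
```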

For example, a GUI framework is best left open for extension, because every application GUI usually ends up with specific widgets custom-designed for that app - sticking with standard widgets tends to make apps form-heavy and lacking in polish. But for the widgets to work in the framework, they need a fixed set of methods to be consistently manipulable.

A compiler AST is best left open for functions, because the majority of work on an AST is in algorithms that transform the AST. The language changes at a much slower pace than additional optimizations and analyses, and frequently new language additions can be represented in terms of existing AST constructs (where they are syntax sugar, effectively built-in macros). So having a more fixed set of AST node types is less of an issue.

Choosing one or the other on the basis of the orthodox religion of OO or functional programming, meanwhile, is just obtuse.


That's the classic "Expression Problem" (aptly named wrt its common appearance in ASTs).

Note that it's relatively straightforward to express the function-extensibility through the visitor pattern in an OO language; in a functional language, you could probably implement your own dynamic dispatch technique to get the type-extensibility.

In both cases, you'll end up with some boilerplate, depending on your language. E.g. for a complex language with lots of different AST node types, you'll end up with lots of tiny classes, mostly just implementing stupid constructors.
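
A minimal Ruby sketch of visitor-style function extensibility (node and visitor names are hypothetical): adding a new operation over the tree is one new class, but adding a node type means touching every visitor.

```ruby
class Num
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def accept(visitor)
    visitor.visit_num(self)
  end
end

class Add
  attr_reader :left, :right

  def initialize(left, right)
    @left, @right = left, right
  end

  def accept(visitor)
    visitor.visit_add(self)
  end
end

# A new "function" over the AST is just a new visitor class;
# the node classes themselves stay closed.
class Evaluator
  def visit_num(node)
    node.value
  end

  def visit_add(node)
    node.left.accept(self) + node.right.accept(self)
  end
end

Add.new(Num.new(1), Num.new(2)).accept(Evaluator.new)  # => 3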


Function-extensibility using the visitor pattern in OO sucks compared to functional pattern matching. It is almost unusable except for the most trivial tree traversals.

Frequently you want to be able to pass arguments down and return values up the tree traversal. Using visitors, you need a whole separate class for each unique function arity, or you have to use tedious little data carrier instances. Sometimes you want to switch between them half-way through the traversal, e.g. use your constant-folding visitor during an optimization visit, and the visual overhead of constructing visitors and keeping track of them, making sure they're all linked together properly, is really ugly. Don't even think about matching the visitor method on more than one parameter type, like you might do with e.g. overload resolution of binary operators. I've been there working on a commercial compiler, and I don't want to go back there again.

Haskell can implement the OO style using existential types (forall), a bit like Go's interfaces, except once you put a value into an existential type variable, you can't typecast it back out again.


> Function-extensibility using the visitor pattern in OO sucks compared to functional pattern matching.

Not only that, but using the visitor pattern tilts the extend-types / extend-functions trade-off into the extend-functions realm, soundly defeating the supposed advantage of being open to extension on types. As such, the visitor pattern is precisely the same as a switch statement.

The only difference left is language specific. In Java, adding a new type will make your compiler whine if you have abstract methods in the visitor. That's because javac can't tell when a switch over enums is exhaustive.


> Haskell can implement the OO style using existential types (forall),

Although one might say that passing functions (closures) or records of functions explicitly might be a more idiomatic pattern.


I totally agree with you, the purpose of the article was to explore but not to obsess over the subject.


Welcome to HN, where all subjects are obsessed over obsessively...

That said, I do think all things in moderation. At some point, if you abstract the logic so far out that humans have a hard time reading it or computers have a hard time running it optimally, then you've lost the gains. I can think of several times when I was coding something using a class and noticed I had to write a lot of logic to really use the class the way I wanted. I wish in those situations the class had been better designed, to avoid the spaghetti code needed to use it.

That said, many times the use of an object is nothing like what the original author thought it would be. It is hard to think in advance about every potential use case; shoot for the most common cases and make it all readable. Hopefully later someone else will come along and make it more useful for the real-world use cases.. :-)


Some of this seems petty. For example, the author gives the following "bad" example:

  def method (object)
    if object.property
      param = object.property
    else
      param = default_param
    end
  end
And suggests that this is better:

  def method (object)
    param = object.property || default_param
  end
To me, this is an example of an if statement by another name. Sure, you cut out a few lines, but it's still an if-then construct. Both examples are most likely going to be treated the same way by the compiler as well.


I disagree. Sure, the code is functionally equivalent today, but the next person touching the 'or' version might be less likely to introduce side-effecting computation. The 'if-else' version is completely open-ended.

We need to think about the effect our choices have on the next person who touches the code. Often with a different baseline people make different changes. It's a micro-form of 'No Broken Windows'.


"We need to think about the effect our choices have on the next person who touches the code."

And that next person might be the newbie on the team that doesn't have the business domain knowledge built up over years of working for This Company. So I'll use the if statement to keep intent clear.

Or maybe it'll be someone whose primary programming language doesn't use the || operator. Or maybe, for some unknown reason, it'll be a PHB. So I'll use the if statement to keep intent clear.


Setting the baseline gives you a higher probability of a better change than you would have otherwise. This assumes that not everyone on the team is a newb or a code-dabbling PHB. If they are, there's not much point in writing any code.


If the idea is to keep the intent clear, the or statement is a much better choice, even if less familiar.

"I want to set param to the object property or the default" is much clearer than "I want to check if if I have an object property and if I do then I'll set the param to the object property otherwise I'll set the param to the default"

That just mixes in a lot of noise about how you do it instead of just what you are doing.


Not an if statement, but unnecessarily clever code that doesn't reveal its intent. An example of the tension between general clarity and a concise, well-known idiom. The kind of thing that makes C++ impenetrable to those who don't write it regularly.

I'd rather add a method to Object/Nil called something like #or_else so that we could write

  param = object.property.or_else(default_param)
which, though still terse, at least describes what's going on.
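
A hedged sketch of what that #or_else monkey-patch might look like (names as the commenter suggests; note that, unlike ||, it would let a legitimate false value through rather than replacing it):

```ruby
class Object
  # Non-nil receivers keep their own value.
  def or_else(_default)
    self
  end
end

class NilClass
  # Only nil falls back to the default.
  def or_else(default)
    default
  end
end

nil.or_else(:fallback)     # => :fallback
"value".or_else(:fallback) # => "value"
```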


I was talking in terms of the constructs which the language can provide to avoid ifs.

> And suggests that this is better:

It depends on what terms you are thinking in. A syntactic improvement is also a benefit, isn't it?


Whether or not the second example represents a syntactic improvement is more of a personal preference. Personally, I find the first example more readable when quickly scanning code.


Yes, but that's probably a matter of familiarity over clarity. This is a perfect time to use the irrationally-dreaded ternary operator.

  param = object.property.nil? ? default_value : object.property
Verbose, although less so, but reveals intent almost perfectly. Would be better if the colon were "else" or "otherwise".


I just realised that the statement "it's still an if-then construct" illustrates a common problem among people practising OOP: conflating implementation with interface.

The OR is not an if-then construct. If anything, the if-then construct is a specialised use of OR, given that if-then-else is literally XOR.

Both are an implementation of the interface "coalesce a value with null/0/false".

I only mention this because it points to a lack of precision in our thinking which I see time after time get in the way of using OOP/OOD effectively. Maybe that's a failing of OOP, but I think that if it were, it'd be a failing of programming in general, too.

This helps explain why I teach OOP the way I do: start by following the rules of removing duplication and improving names, which encourages the programmer to ask ever-more-interesting questions about what tools are available to help follow those rules, which encourages the programmer to learn OO theory as needed, rather than having it shoved in their face all at once. This just-in-time learning leads to longer-lasting understanding for many (most?) people.


Absolutely! And in the second example, what happens if "property" is 0? (My Ruby is rusty)


0 evaluates to true in Ruby. Only nil and false evaluate to false in Ruby, so it'd work as intended.
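
A quick demonstration of those truthiness rules, which is exactly the edge case that separates || from an explicit nil-check (illustrative, plain Ruby):

```ruby
# Only nil and false are falsey in Ruby, so a 0 property survives ||...
a = 0 || :default      # => 0
# ...but a legitimate false property would be clobbered by the default.
b = false || :default  # => :default
c = nil || :default    # => :default
```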


I don't like the OP code either. To someone not familiar with this language, || appears to produce a boolean value (true or false), which then gets assigned to param. A better approach would be (without knowledge of the language in particular):

  def method (object)
    param = default_param
    if object.property
      param = object.property
    end
  end

Here, it's clear that param has a default right away.


I agree. The null coalescing is just syntactic sugar for an if. While it should be used, it is not the right example in this context.


I agree, and I prefer the first one; it just seems clearer to me.


No! This practice is absolutely terrible! Object-oriented programming _can_ allow you to make if-free programs, but it will cost you readability/maintainability. Many people have been fighting against the overuse of inheritance and OO patterns that emerged in the 90s. Do not fall into the trap again!


I think this is (once again) just a question of finding the right balance. Using polymorphism to avoid manual conditionals is a practice that can often improve readability and maintainability. One just shouldn't drive it too far.

I do agree that people are massively overusing inheritance. In particular, it's often used as a means for code reuse, not for polymorphism. In most cases composition should be favored over inheritance.


I don't advocate a completely if-less style, but I do often grep and count ifs and elses in a code base before digging in, just to get a sense of what sort of developers have been there.

I think the if-less style is great kata material. You learn a lot when you apply it on a toy project rather than a real one. Sometimes you need to overdo things in order to understand what their limitations are.


Couldn't agree more.


You know what's a smelly thing in OOP? Overblown abstractions, design patterns and structure added for the sake of flexibility, which in the end will mostly never be needed, while readability and clarity suffer immediately.

I am all for Design Patterns and abstraction, but only when it makes sense.


In full agreement.

We have a rule about only using patterns if they become apparent rather than selecting a pattern up front.


I guess I don't see the harm in spending a little extra time up front to design things properly rather than just jumping in and writing code, hoping to stumble upon the correct design pattern. I've found it much easier to find and solve problems up front rather than waiting for potential landmines, at least, the time required to fix those problems is less if they're exposed earlier in the process. It's certainly possible to over-engineer a solution if you select a design pattern up front, but maybe you'd be better suited by having a rule against over-engineering.


It's hard to define a rule against over-engineering. The only approach I've seen that has any success is to ban any and all design up front, and only ever write code to implement user-facing features.


We "spike" stuff i.e. do a quick sanity check before we commit to something. A pattern will become apparent there and then if there is one. Sometimes this spike is re-factored into the product or is scrapped and rethought. Multiple eyes see every design - that is more important than anything.

You can't quantify over-engineering until it is done unfortunately.


I guess it depends on what you learn from the subject. The general attitude I have is "Here's this thing, let's think about it", rather than "ZOMG, drop your ifs, they are BAD"


In general, these kinds of don't-use-this patterns are not very constructive. It reminds me of the obsession with not having global objects at all costs. And we end up with singletons, static classes, etc., which are all just global objects in disguise. If I need a global, I use a global and call it that; same if I need an if/else statement.

It's an interesting article though, especially the conclusion: "Any language construct must be used when it is the more reasonable thing to do. I like the fact that the presence of if-s can indicate a bad OO design."


I think your quote sums up the intention of this article and most of the "don't do this" posts.

What they all are getting at I think is that constructs in a language can be "abused" or used less than effectively either in terms of processing efficiency or user/programmer interaction.

So when they say "don't do this" it's more to show cases where doing whatever "this" is can be bad. Not that it's always bad and should never be used.

Also, the "don't do this" mentality can be a useful exercise to teach yourself alternative ways of getting things done.


Fine, but the point is that the guy is only half-way through his reflection. OO programming is an effective way to organize business logic. And that logic should be coded in a sequential fashion to be readable and maintainable. Finding the right balance would be a much more interesting subject than saying 'ifs suck'.


Yeah, I've seen people go down this path. In the next step they realize they need an if-statement to determine whether to instantiate a PDFPrinter or a MSWordPrinter. Which they conveniently hide inside an AbstractPrinterFactory or a ComplexPrinterBuilder. This is probably how the whole Enterprise Java thing got started.


My own pet theory is that 95% of the enterprise code carefully architected to be extensible is never extended.


Similarly, when there are no extension points, extensions get hacked in (= "if" everywhere!)

Here is my experience with architected extension points:

1. Generally the first extension point used sets the trend; other developers will follow the "pattern" blindly, hacking to make it fit rather than using another, more appropriate extension. That is both bad and common (everyone has had to work with too little time).

2. There is often a sharp refocus shortly before or after release 1.0. A lot of the extensions disappear at that stage (demo and experimental features, cross-platform/framework support; performance targets are set, security infrastructure is decided, server setup is fixed, integration test environments become available instead of simulated, ...). Structural changes (like removing an extension point) become very difficult to justify after release 1.0.

3. Technical debt is very often called "selling feature" at management level.

But yeah, real world code sucks.


This is an interesting idea, and I think it's worth keeping somewhere in mind while programming.

A few thoughts though:

- If you really push this to the extreme, you'll end up with all the logic hidden in the class inheritance hierarchy. I'm not sure this is more readable/extensible than if/else statements.

- Most of the examples given by the author of using "language features" are just syntactic sugar. Using collection.select instead of collection.each, or || instead of if/else, is really just a matter of notation. It doesn't reduce the number of test cases required for your code, and it might lead to "magical" one-liners that you have to read 20 times to understand.


For those who have never used a functional programming language: those often allow you to do "if-less", or at the very least "if-lite", programming via pattern matching.


Is pattern matching really that different from switch statements, which are really just fancy if statements?

Trying not to sound sarcastic, but if people are for if-free programming, pattern matching does not seem to be the answer to me. When I add a new type in Haskell, I usually find myself having to look through all my pattern matches.


Is pattern matching really that different from switch statements, which are really just fancy if statements?

Mostly, yes. Pattern matching also provides destructuring, allowing you to bind constructor arguments and pattern match against such arguments as well. For instance:

    fun (Just (x:_)) = ...
But some of the downsides are comparable to switch statements, e.g. if you modify:

    data MyType = Foo | Bar
to

    data MyType = Foo | Bar | Baz
You will have to (potentially) update all functions or case expressions to account for Baz. One could use parametric polymorphism, comparably to the linked article, to make more extensible code. In such a case, one would define a type class such as:

    class Printer p where
      printIt :: (Show a) => p -> a -> IO ()
And one could make particular printers of this typeclass. You could even throw in existential quantification so that a function does not specialize to a particular Printer.


Pattern matching is a lot more expressive than switch and if statements, as it combines testing with elimination. For instance, let's say we want to write a new tail function, which returns the tail of a list but returns [] when the list is empty (pseudo-Haskell):

    tail2 xs = case xs of { Nil => []; Cons(x,xs) => xs }

    tail2' xs = if xs == Nil then [] else tail xs
In the second case, tail2', the compiler won't stop us if we switch the two branches. In the first case, tail2, we only get access to the tail of the list if the list is actually non-empty.

In essence, the difference is that an if statement throws away any static information about the test result, whereas pattern matching lets that information flow to each branch through variable binding.
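For what it's worth, Ruby (2.7+) has pattern matching too, via case/in; a sketch of the tail2 example above, with the list represented as a plain array:

```ruby
# tail2 returns the tail of a list, or [] for the empty list.
# The pattern both tests the shape and binds the tail in one step,
# so the non-empty branch only ever sees a non-empty array.
def tail2(xs)
  case xs
  in []         then []
  in [_, *rest] then rest
  end
end

tail2([])        # => []
tail2([1, 2, 3]) # => [2, 3]
```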


I second the question, pattern matching is cool for destructuring or for completeness checking (not sure those are fundamental properties of pattern matching though), but it does not solve the problem of adding new behaviour without changing existing code any more than an `if` does.


This is not a bug - it's a feature! Pattern matching/ifs are good for creating new functions, while OO style makes it hard to add new methods.


The thing with pattern matching is that it is as safe as the OO approach, in the sense that the compiler will warn you if you don't match all the cases, and forcing you to use an explicit ADT protects against boolean blindness (for example, writing code that assumes you are in case 2 because you are not in case 1).

That said, when it comes to extensibility, pattern matching is more similar to if statements than to OO: it's easy to write new functions but hard to extend the original ADT with new cases. If being able to add new functions is important, it might be better to use if statements than to do a major rewrite to use OO instead. (You could always use a visitor pattern if you want the extra compiler safety, but that can get very complicated, IMO.)


Those who use if are obviously too flimsy to decide what their programs should do. I would not trust such persons to write any code at all.


It's not just an OO thing. If you're using conditionals, you might actually want something else - a dispatch table, for example. Thinking about alternatives to conditionals will probably result in better code most of the time, but actually trying to go if-less seems forced.
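In Ruby, one such dispatch table is just a hash of lambdas (the event names here are made up for illustration):

```ruby
# A dispatch table: a hash from keys to handlers replaces an
# if/elsif chain. Adding a new case means adding a hash entry.
HANDLERS = {
  created: ->(name) { "welcome, #{name}" },
  deleted: ->(name) { "goodbye, #{name}" },
}
HANDLERS.default = ->(name) { "unhandled event for #{name}" }

def handle(event, name)
  HANDLERS[event].call(name)
end

handle(:created, "ada") # => "welcome, ada"
handle(:renamed, "ada") # => "unhandled event for ada"
```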


Replacing a switch statement or large if statement with a virtual call (enum -> interface refactoring) can definitely be very beneficial. It can turn [crimes against humanity](https://github.com/aidanf/Brilltag/blob/master/Tagger_Code/f...) into perfectly respectable code.

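A minimal Ruby sketch of that enum -> interface refactoring, with a made-up shape example standing in for the linked tagger code:

```ruby
# Before: `case shape.kind when :circle ... when :square ...`
# After: each class answers `area` itself, so adding a new shape
# means adding a class rather than editing every switch.
class Circle
  def initialize(radius)
    @radius = radius
  end

  def area
    3.14159 * @radius * @radius
  end
end

class Square
  def initialize(side)
    @side = side
  end

  def area
    @side * @side
  end
end

[Circle.new(1), Square.new(2)].map(&:area) # => [3.14159, 4]
```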
But, obviously, don't take this too far. If you find yourself not using an if statement (or ternary operator) when writing the absolute value (for non-optimization reasons)... you've gone too far.

    int UnclearAbs(int value) {
        int mask = value >> 31;        /* all ones if negative, else zero */
        return (value ^ mask) - mask;  /* branch-free absolute value */
    }


Abuse of bitwise operators is an abomination, but I'd instinctively write "value.abs" (scala), which is clearer than an if.

Obviously many things are ultimately implemented using if, but it's too low-level a construct to be using for day-to-day work.


This is an aside, but don't use the ruby "to_proc" approach that listed in the article. i.e:

  result = collection.select(&:condition?)
The "&:proc" methods are (very, very likely) slower and they also "leak".

When I say "leak", I mean the VM doesn't garbage-collect the parameters of the proc until it is used again. Most of the time this is fine, but when it's not, you're wasting considerable amounts of memory. This is known and is considered within the spec.

I know they are semantically equivalent, but the MRI is doing something weird internally. (ps. Learnt this the hard way).
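For reference, the two spellings being compared here are behaviourally equivalent; the concern above is purely about MRI internals:

```ruby
nums = [1, 2, 3, 4]
nums.select { |n| n.even? } # => [2, 4]  (explicit block)
nums.select(&:even?)        # => [2, 4]  (Symbol#to_proc)
```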


This is a terrible example of IF-less programming. The conditional is still here, it is just implied.

IF:

    def method(object)
      if object.property
        param = object.property
      else
        param = default_param
      end
    end
Claimed to be IF-less:

    def method(object)
      param = object.property || default_param
    end
It may be easier to read, but in the end you are still writing an IF statement.


It's not only easier to read, but it clearly shows the intent, that it is only an assignment. It's faster to comprehend too, you don't even have to look at the right hand side if you're not interested. The first example obfuscates the intent a bit. Also, in other languages, you'd have to declare the variable first and that's even more lines of code.


It would be just as clear in expressing that the intent is an assignment (and clearer about what is being assigned, since it avoids the coalescing-OR), though less concise, to leverage the fact that Ruby's "if" is an expression, not a statement:

<code> param = if object.property then object.property else default_param end </code>

Though, actually if you want to do what the description says (use the default if the property is unassigned, represented conventionally in Ruby by the accessor returning nil) rather than what either the good or bad code does, you probably want:

<code> param = if not object.property.nil? then object.property else default_param end </code>

(The difference between this and the other versions shows up if the property is assigned, and its value is false.)
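The false-value difference is easy to demonstrate; a Struct stands in here for an object with a property:

```ruby
# When the property is false, `||` falls through to the default,
# but an explicit nil check preserves the false.
Obj = Struct.new(:property)

def with_or(object, default)
  object.property || default
end

def with_nil_check(object, default)
  object.property.nil? ? default : object.property
end

with_or(Obj.new(false), :default)        # => :default  (false is lost)
with_nil_check(Obj.new(false), :default) # => false     (preserved)
```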


The code tag does nothing, do this instead:

Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)

http://news.ycombinator.com/formatdoc


As an example of taking this to an extreme, I was at a coderetreat where we had to implement Conway's Game of Life without any if statements (or similar, such as switches) -- we had to use polymorphism instead. The result was that my partner and I ended up reimplementing a subset of the natural numbers as distinct classes.

http://mike.zwobble.org/2012/12/polymorphism-and-reimplement...

I'm definitely not advocating this as good programming practice, but the point is that if you're used to always using if statements, then it's hard to learn alternatives. By forcing yourself to use the unfamiliar, you might find some situations where polymorphism is better suited to the problem, whereas you would have previously defaulted to using ifs.

(barrkel has already left an excellent comment on when the two styles are useful, so I won't repeat it:

http://news.ycombinator.com/item?id=4977487)


The first example is an actual example of how removing ifs can reduce complexity, but the last few seem misguided. It is no easier to test `collection.each {|item| result << item if item.condition? }` than `collection.select(&:condition?)`; they are all but equivalent. The exception handling example doesn't actually show the benefit of not using ifs; it shows the benefit of using exceptions over return values. Setting up default values via || is also a nice trick, but it hardly makes a macro difference.

Also, "# I slept during functional classes"? I don't know ruby, but the `each` method seems to be just a variant of map, which is a pretty fundamental functional construct.


each is basically an entirely non-functional variant of map, if functional is defined as no side effects.


One reason why "ifs are smelly" has become a maxim in some circles is because they represent an under-tested code path. In such areas as safety-critical/life systems, where a different codepath can be taken on the basis of a single var, this can be a very, very dangerous practice. Certainly in safety-critical, a reduction of "if"-based codepaths represents higher quality software in the end.

I have seen cases of radiation-derived bit-rot which don't manifest in any way until a certain "if"-path is evaluated by the computer - this seriously does happen and can still happen in modern computers today.

Having an abundance of such code switch points in a particularly large codebase can be a degree of complexity that nobody really wants to manage - or in the case of disaster, be responsible for .. so this maxim has been pretty solidly presented in industrial computing for a while. Make the decision-making as minimal as possible to get the job done, and don't over-rely on the ability of the computer to evaluate the expression in order to build robust software.

Now, it's sort of amusing that this has propagated into the higher-order realms of general application development by which most Class/Object-oriented developers are employed .. but it is still an equally valid position to take. State changes in an application can be implemented in a number of different ways, "if" being one of the more banal mechanisms - there are of course other mechanisms as well (Duff's devices, etc.) which are equally testable, yet more robust - simply because they break sooner, and can thus be tested better.

I take the position, however, that a well-designed class hierarchy won't need much navel-gazing decision-making, which is what the ol' "if (something == SOMETYPE)" statement really is: a kind of internal house-keeping mechanism being done by the computer at runtime, instead of at compile-time.

So there is a balance to this maxim, and the key to it is this: how complex does it need to be, versus how complex can the codebase be before it becomes unmanageable. If you're not doing full code-coverage testing with 100% testing of potential codepaths, then every single if statement represents a potential bug you didn't catch yet.


A nice blog series on if-less programming (in Portuguese): http://alquerubim.blogspot.com/search/label/ifless


Not endorsing, but much more expanded by the Anti-IF campaign http://www.antiifcampaign.com/ (which focuses on "bad IFs")


I'm a little surprised nobody is lamenting the performance hit this kind of technique will incur vs just using an if statement.

(reaching into my way back machine, ifs essentially compile down to a few comparison instructions (which are often just subtractions) and a jmp instruction (depending on the platform), it's literally built into the processor! For a simple if statement we might be talking a handful of cycles to eval the if vs an extended call stack pumping and dumping exercise)


For inner loops etc., having less branch prediction misses or none at all can actually outweigh having to do slightly more complex calculations.

http://stackoverflow.com/a/11227902


I'm actually curious about the internals of how a modern OOP system works internally once it's boiled down to the CPU level. I'd imagine there's still lots of branch prediction issues in complex OOP systems.


Well, that you don't use OOP for those inner loops is kinda taken for granted I think. That is, you certainly don't program for code beauty first and foremost -- OOP may not hurt in a particular case, but if it does, code beauty may have to go... example: http://www.particleincell.com/2012/memory-code-optimization/ (which is not about OOP per se, and not about branching, but illustrating that knowing what the CPU actually does (not just what it did decades ago) is really important when talking about performance)


2001 called, it wants its debate back. For all the new kids on the internet: OO doesn't enable re-use, inheritance generally sucks, and bloating your code with new types just to solve something three lines of if/then/else could solve isn't worth it.


If you're doing "ifs" on the same condition sets in various functions then you should consider encapsulating the condition in class hierarchy. If there is just one if for a condition set, introducing a class hierarchy is just bloat.


how does "try to use less X because Y" become "don't use X"? and why is this considered good?

to clarify: my question "why is this considered good?" isn't about "if-less programming", but about taking ideas to dumb extremes.


You did not read the article; by no means do I suggest getting rid of IFs completely.


Replacing static decisions with polymorphism is indeed often a good idea, but there's nothing wrong with using if when it's appropriate.


The problem isn't if, it's "else if".

If-then-else ladders tend to evolve to be very difficult to understand, maintain and debug.


An often better alternative to inheritance for conditionals is configuration with functions as values.
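A Ruby sketch of that idea: the caller passes behaviour in as a value instead of setting a flag for the callee to branch on (the names are made up):

```ruby
# Instead of `if mode == :csv ... elsif mode == :tsv ...`,
# the formatting behaviour is configuration, passed in as a lambda.
def export(rows, formatter:)
  rows.map { |row| formatter.call(row) }.join("\n")
end

as_csv = ->(row) { row.join(",") }
as_tsv = ->(row) { row.join("\t") }

export([[1, 2], [3, 4]], formatter: as_csv) # => "1,2\n3,4"
```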


Don't. Just don't. Do you think Da Vinci would've painted only with oil because people think real painters work only with that? No, he didn't, and neither should you.

Don't limit yourself just to blindly comply to some silly idea. Use everything you know to get the job done, and once you get it working, make it beautiful.

If statements are an incredible tool. Just ask any Erlanger and they will either tell how much they miss it, or just lie to your face. ;)


If statements are rife for abuse and can be an indicator of poorly thought out structure. This article mimics my own experiences, namely that overuse of if statements is a smell and can usually be avoided to the benefit of the code.

I long ago abandoned else clauses. It was a short time thereafter that I realized that if statements themselves weren't all that necessary, most of the time.


There is no programming construct that exists that is not rife for abuse.


Oh great. What will you people come up with next? Variableless programming?


Erlang is mostly IF-less (destructuring pattern matching in function heads) and doesn't have variables.

    A = 1,
    A = 2, % fatal error because 1 != 2

So, yeah, variable-less programming FTW!


Erlang has mutable variables (the process dictionary), it just makes you do more work to get at them instead of immutable ones and prevents them from being directly shared and causing synchronization problems.


No, the process dictionary is not a mutable variable - there is no natural idiom to use values stored in the process dictionary as variables in code, you have to get them out and put them in via immutable variables.

Any given Erlang process has meta-information about itself, how many reductions it has, how big its heap is, which flags are set. These are the global state of the process.

The process dictionary allows you to store and manipulate your own global state of the process - and then people (hands up, that includes me) get smart and use it as local state of the programme, and then get their bum bitten badly and swear never to dance with the dark side again... :(

Not bitter :)


The way I see it, a programming language should...

1) correspond to computer architecture (which excuses the distance from human thinking)

2) correspond to human thinking (which excuses the distance from computer architecture)

So what's the excuse for pulling such strange rules out of nowhere? The sort that have no counterpart outside of the language itself? Is it just for the sake of making programming more of a puzzle, or...?


These aren't strange rules at all. They are pretty common in functional languages (of which Erlang is one) - less so in procedural, imperative and object-orientated languages.

The reason is simple, with immutability you know what the value is, and you can pattern match on it with confidence.

Variables that can change value are a mini-version of global state with all the reasoning problems that 'globality' gives:

"what is the value at this point in the code?"

"when does the value change"

"what range of values can this have depending on which code path was executed?" ie sometimes the value is changed and sometimes not...

Trust me, once you have gone immutable, you don't want to go back.


Yep. It's called "functional programming" and it's pretty froody. You should check it out.



