
Destroy All Ifs – A Perspective from Functional Programming - buffyoda
http://degoes.net/articles/destroy-all-ifs
======
barrkel
Define true as a lambda taking two lazy values that returns the first, and
false as one that returns the second, and you can turn all booleans into
lambdas with no increase in code clarity.
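For the record, here is what that looks like as a quick sketch (Python rather than raw lambda calculus; all names are illustrative):

```python
# Church-encoded booleans: a boolean becomes a lambda that selects one
# of two lazy branches (thunks).
true = lambda a, b: a()
false = lambda a, b: b()

def church_if(cond, then_branch, else_branch):
    # "if" is just application: the boolean does the selecting.
    return cond(then_branch, else_branch)

print(church_if(true, lambda: "first", lambda: "second"))
```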

The straw man in the post - talking about a case-sensitive matcher that
selectively called one of two different functions based on a boolean - is
indeed trivially converted into calling a single function passed as an
argument, but it's hard to say that it's an improvement. Now the knowledge of
how the comparison is done is inlined at every call point, and if you want to
change the mechanism of comparison (perhaps introduce locale sensitive
comparison), you need to change a lot more code.
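To make the tradeoff concrete, here is a hypothetical sketch of the refactor under discussion (none of these names come from the article): a boolean flag versus passing the comparison mechanism as a function.

```python
# Flag version: the knowledge of how comparison is done lives in one place.
def match_flag(pattern, text, ignore_case):
    if ignore_case:
        return pattern.casefold() in text.casefold()
    return pattern in text

# Function version: the knowledge is now inlined at every call site.
def match_fn(normalize, pattern, text):
    return normalize(pattern) in normalize(text)

print(match_fn(str.casefold, "abc", "xxABCyy"))
```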

That's one of the downsides of over-abstraction and over-generalization:
instead of a tool, a library gives you a box of kit components and you have to
assemble the tool yourself. Sure, it might be more flexible, but sometimes you
want just the tool, without needing to understand how it's put together. And a
good tool for a single purpose is usually surprisingly better than a multi-
tool gizmo. If you have a lot of need for different tools that have similar
substructure, then compromises make more sense.

This is just another case of the tradeoff between abstraction and
concreteness, and as usual, context, taste and the experience of the
maintainers (i.e. go with what other people are most likely to be familiar
with) matters more than any absolute dictum.

~~~
rrradical
Someone else addressed the details of your counter argument, but I'd like to
respond to it generally.

It seems like every time someone writes an article on how to write better
code, there are responses about how it doesn't make sense when taken to some
logical extreme, or some special case, as if that invalidates the argument.
(FP techniques in particular seem to provoke this.) But code design is like
other design disciplines-- good techniques aren't always absolutes.

Do you really think that because the given example doesn't apply to every
situation it's a 'straw man'? It is a little tiring to hear all code design
advice dismissed this way.

~~~
unsignedqword
The thing is that the tone of the article seems to suggest taking such an
extreme: I mean, an _"anti-if"_ campaign? There's, like, only one sentence of
concession near the end toward those unconvinced by the argument.

~~~
bunderbunder
FWIW, I'm pretty unimpressed by the anti-if campaign's website. They've
clearly put style over substance. It's a beautiful website, but I spent some
time poking through it and I can still only guess at what exactly they're on
about. It seems to be something about if-statements being bad, but beyond that
it's rather a muddle.

I'm trying to be charitable, though, so let's assume that the core of their
idea is something coherent. I'm guessing it's really about something I do
think is an important point: How inversion of control is a design pattern that
lets you create code that's much easier to manage, because it greatly limits
the extent to which certain kinds of decisions need to be federated throughout
the codebase.

If that's the case, then the real sin (the article author's included) is
mistaking if statements for the problem. Conditional branching is not a
problem; I think
most of us can agree it's an essential operation. The real problem they should
be after is poor encapsulation. Where if statements come into it is that, if
you've got badly architected code with poor encapsulation, one of the symptoms
you'll see is that there will be a proliferation of if-statements that crop up
all throughout the code. Every single frobinator will need to stop and check
whether the widget it's operating on is a whosit or a whatsit before it can
take any sort of action whatsoever. Lord help us if we ever try to introduce
wheresits into the system; we'll have to go modify 50 different files so we
can replace all those if-statements with switch statements.
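A sketch of that encapsulation point, using the hypothetical names above: dispatch through the type once, instead of an if/switch inside every frobinator.

```python
class Whosit:
    def frobinate(self):
        return "frobinated a whosit"

class Whatsit:
    def frobinate(self):
        return "frobinated a whatsit"

class Wheresit:
    # Introducing wheresits is one new class, not edits in 50 files.
    def frobinate(self):
        return "frobinated a wheresit"

def frobinate_all(widgets):
    # No type checks here; each widget carries its own behavior.
    return [w.frobinate() for w in widgets]
```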

It's probably nowhere near as fun to write an article that advocates a high
level design methodology as it is to write an article that makes a bold
contrarian claim like "If statements bad", though.

~~~
biocomputation
You are totally right.

As an example, long if/else chains that check state can mean that you need
another object, or another virtual function, or some other niblet of
orchestration.

Likewise, I'm not really impressed by the anti-if campaign. At some point,
abstractions cause the exact same problem they were designed to solve, and
produce code that is difficult to reason about or change.

------
ufo
I'm surprised that neither the article nor any of the comments so far has
mentioned the "Expression Problem":
[http://c2.com/cgi/wiki?ExpressionProblem](http://c2.com/cgi/wiki?ExpressionProblem)

Basically, if you structure the control flow in object-oriented style (or
Church encoding...) then it's easy to extend your program with new "classes",
but if you want to add new "methods" then you must go back and rewrite all
your classes. On the other hand, if you use if-statements (or switch, or
pattern matching...) then it's hard to add new "classes" but very easy to
add new "methods".
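The two styles can be sketched side by side (illustrative Python; nothing here is from the article):

```python
# Pattern-match / switch style: a new operation is one new function,
# but a new shape forces edits to every such function.
def area(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "square":
        return shape["side"] ** 2
    raise ValueError(shape["kind"])

# OO style: a new shape is one new class, but a new operation forces
# edits to every class.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2
```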

I'm a bit disappointed that this isn't totally common knowledge by now. I
think it's because, until recently, pattern matching and algebraic data types
(a more robust alternative to switch statements) were a niche functional
programming feature, and because "expression problem" is not a very catchy
name.

~~~
jake-low
I was familiar with the problem but didn't have a name for it; thanks for
providing me with one.

What kind of work has there been on creating programming paradigms that make
it easy to both add new types and new methods? Is it a CAP-theorem-type
problem where every solution is a trade-off, or is there a way to have your
cake and eat it too?

~~~
chenglou
There are languages (libraries) that solve it. For reference, check Clojure's
multimethods and OCaml's polymorphic variants.
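Python isn't mentioned above, but its `functools.singledispatch` gives a cheap taste of multimethod-style open dispatch: new cases can be registered after the fact, from anywhere.

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Fallback for unregistered types.
    return "something"

@describe.register(int)
def _(x):
    return "an int"

# This registration could just as well live in an unrelated module:
@describe.register(str)
def _(x):
    return "a string"
```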

~~~
smallnamespace
The tradeoff is that techniques like multimethods significantly weaken the
contract that people normally expect from methods/classes.

For example, if I'm writing a normal Java class, I know where to go to find
methods dealing specifically with instances of Foo (namely Foo, its children,
and its direct users); with multimethods, it's more likely that there is some
multimethod out there in an unrelated class that looks for instances of Foo.

~~~
eru
Good tooling (think something along the lines of Hoogle) can help here.

------
kazinator
Problem is, a decision has to be made somewhere about _which_ function to pass
into that "if-free" block of code. The if-like decision has just moved
elsewhere. That is a win if it reduces duplication: if a lambda can be decided
upon and then used in several places, that's better than making the same
Boolean decision in those several places.
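That single-decision win looks like this in a quick sketch (hypothetical names):

```python
def compare_all(pairs, ignore_case):
    # The boolean is inspected once...
    norm = str.casefold if ignore_case else (lambda s: s)
    # ...and the chosen function is reused at every site.
    return [norm(a) == norm(b) for a, b in pairs]
```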

Programs that are full of function indirection aren't necessarily easier to
understand than ones which are full of boolean conditions and if.

The call graph is harder to trace. What does this call? Oh, it calls something
passed in as an argument. Now you have to know what calls here if you want to
know what is called from here.

A few days ago, there was this HN submission:
[https://news.ycombinator.com/item?id=12092107](https://news.ycombinator.com/item?id=12092107)
"The Power of Ten – Rules for Developing Safety Critical Code"

One of the rules is: no function pointers. Rationale: _Function pointers,
similarly, can seriously restrict the types of checks that can be performed by
static analyzers and should only be used if there is a strong justification
for their use, and ideally alternate means are provided to assist tool-based
checkers determine flow of control and function call hierarchies. For
instance, if function pointers are used, it can become impossible for a tool
to prove absence of recursion, so alternate guarantees would have to be
provided to make up for this loss in analytical capabilities._

~~~
eru
In a language like Haskell you wouldn't want to prove the absence of
recursion, but rather that all recursion in the program fits into a handful of
patterns. (E.g. 'structural recursion' or 'tail recursion'.)

Some type systems are strong enough to put that kind of analysis / constraints
directly into the language. (Haskell might already be strong enough with GADTs
and other language extensions enabled.)

In any case, the Addendum at the end of the blog post provides a different
perspective on the problem you mentioned.

~~~
cousin_it
Tee hee, Haskell doesn't have tail recursion (e.g. foldl takes linear space),
and structural recursion in Haskell isn't guaranteed to terminate (e.g. if
you're given an infinite list).

If I were in charge of developing a safety critical system, and someone came
to me with a proposal to write it in Haskell, I'd be very skeptical.

~~~
wyager
??? Haskell absolutely has tail recursion; foldl just evaluates non-strictly
and therefore can leave thunks in memory. This is fine for e.g. reversing a
cons-list. Regardless, it is tail recursive (and uses constant stack space).
foldl' is also tail recursive and has strict semantics.

Structural recursion can't be guaranteed to terminate in any language that
supports codata unless you have some sort of totality checker (e.g. via a
monotonically structurally decreasing requirement imposed at the type or value
level). I don't think any mainstream language supports this out of the box.
Liquid Haskell does offer this, though.

I agree that standard Haskell is inappropriate for safety critical software,
but only because it allows dynamic allocation. Any program using dynamic
allocation is probably unsuitable for safety critical software. Now, a
terminating and fixed-memory subset of Haskell a la Clash would be interesting
for safety critical software...

~~~
cousin_it
The point of tail recursion is using constant space, not constant stack space
(does Haskell even have a stack?) Anyways, the Haskell spec allows foldl' to
use linear space just like its lazier counterparts. The fact that it uses
constant space is an implementation detail of GHC. Reference:
[https://github.com/quchen/articles/blob/master/fbut.md#seq-does-not-specify-an-evaluation-order](https://github.com/quchen/articles/blob/master/fbut.md#seq-does-not-specify-an-evaluation-order)

Structural recursion always terminates in SML. Supporting infinite/cyclic
values in algebraic data types is a misfeature, and they are trivial to rule
out without using a totality checker. Heck, I can implement a guaranteed
finite linked list in Java :-)

I think something like MLKit would be a more promising start for implementing
a safety critical system. Tail and structural recursion actually work there,
and it statically replaces most uses of GC with region inference. Though it's
still a very long shot, I'd prefer something more proven.

~~~
wyager
Tail recursion can't use constant space if it's strictly generating another
data structure of the same size. That doesn't even make sense.

Interesting fact about foldl'. Regardless, in practice it is strict and tail
recursive. As I mentioned earlier, this does not mean the same thing as
constant space unless the reduction function returns a fixed size result.

Yes, you can guarantee that a linked list in Java is finite because Java does
not support codata.

Haskell's tail call recursion is also often optimized to be allocation-free,
unless, again, it is generating some data structure.

~~~
eru
> Yes, you can guarantee that a linked list in Java is finite because Java
> does not support codata.

What about another thread that keeps appending pieces to the end of the
linked list? (No problem with mutation.)

~~~
cousin_it
To prevent these and similar "what abouts", here's an implementation of a
guaranteed finite linked list in Java.

    
    
        class LinkedList<T> {
          public final T value;
          public final LinkedList<T> next;
          public LinkedList(T value, LinkedList<T> next) {
            this.value = value;
            this.next = next;
          }
        }

Here's how you construct it:

    
    
        LinkedList<String> myList =
          new LinkedList<>("Hello",
            new LinkedList<>("World", null));

Here's how you iterate over it in constant space:

    
    
        while (myList != null) {
          System.out.println(myList.value);
          myList = myList.next;
        }

------
qwertyuiop924
This is, as many commenters have noted, just another overzealous programming
doctrine. Just like 'GOTO considered harmful.'

Here's the deal: if is a flow control primitive. Just like goto and while. If
(heh) that primitive isn't high-level enough to handle the problem you are
facing, it is incumbent upon you as a programmer to use another, higher level
construct. That construct may be pattern matching, it may be polymorphism (or
any other form of type-based dynamic dispatch). It may be a function that
wraps a complex chain of repeated logic, and is handed lambdas to execute
based upon the result. It may, as in the article given here, be a function that
is handed lambdas which apply or do not apply the transformation described.

The point is, there are many branch constructs, or features that can be used
as branch constructs, in most modern programming languages. Use the one that
fits your situation. And if that situation isn't all that complex, that
construct may be if.

Fizzbuzz using guards is the most clean and modifiable fizzbuzz that I've seen
in Haskell.
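Not Haskell guards, but a rough analogue of guard-style fizzbuzz in Python: an ordered list of (condition, result) pairs, tried top to bottom the way guards are.

```python
def fizzbuzz(n):
    guards = [
        (lambda n: n % 15 == 0, lambda n: "FizzBuzz"),
        (lambda n: n % 3 == 0, lambda n: "Fizz"),
        (lambda n: n % 5 == 0, lambda n: "Buzz"),
        (lambda n: True, lambda n: str(n)),  # the "otherwise" guard
    ]
    # First guard whose condition holds wins.
    return next(f(n) for cond, f in guards if cond(n))
```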

Although now that I think about it, if you provide a function with a list of
numbers...

~~~
eru
Not all control-flow primitives are necessary.

Eg Haskell and Scheme get by without 'while' and 'goto'.

Haskell would do just fine without a built-in 'if': you can define 'if' as a
function via pattern matching.

Given that perspective, the article would be a call to use more expressive
types than Booleans to match on---and in lots of cases not to match at all,
but provide what would be the result of the match as an argument to the
function.

~~~
qwertyuiop924
Scheme and Haskell have other primitives that take the place of while and
goto.

But yes, using more expressive match types or parameters is a good idea. As
for providing the result as an argument, that can be a good pattern, but isn't
always practical. Note what I said in my original comment about using your own
discretion.

~~~
eru
They don't have `other primitives': they have function calls. Most languages
have function calls these days.

~~~
qwertyuiop924
Yes, I read LTUI and LTUD. But in most languages, function calls and loops
don't have the same semantics. I'll call that a different loop primitive.

~~~
eru
For C, this seems to be implementation defined.

(At least for C as encountered in the wild, I don't know about C the
standard.)

Most modern C compilers support tail call optimization.

I don't know about `most languages'. Eg I know Java on the JVM doesn't do tail
call optimization. Lots of languages probably do not require TCO of their
implementations, though.

------
white-flame
This whole campaign is misguided.

"Bad IFs" are a code smell, and they're being scapegoated when the real
problems are management demanding that simple hackish prototypes & tests be
deployed into production, management that doesn't allow time for refactoring,
and poor programmers who think that "bad IFs" are good code.

But the main site also doesn't do any reasonable job of defining what a "Bad
IF" even is.

The crux of the matter is that programmers need time to craft the details of a
project to avoid or correct technical debt. These sorts of reactions just
point out one tiny portion of technical debt itself and don't solve any
fundamental problems at all.

(and yeah, I know I'm ranting against the Anti-IF campaign, not the
particular take on the linked site. But this article just seems to
parameterize the exact same conditions that are branched on anyway.)

~~~
lifeisstillgood
I think that aiming at the management of the coders and the business users is
putting the emphasis in _exactly_ the right place. Once we get to wondering if
eliminating IF stmts will help, we have passed by so many opportunities for
10x value delivery.

~~~
bunderbunder
The "technical debt" metaphor gets so much better if you take the analogy more
literally than most people do. Like for financial debt, the optimal amount is
not necessarily zero. Oftentimes taking on or carrying debt allows you to
generate more profit than you could by avoiding it or paying it down.

That said, most places I've worked manage it poorly. Few people really
understand that, just like financial debt, it's something that needs to be
taken on and managed in a mindful and deliberate manner.

~~~
eru
Also, if you do take the finance metaphor, going into debt is not good by
itself. It's the investments you make with that debt that are good, and
potentially outweigh the burden of debt. (And can be cheaper than equity
financing.)

Going back to programming: debt-fuelled programming should buy you something,
eg speed to market, and is not a good in itself.

------
Animats
The idea that each type has its own control flow primitives is bothersome.
It's taken over Rust:

    
    
        argv.nth(1)
            .ok_or("Please give at least one argument".to_owned())
            .and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string()))
            .map(|n| 2 * n)
    

I'm waiting for

    
    
        date.if_weekday(|arg| ...)
    

Reading this kind of thing is hard. All those subexpressions are nameless, and
usually comment-less. This isn't pure functional programming, either; those
expressions can have side effects.

~~~
MichaelGG
The annoying part there is the repeated "|x| x.". Rust should have syntax to
reference a method of an object, instead of having to write a wrapper. So it'd
look like .map_err(???.to_string()).

~~~
kmiroslav
Groovy and Kotlin use the implicit "it" parameter for lambdas that take just
one parameter, which is very convenient:

    
    
        listOf(1, 2, 3, 4).filter { it % 2 == 0 }

~~~
SatvikBeri
Scala allows underscores, and sequential underscores refer to the next
element, so you can do e.g.

    
    
        list(1, 2, 3, 4).reduce(_ + _) == 10

~~~
kmiroslav
Doesn't that make the parameter anonymous, though? Can you println that _ and
see the value of the current element?

~~~
vorg
Use a function that prints then returns its parameter:

    
    
      list(1, 2, 3, 4).reduce{print(_) + _} == 10
    

Use it if you or your language hasn't defined such a function:

    
    
      list(1, 2, 3, 4).reduce{it:= _; println(it); it + _} == 10
    

In fact, any name for it will do.

------
throwaway13337
This just seems to obscure the logic, not unlike how polymorphism can make
code flow harder to read while feeling more clever.

There is a place for it - like when you're trying to express a set of logic
that will be guarded by the same condition, but always at the cost of some
complexity.

A set of conditionals is probably the most obvious way to express branching.

~~~
dwb
Try it before you knock it. I might have said something similar before getting
into Haskell, but now I'm nodding along happily. Dealing in meaningful data
types with small composable functions is very pleasant for me now.

~~~
TeMPOraL
> _Try it before you knock it._

That's what I recommend too[0] - with the added caveat that you shouldn't be
afraid to "knock it" if it turns out to be honestly bad.

Sometimes the idea turns out bad, sometimes it turns out great - but you'll
never know it if you don't try; just be honest with yourself during that
trial.

[0] -
[https://news.ycombinator.com/item?id=12108138](https://news.ycombinator.com/item?id=12108138)

~~~
dwb
Absolutely, but I'd also add that what works in one language/environment may
not work in another. I wouldn't be surprised if writing Haskell-style code in
Python didn't work out that well, for example!

~~~
eru
I have tried. Two things get in the way quickly, and that's just at the level
of expressing things, before even looking at performance:

    
    
  * Python standard library functions, especially the ones on dicts, mutate
    and don't return the new dictionary.
  * Python's syntax for creating functions is awkward: lambdas are cumbersome,
    and so are the operator package and e.g. functools.partial; there's no
    really convenient way to compose functions.
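What I end up writing instead looks something like this (a sketch): expression-level dict "update" via unpacking, and a hand-rolled compose, since the stdlib offers neither directly.

```python
d = {"a": 1}
d2 = {**d, "b": 2}  # a new dict; d itself is untouched

def compose(f, g):
    # No built-in composition operator, so roll your own.
    return lambda x: f(g(x))

shout = compose(str.upper, str.strip)
print(shout("  hello "))
```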

~~~
andrewaylett
Your first point is actually something I really like about Python's API
design: in general, methods operating on collections either mutate the
collection /or/ they return it. So it's clear at the point of use whether
you're dealing with the same object or a new one.

This is something that bugs me about the fluent builder pattern in Java --
continuing to return `this` until suddenly you don't any more, and you can't
re-use 'intermediate' values because they're actually all the same object.

~~~
eru
Sure. I'd just like to have a nice set of operations to manipulate dicts that
don't mutate and return the result, too.

------
dwrensha
I recommend Bob Harper's essay on "boolean blindness":
[https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/](https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/)

An excerpt:

> The problem is computing the bit in the first place. Having done so, you
> have blinded yourself by reducing the information you have at hand to a bit,
> and then trying to recover that information later by remembering the
> provenance of that bit.

~~~
AstroJetson
That's why you use Lua: it lets you have multiple return values. So you can
get a boolean back to let you know if the strings were the same, an int to
know where they ceased matching, and a boolean to let you know if they differ
only in case. It's then up to the programmer to decide how much enlightenment
they want.

The "destroy all IFs" idea reminds me of "GOTO considered harmful" from the
'70s. There are other ways to fix the problem.

~~~
eru
What's the difference between multiple return values and returning a tuple?

(Apart from that languages with multiple return values tend to have some
special syntax for binding only the first few members of the returned tuple?)

~~~
harveywi
The difference, in terms of type theory, is that a tuple is a product type [1]
but a type representing multiple possible return values (to represent
different outcomes) would be encoded using a sum type [2].

[1]
[https://en.wikipedia.org/wiki/Product_type](https://en.wikipedia.org/wiki/Product_type)

[2]
[https://en.wikipedia.org/wiki/Tagged_union](https://en.wikipedia.org/wiki/Tagged_union)

~~~
eru
I don't think eg Go uses a sum type:

[https://gobyexample.com/multiple-return-values](https://gobyexample.com/multiple-return-values)

Typically they use product types to simulate sum types.
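That convention can be sketched in Python (Go-flavored, illustrative names): the (value, error) pair is a product type where one slot is None by convention, versus a tagged sum where exactly one alternative exists.

```python
def parse_go_style(s):
    try:
        return int(s), None  # product: both slots always present
    except ValueError as e:
        return None, e

def parse_sum_style(s):
    try:
        return ("ok", int(s))  # tagged union: exactly one alternative
    except ValueError as e:
        return ("err", str(e))
```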

------
lilbobbytables
Oftentimes when I read about "ideal" ways of programming, I'm curious whether
it's ever implemented in a production code base built by a team.

~~~
runeks
Me too. Particularly because every programmer has their own idea of what a
"right"/ideal style of programming is. Here, apparently, we must not use
conditionals.

The more I write code the more I realize that the entire purpose of the code
is to have some effect on reality, and the more reliably it can do this, the
better the code. I find I code a lot better without design principles, because
trying to remember which patterns are "good" and "bad" just obscures the
attention I would have used to look at the code and sense whether something
would work in this particular situation.

~~~
janekm
Not functional code though. The aim of functional code is to be side-effect
free, and affecting reality really gets in the way of that.

/snark, but articles like this really do fall into that trap...

~~~
runeks
I know you're making a joke, so I'm not writing this to correct you, but I'd
like to point out that Haskell/pure FP is different not because it denies
affecting reality, but because it only offers a one-way interface to affecting
reality: it allows you to alter values in reality using pure functions, but it
denies you the ability to "pull in" values from reality, into your pure
functions.

This paradigm is powerful because it accurately reflects how our universe
works: it is possible for a thought to affect reality (through a human being
acting on it), but it is not possible to "pull in" an object from reality,
into your mind. The only way to form a thought about something is to look at
that thing, and try to construct - in your mind - a thought that reflects
certain properties of the thing you're looking at. You can't "pull" that thing
from reality into your mind, thus creating a thought. No such interface exists
in this universe, as far as I'm aware.

It is, however, very possible for a human being to choose to act on a thought,
thereby causing the thought to have a side effect. The analog to this in
Haskell is applying a pure function to a value in IO. The function is pure,
but we can use it to alter a value that resides in IO (reality). Similarly, a
thought, in and of itself, does not affect reality (it is pure); it requires a
human being to act on it - "apply it to reality" - in order for it to have an
effect.

In short: Haskell allows your program to alter reality, but it does not allow
reality to alter your program.

------
dahart
I read everything I could find on the Anti-IF site and didn't understand what
the mission is exactly. They qualify and mention they want to remove the _bad_
and _dangerous_ IFs, but I couldn't find examples that differentiate between
bad ones and good ones -- are there good ones according to this campaign?

I like using functional as much as anyone, and removing branching often does
make the code clearer and remove the potential for mistakes.

But I admit I have a hard time with suggesting people prefer a lambda to an
IF, or never use an IF at all. A lambda is, both complexity-wise and
performance-wise, _much_ heavier than an IF. And isn't it just as bad to
abstract conditionals before any abstractions are actually called for?

~~~
CamperBob2
_I read everything I could find on the Anti-IF site and didn't understand
what the mission is exactly._

I have a similar problem, in that every time I try to understand the
perspective of functional-programming advocates, I find that the authors
always seem to illustrate their points with examples like this:

    
    
       match :: String -> Boolean -> Boolean -> String -> Boolean
       match pattern ignoreCase globalMatch target = ...
    

If I'm already literate in Haskell or Clojure or Brainfuck or whatever
godawful language that is, then chances are, I'm already familiar with the
strengths of the functional approach, and I'm consequently not part of the
audience that the author is supposedly trying to reach.

So: are there any good pages or articles that argue for functional
programming where the examples can be followed by a traditional C/C++
programmer, or by someone who otherwise hasn't already drunk the functional
Kool-Aid?

~~~
l_dopa
The problem's not on your end -- a lot of these blogs are just junk, probably
the vast majority of ones that fall under "advocacy". As far as I can tell,
the author's objection to conditionals is based on a misunderstanding of a
different blog post[0]. It's nonsense.

Really understanding where FP is coming from requires an introduction to
programming language semantics[1]. Interesting stuff, but not immediately
useful to a working C programmer.

[0] [https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/](https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/)

[1]
[http://www.cs.cmu.edu/~rwh/pfpl.html](http://www.cs.cmu.edu/~rwh/pfpl.html)

------
externalreality
I tried to ask the author the following (it kept getting deleted as spam).
Perhaps he will see it here, though it's unlikely given how many comments
there are already.

Hi John,

Are you familiar with Jackson Structured Programming?

[https://en.wikipedia.org/wiki/Jackson_structured_programming](https://en.wikipedia.org/wiki/Jackson_structured_programming)

Notice how the focus is on using control flows that are derived from the
structure of the data being processed and produced. Notice how the JSP-derived
solution in the Wikipedia example lacks if-statements.

Pattern matching allows one to map control flow to the structure of data.
What are your thoughts on that? I think inversion of control has other
benefits, but I don't think it has much to do with the elimination of `if`
conditionals; the pattern matching does that.

Also, I noticed one thing:

In the article you mention `doX :: State -> IO ()` as being called for its
value and suggest that if you ignore the value the function call has no
effect. Isn't it the case that a function of that type usually denotes that
one is calling the function for its effect and not for any return value? Its
value is just an unevaluated `IO ()`.

~~~
mbrock
The return value of the function is a description of an effect. Calling the
function doesn't cause the effect to happen. That's why you could, for
example, call the function many times and get a list of IO actions which you
then execute in parallel or backwards or whatever. Hence "inversion of
control".

~~~
externalreality
I was debating whether or not to include that last sentence, because I knew
it would lead to a technical discussion beside the point of the question. My
question is more: why choose an `IO ()` as an example of something being
called for its value (especially since the article isn't aimed at a Haskell
audience)?

~~~
eru
Yeah, that's probably not a wise decision on part of the author. The IO monad
is nifty but of minor importance in the grand scheme of things, and distracts
when making a mostly language independent point.

------
AYBABTME
The author seems to ignore the fact that passing lambdas like this merely
moves where the IF or SWITCH decision is made. I can agree that passing
functions instead of booleans is better and more general. But pretending that
IF/SWITCH are thus avoided is delusional.

For instance, at some point there will be a decision made whether the string
matching must be case sensitive or not. If the program can do both at runtime,
the IF will be, perhaps, in the main (or equiv.).

~~~
buffyoda
Indeed, that's the whole point of inversion of control: pulling the control
out of the caller and into the callee. That's the primary reasoning benefit of
functional programming.

~~~
TwoBit
I see no benefit to that. It makes more work for the caller. I want that
function to do something for me, with the least amount of unnecessary work on
my side. Just like a good boss who delegates.

~~~
buffyoda
That's why not everyone's a functional programmer. :)

------
astazangasta
Why don't we just treat this like writing?

Good writing has one clear imperative: communicate meaningfully the intent of
the author to the reader. Good code is no different; it is merely expressive
writing in a different language, with, perhaps, greater constraint on its
intent.

Some people make up rules like "don't use adverbs", or "don't split
infinitives", in an effort to write better. But this doesn't necessarily
produce good writing; sometimes an adverb is just what you need.

The same is true of code. These are useful things to think about, but "destroy
all ifs" is akin to "never use a conjunction".

~~~
swift
I get what you're saying, but that's definitely not what good writing means in
the context of, say, poetry, or literary fiction. Programming is best compared
to technical writing or cookbooks, I think.

I realize this is one of those irritating "actually," replies, but what can I
say, I'm sensitive about this topic. =)

------
MrManatee
If I understood correctly, the article suggests that as a general principle
you should replace your union types and case-by-case code with lambdas. I feel
almost the opposite.

Article: "In functional programming, the use of lambdas allows us to propagate
not merely a serialized version of our intentions, but our actual intentions!"

Counterpoint: The use of structured objects instead of black box lambdas
allows us to do more than just evaluate them. For example, Redux gets a lot of
power by separating JSON-like action objects from the reducer that carries out
the action.

But let's instead take the article's example of case-insensitive string
matching. One tricky case is that normalization can change the length of the
string: we might want the German "ß" to match "SS". Sure, the lambda approach
can handle that. But now suppose that we want a new function that gives the
location of the first match. It should support the same case-sensitivity
options (because why not?). But now there is no way to get the pre-
normalization location, because we encoded our normalization as a black-box
function. Case-by-case code would have handled this easily.
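The "ß" case is easy to demonstrate in Python (a sketch; `str.casefold`
stands in for the black-box normalizer):

```python
haystack = "straße"          # 6 characters
needle = "SS"

normalize = str.casefold     # our opaque (str -> str) lambda

folded = normalize(haystack)           # "strasse" -- now 7 characters
pos = folded.find(normalize(needle))   # 4, an index into the *folded* string

# The match exists, but mapping index 4 back to a position in "straße"
# requires knowing how normalization changed lengths -- information a
# black-box (str -> str) function cannot provide.
print(folded, pos)
```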

------
jwatte
The first problem is that the "match" function is considered in the first
place. It's too general. It should only be used in higher order constructs
where its flexibility is actually needed.

Second: The enum-based refactor is actually valuable and fine IMO. If you
need string functions, stop there.

Now, shipping control flow as a library is a cool feature of Haskell. But, if
those arguments are turned into functions, the match function itself isn't
needed! It just applies the first argument to arguments 3 and 4, then passes
them to the second argument.

    
        match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
        match case' sub needle haystack = sub (case' needle) (case' haystack)
    

Does that even need to be a function? Perhaps. But if so, it's typed in a and
b and functions thereof, and no longer a "string" function at all. And,
honestly, why are you writing that function?

Typing it out where you need it is typically less mental impact, because I
don't need to worry about the implementation of a fifth symbol named "match."

        sub (case' needle) (case' haystack)

------
galaxyLogic
Isn't this exactly the Smalltalk way? In ST, what look like if-statements
are actually messages passed to instances of Boolean, with lambdas (in
Smalltalk: BlockClosures) as arguments. The boolean then decides whether
it will evaluate the lambda or not.
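Exactly. A rough Python imitation of ifTrue:ifFalse: (the class and method
names are made up for illustration):

```python
class STTrue:
    def if_true_if_false(self, then_block, else_block):
        return then_block()   # True evaluates only the first block

class STFalse:
    def if_true_if_false(self, then_block, else_block):
        return else_block()   # False evaluates only the second block

# Branching becomes a message send; the "blocks" are lambdas.
print(STTrue().if_true_if_false(lambda: "pos", lambda: "neg"))   # pos
print(STFalse().if_true_if_false(lambda: "pos", lambda: "neg"))  # neg
```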

------
vittore
When I read things like "anti-if" I recall this brilliant illustration that I
saw several years ago -
[http://blog.crisp.se/henrikkniberg/images/ToolWrongVsWrongTo...](http://blog.crisp.se/henrikkniberg/images/ToolWrongVsWrongTool.png)

------
dozzie
The inversion of control flow from the called to the calling function is an
interesting way to describe (part of) the functional programming style. I
hadn't thought of it that way, even though I've been using it for quite some
time.

------
skybrian
General principle: for every possible refactoring, the opposite refactoring is
sometimes a good idea.

So, yes, replacing booleans with a callback is sometimes a good idea. But in
other situations, replacing a callback with a simple boolean might also be a
good idea.

Also, advice like this is often language-specific. In languages whose
functions support named parameters, boolean flags are easy to use and easy to
read. If you only have positional parameters, it's more error-prone, so you
might want to pass arguments using enums or inside a struct instead.

------
nialv7
Someone found a hammer, and now everything looks like thumbs

------
nn3
tl;dr: prefer callback hell over straightforward ifs and somehow that's
progress.

~~~
Scea91
Yeah, and the true fun starts when you try to debug it. Debugging streams in
Java is a nightmare compared to debugging the same logic written in a simple
foreach loop with a bunch of IFs.

------
js8
The idea that functional programming is a type of inversion of control reminds
me of a similar idea I had when comparing OOP and FP.

In OOP, you encapsulate data into objects and then pass those around. The data
themselves are invisible; they expose only an interface of methods that you
can apply to them. So methods receive data as a package on which they can call
methods.

In FP, in contrast, the data are naked. But instead of sending them out to
functions and getting them back, the reference frame is sort of changed; now
the data stay at the function, but what is passed around is the type of
processing (other functions) you want to apply to them.

For example, take sorting: in OOP, we encapsulate the sortable things into
objects that have a compare interface, and let the sort method act on those
objects. So by the time the sort method is called, the data are prepared to be
compared. In FP, the sort function takes the comparison function as an
argument, together with the data of the proper type; thus you can also look at
it as the generic sort function being passed back into the caller. In other
words, in FP, the data types _are_ the interfaces.

So it is somewhat dual, like a different reference frame in physics.
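The sort example, sketched in Python (the Price class is hypothetical, just
to show the two directions):

```python
from functools import cmp_to_key

# OOP style: the data is wrapped; sort talks to it through a method.
class Price:
    def __init__(self, amount):
        self.amount = amount

    def compare(self, other):
        # The object carries its own ordering.
        return self.amount - other.amount

items = [Price(3), Price(1), Price(2)]
items.sort(key=cmp_to_key(lambda a, b: a.compare(b)))

# FP style: the data stays naked; the ordering travels as a function.
amounts = [3, 1, 2]
amounts.sort(key=lambda amount: amount)

print([p.amount for p in items], amounts)  # [1, 2, 3] [1, 2, 3]
```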

The FP approach reminds me of Unix pipes, which are very composable. It stands
on the principle that the data are the interface surface (inputs and outputs
from small programs are well defined, or rather easy to understand), and these
naked data are operated on by different functions (Unix commands). (Also the
duality is kind of similar to MapReduce idea, to pass around functions on data
in the distributed system rather than data itself, which probably explains why
MapReduce is so amenable to FP rather than OOP.)

It also seems to me that utilizing this "inversion of control" one could
convert any OOP pattern into FP pattern - just instead of passing objects,
pass the function (method which takes the object as an argument) in the
opposite direction.

I am not 100% convinced that FP approach is superior to OOP, but there are two
reasons why it could be:

1. The "nakedness" of the data in the FP approach makes composition much
easier. In OOP, data are deliberately hidden from plain sight, which destroys
some opportunities.

2. In OOP, what often happens is that you have methods that do nothing other
than pass the data around (encapsulate them differently). In the FP approach,
this becomes very easy to spot, because the function passed in the other
direction would be the identity. So in FP, it's trivial to cut through those
layers.

~~~
delian66
[http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html](http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html)

------
asQuirreL
The article seems to advocate type synonyms like the following:

    
    
        type Case = String -> String
        -- ...
        type Announcer = String -> IO String
    

I would argue that these are actually much worse than not having type synonyms
at all.

(String -> String) functions could do anything to your query parameter and
text; the type is too coarse, and the inhabitants too opaque, for us to reason
about them easily. Naming the type suggests the problem is solved without
actually having solved it. It is like finding a hole in the ground and
covering it with leaves so you don't have to look at it anymore. You are
literally making a trap for the next person to come this way.

In an ideal world you would be able to use refinements to say that you want
any (f :: String -> String) such that `toUpper . f = toUpper` but without such
facilities, I think I may just settle for:

    
    
        newtype Case = CaseSensitive Bool
    

Sometimes, your type really does only have two inhabitants.

~~~
joeyh

        data Case = CaseSensitive | CaseInsensative
    

This is just as efficient as the newtype, and leads to clearer code when
matching on the value.

Also, sometimes types you thought only had two inhabitants get a third one
added later, which this facilitates.

~~~
asQuirreL
Clarity is a bit subjective, I think. The difference between:

    
    
        CaseSensitive
        CaseInsensitive
    

Is harder to spot (for me) than between:

    
    
        CaseSensitive True
        CaseSensitive False
    

This is because the bit that is the same is all on one side, and the bit that
is different is all on the other side. Case in point, your data definition
has a typo: `CaseInsensative`, where the `In` shifts it away from the bit it
should share with `CaseSensitive`. Every little bit helps.

What's more, while you may be right that at the surface, the two
representations are equally performant, what the newtype has that the data
declaration does not, is the Prelude's definitions of all the boolean
operators. If you wish to perform any more complicated logic with your data
declarations treating them as booleans, you must either cast them to booleans
(which comes at a runtime cost), or you must replicate the functionality of
the Prelude for your custom type (which comes at a development cost).

Your branching logic (which, let us suspend disbelief and say is "not so bad",
just for now) may require the combination of multiple such booleans, which in
your encoding scheme would each get a different type due to their semantics;
then we can't even viably define our custom boolean operators, so we are
forced to cast everything to booleans.

The point I'm making here is that outwardly, you want the type to reflect the
semantics of how its values are used, but inwardly, you want access to its
representation in a way that makes it easy to combine (or put another way,
depending on who's looking, the semantics of a value changes).

Also, there is nothing stopping you from changing code later to meet changing
needs. Using a newtype now doesn't preclude you from ever using a data
declaration in the future. Certainly, you will have to change the patterns and
constructors used in a couple of places, but that is a matter of minutes: Time
you have already spent weighing the future implications of this decision in
your mind right now, so this sensation of time saved is a fallacy.

------
Eliezer
I thought the argument was going to be "Conditionals are bad for running on
GPUs."

------
oliv__

        It’s no wonder that conditionals (and with them, booleans) are so widely despised!
    

They are?

~~~
drauh
Granted, I'm a mostly self-taught programmer, but I would have thought that if
something appears in formal logic,[0] it should have an analog in a
programming language.

Even standard algorithms like quicksort[1] use conditionals.

And, while I can see how massive switch statements suck, normal conditionals
are common in everyday life: "If they don't have a dark roast coffee, get me a
medium roast."

All of which is to say, I really don't understand what he's getting at. The
last example he gave seemed to make things even more complicated, and it
basically renamed "true" and "false" to more descriptive things
(forRealOptions, dryRunOptions), which seems to my untrained eye to boil down
to the moral equivalent of a C enum.

[0]
[https://en.wikipedia.org/wiki/Material_conditional](https://en.wikipedia.org/wiki/Material_conditional)

[1]
[https://en.wikipedia.org/wiki/Quicksort#Algorithm](https://en.wikipedia.org/wiki/Quicksort#Algorithm)

~~~
ta0967
> normal conditionals are common in everyday life: "If they don't have a dark
> roast coffee, get me a medium roast."

"They had dark roast so I got you nothing as requested."

IOW, this program is either incomplete or wrong. Cf. "Get me the darkest
roast they have." - ifless, concise, robust.

~~~
kowdermeister
So if it's an undrinkable mud you are still happy, code executed perfectly :)

------
yawaramin
view-source:[http://antiifcampaign.com/](http://antiifcampaign.com/)

Find in page: 'if('

2 hits.

So, yeah.

------
true_religion
This is the starter code:

    
    
        publish :: Bool -> IO ()
        publish isDryRun =
          if isDryRun
            then do
              _ <- unsafePreparePackage dryRunOptions
              putStrLn "Dry run completed, no errors."
            else do
              pkg <- unsafePreparePackage defaultPublishOptions
              putStrLn (A.encode pkg)
    
    

This would be nicer if you could define multiple function clauses with
pattern matching. In Elixir this would be:

    
    
        @spec publish(boolean) :: any
        def publish(true = _isDryRun) do
              _ = unsafePreparePackage dryRunOptions
              IO.puts "Dry run completed, no errors."
        end
    
        def publish(false = _isDryRun) do
              pkg = unsafePreparePackage defaultPublishOptions
              IO.puts (A.encode pkg)
        end
    
    

Pattern matching is pretty powerful, even going so far as to give a dynamic,
non-statically typed language like Elixir the ability to 'destroy all ifs'
too.

~~~
twblalock
Pattern matching is just as explicit as an if statement. In languages that
implement it for null values, it is just as explicit as typing "if (foo ==
null)" in an imperative language. You have to think about it, and type just as
much code to deal with it, as you would in a language without pattern
matching.

The only upside to pattern matching that I can see is that you are forced by
the compiler to match all possible inputs and check for nulls in some
languages, which can help you avoid null pointer exceptions and such. But you
haven't encapsulated anything, or saved yourself any thinking or typing, by
using pattern matching. You've basically turned every function into a switch
statement. It's vastly overrated.

~~~
timmytokyo
Another advantage of pattern matching is extensibility.

Suppose you wish to add a new branch case. Under the traditional if/else (or
switch) model, you'd need to modify the function containing the if statements.
With pattern matching, you simply introduce a new function; it decentralizes
the change and acts as a sort of simple, intuitive polymorphism.

------
whazor
Reducing if statements does shrink the possible state space; however, the
additional abstraction might increase it again.

------
cdevs
Bad programmers will mess up any syntax restrictions/guidelines/styles we put
on them. If you let them make any function where they can put launchNukes();
into doX(); then they will. Though running things as a service may be the
future, this launchNukes(); function is over here... safe from you.

------
VladKovac
Functional programmers love to emphasize how all the aspects of programming
that their pet language is uniquely good at dealing with also happen to be the
biggest problems in code maintenance. Is there any actual data on what the
biggest problem sources are?

~~~
swift
I'd love more data on this too, but I do think it's worth pointing out that
it's pretty uncontroversial that the more control flow paths you have, the
harder your code is to reason about. That's the basic assumption of the notion
of cyclomatic complexity, after all.

------
svanderbleek
I think pattern matching is fine, I don't see how it is still "boolean". The
additional techniques shown are interesting, but heavy abstractions that
should not be prescribed in general.

------
iopq
This was the idea I used when I did

[https://bitbucket.org/iopq/fizzbuzz-in-
rust/overview](https://bitbucket.org/iopq/fizzbuzz-in-rust/overview)

I still had to bottom out at [https://bitbucket.org/iopq/fizzbuzz-in-
rust/src/9e5fcaabbd5f...](https://bitbucket.org/iopq/fizzbuzz-in-
rust/src/9e5fcaabbd5f364839be929a974db0ac31d79c2e/src/lib.rs?at=master&fileviewer=file-
view-default#lib.rs-53:56)

------
smoothdeveloper
Paul Blasucci had a good talk on Active Patterns (an F# language feature):

[https://github.com/pblasucci/DeepDive_ActivePatterns](https://github.com/pblasucci/DeepDive_ActivePatterns)

This feature allows you to encapsulate conditional matching on arbitrary
input and the dispatching on it.

For those who know ML, it is making the concept of pattern matching extensible
to any construct.

------
dorfsmay
Since this is about FP, we have to have recursion:

[https://www.reddit.com/r/functionalprogramming/comments/4t91...](https://www.reddit.com/r/functionalprogramming/comments/4t913u/destroy_all_ifs_a_perspective_from_functional/)

------
mapleoin
This is the best bit I think:

> The problem is fundamentally a protocol problem: booleans (and other types)
> are often used to encode program semantics.

> Therefore, in a sense, a boolean is a serialization protocol for
> communicating intention from the caller site to the callee site.

------
sebastianconcpt
Fewer ifs is better, I agree on that. The lambdas technique is interesting
because they "encapsulate" a specific case. In OOP this is achieved by using
polymorphism on the objects instantiated for the right case. Right?
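Right. A minimal Python sketch of that OOP shape: the branch happens once,
when the case object is constructed, and dispatch is polymorphic after that
(class names are mine).

```python
class CaseSensitive:
    def normalize(self, s):
        return s              # leave the string as-is

class CaseInsensitive:
    def normalize(self, s):
        return s.lower()      # fold case before comparing

def match(case, needle, haystack):
    # No if here: the behavior was chosen when the case object
    # was instantiated.
    return case.normalize(needle) in case.normalize(haystack)

print(match(CaseInsensitive(), "Ab", "xaBy"))  # True
print(match(CaseSensitive(), "Ab", "xaBy"))    # False
```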

------
based2
If 'if' could support a single 'expression' and multiple 'case's like
'switch/match', it would make the transition easier.

------
rosalinekarr
only a sith speaks in absolutes

------
basicplus2
sounds like what's really being said is..

It is recommended that programmers use abstractions whenever suitable, in
order to avoid duplication and its associated errors

------
dingleberry
i can't think of a use of 'if' in a math function; however, an if is
implicitly used in piecewise definitions, say f(x)=x for 0<x<1 and f(x)=x^2
for 1<x<3.

i see a lot of loops though: summation is a loop, so a double integral is a
loop within a loop. i can't think of a code analogue for the derivative.

fta, i take it that an if in a function body makes for ugly code.

~~~
gmfawcett
Lots of math functions are defined with 'if' -- the absolute value, the
Heaviside step function, etc.

------
rimantas
Sandi Metz talks about ifs a bit here:
[https://www.youtube.com/watch?v=OMPfEXIlTVE](https://www.youtube.com/watch?v=OMPfEXIlTVE)

------
AWildDHHAppears
You can go a long way without Ifs in a pattern-matching language like Prolog
or Erlang, too.

------
sqldba
Ummm. Many common day to day languages don't use lambdas. Also I have no idea
what they are. So - yeah I don't think you can just replace if so easily.

~~~
beisner
Lambdas are actually supported in most popular languages: C++, Java, C#, Go,
JavaScript, even C. Sometimes they're called function literals or anonymous
functions, but basically they involve creating a function without a name that
can be passed around and executed. In some languages (Haskell, OCaml, etc) the
anonymous functions can be extremely generic, whereas they are sometimes a bit
less flexible in other languages. If you want a quick intro you can find one
here: [http://stackoverflow.com/questions/16501/what-is-a-lambda-
fu...](http://stackoverflow.com/questions/16501/what-is-a-lambda-function)

