
The general value of typed functional programming lies in leaving no edge cases - jxub
https://np.reddit.com/r/scala/comments/en89or/when_should_i_use_cats_scalaz_instead_of_standard/fdxer1k/
======
agentultra
I came to a similar insight by the corollary: a sound type system makes the
edge cases obvious. When we make the type of a partial function total by
introducing an optional value as a result, there's an opportunity to develop
an intuition that this is going to make my code harder to reason about. Assuming
I fix all the warnings about incomplete pattern matches I know that users of
this code will have to handle all of the edge cases I'm introducing. Therefore
I might take another approach and introduce a type as my pre-condition: the
type of non-empty lists. You can't even call my function unless I can be
assured there's at least one element in the input. I haven't even had to look
at the implementation to reason about my code yet.

    
    
        head :: [a] -> a
        -- versus
        saferHead :: [a] -> Maybe a
        -- versus
        safeHead :: NonEmpty a -> a
    
    

The first function is _partial_ because we'll get an exception if we give it
the empty list.

The second function is total but annoying to use because we're telling all the
code that uses the result to check for two cases (it's an error not to).

The last one forces the responsibility on the _caller_ to provide a non-empty
input.
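
A minimal sketch of the third signature in GHC Haskell, using `Data.List.NonEmpty` from `base` (the `demo` wrapper and its fallback value are my own illustration):

```haskell
import Data.List.NonEmpty (NonEmpty (..), nonEmpty)

-- Total by construction: the type guarantees at least one element.
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- The caller must discharge the emptiness check before safeHead is callable.
demo :: [Int] -> Int
demo xs = case nonEmpty xs of
  Nothing -> 0          -- the edge case is handled here, visibly
  Just ne -> safeHead ne

main :: IO ()
main = print (demo [], demo [5, 6, 7])
```

Note that `demo` is where the edge case surfaces: the type checker will not let you pass a plain list to `safeHead`.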

That was probably one of the more frustrating aspects of learning to program
in Haskell as someone who had been programming for more than fifteen years
when I started. It revealed to me in stunning detail all of the edge cases
that almost every other language I've used actively hides from me.

It doesn't absolve you of having to think about edge cases: even Haskell
throws run-time exceptions. However it does give you tools to think about many
of those edge cases up-front and in a direct way.

 _Update_ Added a trivial example to demonstrate "partial," etc.

~~~
lgas
There's also another option in the middle which is a version that takes a
value to return when the list is empty

    
    
        headOr :: a -> [a] -> a
    

which is similar to the Maybe solution but forces the caller to discharge the
Maybe immediately rather than letting them potentially clutter up the rest of
their code by proliferating the Maybe.
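
A possible implementation of that middle option (my sketch, not necessarily how any particular library defines it):

```haskell
-- Total: the default value discharges the empty case at the call site.
headOr :: a -> [a] -> a
headOr def []    = def
headOr _   (x:_) = x

main :: IO ()
main = print (headOr 0 ([] :: [Int]), headOr 0 [1, 2, 3])
```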

------
nicoburns
Forget the performance or the fancy memory management stuff, this is one of
the best things about Rust: it makes you deal with edge cases by default
(while allowing you to write code in a largely imperative or functional-lite
style).

It's why I think it competes for market share with languages like Java and
C#. A lot of people who use those languages care about correctness, and Rust
is a big step up in this regard.

~~~
jerf
This is also one of the problems with programming with exceptions. You lean on
the fact that an exception will be thrown and will just propagate up until it
finds something, so the normal mindset of an exception-based programmer is to
program for the happy case and only vaguely think about what happens if it
goes wrong. I know, I've written a lot of code in this style for many years
myself.

In some cases, that's not a problem or it's even the best thing to do.
However, you end up writing this way _all the time_ in an exception-based
language, which is where the problem arises.

Non-exception based languages tend to throw the problems in your face and make
them something you can't ignore. Which is not fun. But it is often what you
need.

~~~
latch
There's also the OTP model, where you have exceptions as well as a robust
"something" up there. You don't have to deal with the minutiae, and the
pattern (isolated processes plus restarts) lets you deal with known and
unknown errors alike.

~~~
hopia
Right, but OTP doesn't make you deal with the exceptions upfront; it only
makes you architect your app in such a way that they don't crash or infect
other parts of the system.

------
hristov
I am not sure this is the only value, but it is generally true. And this is
one of the great advantages of Haskell.

By the way, now that I know some haskell, I see this edge case sloppiness all
the time and it is really annoying. For example, here is something that
annoyed me just recently. You go to yahoo finance and there they will show you
the revenues of a company as well as the revenue growth from past year. But
sometimes this revenue growth number is not available. For example, the
company may not have existed past year, or perhaps it was not public past year
and thus it did not publicly disclose its revenues. When they cannot compute
revenue growth for these reasons, yahoo finance will helpfully show revenue
growth as "N/A", i.e., not available.

That seems all nice and logical, but I noticed that yahoo finance tended to
show that revenue growth was not available for many established public
companies for which they should have the data. I looked at it more closely and
noticed with shock that if the revenue growth of a company was about 0%, yahoo
finance would still show it as "N/A"! So some programmer just decided to use 0
as not available and just erased a lot of useful information from their
system. Now when I see a company that has N/A as revenue growth on yahoo
finance I have to go and check whether the revenue growth was zero, or it was
truly unavailable. As I said -- really annoying.

This is of course a textbook case for Haskell's Maybe concept. You can have a
type that is a Maybe Int, which means it can have a value or it can be
Nothing. Thus, you encompass the edge case of not having the value available,
without using an embarrassing hack that deletes real data. But of course
Haskell gives you more power than just using Maybe. You can create a type that
encompasses all kinds of edge cases and information about them. For example,
you can embed a reason why the revenue growth data was not available, if that
is the case.
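
In Haskell that might look like the sketch below; every name here is hypothetical, invented for illustration, and has nothing to do with Yahoo's actual system:

```haskell
-- The reason the figure is missing is part of the type, so a genuine
-- 0% growth and "no data" can never be confused.
data Reason = NotYetPublic | DidNotExist
  deriving Show

data RevenueGrowth = Growth Double | NotAvailable Reason
  deriving Show

render :: RevenueGrowth -> String
render (Growth pct)     = show pct ++ "%"
render (NotAvailable r) = "N/A (" ++ show r ++ ")"

main :: IO ()
main = mapM_ (putStrLn . render) [Growth 0.0, NotAvailable NotYetPublic]
```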

~~~
waheoo
I've noticed other languages with this feature; they call them optional types:
you can pass in a null or a typed value.

/s slightly, maybe more snarky, but seriously, optional types are way
overhyped. Null objects are far more useful than optionals. Optionals tend to
leak all over the show.

~~~
Smaug123
Nulls leak exactly as badly as optionals, except it's additionally impossible
to tell where nulls have leaked to.

~~~
a1369209993
Actually, nulls leak worse than optionals, _because_ it's impossible to tell
where nulls have leaked to (so you end up forgetting to filter out a null at
an interface where you _would_ have decided to filter out a Maybe).

------
ken
They say that, and then turn around and show an example using IEEE-754
Doubles. You've got two infinities and two "not a number" values. What your
type system thinks is a number isn't even necessarily a number. Maybe.

Better be sure to call the "isNaN()" and "isInfinite()" methods all over the
place, because those are the worst edge cases of all and your language
provides no help in detecting or avoiding them. It's like having 3 or 4 other
None values, where they don't even use the same system as your standard Option
type.

If I had a nickel for every time I saw "NaN" appear on a webpage or a web
browser console log ... We throw up our hands and say, yeah, well, JS is
terrible -- but no _static functional_ language I know of is any better.

~~~
JoelMcCracken
Haskell is so much better, and you would learn that after about five minutes
of learning about its number types.

~~~
ridiculous_fish
Haskell's intro page starts with the infamous minimalist quicksort, which
mishandles NaN!

[https://wiki.haskell.org/Introduction#Quicksort_in_Haskell](https://wiki.haskell.org/Introduction#Quicksort_in_Haskell)
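
The wiki's two-liner (reproduced from memory) partitions on `<` and `>=`; since every ordered comparison involving NaN is False under IEEE-754, a NaN element falls into neither partition and is silently dropped:

```haskell
-- The minimalist quicksort from the Haskell wiki, modulo naming.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  -- NaN is neither < 1 nor >= 1, so it vanishes from the result:
  print (quicksort [1, nan, 2])
```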

~~~
saagarjha
How _should_ it handle NaN? It's not clear where an incomparable value should
go when sorting.

~~~
ken
Why is NaN even a concept in modern languages? We don’t have a special “not a
string” that all string functions return when they have no useful value to
return.

~~~
saagarjha
Because it's part of IEEE 754.

~~~
ken
So? There’s lots of old standards that don’t fit well with modern programming
languages, so we abandoned them, or only use the parts that still make sense.

~~~
saagarjha
IEEE 754 is one of the standards like two's complement arithmetic which are
widely used and have a number of clear advantages, so we continue to use them.

------
saityi
Paul Snively (the author of the linked reddit post) also gave a great talk
entitled 'Typed FP on the Job - Why Bother?' at LambdaConf a couple years
back, that spoke strongly to me:

[https://www.youtube.com/watch?v=8_HsFrXhZlA](https://www.youtube.com/watch?v=8_HsFrXhZlA)

> _And now you can do the most important thing you can do with any piece of
> code... stop thinking about it! Go home! Pet the cat! Watch House with the
> wife! ... I love Friday deployments ... you know why? Because I know what my
> code is going to do before it runs!_

~~~
mac01021
Interesting talk, but does that retry logic really provide me with more
confidence about its correctness than equivalent impure code?

~~~
psnively
Thanks, and good question.

I deliberately didn't go into the specifics about the various typeclasses at
play and their laws, and how those are tested. Part of that is just for
reasons of time, but the more salient reason is that I wanted to show how
those of us who do purely functional programming do it in practice. So if you
watched the presentation, you saw that most of my thinking was about "OK, how
do I elaborate from the most trivial transformation (do nothing at all) to the
one I really want?" And that proceeds compositionally: I transform this value
to that value. OK, does that value have the right type? No? What do I need to
do to ensure that it does? And the transformation steps have some important
properties, like their scope being entirely local. I really tried to emphasize
this at the end with `attemptRepeatedly`: my description of each line of the
whopping three lines is exhaustive. When I say there's no point in writing a
test for it, I mean that literally. There certainly are typeclasses at play,
and I briefly talk about the `Catchable` typeclass, and show the ScalaDoc for
it, which documents an important law: the relationship between catching
ambient exceptions and the algebra provided by `Catchable`. I rely on the
other typeclasses and other laws in a similar fashion.

The other thing I think is pretty important is the part where I say "Let's
look at cases," because the point there is that I can reason about the code by
reasoning about the shape of the data it's manipulating. So if a `step` is an
`attempt` of `p` that, if successful, `kill`s the retry `schedule` or, if
unsuccessful, logs the exception, I'm still dealing with a `Process` of one
element (by assumption that `p` will emit one element). Then `retries` will be
a `Process` of 0 to infinity elements, because we put no constraints on
`schedule`. So `(step ++ retries)` will be one or more elements, and because
`step` `kill`s `schedule` on success, because `retries` is derived from
`schedule`, `retries` is also `kill`ed. So `(step ++ retries)` is a `Process`
that will emit one or more elements, with the last element being the first
successful one, or the last failed element if none of the `retries` succeeds,
so we take `last`. Then we just `fold` the failure or value back into a single
effect, and we're done.

As I discuss in the presentation, there certainly are questions. What happens
if the `schedule` is empty? What happens if the `schedule` is infinite? An
attendee in Q & A asked a really good question: was I sure `retries` would
wait before the first retry, or was the semantics "try, then wait?" (It really
is the former, but that wasn't clear from my recorded REPL session.)

Of course, this isn't very impressive for a three-line example, although I
think it's pretty striking that it only takes three lines, each of which can
be completely reasoned about independently, to achieve a pretty significant
operational goal. The point, though, is we can build entire systems this way,
and in fact `attemptRepeatedly` is part of a distributed monitoring system I
worked on at Intel Media/Verizon Labs, which is written entirely in this way
apart from the monitoring types themselves, which present an imperative API
for familiarity's sake (and which we later came to regret).

I hope this helps!

------
jackfoxy
Most languages rely on coding patterns for code safety. This is merely a _best
practice_: it usually involves writing multiple lines of code, and it is only
a convention.

Functional languages have many of these patterns already encoded into common
functions, like map and fold. As functions their correctness can be considered
mathematically proven. Would you rather rely on _convention_ or the compiler?

Consider adding, on top of functional safety/correctness, the type safety of a
functional language implementing a strong type system, like F#. Sure, F# is
only functional-first, not purely functional, and probably all type systems
have some sort of implementation edge cases. For most practical purposes you
can consider the added level of type safety imposed by the compiler also to be
mathematically provable.

~~~
threatofrain
What kind of mathematical correctness are you getting by using FP? The same
kinds that the Rust compiler offers?

~~~
Filligree
Different, but related. If you call 'map' then you know you're not mutating
the values, and you get an output the same size as the input without dropping
values, and...
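
In Haskell this guarantee is directly visible in the type: `map :: (a -> b) -> [a] -> [b]` can transform elements but can never reorder, drop, or mutate them. A trivial demonstration of mine:

```haskell
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  -- Same spine: one output per input, in the same order.
  print (map (* 2) xs)
  print (length (map (* 2) xs) == length xs)
```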

It means you operate on a higher level of abstraction, and that inherently
makes reading code faster. Not per line of code, but certainly per functional
unit.

Idiomatic Rust tends to use a lot of functional code, and provides all the
tools to make that safe. Its map function _could_ mutate the input -- it would
be unidiomatic, but certainly possible -- except that, if you don't pass a
mutable reference, then you're safe to assume it won't.

------
yachtman
I worked with Paul at a previous job and really enjoyed it. Such a great mind
and passion for functional programming. Would love to work with him again
(maybe in a different context).

~~~
psnively
That's unbelievably kind of you. Hopefully our paths will cross again!

------
melling
Edge case removed:

    
    
       for i=0 to 10 {print(a[i])} vs for x in a {print(x)}
    

With functional programming it gets even better:

a.filter {}.map{print}

Pseudo code, of course. I list several “functional“ examples in Swift here:

[https://github.com/melling/SwiftCookBook/blob/master/functio...](https://github.com/melling/SwiftCookBook/blob/master/functional.md)

Functional programming also operates at a higher level: map, reduce, filter,
flatten, flatMap, drop, take, zip, ...

Learn the concepts in one language and they can be used in another functional
language.

~~~
Mirioron
While it does remove edge cases, the code also becomes more opaque. At least
to me, functional style programming seems much harder to read and then reason
about.

~~~
lmkg
It's subjective. My experience is exactly the opposite. A pipeline of sequence
operations is something I can skim at a glance, but a for-loop is a pile of
code that I need to stare at before I understand what it represents.

This does not invalidate your experience. It's common, and with good reason.
This sort of dissimilar experience with the same code is why coding standards
need to be viewed as social norms for the team using them, and not just
technical matters.

~~~
eru
The problem with loops is that they are too powerful. They can do almost
anything.

The functions map and filter are nice and restricted. If you want more power,
you use eg a fold. And if you need even more power, you write a new recursive
function.

So a quick look at the code will tell you what to expect.
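
A small sketch of that ladder (my example): each tool admits strictly more behaviours than the last, so choosing the weakest one that works tells the reader the most.

```haskell
import Data.Char (toUpper)

-- map: exactly one output per input; length and order are fixed.
shout :: String -> String
shout = map toUpper

-- filter: may drop elements, but never transforms or invents them.
evens :: [Int] -> [Int]
evens = filter even

-- fold: nearly as powerful as a loop, and its name warns you of that.
total :: [Int] -> Int
total = foldr (+) 0

main :: IO ()
main = do
  putStrLn (shout "edge")
  print (evens [1 .. 6])
  print (total [1 .. 6])
```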

~~~
edflsafoiewq
Personally I've found that although this argument sounds good in theory, in
practice I don't find loops harder to read than chains of combinators.

~~~
eru
Which languages are you reading them in?

For me the argument holds in practice as well. I'm mostly reading my
combinators in Haskell, and the loops in Python or some curly brace language.

If you use a general-purpose combinator like mapAccumL, which has almost as
much power as a loop, you don't get that much extra readability compared to
the loop.

It's still a bit better, because that combinator still has to process one
element of your list for each run through its "body"; and its very name warns
you that it's not just a filter or map, so there's less chance of confusing it
for something simpler.
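
For reference, `mapAccumL` (exported from `Data.Traversable` in `base`) threads an accumulator through a map; a running-totals sketch of mine:

```haskell
import Data.Traversable (mapAccumL)

-- The accumulator is the sum so far; the output list records each
-- intermediate total alongside the final one.
runningTotals :: [Int] -> (Int, [Int])
runningTotals = mapAccumL (\acc x -> (acc + x, acc + x)) 0

main :: IO ()
main = print (runningTotals [1, 2, 3, 4])
```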

And, of course, not all loops are created equal. When applicable, for-each
loops are much cleaner than classic C-style for-loops for example.

------
phibz
For me the value of typed functional programming has been its ability to
encode business logic into the type system and thus validate some of my logic
at compile time.

~~~
Naac
What's specifically functional about this advantage? Can't you encode the
business logic into types in non-functional languages as well?

~~~
hellofunk
The word “type“ has a fairly different meaning in functional programming
compared to object oriented programming. At a very shallow level, they have
some similarities. But the way they are used is vastly different between the
two paradigms.

------
fctorial
OTOH it's nice to be able to do:

    
    
        try {
            // 500 lines
        } catch (e) {
            return 4;
        }

~~~
Twisol
Often, even functional languages have something similar, whether it's call/cc
or something more cutting-edge like algebraic effects. All of these
essentially augment the execution environment in various interesting ways, and
are _enhanced_ by other features of functional languages like immutability.

(More pithily: try/catch is not inherently non-functional.)

------
thaw13579
I wonder, aren't there problematic edge cases that depend on the data? For
example, if you are computing a plane from a given set of points, there may be
an edge case where the points are collinear. Assuming you always need to
produce a valid plane, this would require some numerical analysis and
regularization. Or is the answer to define some new type system that captures
the non-collinearity? If so, it seems like the costs of enforcing that could
overwhelm things...

~~~
tolmasky
The answer doesn’t necessarily have to be statically known, just ensure that
the edge case _is_ handled. computePlane() can return Maybe<Plane>, where
Nothing is returned in the colinear case.

This seems not that different from throwing an exception, except the caller
can’t accidentally forget to deal with the colinear case, there must be code
to handle the Nothing case.

But perhaps the question you’re asking is “but how can the writer of toPlane()
know THEY did the right thing”. There’s of course no solution to logic errors
(function subtract(a,b) { return a + b } will get by most type systems, short
of having the type system re-encode the function itself, at which point it’s
just correctness through redundancy — unless both the type AND function are
wrong!).

However, you COULD protect against future people breaking your implementation
by doing something analogous to weak_ptr's implementation and flipping the
Maybe from the output to the input. So, for example, you could make a type
NonColinearSet and have the “constructor” take a PointSet but return a
Maybe<NonColinearSet>: asNonColinearSet takes a PointSet and returns Nothing
if the PointSet is colinear, or NonColinearSet if it isn't. Now your
computePlane() function can take a NonColinearSet and return Plane (not
Maybe<Plane>), since it “knows” the input must be valid, so it can ignore edge
cases internally, at the cost of the _caller_ having to deal with
asNonColinearSet returning Nothing before calling it, since you won't be
allowed to transparently pass the result of asNonColinearSet(pointSet) to
computePlane since the types won't match (Maybe<NonColinearSet> vs.
NonColinearSet).

    
    
        maybeNonColinear = asNonColinearSet(pointSet);
    
        case maybeNonColinear of
            Nothing: alertUserBadDataOrWhatever();
            Just<NonColinearSet>: return toPlane(maybeNonColinear.value);
    

Notice in both cases, ultimately the caller must do something in the edge
case, which is what you want.
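
In Haskell this pattern is a smart constructor behind an abstract type. Here is a sketch of mine with a stubbed-out collinearity test (a real check needs actual geometry and epsilon care):

```haskell
newtype Point = Point (Double, Double, Double)

-- Exporting NonColinearSet without its constructor means the smart
-- constructor below is the only way to obtain one.
newtype NonColinearSet = NonColinearSet [Point]

-- Stub: a real test would check whether cross products vanish.
isColinear :: [Point] -> Bool
isColinear ps = length ps < 3

asNonColinearSet :: [Point] -> Maybe NonColinearSet
asNonColinearSet ps
  | isColinear ps = Nothing
  | otherwise     = Just (NonColinearSet ps)

-- May assume a valid input: the caller already discharged the edge case.
computePlane :: NonColinearSet -> String
computePlane _ = "a plane"

main :: IO ()
main =
  case asNonColinearSet [Point (0,0,0), Point (1,0,0), Point (0,1,0)] of
    Nothing  -> putStrLn "degenerate input"
    Just set -> putStrLn (computePlane set)
```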

------
adgasf
Conversely, it's also one of the biggest pain-points when throwing together a
quick prototype.

~~~
worldsayshi
This is where TypeScript shines: being able to move fluidly between the
unchecked world of JavaScript and the strong-ish type safety of TypeScript
without too much work.

Then again, it's not always without hiccups.

~~~
jefflombardjr
That is where I think Flow really shines. There seem to be fewer hiccups, but
that is just my anecdotal experience.

------
proc0
Isn't that also what makes it hard to use _without_ relying on the
mathematical laws? While the math part is hard, it is what provides the strong
guarantees in the form of laws.

------
cryptica
The goal of software development should be about reducing the number of edge
cases to the absolute minimum. If a tool makes it easier to keep track of all
edge cases, it is bound to encourage developers to write code which contains
more edge cases. But fundamentally, it still reduces code quality. Code with
more edge cases is less flexible and not good at handling changing
requirements.

I've seen this over and over in my career; tools can make people lazy in
certain areas, it can put them on auto-pilot. Often, this can be a bad thing.

I like coding with dynamic languages because they get out of my way and shift
the full responsibility for correctness on me. This creates a certain mental
tension which helps me to perform. Keeps me alert and mindful.

I particularly try to avoid languages that make me wait for code to compile
(which is more common with statically typed languages); this is for the same
reason I don't like it when someone distracts me while I'm "in the zone"
while coding.

~~~
finolex1
Isn't this akin to saying I don't like wearing seatbelts because it makes me
more complacent in my driving?

~~~
cryptica
It could actually be the case that seat-belts make drivers more complacent but
the key difference in the case of coding is that you won't die if you make a
typo in your code. If designed correctly, your tests will catch it anyway.

~~~
Verdex
Therac-25 [1] disagrees with your statement. Although to be fair, the
programmer didn't die. The patients did.

[1] -
[https://en.wikipedia.org/wiki/Therac-25](https://en.wikipedia.org/wiki/Therac-25)

