When Does Point-Free Notation Make Code More Readable? (atomicobject.com)
48 points by philk10 on Sept 29, 2017 | 73 comments



Okay, I get the concept. Where is the argument for it?

>Sometimes, especially in abstract situations involving higher-order functions, providing names for tangential arguments can cloud the mathematical concepts underlying what you’re doing.

Sometimes? When?

>point-free notation serves as an example of how subtle changes to the way you think about or define your code can have big impacts on readability.

In the end, we're defining `atLeastTwo` and using it identically, so it would be much more compelling if there was an example in which things were, in fact, more readable. This example is just syntactic. It's like arguing that `const atLeastTwo = (x) => x >= 2` is better than `function atLeastTwo(x) { ... }`

> It’s your job as a developer to be cognizant of these tradeoffs.

It's just that this piece isn't really helping me get there.

I think the author is suggesting that we define `atLeastTwo` in terms of an `atLeast` primitive, i.e., currying and partial application (see the sketch below). It's clear that this is going to make composition easier, but that benefit comes from currying and partial application, which point-free style merely relies on. Now, I'd like to see how point-free, as a paradigm, yields better results in terms of readability or error prevention.
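
To make that concrete, here's roughly what I mean, sketched in Haskell rather than the article's JavaScript (`atLeast` is a name I'm making up, not something from the article):

    -- with currying, the "partial application" the article reaches for
    -- is just ordinary application
    atLeast :: Int -> Int -> Int
    atLeast = max

    atLeastTwo :: Int -> Int
    atLeastTwo = atLeast 2

Even here, though, the readability win seems to come from the `atLeast` abstraction, not from dropping the argument.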


Just taking the first examples from the Haskell wiki:

    sum = foldr (+) 0
    foldr (f . g) e
rather than

    sum' xs = foldr (+) 0 xs
    foldr f' e
      where f' a b = f (g a) b
The idea is that you define loads of tiny functions at higher order, like the sum example, and these are very simple and easy to maintain because every function is a one-liner. I don't know how to respond to the complaint that the example is just syntactic, because it's fundamentally a syntactic construct (at least unless we get to the point of e.g. summing a list of functions to get their composition): the pointfree style is completely equivalent to the pointed style; it's just argued to be a less noisy syntax, which therefore increases readability and maintainability.
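
To give one concrete instantiation of the `foldr (f . g) e` shape (my own example, not from the wiki):

    -- (f . g) used as a binary reducer: ((+) . (^ 2)) a b == a ^ 2 + b
    sumSquares :: [Int] -> Int
    sumSquares = foldr ((+) . (^ 2)) 0

    -- sumSquares [1, 2, 3] == 14

Every piece is a one-liner, and the composition is the whole definition.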


Concatenative languages come to mind. You basically don't have to think about partial application ever again, because you get it "for free" (at the cost of reverse Polish notation). Forth, Joy, Factor and Kitten are all examples of languages based on this paradigm.

[0] https://en.wikipedia.org/wiki/Forth_(programming_language)

[1] https://web.archive.org/web/20111007025556/http://www.latrob...

[2] http://factorcode.org/

[3] http://kittenlang.org/


Indeed, concatenative languages are a perfect example of point-free notation. I don't see the point (pun!) of doing it in JS or any mainstream language though.

They are also a joy to design/implement* and doing it really changes how you think about programming.

* http://hashmal.github.io/shirka/ is my personal attempt at designing such a language. Warning: very old, dirty, buggy code.


Tasteful, minimal pointfree-ization can sometimes help one think at the "level" of functions instead of thinking about how those functions transform arguments.

If you're familiar with the Maybe type (called Option in Rust, and Optional in Java and Swift), this example might help. For concreteness, this is how it's defined in Haskell:

   data Maybe a = Nothing | Just a
Consider this function, called `maybe` in Haskell:

   maybe :: t -> (a -> t) -> Maybe a -> t
   maybe defaultVal _ Nothing  = defaultVal
   maybe _          f (Just v) = f v
This unwraps a Maybe value, transforming the value inside if it exists and taking a default to return if it doesn't. So, for example

   > maybe 4.0 sqrt (Just 9.0)
   3.0

   > maybe 4.0 sqrt Nothing
   4.0
Now consider this function:

   withMinDefault :: Maybe Int -> Int
   withMinDefault =  maybe intMin (* 2)
Here (* 2) is a partially-applied function that doubles a number: ((* 2) 3) = 6. The advantage of defining `withMinDefault` like this instead of the equivalent "pointful" version

   withMinDefault' :: Maybe Int -> Int
   withMinDefault' n =  maybe intMin (* 2) n
is minimal at best, but the idea is that we're defining this function in terms of `maybe` directly:

   "withMinDefault multiplies an optional number by 2, 
   defaulting to INT_MIN if it's not present."
You can combine this with function composition to start working by almost exclusively manipulating functions, and (when done tastefully, as I stated at the beginning) it can greatly improve the clarity of the code for an experienced programmer who can read streams of functions with facility.
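
A tiny sketch of that (my own, not from anything upthread): once `maybe` handles the unwrapping, further steps are composed on the outside and the wrapped value never gets a name:

    -- double an optional Int, defaulting to 0
    doubleOrZero :: Maybe Int -> Int
    doubleOrZero = maybe 0 (* 2)

    -- compose with another function; the Maybe value is never named
    showDoubled :: Maybe Int -> String
    showDoubled = show . doubleOrZero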


Aside from this, there's a higher performance and maintenance cost. Performance, since you've turned a single function call into two. Maintenance, because now you need to test this `atLeast` primitive when odds are it's YAGNI.


Performance shouldn't be an issue if you have a decent optimizing compiler and even in JavaScript I would think a decent runtime should JIT away the overhead.

Tangent but this is one reason C++ works nicely with a more functional style with lambdas - the compilers are pretty good about optimizing away any overhead resulting from function composition etc.


Go argue with Haskell benchmarks. Functional programming is slow.


What?

Haskell's competitive with everything but hand-tuned C code, consistently.

In a world where Python and Javascript are credible languages to orchestrate numeric computing, it seems ludicrous to imply that Haskell, which has superior code generation to these options in nearly every particular, is somehow "too slow."


I won't comment on Haskell's speed here, but function composition can't possibly be making Haskell any slower.

The reason is simple: the compiler inlines the composition operator in the vast majority of (probably all?) cases, since the Haskell standard library source code marks it with an INLINE pragma:

   -- | Function composition.
   {-# INLINE (.) #-}
   (.)    :: (b -> c) -> (a -> b) -> a -> c
   (.) f g = \x -> f (g x)
http://hackage.haskell.org/package/base-4.10.0.0/docs/src/GH....


Huh? Haskell does very well in the shootout, better than JavaScript and often comparable with C.


Haskell is not slow because of point free style. C++ generally won't suffer any performance penalty for point free style in an optimized build. If you think you have specific examples otherwise I'm happy to discuss them.


P.S.: I love Haskell too.


Let me provide some other examples then. Often the most relatable examples in this field involve manipulating data structures, or hiding abstractions.

Sometimes you just want to reuse an existing function with 2 arguments. You might name it because you want to give it an explicit type (in many cases the exact same code can do different things depending on the type it expects). People's first exposure to point-free style is often "flip", a higher-order function that takes a dyadic function and swaps the order of its arguments.

As an example, the "cons" function from data.text takes a character and a text object and prepends the character into the text object. It has type: Char -> Text -> Text

But what if we'd like to do this repeatedly? Ideally we'd reuse a function like foldl (reduce in javascript parlance). What we'd like is to have a function like conss that has type Text -> [Char] -> Text. Sadly, foldl expects its reducer function to take something more like Text -> Char -> Text, which is really close to what we already have with cons.

A point free solution feels pretty natural here:

    conss :: Text -> [Char] -> Text
    conss = foldl (flip cons)
We might say, "I actually want to prepend the same prefix to hundreds of strings." A simple point free way to think about reusing our old code:

    prepend :: [Char] -> Text -> Text
    prepend = flip conss
Often times we use point-free style when we're building other structures. Some time ago I wrote a text adventure game engine to explain Haskell to some kids, and it has this line (https://github.com/KirinDave/Rag/blob/master/src/Rag/Parser....):

    buildMaze :: [(Int, Room)] -> MazeDefinition
    buildMaze = foldr (uncurry Map.insert) Map.empty
Again, a point free version of this is much more readable and direct than if I were to pattern match into the arguments. That would require a binding both on the outside for the array and in the inner reducer function, adding tons of noise:

    buildMaze :: [(Int, Room)] -> MazeDefinition
    buildMaze definitions = foldr reducer Map.empty definitions
                            where reducer = \(i, r) m -> Map.insert i r m
Sometimes, point-free style can also help with very abstract code. A common way to define the functor instance for the Free monad, for example, is reasonable to follow in point free notation but pretty brutal without it:

    instance Functor f => Monad (Free f) where
      return = pure
      Pure a >>= f = f a
      Free x >>= f = Free ((>>= f) <$> x)
 
Trust me, you don't want to see the lambda-ized version of that inner bind. It's really, really bad. It also loses the clarity this one offers about the fact that at the end of the day, you make a big chain of functions and then fmap it (which actually points to a massive performance flaw with this implementation of the free monad!).

I agree that the author didn't really make their point here. What's more, using the language they did will always make point free notation look awkward. Point free notation thrives in environments where partial application is automatic.


Just a quick correction, I said it was the functor instance of the free monad. That's an editing error, I changed the example I wanted to use for something better. It's the monad instance for Free.


"JavaScript doesn’t include this out of the box, but it’s an interesting exercise to write your own."

I may be misunderstanding point-free notation (I had never heard of it before), but I think `bind` (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...) lets a programmer provide some of the arguments to a function and returns a new function that requires only the remaining arguments.


It's also easy to implement as a one liner

   var atLeastTwo = function(x){ return function(){ return Math.max(2, x); }; };
Even simpler with typescript

   var atLeastTwo = (x) => (() => Math.max(2, x))


ES6

    const atLeastTwo = x => Math.max(2,x)


This is a function that returns a function, i.e. you'd need to call atLeastTwo(x)(); to get a result; yours is just a function. In ES6 the equivalent of what he wrote would be:

    const atLeastTwo = x => () => Math.max(2, x);


bind provides partial application but doesn't directly help towards point-free notation.


Why do we need yet another term for concepts already known as function composition, currying, and partial application?


Pointfree style uses composition (among other operators), currying, and partially applied functions, but it refers to something distinct from the sum of those concepts/features.

Consider the following function of two parameters, written in Haskell/PureScript syntax:

   f x y =  g x (m (h y))
         = (g x . m . h) y
We can now eta-reduce/partially apply f instead in the definition (which is possible because f is curried by default):

   f x   = g x . m . h
If I were using pointfree style at all (which I almost never do), this is where I'd stop. However, you can go the whole hog and strip out all the arguments:

         = g x . (m . h)
         = ((. (m . h)) . g) x

   f     = (. (m . h)) . g
I've made use of everything you listed, but it's only when you go this far that your function is truly free of "points".


> Pointfree style uses composition (among other operators), currying, and partially applied functions, but it refers to something distinct from the sum of those concepts/features

Exactly. But in all fairness to the parent comment to yours, the original article doesn't convey this as well as you lucidly did, and his confusion is perfectly understandable.


Thanks! I'd say at least some of the blame for that is due to how C-family languages make these concepts appear somewhat ... opaque.


You're actually using the wrong words here too. When you take a function and map it to this style, that's called "η-conversion" (pronounced "eta" or "eee-ta").

"Point free style" is the name for the results of writing code where you do not introduce many new variables. You lean on function composition, currying, and often times this style benefits from partial application (famously with (.) and (.).(.) and others, but in general even with folds and traversals).

It's not really a problem to introduce a description for the style?


It's not a new term, is the partial answer.

The rest is that "point free" describes a programming paradigm, while terms you listed describe the mechanisms you can use to apply it.


It comes out of the mathematical notion of a point and what not, right?

Something, something, only discussing the composition of maps without ever talking about how the initial categories are tied to points. Category theory always makes for weird names for programming features.


Point free notation often depends on currying, but it is not the same. Instead, point free notation is about only using composition and currying to define your functions.


Thanks everybody. It's a much simpler idea than I thought: "Don't introduce extra arguments in your function defs when you don't need to." This is what I always did as a Forth programmer; I just didn't know it had a name.


Point-free notation builds a program indirectly using expressions, rather than declaratively and directly like most programming languages. You need to follow data and control flow in order to understand the program that is built.

I think it's rarely easier to read or understand, with a few major exceptions. Certainly toy examples like in the article are too simple to highlight the benefits.

If the blocks being composed are reused and recombined a lot, but are meaty enough not to be inlined, then the composition effectively turns into a kind of high-level DSL. Consider e.g. a data processing pipeline using relational operators: filters, joins, sorts, projections, etc. Those things have meaty implementations, but fairly simple interfaces; they're tailor-made for composing into larger abstractions that have the same external interfaces.

If the problem being solved inherently requires building a custom program to solve it, then it's hard to avoid writing something that is isomorphic to point-free style. For example, a reporting tool that doesn't dump the user in a SQL editor.


I don't really find the argument convincing. I'd argue that the original approach was more intuitive.


"Intuitive" is too subject.

What we're interested in is simplicity. And this is an objective measure.

This (A):

   partial(Math.max, 2)
is objectively simpler than this (B):

   function atLeastTwo(someNumber) {
      return Math.max(2, someNumber);
   }
Why is it objective? Because it expresses the same exact value (i.e., function) with objectively fewer concepts.

When (A) is unfamiliar it feels less intuitive, but with practice/experience that sense of unintuitiveness goes away, and then you are left with (A), the simpler expression -- and therefore less cognitive load on the developer trying to make progress.


A is objectively more complex:

B gives you all the information you need to know what's going on.

A requires you to go dig up what "partial" is doing (which, it turns out, is itself doing some wacky gymnastics to turn one of its arguments into a function and the other into a parameter (and the parameter of whatever gets passed into the returned function into the other parameter of the function that was originally passed in)).

Observe how much writing it took just to describe what's going on with your "A". No explanation at all is required for B. It's complete as written. Simple, and intuitive all in one.


"partial" is an independent concept, which could be learned once and applied after that just as "function" can be in example B. You don't say that "function" and "return" are doing some wacky gymnastics in the code do you?

It takes a lot to describe what's going on with A because it's a relatively new concept, related to the subject of the article. If partial application were just as well known as function definition, the explanation could be shorter.


But it's not. "partial" is specifically defined as its own function in the article. If you're a developer coming across this guy's project for the first time, you'll have to dig one level further into the stack before you find it.

Only then do you get to learn those unnecessary gymnastics, so you can follow the rest of his code.


If "partial" were only usable for this one function you'd be right. But actually it's a very common, general function (under various names); in many languages it's already present in the standard library, or at least in an established library that's commonly used.

Fundamentally this is what programming is all about - finding the commonality that you can pull out and reuse, rather than doing every single case by hand.


"partial" is an idea which is known in programming, specifically functional one, and its use is much wider than this particular example. It's just that JavaScript doesn't have this standard it has to be defined. Similarly a concept of "function" is a widely known one, which you also need to learn.

The whole article makes sense as telling something new to someone who doesn't know what point-free style is, but might know what partial application is. It's a feature, not a bug, when something is learned as the result of reading the article and can be applied later.


I think you're confusing the concept of "complex" with "harder for me to process".

There are many, many things that are /simple/ that are actually pretty hard to understand. Most of higher maths, for instance.


A only seems more complex because you're not used to the concepts involved. If you were then it would seem less complex than writing it "longhand".


if you're writing functional code, you should already know what partial application is - and the "partial" function is only necessary in this case because JS isn't a curried language by default.

your idea of B being more simple is largely driven by the fact that you likely have more experience writing code in that manner. With experience writing functional code you should find that both A and B are clear at a glance.
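
Just to show what curried-by-default buys you, here's the article's example transplanted to Haskell (a sketch; only the name comes from the article):

    -- every function is curried, so no partial() helper is needed
    atLeastTwo :: Int -> Int
    atLeastTwo = max 2

    -- compare the JavaScript version discussed upthread: partial(Math.max, 2)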


Using a good function syntax [Scala], the pointed version is:

    { x => Math.pow(2, x) }
Obviously it's a function of 1 argument. The pointfree variant requires 1 extra concept [partial], plus building a mental map of the arguments, to reconstruct what kind of a function [or value!] the pointfree expression represents.

Furthermore, implementing the reverse check is syntactically symmetric in the pointed version:

   { x => Math.pow(x, 2) }
Whereas the pointfree version, assuming the function is not commutative, accumulates even more concepts:

   partial(flip(Math.pow), 2)


Using even better (PureScript) syntax, the pointfree version of the latter can be written in notation similar to a hom-functor.

   f1 = pow 2
   f2 = pow _ 2
This makes it look like a lambda but without having to bind an argument explicitly. Makes it much nicer to write complicated expressions (although I can't speak to the readability for people less fluent in the syntax).


You still have to remember that pow has 2 args, and not 1 or 3, to get a basic sense of what kind of object f1 is. How about:

   f1 x = pow 2 x
   f2 x = pow x 2


Kind of irrelevant in a language that has type declarations on the previous line.


I'd argue B is simpler. Why? Naming things is helpful. A function named "atLeastTwo()" is more or less self-explanatory, I don't have to look at the implementation to know what it does. In case of A, I have to look at the implementation and figure out what it does.

A is maybe more _elegant_, but IMHO it has a higher mental load.


> This (A):

   partial(Math.max, 2)
> is objectively simpler than this (B):

   function atLeastTwo(someNumber) {
      return Math.max(2, someNumber);
   }
What the article failed to convey to me was why either of these is better than just using

   Math.max(2, <expression>)
everywhere I need to use the greater of 2 and some expression. It didn't really explain why we want to hide the 2 at the places where we need to use 2 as a lower bound on a value.


Why would you want to do this? For exactly the same reason we don't want to see so-called "magic numbers" throughout our code. An equivalent question might be: Why should I use MAX_INT16 instead of 32767 everywhere? They encode precisely the same thing, but one is just a number and the other offers a meaningful name as well as one location where it's defined, reducing the risk of error. Or maybe you really want MAX_INTEGER which will be determined by your hardware platform, again you only have one place to redefine the value and not throughout your codebase. Maybe 32767 was selected in some places because it was the system's MAX_INTEGER, and in others it has some different meaning or reason for being used that shouldn't be altered by changing MAX_INTEGER.

Back to atLeastTwo. It's a trivial example, but consider if you actually did have Math.max(2, e) throughout your code base. What is the meaning of this expression? It gives you either 2 or e, whichever is greater. Great, we understand what it is. But now, why? What is magic about 2? Giving it a name can convey the why (though atLeastTwo does not). Giving it a name also makes it less error-prone. Suppose you fat-fingered 22 into a couple spots, how do you (quickly and easily) identify that error? If it's just Math.max(22, <expression>) you can't even know whether it's wrong, just that it has a 22 in it instead of a 2.

Sufficient testing would (hopefully) reveal that you used 22 instead of 2, but it may lead to subtle bugs that are hard to identify. We have this issue in a codebase at work where we do a lot of bit twiddling. Instead of setXFlag(&data_word, val) we have something like setBits(7,1,&data_word,val). That 7,1 determines which bit to start at (7) and how many (1) to put val into. Great, it works. But when the data format got changed we had to change every single reference to 7,1 to 6,1. Unfortunately it was really, really fucking hard to find all those references and correctly discern which ones needed to be changed and which ones didn't.
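
For what it's worth, the fix is exactly the tiny-named-wrapper idea being debated upthread: partially apply setBits once, give the result a name, and make every caller go through the name. A rough sketch (in Haskell for brevity, with made-up types; it's only an illustration, not our actual code):

    import Data.Bits (complement, shiftL, (.&.), (.|.))
    import Data.Word (Word16)

    -- generic helper: write val into `width` bits starting at `start`
    setBits :: Int -> Int -> Word16 -> Word16 -> Word16
    setBits start width word val =
      (word .&. complement mask) .|. ((val `shiftL` start) .&. mask)
      where mask = ((1 `shiftL` width) - 1) `shiftL` start

    -- the 7,1 lives in exactly one place, and the definition is point-free
    setXFlag :: Word16 -> Word16 -> Word16
    setXFlag = setBits 7 1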


I agree that a literal 2 is probably bad to use in "Math.max(2, e)", but it is also probably as bad in "partial(Math.max, 2)", and even worse is the "Two" embedded in the function name in "function atLeastTwo(someNumber) {...}".

I decided to let that ride rather than change it to something like MIN_SAFE_VALUE like it should be in real code.

So suppose that is fixed, and we have MIN_SAFE_VALUE defined somewhere, so that we don't have to worry about typos (much... if we have a MIN_SAVE_VALUE too, that could be trouble).

Also suppose that MIN_SAFE_VALUE is used instead of 2 in "partial(Math.max, ...)". Alternatively, if that is the only place that actually uses MIN_SAFE_VALUE, with everyone else using it via the partial function, then I suppose it would be OK to use the 2 literally there, and name the result of partial with a name that says it ensures the minimum safe value.

Same for the "function atLeastTwo" approach. Give it a good name, and get rid of the magic number in the definition, unless that is the only place that uses MIN_SAFE_VALUE directly and then maybe it is OK for that to be where the literal 2 appears and not have a MIN_SAFE_VALUE constant defined. (I'd still go for a defined constant, because it and any other such magic numbers can be defined in one place making it easier for people who need to tune things later).

Now I can see why the "partial" approach and the regular function approach might win over inline "Math.max(MIN_SAFE_VALUE, ...)", because they both make it easier to change how the safety constraint is enforced.

For example let's say this code is part of the control system for a power plant. The engineers revise the rules to allow going 10% under the normal minimum safe value when the plant is operating more than 20% below peak capacity.

With the regular function approach we only have to change that one function so that it checks the operating level and adjusts the threshold it checks against accordingly. With the partial approach we could probably also do something like that (although maybe we'd need a helper function, since we would no longer be applying partial to an already existing function).

With the "Math.max(MIN_SAFE_VALUE, ...)" everywhere approach we'd have to change all those places. (Except if we were in C or C++, and MIN_SAFE_VALUE was a simple #define. Then maybe we could change that from "#define MIN_SAFE_VALUE 2" to "#define MIN_SAFE_VALUE current_safe_value()" and write a current_safe_value() function. That still limits is to only handling safe value rules that involve a >= comparison of the value with a threshold, so is still not as flexible as the normal function approach.

But note that this advantage comes from pushing the algorithm of the safety check down to underlying functions, instead of exposing it at the point of use. Using Math.max at the place where you are actually checking the value and using the corrected value exposes at that point that the algorithm is to compare to something and pick the larger of the two.

For me the article just didn't explain why pushing the algorithms lower in point-free style is better than doing that in normal function style, and didn't seem to have any other compelling thing point-free does. This is probably a topic that needs more than the 148 words in the article's "Why Does It Matter?" section.

Some of the comments here using Haskell look more interesting, but I've only looked a little at Haskell. They all quickly hit my limit--the before and after Haskell examples look like something compelling is happening but I just am not yet able to see what that is.


Well, it isn't like you can avoid understanding B by using A: A adds a concept, but it doesn't remove any of the existing ones; it just makes you do B in your head when you see A to arrive at the concept which is reified in the function name.

Even if you're only using it in one place, `atLeastTwo(some_value)` is simpler than `partial(Math.max, 2)(some_value)` in at least that way.


I'd argue that they are the same. Calculating the return value of B (return Math.max(2, someNumber)) is calling Math.max with two arguments, the second of which is explicitly named. The result is the same as partial(Math.max, 2)(someNumber). someNumber isn't used anywhere else, and the function itself just supplies someNumber as the second argument to Math.max. That's also what partial(Math.max, 2) does.


Seems to me that A has two major concepts (function calls and partial application) whereas B only has one (function calls).


> What we're interested in is simplicity.

What we are really interested in is whether it makes programs easier to understand, less easy to misunderstand, and less likely to contain errors. A measure that is more objective but doesn't address these issues is not intrinsically better.


I see what you're getting at with (B), since you've named the function which adds another 'thing' to think about. But I can anonymise it in modern Javascript:

  y = (x) => {return Math.max(2, x)}
To me that just looks nice, and it's clear how Math.max is being used. With the partial (A) I have to have some implicit knowledge of the function, whereas in (B) or my version I can see how it's being used. Perhaps I'm a bit dis-functional.


I mean, => is standard in the language and partial isn't, but that's an accident of history. Your example looks reasonably nice, but the "x" isn't really adding anything; if we could somehow write it as

    y = => {Math.max , 2}
that would be clearer still (setting aside familiarity with the language). Whether "=>" or "partial" is a better name is a separate question.


It's not objectively simpler at all, actually.

It does not express anything with objectively fewer concepts. Instead it distributes the concepts differently.


I'm not convinced. Please enumerate the concepts in both, the way you see it.


It's a functional programming thing.


Point-free isn't going to be worth it in a language where composition and application of functions isn't syntactically and computationally as cheap as possible. It also really only works well for unary functions, and situationally for binary functions. Anything more and the contortions start being more unreadable than variables.

In Haskell partial application is literally free and composition is an operator. That's why point-free works well there. No point in importing syntactic ideas that don't make sense in the new context.
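
A quick sketch of what I mean (my own example): the unary pipeline reads fine, but as soon as a second argument shows up you start paying in plumbing combinators:

    import Data.Char (isAlpha, toLower)

    -- unary: composition stays readable
    normalize :: String -> String
    normalize = map toLower . filter isAlpha

    -- binary: already needs the "(f .) . g" contortion
    countEqual :: String -> [String] -> Int
    countEqual = (length .) . filter . (==)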

Edit: I should also point out there's a subtle corner-case if you mix these transformations with uses of `seq`.


So, you get a single point-free function, at the cost of two other point-full functions (at least one of which will utilize run-time introspection)? All you're doing is pushing the "point" somewhere else - in this case down onto Math.max.

Seems like a net loss of simplicity and readability.


That's only true if you never reuse Math.max. I believe the philosophy the OP is espousing is to write a few point-ful functions when necessary and then make the bulk of your business logic be recombinations of those in a point-free way, rather than redoing the points at every function throughout the whole program.

I will agree it is a big loss of readability for someone who's not used to reading programs that are written like that. And that includes me. But I have faith it will get easier to read and eventually I will find it easier to read code written in that style than code written in the old style.
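
Something like this is how I picture it (a made-up example, not from anyone's real code): a couple of point-ful helpers written once, and the higher-level logic is just recombination:

    import Data.List (isPrefixOf)

    -- point-ful helpers, written once
    stripComments :: [String] -> [String]
    stripComments = filter (\l -> not ("#" `isPrefixOf` l))

    dropBlanks :: [String] -> [String]
    dropBlanks = filter (not . null)

    -- higher-level logic is recombination; no new points introduced
    cleanConfig :: [String] -> [String]
    cleanConfig = dropBlanks . stripComments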


I disagree. I don't know, maybe it's a preference thing, but I think the code with points is readable for me and doesn't require writing an extra function, class, etc.


A hallmark example of point-free style in J is calculating the average value of an array:

    +/ % #

Here you literally say "I'm calculating the sum of the elements (+/), which I then divide (%) by the number of elements (#)".

If your array isn't just a variable (x) but is instead obtained as the result of an expression, you may not want to repeat that expression twice or invent an intermediate variable to store the result. Here point-free style can help.
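
For comparison, the same "fork" shape can be written point-free in Haskell with the function Applicative (a sketch, just one way to do it):

    import Control.Applicative (liftA2)

    -- analogue of J's  +/ % #  : sum divided by length, argument never named
    mean :: [Double] -> Double
    mean = liftA2 (/) sum (fromIntegral . length)

    -- mean [1, 2, 3, 4] == 2.5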


Composition and currying are great if you're the only person working on a project, because you will understand everything that's going on. But my experience has been that developers will not inherently come to the same conclusions with this style; using it on a team of more than two leads to project fragility and incongruity.


A couple of posts really made this click for me (I'm just beginning my programming career). They're about a JavaScript library, but the concepts do translate to functional programming in general.

http://fr.umio.us/why-ramda/

http://randycoulman.com/blog/2016/05/24/thinking-in-ramda-ge...

(Read all posts from the second link)

After reading this series of blog posts and using Ramda.js for the last year or so, I can see the value here. But you do have to "get it", which means learning additional concepts. If the additional learning isn't a burden to your team (I hope your team isn't averse to learning new concepts), then I think it's an asset to be used when it makes sense.


Given that the article neglects to provide an example, I have one handy from my GitHub:

https://gist.github.com/ajeffrey/092b71a3ad5f2034601574262c5...

Note how, on this line, arguments are not present and passed around - the input of the function being declared is simply passed from function to function, with the output of each becoming the input of the next. When you can write code this way, it improves clarity by putting the focus on what the code is _doing_ rather than on whatever arbitrary name you're giving the input parameters. It's definitely a tool for use in specific situations, though, as often the names of the arguments help a great deal with readability.
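
The general shape is something like this (a generic sketch with made-up names, not the code from the gist):

    import Data.Char (toLower)
    import Data.List (intercalate)

    -- the input flows from stage to stage without ever being named
    slugify :: String -> String
    slugify = intercalate "-" . words . map toLower . filter (/= '.')

    -- slugify "Hello There." == "hello-there"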


So point-free avoids the parameters. Why not call it parameter-free? Or argument-free?


The concept originated in topology, where the parameters to your functions are points (in a space).


Agreed. I thought I was going to read about message passing styles a la Objective C.


FWIW I wrote a related blog post:

"Shell Pipelines Support Vectorized, Point-Free, and Imperative Style"

http://www.oilshell.org/blog/2017/01/15.html

I'm also writing a blog post right now that uses this real instance of point-free style:

    count-nul() {
        od -A n -t x1 | grep -o '00' | wc -l  
    }
    # usage: count-nul < myfile.bin
The issue is that grep can't find NUL bytes, because argv is NUL-terminated. So you have to convert to hex, grep, and then count.


We use point-free notation in JavaScript most often with "fluent" APIs like d3 or underscore, where instead of using an explicit compose operator like in Haskell, each method in a chain returns the context with a ton of chainable methods defined on it.

The big difference is that Haskell (and other ML-syntax languages) make currying the default so it's much less noisy syntactically to use pointfree style than it is in JavaScript.
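
Roughly, the fluent/chaining style corresponds to a left-to-right pipeline where the data is named once, while composition never names it at all. A sketch (names are mine):

    import Data.Char (toUpper)
    import Data.Function ((&))

    -- "fluent"/pipeline style: the value is named once and threaded through
    shout :: String -> String
    shout s = s & map toUpper & (++ "!")

    -- point-free composition: the value is never named
    shout' :: String -> String
    shout' = (++ "!") . map toUpper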

https://wiki.haskell.org/Pointfree


This seems like a great way to obfuscate all your code without any actual benefit. Can anyone give me a real example of how this is somehow a better way to do things? Maybe the application for this is only for a specific field that I'm not familiar with?


Working with streams of data often makes pointfree syntax beneficial, since you can start to see the omitted arguments "flow" through your chain of composed functions. If you write

   f # g = \x -> g (f x)
then you could implement functions like this:

   f :: Stream Int -> Stream Int
   f = drop 3       -- drop the first three elements
     # skipEvery 3  -- skip every third element
     # sumEvery 2   -- sum every consecutive pair of elements
     # map (* 2)    -- double everything
This would turn

   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...
into

   2 * (4 + 5), 2 * (7 + 8), 2 * (10 + 11), ...
You can find many non-pointfree versions of similar streaming code here: https://github.com/snoyberg/conduit/


From a Clojure standpoint, I don't think there's much going on here beyond comparing this:

  (defn at-least-two [x] (max x 2))
with this:

  (def at-least-two (partial max 2))



