This comment seems unnecessarily mean-spirited... perhaps I just feel that way because I'm the person on the other end of it!
I agree the code you have there is very readable, but it's not really an example of what that sentence you quoted is referencing... However I didn't spell out exactly what I meant, so please allow me to clarify.
For me, roughly 5 calls in a chain is where things begin to become harder to read, which is the length of the example I used.
For the meaning of "multiple", I intended that to cover nested chains, or chains where the type being operated on changes; either of those can slow down my reading.
Functional programming constructs can be very elegant, but it's possible to go overboard :)
To me the functional style is much easier to parse as well. Maybe the lesson is that familiarity can be highly subjective.
I, for example, usually prefer a well-chosen one-liner list comprehension in Python over a loop with temporary variables and nested if statements. That is because people who use a list comprehension usually don't write it with side effects, so once understood, the block stands on its own.
The same is true for the builder style code. I just need to know what each step does and I know what comes out in the end. I even know that the object that was set up might become relevant later.
With the traditional imperative style that introduces intermediate variables, I might infer that those are just temporary, but I can't be sure until I read on, keeping those variables in my head. That leaves me, in the end, with many more possible ways the future code could play out. The intermediate variables have the benefit of clarifying steps, but you can have that with a builder pattern too if the interface is well-chosen (or if you add comments).
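To illustrate (made-up data and field names, just a sketch): the intermediate variables name each step, but a chain whose steps are self-describing carries the same information without leaving extra names in scope.
const orders = [
  { id: 1, total: 20, shipped: true },
  { id: 2, total: 55, shipped: false },
];
// With intermediate variables, each step gets a name...
const shippedOrders = orders.filter(order => order.shipped);
const shippedTotals = shippedOrders.map(order => order.total);
// ...with a chain, the same steps read top to bottom and nothing
// lingers in scope afterwards.
const shippedTotalsChained = orders
  .filter(order => order.shipped)
  .map(order => order.total);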
This is why, in an imperative style, variables that are never used should be marked (e.g. one convention is an underscore prefix like _temporaryvalue — a language like Rust would even enforce this via the compiler). But guess what: to a person unfamiliar with that convention this increases mental complexity ("What is that weird name?"), while it factually should reduce it ("I don't have to keep that variable in my head as it won't matter in the future").
In the end many things boil down to familiarity. For example, in electronics many people prefer to write 4.7 kΩ as 4k7 instead, as it prevents you from accidentally overlooking the decimal point and making an off-by-a-magnitude error. This was particularly important in the golden age of the photocopier, as you can imagine. However, show that to a beginner and they will wonder what it is supposed to mean. Familiarity is subjective, and every expert was once a beginner coming from a different world, where different things are familiar.
Something being familiar to a beginner (or someone who learned a particular way of doing X) is valuable, but it is not necessarily an objective measure of how well suited that representation is for a particular task.
> Maybe the lesson is that familiarity can be highly subjective.
Over the years I've come to firmly believe that readability is highly subjective. And familiarity is a key contributor to that, but not the only one. There are other factors that I've found highly correlate with various personality traits and other preferences. In other words, people shouldn't make claims that one pattern is objectively more readable than another. Ever.
I've reached the point where anyone who claims some style is "more readable" without adding "to me" I just start to tune out. There's very little objective truth to be had here.
What should one do? If on a team and you're the outlier, suck it up and conform. If you're on a team and someone else is the outlier? Try to convince them to suck it up and conform. If you're on a new team? Work empathetically with your teammates to understand what the happy medium style should be.
Pragmatic, but good, advice that I would recommend anybody follow in their daily practice.
However, I'd defend the notion that on the bad end of things you can have such a thing as (objectively?) hard-to-read code. By "hard to read" I do not mean "nobody can figure it out with time"; what I mean is that figuring it out takes 99% of programmers longer than the equivalent in another language or style would. As you rightly point out, it is important to realize that this is a statistical observation and not a universal law of nature, so it really matters which cohort of people you look at, and recommendations stemming from such observations should be taken with a grain of salt.
Yet, underneath all of that, isn't there such a thing as truly objectively hard-to-read style? Brainfuck, for example, is an objectively hard-to-read language – they even put that into the name of the language. Does that mean there is not a single person who can read it fluently? Probably not, but that doesn't invalidate the point. A double black diamond ski track is objectively harder to ski, exactly because there are fewer people who are able to ski it.
If you see programming as working with language and symbols to achieve some behavior, it is clear that there are patterns that match more closely with what most people already know (find familiar). If you ask non-programmers or non-mathematicians to describe some action in any way they like, they will probably use their daily language. That means code that looks somewhat similar to how a regular person would write it down is surely on the "very familiar" end of things. Now, I did argue that maximum familiarity to regular people is not in itself a desirable goal; we need to make a tradeoff between familiarity and suitability to express program structures and operations. The latter is not how regular people think at all, so using them as an absolute guide isn't a good idea. However, thinking about how to write things so that they are both expressive in terms of programming behavior and easy to reason about is a desirable goal. There just isn't a single right way of doing that, and sometimes good enough is just that.
For sure. I think we're channeling similar sentiments. Obviously there is *some* amount of objectivity here. No one is going to look at a million lines of Brainfuck code and say "this is easy to grok". The problem is just that these objective truths make up a tiny fraction of the crap people debate when these topics come up.
And yes, maximal familiarity is a huge aspect. Many years ago I got into an argument with a colleague regarding FP patterns. He contended that they were objectively harder to reason about and cited the difficulty of new hires in ramping up. I was contending that this is untrue, but rather that those new hires had existed in a world where they didn't often encounter FP patterns. For practical purposes the end result is the same, I'll admit. But to your point: if we accept that the missing ingredient is familiarity vs it being objectively bad, it changes how one might approach the problem.
Funny, as years pass by I slide more and more towards the opposite. For most aspects of readability people discuss, there are objectively correct choices, with some leeway for the specific circumstances.
Code that interleaves high and low level concerns, that has a lot of variables used all over the place, is deeply nested, etc. vs code that is modular, keeps the nesting shallow, splits up the functionality so that concerns are clearly separated and variables have the smallest scopes possible.
One of these styles is better than the other and it's not subjective in the least, so long as you're a human. We all have roughly the same limitations in terms of e.g. memory capacity so ways of programming that reduce the need to keep a lot of stuff in memory at a time will be objectively better.
> I've reached the point where anyone who claims some style is "more readable" without adding "to me" I just start to tune out. There's very little objective truth to be had here.
Adding qualifiers all over the place is just a defensive writing style, best fitting online forums like this one where people will nitpick everything apart. However, it's not reasonable to pay so much attention to someone's particular word choices.
Objective truth is very easy to find actually. The problem you're solving is objective and the desired outcome is too. It's just a matter of analysing the problem space to find the correct design. The feeling of subjectivity largely just comes from the high complexity of the problem space.
> If on a team and you're the outlier, suck it up and conform.
It's way more complicated than you make it sound. It's entirely possible the entire team is wrong.
A general strategy when joining a new team is to follow what the team does exactly and not attempt to make changes until you learn why the team does things the way they do (Chesterton's Fence). Once you understand, you can start suggesting improvements.
> In other words, people shouldn't make claims that one pattern is objectively more readable than another. Ever.
Maybe you haven't seen the sort of code I've seen.
Consider this specimen, for example:
// Call a function.
// Function call syntax isn't very visible (only a couple parens tacked onto an identifier);
// this is a major win for readability.
function callFunction(func, ...args) {
  return func(...args);
}
// Perform math.
// Math symbols are arcane and confusing, so this makes it much more clear.
// I'm trying to program here, not transmute lead into gold!
function doMath(left, op, right) {
  const OPS = {
    "less than or equal to": (l, r) => l <= r,
    "minus": (l, r) => l - r,
  };
  // TODO: add support for other operators.
  return callFunction(OPS[op], left, right);
}
const MEMO_TABLE = {};
function fibonacci(n) {
  if (callFunction(doMath, n, "less than or equal to", 1)) {
    const result = n;
    // Memoization makes this fibonacci implementation go brrrrrrrrr!
    MEMO_TABLE[n] = result;
    return result;
  } else {
    const result = callFunction(fibonacci, callFunction(doMath, n, "minus", 1)) + callFunction(fibonacci, callFunction(doMath, n, "minus", 2));
    // Memoization makes this fibonacci implementation go brrrrrrrrr!
    MEMO_TABLE[n] = result;
    return result;
  }
}
Yes, it was an intentional choice to never actually make use of MEMO_TABLE -- this (reproduced from memory) is a submission from an applicant many, many years ago (the actual code was much worse, it was somehow 3 to 4 times as many lines of code).
> In other words, people shouldn't make claims that one pattern is objectively more readable than another. Ever.
Taken literally, I can't argue with this. But if you amend that to "people shouldn't make claims that one pattern is objectively more readable than another to competent individuals" (which I suspect is intended to be implied by the original quote), then I'd have to disagree. The choice between functional vs procedural may mostly be subjective, but a convoluted pattern is objectively worse than a non-convoluted one (when the reader is competent).
Funny, I see the need to add comments as an indicator that it's time to introduce chunking of some sort--arguably it already has been.
In this case, I'd lean towards intermediate variables, but sometimes I'll use functions or methods to group things.
I prefer function/method/variable names over comments because they are actual first-class parts of the code. In my experience, people are a bit more likely to update them when they stop being true. YMMV.
The dig on chains of map/reduce/filter was listed as a "Halstead Complexity Takeaway", and seemed to come out of the blue, unjustified by any of the points made about Halstead complexity. In fact in your later funcA vs. funcB example, funcB would seem to have higher Halstead complexity due to its additional variables (depending on whether they count as additional "operands" or not). In general, long chains of functions seem like they'd have lower Halstead complexity.
The "anti-functional Tourette's" comment was partly a response to how completely random and unjustified it seemed in that part of the article, and also that this feels like a very common gut reaction to functional programming from people who aren't really willing to give it a try. I'm not only arguing directly against you here, but that attitude at large.
Your funcA vs. funcB example doesn't strike me as "functional" at all. No functions are even passed as arguments. That "fluent" style of long chains has been around in OO languages for a while, independent of functional programming (e.g. see d3.js*, which is definitely not the oldest). Sure, breaking long "fluent" chains up with intermediate variables can sometimes help readability. I just don't really get how any of this is the fault of functional programming.
I think part of the reason funcB seems so much more readable is that neither function's name explains what it's trying to do, so you go from 0 useful names to 3. If the function was called "getNamesOfVisibleNeighbors" it'd already close the readability gap a lot. Of course if it were called that, it'd be more clear that it might be just trying to do too much at once.
I view the "fluent" style as essentially embedding a DSL inside the host language. How readable it is depends a lot on how clear the DSL itself is. Your examples benefit from additional explanation partly because the DSL just seems rather inscrutable and idiosyncratic. Is it really clear what ".data()" is supposed to do? Sure, you can learn it, but you're learning an idiosyncrasy of that one library, not an agreed-upon language. And why do we need ".nodes()" after ".connected()"? What else can be connected to a node in a graph other than other nodes? Why do you need to repeat the word "node" in a string inside "graph.nodes()"? Why does a function with the plural "nodes" get assigned to a singular variable? As an example of how confusing this DSL is, you've claimed to find "visibleNames", but it looks to me like you've actually found the names of visible neighborNodes. It's not the names that are not(.hidden), it's the nodes, right? Consider this:
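(A sketch of what I have in mind, reusing the getNamesOfVisibleNeighbors name from above; the graph API here is hypothetical, the point is just the shape of the chain.)
// Hypothetical graph API; the point is that the chain operates on nodes,
// not on CSS classes and HTML attributes.
interface GraphNode {
  name: string;
  isHidden: boolean;
  connectedNodes(): GraphNode[];
}
interface Graph {
  getNodeByName(name: string): GraphNode;
}
function getNamesOfVisibleNeighbors(graph: Graph, name: string): string[] {
  return graph
    .getNodeByName(name)
    .connectedNodes()
    .filter(node => !node.isHidden)
    .map(node => node.name);
}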
Note how much clearer ".filter(node => !node.isHidden)" is than ".not('.hidden')", and ".map(node => node.name)" versus ".data('name')". It's much harder to get confused about whether it's the node or the name that's hidden, etc.
Getting the DSL right is really hard, which only increases the benefit of using things like "map" and "filter" which everyone immediately understands, and which have no extrinsic complexity at all.
You could argue that it's somehow "invalid" to change the DSL, but my point is that if you're using the wrong tool for the job to begin with, then any further discussion of readability is in some sense moot. If you're doing a lot of logic on graphs, you should be dealing with a graph representation, not CSS classes and HTML attributes. Then the long chains are not an issue at all, because they read like a DSL in the actual domain you're working in.
*Sidenote: I hate d3's standard style, for some of the same reasons you mention, but mainly because "fluent" chains should never be mutating their operand.
Just want to add that I both agree with you and parent. Your examples are readable and I see no issues there.
It might be a language thing as well. In Python often people take list-comprehensions too far and it becomes an undecipherable mess of nested iterators, casts and lists.
> in c or go if you dont like variadics just pass in a params struct.
Which, again, is fine if your language supports named args for your params struct.
C++ didn't have this until C++20, despite C having it for decades prior.
Java still doesn't have this.
If your language doesn't have named args or designated initializers or whatever it calls them, then what, perform your function calls with argument comments of the form f(/*foo=*/1, /*bar=*/2, /*baz=*/true)? That's error-prone even in the best-case scenario.
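For comparison, here's a rough TypeScript sketch (reusing foo/bar/baz from the example above; everything else is hypothetical) of what named fields on a params object buy you:
interface FrobOptions {
  foo: number;
  bar: number;
  baz: boolean;
}
function frob({ foo, bar, baz }: FrobOptions): string {
  return `${foo}:${bar}:${baz}`;
}
// The call site is self-describing, and a misspelled or missing name is a
// compile error instead of a silently mis-ordered positional argument.
frob({ foo: 1, bar: 2, baz: true });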
o_node := graph.GetNodeByName(name)
var ret []string
for _, node := range o_node.connectedNodes() {
    if !node.isHidden {
        ret = append(ret, node.name)
    }
}
return ret
There is just no way that reasonable people consider this to be clearer. One certainly might be more familiar with this approach, but it is less clear by a long shot.
You've added a temp variable for the result, manual appending to that temp variable (which introduces a performance regression from having to periodically grow the array), loop variables, unused variables, multiple layers of nesting, and conditional logic. And the logic itself is no longer conceptually linear.
Everything you said is true for both of our programs, the only difference is whether or not it's hidden behind function calls you can't see and don't have access to.
You don't really think that functional languages aren't appending things, using temp vars, and using conditional logic behind the scenes, do you? What do you think ".filter(node => !node.isHidden)" does? It's nothing but a for loop and a conditional by another name and wrapped in an awkward, unwieldy package.
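Roughly, a filter call is something like this under the hood (an illustrative sketch, not any particular engine's implementation):
// Illustrative only: a filter is a loop plus a conditional behind a name.
function filterLoop<T>(items: readonly T[], keep: (item: T) => boolean): T[] {
  const out: T[] = [];
  for (const item of items) {
    if (keep(item)) {
      out.push(item); // same appending, same temp variable, same conditional
    }
  }
  return out;
}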
>which introduces a performance regression from having to periodically grow the array
This is simply ridiculous, do you just believe that the magic of Lisp/FP allows it to pluck the target variables out of the sky in perfectly-sized packages with zero allocation or overhead?
> the only difference is whether or not it's hidden behind function calls you can't see and don't have access to.
You "can't see and don't have access to" `if`, `range`, or `append` but somehow you don't find this a problem at all. I wonder why not?
> You don't really think that functional languages aren't appending things, using temp vars, and using conditional logic behind the scenes, do you?
By this metric all languages that compile down to machine instructions are equivalent. After all, it winds up in the same registers and a bunch of CMP, MOV, JMP, and so on.
`.distinct()` could sort the result and look for consecutive entries, it could build up a set internally, it could use a hashmap, or any one of a million other approaches. It can even probe the size of the array to pick the performance-optimal approach. I don't have to care.
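To make that concrete, a sketch (a hypothetical helper, not any specific library): the call site depends only on the contract, so the strategy underneath is free to change.
// One possible strategy behind distinct(); it could just as well sort and
// scan, or pick a strategy based on input size, without the call site changing.
function distinct<T>(items: readonly T[]): T[] {
  return [...new Set(items)];
}
const uniqueNames = distinct(["a", "b", "a", "c"]); // ["a", "b", "c"]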
> [".filter(node => !node.isHidden)" is] nothing but a for loop and a conditional by another name and wrapped in an awkward, unwieldy package.
This is honestly an absurd take. I truly have no other words for it. map, filter, and friends are quite literally some of the clearest and most ergonomic abstractions ever devised.
Speaking of filters and clear ergonomic abstractions, if you like programming languages with keyword pairs like if/fi, for/rof, while/elihw, goto/otog, you will LOVE the cabkwards covabulary of cepstral quefrency alanysis, invented in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey:
> it could build up a set internally, it could use a hashmap, or any one of a million other approaches. It can even probe the size of the array to pick the performance-optimal approach. I don't have to care.
Well, this is probably why functional programming doesn't see a lot of real use in production environments. Usually, you actually do have to care. Talk about noticing a performance regression because I was simply appending to an array. You have no idea what performance regressions are happening in ANY line of FP code, and on top of that, most FP languages are dead-set on "immutability" which simply means creating copies of objects wherever you possibly can... (instead of thinking about when it makes sense and how to be performant about it)
Your assumption that filter/map/reduce is necessarily slower than a carefully handcrafted loop is wrong, though. Rust supports these features as well and the performance is equivalent.
Also countless real-world production environments run on Python, Ruby, JS etc, all of which are significantly slower than a compiled FP program using filter & map.
> FP languages are dead-set on "immutability" which simply means creating copies of objects
Incorrect. The compiler can make it mutable for better performance, and that gives you the best of both worlds: immutability where a fallible human is involved, and mutability where it matters.
> The compiler can make it mutable for better performance
Well, we already know that no pure FP language can match the performance of a dirty normal imperative language, except for Common Lisp (and I'd be happy to hear an explanation of how it manages to be much faster than the rest; maybe it's due to the for loops?). And another comment here already mentioned how those "significantly slower" scripting languages have a healthy dose of FP constructs -- which are normally considered anti-patterns, for good reason. The only language that competes in speed is Rust, which just so happens to let you have fast FP abstractions so long as you manually manage every piece of memory and its lifetime, constantly negotiating with the compiler in the process, thereby giving up any of the convenience benefits you actually get from FP.
You complain about not having control of how the `filter` works under the hood but are happy to give the language control over memory management. How much abstraction is too much abstraction? Where do you draw the line?
I consider an abstraction to be a good abstraction if I don't need to care about its internal workings all the time - whether that's some build step in the CI, or the usage of data structures from simple strings to advanced Cuckoo filters and beyond. Even Python uses a Bloom filter internally for some of its string operations, AFAIK. Correctness and maintainability trumps performance most of the time, and map, filter and immutability are building blocks to achieve just that. Those constructs won't prevent you from looking deeper and doing something different if you really have to (and know what you're doing), they just acknowledge that you probably won't need to do that.
> Correctness and maintainability trumps performance
Great, please make all the software even slower than it already is. I am overjoyed to have to purchase several new laptops a decade because they become e-waste purely due to the degradation of software performance. It is beyond ridiculous that to FP programmers daring to mutate variables or fine-tune a for loop is an exceptional scenario that you don't do "unless you have to" and which requires "knowing what you're doing". Do you know what you're doing? How can you be a software engineer and think that for loops are too difficult of a construct, and that you need something higher-level and more abstract to feel safe? It's insane. Utterly insane. Perhaps even the root of all evil, Code Golf manifested as religion.
False equivalence. You're saying that the statement "both for..append and .map() executing the _same steps_ in the _same order_ are the same" is equivalent to saying that "two statements being composed of cmp, jmp, etc (in totally different ways) are the same". That is a dishonest argument.
> Distinct could sort and look for consecutive, it could use a hashmap
People love happy theories like this, but find me one major implementation that does this. For example, here is the distinct() implementation in the most abstraction-happy language that I know - C#
This does the usual hashset-based approach only if the array has strings. Otherwise, it gets the comparator and does a sort | uniq. So, you get a needless O(n log n), without having a way to distinct() by, say, an `id` field in your objects. Very ergonomic...
On the other hand...
seen := map[string]bool{}
uniq := []string{}
for _, s := range my_strings {
    if !seen[s] {
        uniq = append(uniq, s)
        seen[s] = true
    }
}
Let us say you want to refactor and store a struct instead of just a string. The code would change to...
seen_ids := map[string]bool{}
uniq := []MyObject{}
for _, o := range my_objects {
    if !seen_ids[o.id] {
        uniq = append(uniq, o)
        seen_ids[o.id] = true
    }
}
Visually, it is basically the same, with clear visual messaging of what has changed. And as a bonus, it isn't incurring a random performance degradation.
I'm not sure reaching for PHP to dunk on functional languages is the win you think it is?
> Visually, it is basically the same, with clear visual messaging of what has changed.
In order to do this you had to make edits to all but a single line of logic. Literally only one line in your implementation didn't change.
Compare, with Ruby:
# strings
uniq = strings.uniq()
# structs, if they are considered equal based on the
# field in question (strings are just structs and they
# implement equality, so of course this is the same)
uniq = structs.uniq()
# structs, if you want to unique by some mechanism
# other than natural equality
uniq = structs.uniq(&:id)
With Rust:
// strings
let uniq = strings.unique();
// structs, if they are considered equal based on the
// field in question (strings are just structs and they
// implement equality, so of course this is the same)
let uniq = structs.unique();
// structs, if you want to unique by some mechanism
// other than natural equality
let uniq = structs.unique_by(|s| s.id);
You cannot tell me with a straight face that your version is clearer. You'll note that both languages have essentially the exact same code, which is to say: nearly none at all.
> Distinct could sort and look for consecutive, it could use a hashmap. It can even probe the size of the array to pick the performance-optimal approach. I don't have to care.
which says that you just use an abstraction and it transparently "does the right thing". The C# example showed how abstractions actually don't do that, and instead simply provide a lowest-common-denominator implementation. The Rust example you used also uses a hashmap unconditionally. The PHP example was showing how, when abstractions attempt to do that, it ends up even worse - when you move from strings to structs, you get a slowdown.
In practice, different strategies are always implemented as different functions (see Python's `more_itertools` with `unique_justseen` and `unique_everseen`), at which point it's no longer an abstraction (which by definition serves multiple purposes) and it just becomes a matter of whether the set of disparate helper functions is written by you, the standard library, or a third party. In Rust, you would do `vec.sort()`, `vec.dedup()` for one strategy, and call `v.into_iter().unique().collect()` for the other strategy. That is not an "abstraction" [which achieves what you claimed they do].
> It unconditionally uses a hashset regardless of input.
This follows what I like to call macro optimization. I don't care if a hash set is slightly slower for small inputs. Small inputs are negligibly fast anyway. The hash set solution just works for almost any dataset. Either it's too small to give a crap, midsized and hash set is best, or huge and hash set is still pretty much best.
That's why we default to the best time/space complexity. Once in a blue moon someone might have an obscure use case where micro-optimizing for small datasets makes sense somehow; for the vast majority of use cases this solution is best.
> Everything you said is true for both of our programs, the only difference is whether or not it's hidden behind function calls you can't see and don't have access to.
This is a key difference between imperative programming and other paradigms.
> You don't really think that functional languages aren't appending things, using temp vars, and using conditional logic behind the scenes, do you?
A key concept in a FP approach is Referential Transparency[0]. Here, this concept is relevant in that however FP constructs do what they do "under the hood" is immaterial to collaborators. All that matters is if, for some function/method `f(x)`, it is given the same value for `x`, `f(x)` will produce the same result without observable side effects.
> What do you think ".filter(node => !node.isHidden)" does?
Depending on the language, apply a predicate to a value in a container, which could be a "traditional" collection (List, Set, etc.), an optional type (cardinality of 0 or 1), a future, an I/O operation, a ...
> It's nothing but a for loop and a conditional by another name and wrapped in an awkward, unwieldy package.
There is something to be said for the value of using appropriate abstractions. If not, then we would still be writing COBOL.
> You don't really think that functional languages aren't appending things, using temp vars, and using conditional logic behind the scenes, do you? What do you think ".filter(node => !node.isHidden)" does? It's nothing but a for loop and a conditional by another name and wrapped in an awkward, unwieldy package.
But the whole point of higher-level languages is that you don't have to think about what's going on behind the scenes, and can focus on expressing intent while worrying less about implementation. Just because a HLL is eventually compiled into assembler, and so the assembler expresses everything the HLL did, doesn't mean the HLL and assembler are equally readable.
(And I think that your parent's point is that "awkward, unwieldy package" is a judgment call, rather than an objective evaluation, based, probably, on familiarity and experience—it certainly doesn't look awkward or unwieldy to me, though I disagree with some of the other aesthetic judgments made by your parent.)
What? Golang append()s also periodically grow the slice.
> Conditional logic
It's just a single if, really; the same thing is there in your filter().
> Multiple layers of nesting
2... You're talking it up like it's a pyramid of hell. For what it's worth, I've seen way way more nesting in usual FP-style code, especially with formatting tools doing
> What? Golang append()s also periodically grow the slice.
If you already know the size of the result (there are no filtering operations), the functional approach can trivially allocate the resulting array to already have the correct capacity. This happens with zero user intervention.
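A sketch of what I mean (hypothetical helper): when the output length is known up front, the result can be allocated at its final size once.
// A pure map produces exactly one output per input, so the result array can
// be allocated at its final length with no incremental growth.
function mapExact<T, U>(input: readonly T[], fn: (value: T) => U): U[] {
  const out = new Array<U>(input.length);
  for (let i = 0; i < input.length; i++) {
    out[i] = fn(input[i]);
  }
  return out;
}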
IIRC the Rust optimizer basically emits more or less optimal machine code (including SIMD) for most forms of iteration.
Yes, you can always just write more and more and more code to fix these things. Or you could just… not write more code and still get optimal performance.
But the abstractions don't really adapt themselves to be performant in each use case the way you imagine. I gave an example of C# distinct() above. Sure, they can. But do they? No. They only save anything at all for trivial use cases, where the imperative code is also obvious to parse.
>For me, roughly 5 calls in a chain is where things begin to become harder to read, which is the length of the example I used.
This isn't just about readability. Chaining or FP is structurally more sound. It is the more proper way to code from an architectural and structural perspective.
Given an array of numbers:
1. I want to add 5 to all numbers
2. I want to convert to string
3. I want to concat hello
4. I want to create a reduced comma separated string
5. I want to capitalize all letters in the string.
This is what a for loop would look like:
// assume x is the array
acc = ""
for (var i = 0; i < x.length; i++) {
  value = x[i] + 5
  stringValue = str(value).concat("hello")
  acc += stringValue + ","
}
for (var i = 0; i < acc.length; i++) {
  acc[i] = capitalLetter(acc[i])
}
FP:
addFive(x) = [i + 5 for i in x]
toString(x) = [str(i) for i in x]
concatHello(x) = [i + "hello" for i in x]
reduceStrings(x) = reduce((i, acc) = acc + "," + i, x)
capitalize(x) = ([capitalLetter(i) for i in x]).toString()
You have 5 steps. With FP all 5 steps are reusable. With the procedural version they are not.
Mind you, I know you're thinking about chaining. Chaining is equivalent to inlining multiple operations together. So, for example, in that case:
x.map(...).map(...).map(...).reduce(...).map(...)
// can be made into
addFive(x) = x.map(...)
toString(x) = x.map(...)
...
By nature, functional code is modular, so such syntax can easily be extracted into modules, with each module given a name. The procedural code cannot do this. It is structurally unsound and tightly coupled.
It's not about going overboard here. The FP version simply needs to be formatted to be readable, but it is the MORE proper way to code if you want your code to be modular, general, and decoupled.
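For what it's worth, here is a runnable TypeScript sketch of the same five steps as one chain (assuming "capitalize all letters" means uppercasing the final string):
const numbers = [1, 2, 3];
const result = numbers
  .map(n => n + 5)                   // 1. add 5 to all numbers
  .map(n => String(n))               // 2. convert to string
  .map(s => s + "hello")             // 3. concat hello
  .reduce((acc, s) => acc + "," + s) // 4. reduce to a comma separated string
  .toUpperCase();                    // 5. capitalize all letters
// result === "6HELLO,7HELLO,8HELLO"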
you have this backwards: reusing code couples the code. copy+paste uncouples code
if you have two functions, they're not coupled. you change one, the other stays as-is
if you refactor it so that they both call a third function, they're now coupled. you can't change the part they have in common without either changing both, or uncoupling them by duplicating the code
(you often want that coupling, if it lines up with the semantics)