Since I'm in a position to use Swift, I often define small functions WITHIN functions, in order to give the behavior of a code block a name. I'm also using LOTS of immediately executed closures (is that the correct term?) to reduce scope and benefit from slightly simpler control flow. And I don't see how this would be bad.
Meta: I thought we were over this dogma thing. I watched the TDD thing become popular, and then fall from grace - why do people still need to pretend they know better than me, the reader? Do they want to become gurus, or do they soothe their own feeling of "doing it wrong"?
Perhaps they do? Or perhaps they just see a trend starting up in their day-to-day work and want to try and nip it in the bud?
I see this exact same trend forming: in my last code review I saw a function which had exactly one line of code in it - a string format operation. It was called twice, and had no likelihood to be called more.
That was the most egregious, but not the only, example.
> And I don't see how [lots of small functions] would be bad.
Well, I am not sure about Swift's implementation, but function calls are typically expensive to execute. They require pushing values onto the stack, transferring control, executing code, popping values off the stack, transferring control back, etc. That's a lot of CPU operations and memory manipulation (not to mention cache misses, etc.). There's also the possibility of blowing out your stack if you store too many values on it, or nest calls too deeply.
From a cognitive point of view, you have to interrupt your current flow of reading, move to a different spot in the current file (or another file entirely), to track down the behavior of a certain function. As the functionality of that function mutates over time, its name is rarely changed to reflect the updated functionality, which means it can create confusion and require even more time to consider the corner cases.
I worked at one place where, during pairing, there was pressure to place a single line of code in its own function, just so that the function name could be coerced into being a sort of comment, since comments themselves, anywhere, under any circumstances, were "bad practice".
I put it down at the time to Uncle Bob Cargo Cultism, but found it a bit irritating nonetheless. Similarly annoying that the linter would reject a commit because a function had more than 14 lines of code in it. Under any circumstances.
However, I've recently been delving into some styleguides, and thought I'd refactor a web app server according to the principles in one of them. So the first pageful is more of an Executive Summary of the module, and everything happens in functions below the fold. Since nothing was nested more than a couple of levels deep, I was quite pleased with the results - unlike the crazy level of nesting on the previous project, where I was doing as much branching as the CPU just in order to read the stuff.
I suppose it's the same old, same old; a little is great, a lot is harmful.
Overhead is a fair point for JIT/interpreted languages, if for some reason you want to write performance-critical code in one. For compiled languages it isn't a huge deal, though: tiny functions are almost certainly going to be inlined if the compiler does its job.
And the point is that the function can be so tiny that you would just replace it with a new function. Then you can reason locally about behavior. This is a very functional approach, though, and can break horribly in imperative languages so there definitely is a balance.
Inlining is hard (especially with languages which allow for overriding functions), and is not done perfectly (or at all). The LLVM compiler toolchain in particular has a lot of properties which can inhibit inlining code; and in some cases you even have to explicitly tell the LLVM compiler when it can inline a function.
You don't even need a language with closures. Many C-style languages allow you to add curly braces around a couple of statements without any other control structure (if, for, while, etc.), just to force variables declared inside the block to go out of scope at the end of it. I've used this in complex functions that don't lend themselves to useful modularization, to indicate which variables are short-lived, and which are carried forward to later parts of the function.
 e.g. C++, Perl; not sure about C itself
Readability is most strongly related to the ease of following all possible code paths from a given entry point. That is, can I start reading in main() and iteratively follow the possible code paths given a specific input?
The answer to this question is more related to separation of scopes than it is to function size. If a single scope contains many possible branches and code paths, then it will be difficult for a reader to follow a specific code path amongst the noise of irrelevant functions (short or long).
All else equal, the length of a function is primarily a matter of style and preference. The priority of code should be expressiveness and clarity. If small functions enhance expressiveness, then they are the tool for the job. If big functions add clarity or aid in separating scopes, then they are also a tool for the job. Sometimes you need both.
Worrying about anything other than readability, maintainability and correctness is needless dogma.
And this is the thing about extremes - to get back to the center, you have to push towards the opposite extreme (and hope your push has the right magnitude so you don't overshoot the proper location). It's why the title is extreme, and the article is not. "Small Functions Considered Harmful" is easy to remember, and a sound bite that can compete at the same level as "Don't Repeat Yourself".
Here's the thing... code is either readable and maintainable, or it's not. You don't evaluate the readability of code with some kind of dogmatic checklist. You just read it and see if it makes sense.
Dogmas are good for rules of thumb, which are mental shortcuts with safe error margins. Shortcuts are helpful in that they provide cover for knowledge gaps. But they are a short term solution. The long term solution is to develop adaptable instincts that provide better guidance than any dogma or rule of thumb.
Looking at dogma this way, as a shortcut to cover knowledge gaps, it's no surprise that adherence to dogmatism appears to be inversely correlated with programming experience. Newbies who haven't seen many contexts will cling to dogmas. But as they gain experience, they find situations where dogmas don't necessarily yield the best solution. Slowly that experience develops into instincts, which replace the crude dogmatic ways of thinking. But instincts only come with experience, and nobody is an expert in every domain, so dogmas have their place. Reliance on them should be a temporary solution, though, not a long-term guiding principle.
Isn't the author trying to correct a dogma, just as you are?
Another approach is "disproportionate simplicity", which works out similarly to an abstraction, something of a DSL. If you can modularize functionality such that that aspect becomes much simpler, do it. Note that it might not solve the whole of the problem (the core of git is a "stupid content tracker": it solves the part it does tackle with simplicity disproportionate to the alternatives, but that simple core doesn't solve everything). It's about where to draw boundaries (between modules, with functions being one kind of module).
Yet another is the traditional criterion, based on likelihood of change: boundaries between modules [functions] should be unlikely to change; the stuff that is likely to change should be "hidden" within a module [function]. Unfortunately, I've found prediction to be tricky, particularly when it concerns the future of programs.
I find "straight-line" code, perhaps in one long function, with comments to provide commentary and "navigation", far easier to read than the dozens-of-tiny-and-verbosely-named-functions style too. The "readability" argument is commonly used, but it focuses on the wrong thing: a single-line function is certainly going to be easier to understand than a 512-line function, but understanding a single-line function does not make it any easier to understand the system/algorithm/etc. as a whole. The latter is extremely important, because not knowing "the big picture" can lead to very bad decisions overall; I've seen many cases where bugs or accidental and severe inefficiencies (e.g. unnecessary allocations, high-polynomial complexity, duplicated accesses to the same data, etc.) were created because the author of the code focused only on a tiny piece and neglected to consider its application in the whole.
There are some very insightful posts by an APL(!) programmer here, discussing the topic of complexity overall vs. complexity in parts:
I suspect part of the motivation for producing "microfunctions" may have come from a misunderstanding of the "decompose the problem" principle --- which is intended to mean that you, as a programmer, decompose the problem into simpler steps --- but not that each step necessarily warrants a function.
The same problem and principles apply to other levels of organisation: classes, structures, files. etc. --- they are intended to reduce duplication and simplify code, but will have the opposite effect if used to excess.
Rather than dogmatically applying "no small functions" or "break functions down as much as possible" it's more useful to look at how the code communicates the idea (and achieves the goal).
If the function is calling 30 things before it does the 4 things that logically map to the function name, maybe consider refactoring out all or parts of the preceding 30 steps.
Hope that's a good tl;dr for the article ;)
Let's assume I'm right here for a minute. If most programmers are intermediate programmers, then what kind of advice most needs to be given, knowing businesses need to keep running? The kind that does "damage control". There is no issue with that; it's OK to do damage control.
Let's take an example: "readable code". Yes, for an intermediate programmer it is a good thing if the code is not too dense, because dense code tends to drain all the programmer's energy as he tries to make sense of it. But as the programmer gains experience, he can read denser and denser code, until he can finally grasp at a glance what looks like obfuscated code to others. For a mature programmer, the important point is not readability; it is for the code to be as small as possible, which generally tends to produce very dense code. The mature programmer draws on his ability to understand obfuscated code to write even better code.
"Write a test first" (besides the fact that you're actually asking a beginner to solve his code problems by writing more code, which is at best controversial to me). A mature programmer doesn't write a test first; he thinks that way already - and yes, he delivers code with less than a one percent failure rate. Now, some code may require unit tests; I'm not saying this is wrong in every situation, it's just not right in every situation.
Anyway, what I'm trying to say is: I wish we had a way to finally make a clear distinction between good advice for intermediate programmers and what is actually mature programming, because they are very different - the mature programmer is going to make a function as big as it makes sense to be; he isn't going to artificially break up its body into small functions to make it more readable: he can read the code already.