I think you can say that at the boundaries, applications aren't anything.
I tend to think of code in a cellular sense, as in, biology. Outside the cell is a big scary world that you don't control. There's a cell wall where you only permit certain things in. Inside the cell, the cell trusts that only good stuff has been let in. It may also have to package certain things on the way out to the next cell.
In this case, the observation I'd make is about that big scary external world. You don't get to impose anything on it, or at least, your control is a great deal less rigid than our code would like. Even if you think you control your internal network, hackers might explain otherwise to you in the worst possible way. You can't impose compliance with functional paradigms, imperative paradigms, security levels, or soundness of data; you can't guarantee that a bit wasn't flipped in transit, that the metadata matches what was sent, or anything else.
Obviously you can't fully write your code that way (even real cells get it wrong sometimes too), but that's the principle I try to view the world through. Even within an application where every individual component is, say, compliant with functional programming, the interactions still can't be counted on to have any particular properties that you don't check somehow.
FP, OO, data-driven design, all that sort of stuff, that's for what you do inside the cells, and maybe how you choose to structure the code implementing your cell wall. But you almost always end up forced to treat the outside world as bereft of any structure you don't rigidly check for and enforce yourself, if not outright hostile (anything with security implications).
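To make the cell-wall idea concrete, here's a minimal Haskell sketch (the Order type and the bounds check are made up for illustration): untrusted text gets decoded into a checked internal type right at the boundary, so everything inside the cell only ever sees data that passed the wall.

    -- Untrusted input arrives as raw text from outside the cell wall.
    -- Code inside the cell only ever sees a validated Order.
    data Order = Order { item :: String, qty :: Int } deriving Show

    parseOrder :: String -> Either String Order
    parseOrder raw =
      case words raw of
        [i, n] | [(q, "")] <- reads n, q > 0, q <= 1000
               -> Right (Order i q)
        _      -> Left ("rejected at the boundary: " ++ raw)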
Yea, I think a lot of programmers confuse the map with the territory. It's not only the data, but the program itself.
Almost no one actually cares how a particular program was written or how it understands its input and output -- we care that it works with some level of quality. How one gets that result is irrelevant in the end. It could be written directly in twisty assembly code. Do not care.[1]
Parts of these paradigms have useful tools for building working programs, but a great majority of the contents of these paradigms are simply about organization. This shows up most clearly in OO, and of course, functions are a way to organize code. This isn't a bad thing -- certainly helpful for humans working on a code base -- but it isn't actually relevant to the task the program itself performs or how fast or well it performs it.
So, of course, the input and output of a program aren't really conformant to any paradigm, because the paradigms are about organizing programs, not about performing a particular task.
[1] (it might even be more reliable, in some cases, because you would be forced to be careful and pay attention and all those little details you want to ignore are right there in your face (see: async) :-))
I think you might be missing the audience here... this is talking /to/ programmers, after all - who decidedly do care about how the program is written, organized, etc.
I don't think anyone is making a "more correct" or even "more performant" argument here; maybe, a "more reliable" argument - but only by extension of "better organized, so less likely to include certain classes of bugs".
> I think you can say that at the boundaries, applications aren't anything.
I think you very much can say that "at the boundaries, applications are procedural" (i.e., they do side-effecting things sequentially).
That's not incompatible with FP. On the contrary, FP used properly lets you push all that procedural code to smallish kernels at the "boundaries" so that all the rest of the code can be pure (thus easily tested).
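A minimal Haskell sketch of that shape (summarize is a made-up stand-in for real logic): the pure core is an ordinary function, and the only procedural code is a thin kernel at the boundary that reads input, applies the core, and writes output.

    -- Pure core: no IO anywhere, trivially testable.
    summarize :: [Int] -> String
    summarize xs = "count=" ++ show (length xs) ++ " sum=" ++ show (sum xs)

    -- Small procedural kernel at the boundary: read, apply, write.
    main :: IO ()
    main = do
      input <- getContents
      putStrLn (summarize (map read (lines input)))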
> I think you very much can say that "at the boundaries, applications are procedural" (i.e., they do side-effecting things sequentially).
Well, side-effecting, yes, because that's literally how we define boundaries.
Sequentially? Not so much; concurrency (whether asynchrony or true parallelism) is important largely because simple sequential behavior doesn't capture what happens naturally at the boundaries well. (I suppose on an individual boundary, defined in the right way, there is likely to be a sequencing constraint, but not in aggregate across the boundaries of the system.)
You can parallelize as much as you want, but remember, you can't go faster than the slowest serial code. There will be some serial code. Again, I'm talking about order of steps in processing events that themselves may not arrive in any particular order.
"I think you very much can say that "at the boundaries, applications are procedural" (i.e., they do side-effecting things sequentially)."
You may be able to; I can't. I have a number of incoming event streams that are not necessarily ordered.
Now, like I said, you don't implement all code everywhere for all possible missteps, so you may have specific apps that get away with assuming orderedness. But it is not a general thing you can rely on.
> That doesn't make the application not-procedural. If it's having side effects, it's procedural in a sense.
“Procedural” is a structural paradigm; unstructured imperative code (old-school BASIC) has side effects but is not procedural in any sense (it is part of the broader category of imperative languages). Also, “procedural” (or “imperative” or “functional”) is an attribute of programming languages, not applications/systems: Haskell is a pure functional programming language, in which evaluating functions has no side effects, but it can define systems that have effects.
Perhaps I should have written "imperative" rather than "procedural", except no one really writes imperative non-procedural code now.
> Also, “procedural” (or “imperative” or “functional”) is an attribute of programming languages, not applications/systems:
It can be an attribute of programs regardless of language. It is possible to write functional-style code in C and procedural code in Haskell (just do everything in the IO monad!).
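For instance (a toy sketch), the same computation written both ways in Haskell; the second is procedural in everything but name, since it just mutates a reference step by step inside IO:

    import Data.IORef

    -- Functional style: a pure function; IO stays at the edge.
    total :: [Int] -> Int
    total = sum

    -- "Procedural Haskell": mutable state threaded through IO.
    totalIO :: [Int] -> IO Int
    totalIO xs = do
      ref <- newIORef 0
      mapM_ (\x -> modifyIORef ref (+ x)) xs
      readIORef ref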
> Haskell is a pure functional programming languages, in which evaluating functions has no side effects, but it can define systems that have effects.
That's a bit of a fiction. Pure functions are pure, yes, but you can have impure functions -- Haskell is only interesting because you can have impure functions (otherwise every Haskell program would compile down into a constant), and what's really interesting about Haskell is the ideas that have evolved in its community about how to deal with impure functions.
> Pure functions are pure, yes, but you can have impure functions
Not in Haskell, you can't.
> Haskell is only interesting because you can have impure functions
No, it's interesting because it provides a way of representing series of effectful operations that are distinct from its functions, which are pure.
> otherwise every Haskell program would compile down into a constant
Every program in any language that is compiled compiles down to a constant (which is, itself, usually a program, often an imperative one in a native or virtual machine language); but in Haskell, each program is (in the model of the language, not merely what it compiles to) normally a constant, non-function expression, most commonly of type IO ().
It's true that values of type IO a are isomorphic to “impure nullary functions returning a” in a language which has impure functions, but they aren't functions in Haskell, and can't be called from functions, they can only be operated on as values within the language.
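A small illustration of that point: IO values can be stored in a list and rearranged like any other data; nothing executes until the composed action is handed to the runtime via main.

    actions :: [IO ()]
    actions = [putStrLn "a", putStrLn "b", putStrLn "c"]

    -- Operated on as plain values: reversed, then sequenced.
    -- Nothing runs until the composed action reaches main.
    main :: IO ()
    main = sequence_ (reverse actions)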
Haskell's effectful operations are functions. Oh sure, not in the mathematical sense unless you model them as functions from World -> World.
> Every program in any language that is compiled compiles down to a constant ...
Sigh, yes, the compiled program is constant, but you know what I meant: a constant value, such as a number, that the program is expected to output. Let's not be this pedantic.
Things might be sequential at the boundaries or they might not. A lot of hardware interacting applications are forced to handle concurrent and out of order inputs that are expected to be processed in a particular order.
Really, since the boundary is where we push all the awful stuff - that boundary (depending on the application) can be any sort of terrible.
> In 2011 I observed that at the boundaries, applications aren't object-oriented. For years, I've thought it a natural corollary that likewise, at the boundaries, applications aren't functional, either. On the other hand, I don't think I've explicitly made that statement before.
The next and last paragraph does not then explicitly make that statement, instead ending with:
> Functional programming offers an alternative that, while also not perfectly aligned with all data, seems better aligned with the needs of working software.
I think that's right. OOP is just a disaster, but FP is not. FP is not about making all of a program pure, but, rather, about isolating all the bits that can be pure (thus making them easy to test) and collecting all actual impurity into as small a bunch of code as possible.
The impure code you end up having will look very procedural.
In Haskell, you do this by running all side-effect-having code "in the IO monad", and monadic code looks procedural in the same way that PROGN looks procedural in Lisp, though PROGN is [or can be] a macro that turns your procedural statements into a single expression.
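To show what that "procedural look" amounts to, here's a trivial sketch: the do block below is just sugar for one chained expression, the same way PROGN folds Lisp statements into a single form.

    -- do-notation reads like procedural steps...
    greet :: IO ()
    greet = do
      putStrLn "Name?"
      name <- getLine
      putStrLn ("Hello, " ++ name)

    -- ...but desugars to a single chained expression, much as
    -- PROGN wraps Lisp "statements" into one form.
    greet' :: IO ()
    greet' =
      putStrLn "Name?" >>
      getLine >>= \name ->
      putStrLn ("Hello, " ++ name)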
So it's completely fair to say that "at the boundaries, applications are procedural", because, well, it's patently true!
FP helps by helping the programmer push impure code as much as possible towards that boundary, leaving the rest to be as pure (and thus easily-tested) as possible.
For example, if you have code that uses the time of day for externally-visible effects, then pass it the time of day so as to make it more pure and easier to test. This one is counter-intuitive because we like to just get-the-current-time, but I've done this to make code that does epochal cryptographic key derivation deterministic and, therefore, testable.
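A sketch of that move (deriveKey is a made-up stand-in for real key derivation, and this assumes the time package): the core takes the time as an argument and is pure; only the thin shell ever consults the clock.

    import Data.Time.Clock (UTCTime, getCurrentTime)

    -- Pure: same time in, same result out; testable with fixed times.
    -- (deriveKey is hypothetical, not real key derivation.)
    deriveKey :: UTCTime -> String -> String
    deriveKey epoch secret = secret ++ "|" ++ show epoch

    -- Impure shell: the only place the clock is consulted.
    deriveKeyNow :: String -> IO String
    deriveKeyNow secret = do
      now <- getCurrentTime
      pure (deriveKey now secret)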
Theoretical arguments sound great and all, but where are the results? If OOP is such a dumpster fire and FP is so productive, where is all the FP code? Why are there so many OOP projects? It's not like FP is something we've just discovered. It's been around for decades.
Why does nobody appear to be having runaway success with it, if it is the superior paradigm?
I think the explanation is the fundamental reality: computers are used to transform data, to store data, and to present data; FP deems two of those three uses impure and insists they be minimized.
John Carmack says that FP is a hindrance when rendering graphics, working with buffers, etc. So with respect to presenting data, UI and graphics will likely always be in an imperative language.
Rich Hickey (creator of Clojure) designed Datomic, a proprietary immutable database that has only been around since 2013. He says disk storage was so costly in the past, and is now so cheap, that the existing server industry is built atop this legacy of old ideas. So FP with respect to storing data is likely in its infancy.
At the boundaries are events. "User wanted to change time of meeting".
Once you have recorded events, figuring out which side effects should be triggered (often causing new events in other systems) from the set of all events input into the system can be coded using whatever flavor of functional/relational/reactive programming you like.
I think the future is the combination of event sourcing and functional programming, plus databases that support this way of working better than today's do and don't have OOP as their main target audience. (And I absolutely don't mean Kafka. SQL at least comes closer; to work efficiently with events and implement functional business logic on top of them, relations and structure are important.)
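As a toy sketch of that split (the Event type is made up): events are plain data recorded at the boundary, and deriving current state from them is a pure fold that's easy to express in any functional flavor.

    data Event = MeetingScheduled String | MeetingTimeChanged String
      deriving Show

    -- Pure: the current meeting time is just a fold over the event
    -- log, testable by feeding in hand-written event lists.
    currentMeetingTime :: [Event] -> Maybe String
    currentMeetingTime = foldl step Nothing
      where
        step _ (MeetingScheduled t)   = Just t
        step _ (MeetingTimeChanged t) = Just t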
To me the big difference between OO and FP is complexity management.
In OO complexity is hidden. So you don't have to deal with the complexity of the internal state of an object while using the object. It's a divide and conquer approach.
In FP complexity is constrained. Pure functions and immutable data make it easier to reason about the code. This allows you to see all the workings and not get overwhelmed.
I think there are a lot of FPLs, like Erlang (and friends), F# (which is in TFA), and Julia, which are #1 and at least mostly #2, with their concomitant benefits, without the pain in the ass for small benefit of #3.
I think it is more like somebody wrote a blog post that said "FP rules, OO drools" and then other people thought "I'll write a blog post like that!"
Or maybe it is easy to find fault with programming languages you actually use and to idealize programming languages you don't use.
I imagine a world where Common LISP won and reddit/programming would be kvetching about all the details CL got wrong while asking "How can I get a COBOL job?", "Did you know that PHP syntax is based on Chomsky's generative grammar?"
I worked on a project that involved building a stream-processing engine in Scala that heavily used monads.
I remember being told by the manager what the error handling strategy was and thinking "This is like that Amway presentation where they 'draw circles' showing how 8 people get a cut of the $7 tube of toothpaste they sell you and then ask 'How can we beat the prices in the supermarket?', they strike the presentation board with a pointer and say 'By eliminating the middleman!'"
Now, they could have handled errors correctly with monads just as they could have handled errors correctly with exceptions, except that they didn't. That manager approved code review after code review where error handling was absent.
This really feels like most online communication and promoted blogs on social media (Hacker News included) have gone to "we're optimizing for tribe click-throughs" rather than technical due diligence and excitement.
I like the CL spec as an early example of a spec for a language that balances performance and dynamism. In the big picture I'd say that it taught people how to write specs for languages like Java, Python, Javascript, etc.
CL was deeply unpopular at the time for quite a few reasons: it really had a 32-bit mindset which made it a bad fit for the machines many people had on their desktops at the time, also the language has enough performance-oriented details that it's not as simple as a LISP can be.
Correct. In more mainstream language, quoth Gary Bernhardt, "functional core, imperative shell."
Functional programming is a convenient fantasy, a highly restrictive and controlled environment that allows us to make large assertions about bodies of code - "no network IO can take place here"; "your inputs will most assuredly be numbers that can be added together."
It's the equivalent of assuming the cow is a sphere [0]. A useful mental model, that ultimately breaks down upon contact with the "real world."
Hence the imperative glue code / monadic actions wiring all of the pretty, perfect abstractions together.
I think it's more like, out of this actual, physical cow, we are going to carve a perfectly spherical cow plus some... "other" parts.
That is, you can make chunks of your application functional - they just can't be chunks that touch the exterior. It's not a "mental model" - it's something you construct in the code.
Now, you may not be able to do that with all the "interior" code, either. Parts may have too much intrinsic state for functional programming to be a useful approach. But for other interior parts, hey, you like functional? Make it so.
Pure, deterministic code is easier to test than impure, non-deterministic code. There's no test-environment setup for the former. No need to set up networking, DNS, PKI, etc. No need for containers. For the "functional code" you just furnish inputs and compare to expected outputs.
Sort of like test vectors for cryptographic functions.
You still have to be careful to test all the edge cases (assuming you can't test the full domain of each function), naturally. But the fact that the functional core doesn't need setup means the tests of it have less startup and teardown overhead and so will generally run faster (unless they take so much time that setup overhead is in the noise).
As for the "imperative shell", you may be able to mock everything w/o having to change it, though you could also set up a test environment with all the external things it needs.