Despite traces of interesting ideas -- spreadsheets are underappreciated, and so is Multics -- this article is full of hot air. It implicitly equates Haskell with functional programming. It hypes HoTT but doesn't explain it. It bashes West Coast programming culture for being short-sighted and money-hungry while praising the finance industry. (Which is never short-sighted or money-hungry?) Worst of all, it name-checks a bunch of category theory lingo but doesn't do anything with it besides try to look smart.
Actually, the worst thing about this article isn't the hot air. The worst thing is that I think I would agree with it, if it had ever managed to get to the point.
The reason monads don't need to show up explicitly in an article about monads is that they are the same manifold as the subject itself. For the same reason, people don't use the suffix "set" or "collection" when designing a relational database schema. If you are already inside the manifold, it makes no sense to call it out separately (cf. Russell's paradox).
Right, so technically there is no reason to deal with monads directly. Andrej Bauer's Eff is an experimental language that feels a lot like Standard ML but incorporates the power of monads (really, algebraic effects and handlers) under the hood. There's really no reason to conflate "monads" (the idea) with "monads" (the hairy syntactic overhead), and believe me, it gets hairy.
This rather scattered and jargon-laden article seems to be pointing out the differences between the lambda calculus (stateless) and the Turing machine (stateful), which, while computationally equivalent, are not formally equivalent (pardon me if I get the verbiage wrong, as I am not a mathematician). Since the hardware architecture is essentially a Turing machine and thus stateful, the lambda calculus cannot be mapped onto it without somehow extracting state. In my (rather simple) mind, this shouldn't be a problem if functional programs are not used to do things they were never intended to do in the first place. If you are trying to use Haskell for stateful jobs like UI or DB management, you are probably using the wrong tool. Use Java or some other OOP language. Use your FP languages for middleware services that take an input and give an output that can be consumed by whatever is consuming the service. And guess what, the enterprise is already built that way.
It's more complicated than "FP or not". Languages like Haskell isolate state and IO, but the mechanisms for both are there, and Haskell was designed to handle them; it's just formalized and very strict. There are FP languages that don't have this pure separation (I believe OCaml and Scala mix IO freely), and that's closer to the core of the friction in interacting with a database than FP itself. I would often not use Haskell for database-related work, I agree, but that's mostly because I often don't require the level of strictness the type system enforces in all my database interactions, so the investment wouldn't be worth it. But if you're doing mission-critical work, the safety guarantees may be worth getting.
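To make the isolation point concrete, here is a minimal Haskell sketch (the function names are illustrative, not from the article): a pure function's type admits no IO, so effects can't sneak in; anything effectful is explicitly marked `IO` in its signature.

```haskell
module Main where

-- A pure function: its type admits no IO, so no hidden database
-- call or mutation can happen inside it.
normalize :: String -> String
normalize = filter (/= ' ')

-- Effects are confined to values of type IO a; the separation is
-- enforced by the type checker, not by convention.
main :: IO ()
main = do
  let cleaned = normalize "a b c"   -- pure computation
  putStrLn cleaned                  -- effect, explicitly in IO
```

The strictness the parent comment mentions is exactly this: the boundary between `String -> String` and `IO ()` is checked, not advisory.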
The other thing this article seems to miss is the level of abstraction at which FP applications need to live. At the OS level, where state is being managed, using FP would be insane; I don't think anyone really would want to do that. The UNIX philosophy of piping along a stream, while it may have something in common with FP, still exists at a higher level of abstraction than the kernel, which remains a giant, complex Turing machine.
Managing for whom? Is that a resource optimization problem? What if I use a control monad that does the same thing? If TF finds the shortest path for a computation, does that make it an OS according to your definition?
JSON and SQL are closer to preimages of the data and computation manifolds, respectively, projected (or fibrated) via Kan.
I checked out this and some of the author's other posts. A lot of the complaints here in the thread are valid and pretty consistent across their articles. However, even though the ideas are scattered and loose and they're trying to market their product, there was a lot of interesting food for thought - it helped that I've spent some time studying FP. I also feel that operating systems have been stagnant for a long time. Yes, the OS should manage hardware and resources, but the OS is also a platform on which other things are built. The platform part is mainly what has felt quite stagnant, and perhaps more layers of systems on top (JVM, containers, etc.) might not be the best answer. Adding a layer on top has the benefit of the compatibility and reliability that the layer below provides, but layers also compound constraints, possibly sticking us with local maxima.
I don't understand what this article is trying to say. The snippets of category theory just seem to be technobabble: while they are all "mathematically true" (up to a generous reading), they don't really "make sense". It reads like the output of a well-trained statistical/neural language model on the #haskell IRC channel.
For example:
> The act of “unbundling” functions (lambdas) from their traditional containers is really what Serverless and the Functional Programming movements are trying to do
I have no idea where the author got this impression of what the functional programming movement (insofar as such a thing exists) is trying to do. For a reasoned view on functional programming, check out "Why Functional Programming Matters".
> In FP, you are either writing functions or doing something else (like gluing or wiring functions together). Simply put, a monad is an industry-generic term for that “something else”.
That is meaningless. A monad is a precisely defined mathematical object in category theory.
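For readers who want the precise object rather than the article's hand-waving: in Haskell a monad is a type constructor with `return` and `>>=` obeying the monad laws, and "gluing functions together" is exactly what `>>=` does. A standard illustration with `Maybe` (the helper names are mine):

```haskell
-- Maybe is the textbook monad: >>= sequences computations that may fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- "Gluing" two fallible steps together is precisely what >>= does,
-- and its behavior is pinned down by the monad laws, not left informal.
calc :: Int -> Maybe Int
calc n = safeDiv 100 n >>= \q -> safeDiv q 2
```

`calc` short-circuits on division by zero without any ad-hoc error plumbing; that is the "something else" the article gestures at, made precise.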
> Technically, monads are instances of special ‘containers’ called monoids (sets) that manage the above activities.
This part of the article manages to completely misunderstand or misrepresent what monoids are. They are not just "sets"; they are sets equipped with additional structure.
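Concretely, a monoid is a carrier set plus an associative binary operation and an identity element. In Haskell terms (a minimal sketch; the binding name is mine):

```haskell
-- A monoid is a carrier plus structure: an associative (<>) and an
-- identity mempty. Lists form one, with (++) as (<>) and [] as mempty.
combined :: [Int]
combined = [1, 2] <> mempty <> [3]

-- The structure is the laws, not the carrier alone:
--   mempty <> x == x
--   x <> mempty == x
--   (x <> y) <> z == x <> (y <> z)
```

Also note the article has it backwards: monads are not "instances of monoids called containers"; the slogan is that a monad is a monoid in a category of endofunctors, which is a very different (and precise) statement.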
> Incidentally, the most troublesome spot for both OO and FP has been homotopy type theory, which is similar to the debate over the mutant creatures that emerge from relational joins
Uhh, no? HoTT does not come from relational joins; it comes from a desire for a constructive viewpoint of mathematics in which one can encode proofs well, so we can check proofs on our computers.
> .. databases and programming will converge — and something called the Curry-Howard correspondence suggests we cannot ignore this forever.
I don't understand what this is trying to say, but on a simple reading, it is blatantly false. While Curry-Howard provides a way to connect proofs in proof systems to the lambda calculus, _(relational) databases don't use the lambda calculus_. Instead, they're based on (surprise, surprise) relational algebra, for which I don't know of a Curry-Howard-style analogue.
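For anyone unfamiliar with the correspondence being invoked: Curry-Howard identifies types with propositions and programs with proofs in typed lambda calculi. A standard Haskell illustration (nothing in it involves relational algebra; the name `trans` is mine):

```haskell
-- Under Curry-Howard, this type reads as the proposition
--   (A -> B) -> (B -> C) -> (A -> C)
-- i.e. implication is transitive. The implementation is the proof.
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f
```

The correspondence is about proof systems and typed lambda calculi; nothing in it forces databases and programming to converge.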
> Meanwhile, the geometry community will talk in terms of topos, sheaves and data (note how a spreadsheet is sorta both code and data at the same time).
Wow, that is _such_ a misrepresentation of how mathematicians use "data". The word "data" is usually meant to encode "some structure owned by the mathematical object". One often reads sentences like "the galois group encodes data about the field", the "data" is not the "dual of code" or some such nonsense.
> That’s probably why applying algebraic “lambda calculus” to geometric problems (remember XML?) tends to be a rather unpleasant programming endeavor.
Why is XML geometric? What is this guy talking about? Can he even define a category? (Forget a topos.)
> Vendor software is often needed to establish a “standard” way to assign names to lambdas so we can find them. Mathematically speaking, this is the job of homotopy. On the other side of the Yoneda “tunnel” we find something called an Eilenberg-Moore (EM) category (technically the flip side of Kleisli) and other alluring blobs like Serre subcategories and Segal spaces. None of which get much play in computer science.
This is, once again, meaningless. Homotopy is a geometric idea about deformations, which homotopy type theory utilizes to define equality. But again, this has nothing to do with naming!
Also, Eilenberg-Moore and Kleisli categories are very well known. Open any category theory text; this will be part of the adjoints chapter. Indeed, you can search Hackage and find packages for them: https://hackage.haskell.org/package/streaming-0.2.2.0/docs/S... for example.
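Kleisli categories in particular are ordinary working Haskell: `Control.Monad` exports Kleisli composition directly as `(>=>)`. A small sketch (the function names are mine):

```haskell
import Control.Monad ((>=>))

-- Arrows in the Kleisli category of Maybe are functions a -> Maybe b;
-- (>=>) is composition in that category.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Composing two Kleisli arrows, failure propagating automatically.
quarter :: Int -> Maybe Int
quarter = half >=> half
```

So far from getting "no play in computer science", Kleisli composition is a one-import everyday tool.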
> In category theory, we call these “adjoints”, as in “joins”. If you ask the memoization crowd they sometimes actually do think in terms of SQL joins. In the database world, this is like joining keys of a schema.
What? Adjoints are not about joins. They are a kind of generalization of inverses. Once again, this just reads like technobabble.
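The canonical programming example of an adjunction has nothing to do with SQL joins: currying. The product functor `(- , b)` is left adjoint to the function-space functor `(b ->)`, and the natural bijection between hom-sets is literally `curry`/`uncurry` from the Prelude. A sketch (the wrapper names and `add` are mine):

```haskell
-- The bijection Hom((a, b), c) =~ Hom(a, b -> c) witnesses the
-- adjunction ((-, b)) -| (b ->); in Haskell it is curry/uncurry.
toCurried :: ((a, b) -> c) -> (a -> b -> c)
toCurried = curry

fromCurried :: (a -> b -> c) -> ((a, b) -> c)
fromCurried = uncurry

add :: (Int, Int) -> Int
add (x, y) = x + y
```

`toCurried` and `fromCurried` are mutually inverse, which is exactly the "generalized inverse" flavor of an adjunction; no schema keys anywhere in sight.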
> Haskell devs have to manually tinker with adjunctions and monads because in-memory data support is pretty dumb
What in-memory support? What do monads have to do with anything in memory? Monads are a structure we use to organize code; they have nothing directly to do with "in-memory data support" or whatever this guy is on about.
> If we follow Yoneda and accept that topology (geometry) and algebra are two different worlds, then we can “attach” functions to the fabric (geometry). But in a “serverless” world, what is this fabric?
What in the world is he on about? How does Yoneda (where I presume he is referring to the Yoneda lemma) say this? More importantly, much of category theory exists to explore the _duality_ between algebra and geometry; it says nothing about them being "two different worlds". The very existence of algebraic topology means that geometry and algebra are married to each other --- not to mention that the other objects named in the technobabble above (topoi, Grothendieck categories) arose from algebraic geometry, which, guess what, combines geometry and algebra...
- If you are confused, it is probably because FP languages are really only half the story of CT
- Adjoints are all about joining things, hence the name; much like chasing dependencies (arrows) between cells in a spreadsheet and then going back to the relational database the sheet was extracted from. You have seamlessly jumped from algebraic morphisms to geometry without even thinking about it
- Homotopy is about finding "paths" between things and therefore implies that some relative naming (coordinate) scheme is possible. The segments of a path arise due to torsors; hence homotopy is associated with paths. UNIX paths and URLs come to mind
- Adjoints as far as I understand are not about joining things, adjoints are about a generalization of inverses. Can you formally (and I mean mathematically) define what this intuition you have about adjoints ~= "joining things"? Also, the word "adjoint" does not come from "join". It comes from "adjoint" in the theory of complex operators: https://en.wikipedia.org/wiki/Adjoint_functors
- Homotopy is not about finding paths; it is an equivalence relation _between paths_: one path is _homotopic_ to another.
A homotopy is also a _continuous object_, unlike torsors, which are discrete objects (a continuous torsor is an affine space).
So, while UNIX paths and URLs are "paths" in the sense of torsors, they have _nothing_ to do with homotopy (as it is classically defined). Unless you are using some weakened notion of homotopy that I am unaware of, in which case I'd love links.