I've dabbled in enough category theory to have some inkling of the meaning of this, but not much. I've kind of given up on expecting an epiphany from this stuff that's applicable to coding in a way that significantly changes my practice in any short period of time, but I'm still tempted by the feeling that the big revelation is just a couple of concepts away.
It seems unlikely that there will be any profound insights from category theory on how to write code, similarly to how thousands of years of advances in the theory of numbers have had notably few implications for how to count things.
Category theory is likely to contribute by giving specialists a language they can use to communicate, and it seems likely to me that at some point general tools rooted in category theory will have some sort of interesting implication for testing and debugging. Take Haskell, for example: the language itself isn't anything to write home about, but the type checking in the tooling enabled a big improvement in writing bug-free code.
So there is probably a lot on offer for the sort of people who write tooling, and less on offer for everyone else.
Category theorists have made this promise since the early 1990s, when the overlap between categorical methods and pure functional programming (think type theories) became clear. The really big win here was Moggi's discovery that monads neatly generalise many similar ideas in programming (Moggi/Plotkin should have won a Gödel Prize for this!); a minimal sketch of the monad point appears just below. This led to a great deal of activity towards generalising this to other forms of computation, in particular parallel computing (key problem: generalise the beautiful relationship between cut-elimination and beta-reduction in type theory, to cut-elimination and parallel composition + hiding). In my opinion, this has comprehensively failed to date, in the sense that (A) while you can express some parallelism in a way that can be read as a generalisation of cut-elimination / category theory, that is not surprising, since (simplifying a great deal for brevity) category theory is a foundation of mathematics; but (B) when you express non-purely-functional computation categorically, you get extremely complicated formal systems that obscure the fundamental simplicity of what they abstract from, and generally have only what they abstract from as an actual model of the abstraction.
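To make the monad point concrete, here is a minimal, self-contained Haskell sketch (the names safeDiv, St and tick are made up for the example): partial computations and stateful computations are two quite different "notions of computation", yet both fit the same Monad interface, which is the sense in which monads generalise many similar ideas in programming.

    -- Partiality as a monad (Maybe is in the Prelude).
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- State as a monad, defined from scratch: a stateful computation is a
    -- function from an initial state to a (result, final state) pair.
    newtype St s a = St { runSt :: s -> (a, s) }

    instance Functor (St s) where
      fmap f (St g) = St (\s -> let (a, s') = g s in (f a, s'))

    instance Applicative (St s) where
      pure a = St (\s -> (a, s))
      St mf <*> St ma = St (\s -> let (f, s')  = mf s
                                      (a, s'') = ma s'
                                  in (f a, s''))

    instance Monad (St s) where
      St ma >>= k = St (\s -> let (a, s') = ma s in runSt (k a) s')

    -- A stateful "tick": return the current counter and increment it.
    tick :: St Int Int
    tick = St (\n -> (n, n + 1))

    -- The same do-notation (the same monad interface) sequences either
    -- failure-prone or stateful computations.
    example1 :: Maybe Int
    example1 = do { q <- safeDiv 10 2; r <- safeDiv q 0; return (q + r) }  -- Nothing

    example2 :: (Int, Int)
    example2 = runSt (do { a <- tick; b <- tick; return (a + b) }) 0       -- (1, 2)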
Let's take Example 47 from the Native Type Theory paper: what exactly is the formal property the typing system induces? (The paper makes vague allusions to a "foreign function interface" with rho-calculus, but there is no published understanding of even the equational theory of rho-calculus. Why should I trust this paper?) What is the logic that the paper promises ("Native Type Theory provides a shared framework of reasoning")? What are the proof rules, so I can type them into Lean or Coq? Show me that they are better than the simple logics I get from looking at lambda-calculus operationally.
Each abstraction has costs and benefits. As of 2021, the costs of categorical methods outside of pure functional languages outweigh their benefits. I worked in categorical methods for programming, and changed fields when I realised this.
> generalise the beautiful relationship between cut-elimination and beta-reduction in type theory, to cut-elimination and parallel composition + hiding
How would you generalize the correspondence between cut-elimination and beta-reduction to cut-elimination and parallel composition, and what is meant by hiding here? I would be curious to know of relevant work in the area.
The generalisation of Curry-Howard to parallelism was outlined in early interpretations of linear logic; I think Abramsky's [1] was the first, and there has been a lot more work in this direction since. A good place to start to understand the relationship between functional and concurrent computation is Milner's Functions as Processes [2]. It's incredibly simple; my mind was blown when I understood it (if you understand CPS transforms, you will find it natural; indeed, I'd say the Milner translation is the essence of CPS, see also [4]). A Curry-Howard correspondence of sorts has now been produced for linear logic and session-typable name-passing processes [3].
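To give a flavour of the CPS remark, here is a toy continuation-passing example in Haskell (it is only an illustration of CPS, not Milner's actual translation into the pi-calculus; the function names are made up): in CPS a function never returns to an implicit caller, it hands its result to an explicit continuation, much as a process sends its result along a channel.

    -- Direct style: the result is returned to an implicit caller.
    square :: Int -> Int
    square x = x * x

    -- Continuation-passing style: the "rest of the computation" k is explicit.
    squareCPS :: Int -> (Int -> r) -> r
    squareCPS x k = k (x * x)

    -- Sequencing two CPS computations threads the continuations by hand,
    -- exactly the plumbing a process-calculus encoding makes explicit with channels.
    sumOfSquaresCPS :: Int -> Int -> (Int -> r) -> r
    sumOfSquaresCPS x y k = squareCPS x (\sx -> squareCPS y (\sy -> k (sx + sy)))

    -- Recover the ordinary result by passing the identity continuation.
    sumOfSquares :: Int -> Int -> Int
    sumOfSquares x y = sumOfSquaresCPS x y id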
> It seems unlikely that there will be any profound insights from category theory on how to write code
I strongly disagree with this. To me it does not seem "unlikely"; quite the opposite. The reason I feel so strongly about this is that computer science is a subset of pure mathematics: it's the part that deals with algorithms and computation. There are many questions in computer science which cannot be answered without an appeal to pure math (e.g., what is the best possible compression algorithm?).
All of computer science began as a result of the work of Hilbert on the foundations of logic[1]. This is what led to Turing's paper on Turing Machines. Since that time, mathematicians have done a lot of exciting work on foundations.
Category theory has completely transformed the way we think about abstract mathematical objects; and type theory is very exciting in that it brings mathematics closer to a programming language which is compiled in the human brain and is done with pen and paper.
It seems to me very likely that in the future code-writing standards will change and get better rather than remain the same. Towards that end, it makes sense that improvements will come from a better understanding of foundations, since that is where it all started.
None of Zuse, Babbage, Torres y Quevedo, Ludgate, Dickinson, Desch, Atanasoff–Berry, Mauchly / Eckert, nor many of the other pioneers of computing came via Hilbert and FOM problems. An important source of early computing was the practical need for calculation (e.g. logarithm tables), leading to mechanical (and later electromechanical) automata for automatic calculation, a tradition going back at least to ancient Greece (e.g. the Antikythera mechanism).
Regarding the arrow of influence: a Fields medalist spent a decade coming up with a new foundation of mathematics (or at least algebraic topology), only to realise that the computer science department already teaches Coq to undergraduates!
> None of Zuse, Babbage, Torres y Quevedo, Ludgate, Dickinson, Desch, Atanasoff–Berry, Mauchly / Eckert, nor many of the other pioneers of computing came via Hilbert and FOM problems
I don't think this is a fair comparison. The modern computer is really distinct from everything that came before. That's because it was built according to the theory of Turing Machines.
One of the most important historical papers for the development of modern computers was von Neumann's First Draft of a Report on the EDVAC [1].
Von Neumann took the ideas of others working in the field and applied his understanding of mathematical logic to formulate the principles that led to the first working modern computer, which he built in the basement of the IAS. At that time there was a major debate at the IAS between Einstein and von Neumann, centered on whether or not the IAS should do only pure math, with the view that building a computer was part of experimental science. [2]
> Regarding the arrow of influence: a Fields medalist spent a decade coming up with a new foundation of mathematics (or at least algebraic topology), only to realise that the computer science department already teaches Coq to undergraduates!
LOL! That is an interesting and funny story. However, I don't think this example demonstrates that in the future, mathematics will not be the source of improvements to code writing standards.
Question: if code writing standards improve, where else will these improvements come from other than pure mathematics? I consider this question to be a problem type similar to maximum compression algorithms, i.e. it's a question whose solution can only be verified using the language of pure mathematics. Therefore it seems likely these improvements will have their roots in pure mathematics as well. At least, I would not say it "seems unlikely".
Regarding your question, I think mathematics and programming are about the same thing: being precise!
Programming is much more demanding, much more unforgiving in this regard, because you are talking to a stupid machine rather than a smart human. The powers automation gives mathematicians (once you've climbed the mountain of initial unfamiliarity) are so spectacular, and take the average Joe so far beyond what even Gauss, Newton, Grothendieck or Archimedes had at their disposal, that I expect that over the next century mathematics will be rewritten to suit computing, rather than vice versa. K. Buzzard's work is one step in this direction; scriptable notebooks like Jupyter or Mathematica are another.
> I expect that mathematics will be rewritten to suit computing, rather than vice versa
I agree with this. I believe pure mathematics is suffering greatly because many mathematicians refuse to fully embrace the computational power of modern technology.
My belief is that the age of pretty formulas is coming to an end, and that mathematics will focus more and more on the computational aspects of the subject; problem sets in pure math courses will be done using programs far more advanced than anything which exists today, and everyone will think no more of those programs than we do of calculators.
Apologies for the self plug, but this has been my vision with mathinspector[1]. I've been working very hard on that, and this is why I got so interested in your statement. Thank you for clarifying your thinking here. Makes sense to me, and you could be right
> I agree with this. I believe pure mathematics is suffering greatly because many mathematicians refuse to fully embrace the computational power of modern technology.
They can't even give meaningful names to their variables. When showing code to mathematicians we should always rename all variables with only one character. Then look down at them because they can't understand it ;)
I think the reason mathematicians use single letter variables is because mathematics is about the way that people think about patterns, understand patterns, and the relationship between them. Therefore the letter is used to help our brain understand the connection between abstract concepts (Categories).
As opposed to programming languages, where the goal is to do something practical, in pure mathematics, our goal is to create a language capable of helping our brains understand the infinite complexities of nature.
Thank you!!! It's so funny that you mentioned "scriptable notebooks like Jupyter or Mathematica" since I have been spending all of my time on mathinspector recently.
I think our points of view are actually very strongly aligned. However I believe the next big idea is likely to come from outside of computer science.
Personally, I am betting on biology. So many of the most sophisticated techniques are based on biology, e.g. neural nets and genetic algorithms. I have done a lot of work on extending the theory of computation with a new axiom which gives Turing machines the ability to self-replicate [1], [2].
In many parts of science, there is a cross pollination, where new ways of thinking about subject X come from a new discovery in subject Y. Typically, research will follow a group think pattern until it hits a brick wall, then you need that really big breakthrough idea. This line of reasoning leads to the conclusion, imo, that it's approximately equally likely to come from either pure computer science, or pure mathematics, or somewhere else.
Alan Turing invented neural nets in a little-known paper entitled Intelligent Machinery (see [1]), in 1948. Since then, the use of NNs has moved away decisively from inspiration by nature. I reckon nature's last big win in AI was convolutional NNs: Kunihiko Fukushima's neocognitron was published in 1980, and was inspired by 1950s work of Hubel and Wiesel [2]. Modern deep learning is largely an exercise in distributed systems: how can you feed stacks and stacks of tensor cores and TPUs with floating point numbers, while minimising data movement (the real bottleneck of all computing)?
Not unlike, I think, how airplanes were originally inspired by birds, but nowadays the two have mostly parted ways, for solid technical reasons.
> take the average Joe so much beyond what even Gauss, Newtown, Grothendieck or Archimedes had at their disposal
I think this comment really sums up very well what is at the core of our discussion: the future of mathematics and science.
My strong belief is that thousands of years from now, Archimedes and Gauss will still be remembered, and everything we think is great now will be forgotten while they are not. That tells me that they were much farther ahead of their times than us, even though they didn't have modern computers.
Mathematicians and computer scientists both have it totally backwards imo. On the one hand, mathematicians think they have something to teach us about computer science, but they refuse to use technology properly. On the other hand, when we write code it's all governed by mathematical laws, and there are many questions (though maybe not, you know, coding standards or the philosophy of writing good code) where we could really use the guiding hand of mathematicians; they need to catch up with the times, and we programmers need to accept that they have something valuable to offer and to teach us.
My interest in category theory is just as a hobby and an amateur; but I have gone pretty deep reading lots of books and so on. To be fair, category theory has not empowered my work (in programming) at all.
I find category theory empowers my programming in the design step, much more so than when actually writing code. It enables me to find a more elegant design from the outset and be sure of its utility and elegance even before I program it.
> profound insights from category theory on how to write code
Sure. But as u/feniv hinted upthread, I'm also optimistic about the impact on language design (noodling).
I too have a hobby toy language. For the type system, I just want to use the best available science. Hopefully something practical, easy to explain, simple to use, with good enough ergonomics.
While type theory is really important, it's just not my area of interest, and I'd like to defer to the experts.
“It seems unlikely that there will be any profound insights from logic on how to compute, similarly to how thousands of years of advances in the theory of numbers have had notably few implications for how to count things.”
It would be helpful if you fleshed that out a little; I'm curious about how you think insights from logic help people program. I don't think I could deduce whether a developer knew or was ignorant of formal logic in a code review. Unless they give it away with a rather direct comment.
Recognising that a programming language is equivalent to some mathematical construct is a very different thing from getting a practical insight about what decisions to make. My shirt is some sort of embedded topological space, but tailoring as a field has not gained much practical insight from the study of topology. Although I would encourage any aspirations the tailors might have to a broad education.
When Faraday was asked about what the practical use of his discovery of electromagnetic induction was, he famously replied, "what is the use of a newborn baby?"
“No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years.”
I was referring to the fact that computers themselves are the descendants of logicians — Church, Turing, von Neumann. Your argument above could have just as easily been used to attack Turing machines before the invention of the general purpose computer.
> I've kind of given up on expecting an epiphany from this stuff that's applicable coding in a way that significantly changes my practice
Same. It makes me kind of sad that pure math is not more helpful here! I think it's possible that coding standards today are really high quality, and that any improvements are going to require a revolutionary paradigm shift in how we think about programming at an extreme level of generality. That's where pure math comes in: it's a good place to shop around for new ideas, since we are kind of at a local maximum now in terms of best practices, imo.
This is how I use math while coding: instead of tinkering with numbers, I just calculate the thing and get all the numbers right the first time. When trying to solve a problem, it's super helpful to have spent years proving theorems in pure math; it helps me compile programs in my head. When I am suuuuuuper stuck, I check whether what I am trying to do has already been done in math somewhere. And when constructing an overall plan for a project, I use abstract concepts from mathematics to anticipate the limits of the various possible implementations in order to pick the best one.
> It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. A convenient way to realize this idea is as a collection of meaning-preserving transformations added to an existing compiler, such as GHC for Haskell. This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. Each such interpretation is a category easily defined in Haskell (outside of the compiler). The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.
But category theory is a very foundational theory. Learning it without further goals would be like learning set theory without further goals, which is fine if you think of it as an alternative to Sudoku.
As a foundational language it’s there to formalize mathematical objects, but that doesn’t mean people think in either set or category theory, as opposed to higher level objects in their domain.
Totally agree. Just musing on my desire to fully understand this paper, but with the knowledge that all the steps to get there probably won't do much for me outside their own satisfaction. Which, admittedly, might be fun.
CT helps advance Denotational Semantics, so syntax is written closer to math syntax, which I think approaches computation in a much stricter way and therefore leverages the "magic" of math. Of course the price is paid in the complexity of the theory and the learning curve needed to understand it, which kills its practicality.
A good introduction to categorical logic is [0], introducing the simply-typed lambda calculus along with the categorical semantics for each construction (cartesian closed categories). The later sections include topics mentioned in this article such as hyperdoctrines, categorifying quantifiers, and dependent types.
CMU also currently has a course in categorical logic [0]. Internally, the only prerequisite is a course in category theory. The course notes can be found at [1].
I can’t read things like this without bringing up Leslie Lamport, the creator of TLA+. His position is that all you need for understanding computation is “regular” mathematics, i.e. set theory and predicate logic. With just this level of math, we have been able to specify, prove, and model-check sequential and distributed computations. For several decades.
I was briefly interested in type theory, dependent types, etc., but have yet to have the payoff from the investment into it. Whereas investing in set theory and predicate logic has had immediate benefits to my day job.
Can someone explain what type theory gets us that we don’t already have with set theory?
The issue is automation: at this point (Feb 2021), the support in interactive theorem provers is just much better for type-theory based mathematics than for set-theory based maths. Whether that is due to technical advantages of type-theory over set theory, or a historical accident reflecting the much larger amount of attention on type-theory as a basis for interactive provers, is a subject of much controversy. Let me cite a leading light in the formalisation of mathematics [1]: "I would guess that the amount of effort and funding going into type theory exceeds the amount that went into set theory by an order of magnitude if not two [...]
I am certain that we would now have highly usable and flexible proof assistants based on some form of axiomatic set theory if this objective had enjoyed half the effort that has gone into type theory-based systems in the past 25 years."
Paper [2] has brought renewed attention to set-theory as a basis for automating mathematics.
That’s a very insightful reply. I hadn’t thought about the fact that the number of working computer scientists today probably does dwarf the number of mathematicians around when ZFC was a current topic.
"Set theory" can mean different things.
The looser interpretation of set theory is just using the intuitive notion of sets to model things. I think basically everyone is for that and learning these topics is definitely the lower hanging fruit.
The usual formal interpretation of Set Theory is Zermelo-Fraenkel set theory, which is roughly first-order logic with some axiom schemas about how sets behave https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t.... I would be somewhat skeptical that this notion of set theory is all that practically useful, although people have managed (Metamath). In my opinion, first-order logic is great in its simplicity and degree of automatability, but clunky and awkward in terms of expressivity. Usually the awkwardness of expressing yourself outpaces the automation, and at that point there may be such a mess that it would be terrible to do it manually.
Type theory and dependent types are different formalizations of intuitive set theory. In a sense, types are kind of sets. From a pure English perspective, the words are close to synonyms, and in practice the analogy is pretty close. 3 being of type Int is intuitively like saying 3 is in the set of ints.
Type theory seems to me to lead to a cleaner system of syntactic formal reasoning, which sounds esoteric except that syntactic reasoning is what computers have to do since we can't convince them with English paragraphs. And perhaps we don't want to, since English is ambiguous.
I think the success Lamport alludes to is that, even though he bases the TLA+ system on ZFC in some sense, users stick to a fragment of very intuitive notions of sets where foundations don't matter so much and everyone should basically agree. This may be more of a pedagogical emphasis than a true difference in systems, since almost everyone interested in dependent types finds the parts that go off into the stratosphere the most interesting.
Type theoretical and categorical foundations can be understood as a kind of 'structural' set theory. Structural set theory is a formalization of (naïve) set theory which stays quite a bit closer to commonly-understood mathematical semantics, avoiding a lot of unwanted 'baggage' that is introduced by 'material' set theories such as ZF(C).
For example, in ZF(C) you can write down propositions along the lines of "2 ∈ log(7)" (where log(.) is the logarithm function), which are syntactically valid and have a truth value, a value which depends on the exact nature of how you code up various mathematical things in ZF(C). You cannot form such propositions in type-theory. The downside of type-theory is that it seems difficult to have simple inclusions like:
Naturals ⊆ Integers ⊆ Rationals ⊆ Reals
and that certain natural operations (like quotients) are seemingly laborious in type-theory.
These inclusions are anything but "simple", even in material set theory. Consider the real numbers; in common formalizations, a "real number" is an equivalence class of infinite sequences of rational numbers. You can't understand the 'inclusion' of rationals in reals as material (set theoretical) inclusion; you need an adapter/wrapper function. Type theory works the same way.
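To make the "adapter/wrapper function" point concrete, here is a minimal Lean 4 sketch (the names n and z are made up for the example; in practice proof assistants insert such coercions for you automatically):

    -- The 'inclusion' of the naturals in the integers is an explicit conversion,
    -- not a literal set-theoretic subset relation.
    def n : Nat := 3
    def z : Int := Int.ofNat n   -- the wrapper taking a natural to the corresponding integer

    -- Statements are then about the wrapped value, not about n itself.
    example : z = Int.ofNat 3 := rfl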
Quotients are not "natural" from a constructive POV, and that's ultimately where the difficulties you're referencing come from. If you're OK with being non-constructive, quotients can be formalized ad hoc. (One of the positive side effects of homotopy-theoretical foundations is that they might help clarify what exactly is going on when one reasons on quotients.)
In ZFC, you define reals as you sketch, and then specify rationals, integers etc as specific subsets, then you get inclusions, which mainstream mathematics uses without worries. And yes, mainstream mathematics is non-constructive (as is Isabelle/HOL).
(I don't want to have a contentious discussion about FOMs, or which prover is best, just trying to sketch the trade-offs, and historical background.)
Lamport is giving a talk at my workplace on Monday about 21st century proofs, happy to add this question to the Q&A and let you know his answer, I’m interested in this too.
I really like this idea! I first got into type theory as a result of an answer by Carl Mummert to a question I asked on Math StackExchange[1]. I even wrote a blog post about it a year later[2].
Ever since then, I have considered type theory to be my favorite version of the foundations of math. I have often played around in foundational theories to discover as many connections as possible between the foundations of math and computer science. That being said, I think the perspectives offered by set theory and category theory are extremely valuable, and that these three approaches complement each other; sometimes one is better than another for a specific purpose.
This is the first time I had really seen the possibility to reformulate computer science to bring it closer to mathematics. How exciting!
I followed this until the part just slightly after "see how natural it is?" Then it's suddenly How to Draw an Owl. I don't follow the monoid part. Anyone can help elaborate?
The core sentence of the post comes a bit later, just above the "Conclusion" section header: "This type theory is used in proof assistants such as Coq and Agda; eventually, this expressive logic + dependent types will be utilized in many programming languages." Note the "is used" part, which is in the present tense.
My understanding is that nothing new was constructed here. Maybe the well-known theory of dependent types has been reconstructed in a new way based on categories; this may be exciting to fans of category theory, but I don't see how it advances programming languages in any way. The actual paper isn't any more helpful either: It name-drops JavaScript in the introduction and at the very end, but doesn't really do anything with it, it certainly doesn't show any kind of model of a type system for even a trivial JavaScript subset.
That's just an example of how we naturally use dependent types in everyday mathematics: the set of monoids is the set of triplets of a set M, a function m from that M^2 to that M, and an element 1 of that set M such that the given laws hold. You can only define the type of monoids (members of the set of monoids) dependently, because 1 isn't just an element of any set, it's an element of the specific set M.
(Groups might have been a better example than monoids, given that groups are more familiar to mathematicians and the point is to emphasise that this is normal everyday mathematics).
Sure, but that's distinctly not the way you'd write or think about that definition in everyday mathematics. It encodes the definition but it doesn't really express it.
> Dependency is pervasive throughout mathematics. What is a monoid? It’s a set M, and then operations m,e on M, and then conditions on m,e. Most objects are constructed in layers, each of which depends on the ones before. Type theory is often regarded as “fancy” and only necessary in complex situations, similar to misperceptions of category theory; yet dependent types are everywhere.
A monoid is (S,•,e): a set S equipped with a binary operation • : (S,S) -> S such that for all x, y, z in S, x•(y•z)=(x•y)•z, and an element e in S such that for all x in S, x•e=e•x=x. Examples of monoids include (String,concat,""), (int,+,0) and so on.
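In Haskell these examples are literally the standard Monoid type class; a small sketch (Sum comes from Data.Monoid in base, the other names are made up for the example):

    import Data.Monoid (Sum(..))

    -- (String, concatenation, "") is a monoid: (<>) appends and mempty = "".
    greeting :: String
    greeting = "Hello, " <> "world" <> mempty          -- "Hello, world"

    -- (Int, +, 0) is a monoid once we pick + as the operation, via the Sum wrapper.
    total :: Sum Int
    total = Sum 1 <> Sum 2 <> Sum 3                    -- Sum {getSum = 6}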
One recurring theme in universal algebra and category theory is the notion of an algebra[0]; in programmer terms it's like a module declaration. (See also F-algebra[1], which is heavily used in functional programming to formalize the notion of folding and more.)
Dependent type theory allows one to encode these algebras, e.g. encoding monoids and their laws in Coq's type system CoC[2]. Just as ADTs in Haskell/OCaml and enums in Rust are built from a "sum of products", products and sums can be generalized in dependent types as well:
Length-indexed lists via the dependent sum: Σn:ℕ.List(n) consists of dependent pairs ⟨n, l : List(n)⟩; here List(–) itself is the function that takes a natural number n and returns the type List(n).
Type-parametrized types via the dependent product: ΠS:Set.List(S) consists of dependent functions λS:Set.φ(S); an inhabitant of ΠS:Set.List(S) can be read as a function that takes a type S and returns a value of type List(S).
So to encode monoids in a dependent type theory using those dependent sums and products, we can just put the operations and laws into some big record, say
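A minimal sketch of such a record, written here in Lean 4 syntax (the type and field names are made up for illustration; the point is that the operations and the equational laws sit side by side in one dependent record):

    -- The carrier, the operations, and the laws all live together in one record.
    structure Monoid' (M : Type) where
      op     : M → M → M
      unit   : M
      assoc  : ∀ x y z : M, op (op x y) z = op x (op y z)
      unit_l : ∀ x : M, op unit x = x
      unit_r : ∀ x : M, op x unit = x

    -- The natural numbers with addition and 0 form such a monoid;
    -- the law fields are discharged by lemmas from the standard library.
    def natAddMonoid : Monoid' Nat where
      op     := (· + ·)
      unit   := 0
      assoc  := Nat.add_assoc
      unit_l := Nat.zero_add
      unit_r := Nat.add_zero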
> For now we explore the language of a topos. There are multiple views, and often its full power is not used. Toposes support geometric logic, predicate logic, and moreover dependent type theory. We emphasize the latter: dependent types are expressive and pervasive, yet underutilized in mathematics.
The next step of the article is the realization that a topos[0] has this expressive capability from the get-go.
There's a nice list of clarifying examples in the paper[1], starting at the bottom of page 2, for instance, stacks with push and pop, untyped lambda calculus, monad comprehension syntax in Haskell (sort of like a generalization of async/await syntax), and so on. This quote in particular seems exciting:
We aim to implement native type theory as a tool which inputs the formal semantics of a language and outputs a type system, which can then be used to condition an entire codebase. Properly integrated, it could introduce expressive type theory directly into everyday programming. To do so, we plan to utilize the progress made in language specification
But don't most dependent type theories have a restrictive notion of equality (e.g. without extensional equality of functions) making many things that we would like to be monoids not fit that definition? Anyway, I agree with your general point: dependent types are extremely simple, and make so many things more elegant (type-safe printf is the common example) that I'm surprised they aren't more popular.
Hm yes. For instance the monoid of endofunctions on S would require one to prove that f . id = id . f = f. In vanilla dependent types this needs the extensionality axiom added, but in category theory it definitely is fine to speak of equality between morphisms with the same source and target.
This whole post is above my head, reminiscent of reading extremely generic Haskell code. It looks super interesting and has been done very compactly. For someone who really gets the stuff it’s probably an amazing way to communicate the thing, but for the rest of us, the jargon just forms a barrier. That’s just how specialties work, so I’m not complaining or anything.
A question:
> The embedding preserves limits and exponents
What’s this mean? I’m especially curious about exponents because wildly different areas of math talk about exponents in ways that must be related but I don’t understand the connection.
Exponents are probably the easier aspect to understand. For instance, what is the number of distinct functions between a set X and a set Y? Well, for every input value of X, we get to choose any single output value of Y. There are |Y| choices of output, so we have |Y| * |Y| * ... * |Y|, where there are |X| factors in that multiplication. In short, there are |Y| ^ |X| functions from X to Y. So we'll often denote the function space by Y^X itself, and thus it gets called an "exponential".
In the category Set, the "exponential" of two sets X and Y is exactly the set of functions from X to Y. In other categories with a concept of exponential, you get a similar internalization of collections of the category's morphisms as objects of the category. You also almost always have an exponential law, (Z^Y)^X = Z^(X*Y). (If you interpret ^ as "if" and * as "and", you get a common law of logic that we often use when refactoring nested if-statements!)
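That exponential law is exactly currying; Haskell's standard curry and uncurry functions witness it directly (a small sketch with made-up function names):

    -- (Z^Y)^X  ~  Z^(X*Y):  a function returning a function carries the same
    -- data as a function taking a pair.
    curried :: Int -> (Int -> Int)
    curried x y = x + y

    uncurried :: (Int, Int) -> Int
    uncurried (x, y) = x + y

    -- The Prelude's curry and uncurry convert between the two forms.
    sameAsCurried :: Int -> Int -> Int
    sameAsCurried = curry uncurried

    sameAsUncurried :: (Int, Int) -> Int
    sameAsUncurried = uncurry curried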
For a functor to "preserve" an exponential operation (or any other operation) is to be able to do the exponentiation either before or after the functor without changing the result. In symbols, F(X^Y) = F(X)^F(Y). (Compare with linear transformations from linear algebra, in which f(a*x + y) = a*f(x) + f(y).)
Limits are a bit more fundamentally categorical, and I still don't have a good intuition for them. If you think of a functor as projecting the shadow of some index category onto your target category, a limit is in some sense a "best pre-approximation" in the target category, such that if you have a function from elsewhere in the target category into the part under your functor, you can equivalently make a pit stop at the limit object before continuing on into the functor's image.
In some sense, it's like an optimal border control point just outside your functor's image -- you can systematically divert traffic through the limit without actually changing the results.
I'm not sure how to intuitively interpret "preserves limits", since I guess you'd need another functor lying around whose limit is preserved under composition with your first functor. Something like F(Lim(G)) = Lim(F . G), perhaps. But now I'm just reading Wikipedia.
Most of the common examples are mathematical widgets. For example, for any two vector spaces V, W, there's a vector space of linear transformations from V to W. Given a vector in V and a transformation W^V, you can get a vector in W -- I think categorically this is called the hom-tensor adjunction?
It's a long-standing desire of mine to find categorical examples closer to software engineering. The usual example is the family of cartesian-closed categories corresponding to simply-typed lambda calculi -- "first-class functions" is exactly saying that you can take and return functions themselves as values, which witnesses that the morphisms of the category are "internalized" as elements of some type. I've never found it to give a very visceral intuition for the concept though.
Seems like this is an extension of the Curry-Howard-Lambek isomorphism [1] relating type theory, logic and category theory together. I find that to be one of the most beautiful results in computer science and have been working on a practical language built on the same principles for a while (A mix of Haskell, Prolog and APL in a python-like language)
Not yet. I've specced it out, but still working on the implementation. Here's a hand-written example showing how you can specify the constraints to validate a Knight's tour. You can then enumerate over the types to get valid tours.
Honest question: has anybody ever used category theory to solve a practical problem that couldn’t as easily have been solved with simple algebraic reasoning? I have so far read and understood enough category theory to get a feel for whether it is worth spending more time on or not. And my conclusion so far is: nope.
This is terrific but also frustrating: if you want people to use this stuff you've got to "bring it down the mountain".
Alternately we could have a tiny elite writing crystalline mathematically perfect software and the rest of us mere mortals can just plug it together like Lego blocks. Might not be terrible?
How much of a literal newbie? Are you familiar with predicate logic / any formal logic? Are you familiar with any basic modern algebra like Group theory?
I was educated in the UK, so have a good grip on the highschool math there, but not much beyond that. I did try to plough thru SICP, that's a work in progress. Too little you'd say?
Well it's too little to understand immediately but you can definitely come to grasp this stuff. There are a lot of mathy words here that do warrant their invocation but don't mean that the math is arcane. You seem like you're interested which is all that matters.
I'm going to be slightly handwavey and generalize here, but I think it may be helpful. Predicate logic is the basic logic that humans think of when they think "logic." It's if-this-then-that style thinking with only basic "tools". Predicates are statements that can lead to other statements; for instance the predicate "it's raining" could imply the predicate "I will wear a jacket." You can even combine predicates, such as: if "I'm hungry" _and_ "I have food in the fridge" then "I will eat." Typically, we use symbols instead of sentences, so it will look more like (A and B) -> C.
Assuming you've done some programming, you're already used to predicate logic in the form of if-else statements. You've also been exposed to type theory. I think SICP is in Scheme, so types are a little bit hidden, but if you've programmed in any of the modern non-Lisp languages, i.e. Python, Java, C, etc., then you've seen types. Types are ints, floats, strings, etc.
You know how you can add two numbers but you can't add a number and a string without special rules? (1+1 == 2 but "cat" + 1 == ???) Type theory concerns these rules and a bit more. A question that type theory helps answer is, what are we permitted to do with certain classes of objects?
Once the blog post gets into dependent types, I think it becomes a bit tricky to pick up. Dependent types are types that "depend" on other information. For instance, we might have a type called Array which is a sequence of numbers of any length. We might further have a type PointTripleArray := Array<3>, i.e. an array with exactly 3 terms; because the type PointTripleArray depends on the value 3, it is a dependent type. There are other more complicated ways to use dependent typing but I'll skip those.
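Here is a sketch of the Array<3> idea in Haskell, using GHC extensions to approximate the dependent typing described above (the names Vec, Three and pointTriple are made up for the example):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- A type-level copy of the natural numbers.
    data Nat = Z | S Nat

    -- A list whose length is part of its type: Vec n a has exactly n elements.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    type Three = 'S ('S ('S 'Z))

    -- The analogue of PointTripleArray := Array<3>: the length 3 lives in the type,
    -- so a two-element value simply does not type-check.
    pointTriple :: Vec Three Double
    pointTriple = VCons 1.0 (VCons 2.0 (VCons 3.0 VNil))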
Honestly, I kind of understand the rest of the post but not well enough to start explaining here. I wish I could. I think this is enough to get you started reading and learning though.