> rzk-1 assumes "type-in-type", that is U has type U. This is known to make the type system unsound (due to Russell and Curry-style paradoxes), however, it is sometimes considered acceptable in proof assistants. And, since it simplifies implementation, rzk-1 embraces this assumption, at least for now.
I do this too (in my HoTT-book-inspired proof-checker)! I do plan on eventually implementing a hierarchy of universes but it doesn't seem to be a problem to let U be of type U. I wonder where the first practical issue comes up when you allow this.
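For what it's worth, here is roughly why the assumption simplifies implementation: the universe rule collapses to a single self-referential clause. A minimal sketch in Haskell, with a hypothetical AST (none of these names come from rzk or my checker), and with lambdas and application elided:

```haskell
-- A tiny dependently-typed core with a single universe U.
data Term
  = U                    -- the universe
  | Var String
  | Pi String Term Term  -- (x : A) -> B
  deriving (Eq, Show)

type Ctx = [(String, Term)]

-- infer returns the type of a term; the U clause below is the
-- entire "type-in-type" assumption.
infer :: Ctx -> Term -> Maybe Term
infer _   U          = Just U        -- U : U (unsound, but simple)
infer ctx (Var x)    = lookup x ctx
infer ctx (Pi x a b) = do
  _ <- infer ctx a                   -- the domain must be typeable
  _ <- infer ((x, a) : ctx) b        -- ...and the codomain, under x : A
  Just U                             -- every Pi lands in the one universe
```

With a universe hierarchy, the Pi clause would instead have to compute the maximum of the two levels, which is where the bookkeeping begins.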
---
After writing this comment, I decided to put it up on GitHub. Here is the entire source code of my proof-checker as well as some of the theorems from the HoTT book: https://github.com/sid-code/metalogic/blob/main/dts.org. It was never meant to be used or even read by anyone else, but here it is.
Here is where the universe is defined (unsoundly): https://github.com/sid-code/metalogic/blob/main/dts.org#the-...
To see the theorems, scroll down past the machinery to the `(env-define-checked)` calls. Here is the proof that multiplication of natural numbers is associative: https://github.com/sid-code/metalogic/blob/main/dts.org#asso.... There is no type inference, so it's pretty gross. I think it's pretty cool how little you need to actually have a working proof-checker.
Interestingly, even though type-in-type is inconsistent, there are no normal terms of the "empty" type. So, if you restrict to normal terms, you do not get full explosion. There are, however, types A such that both A and not-A have normal terms. It is just that if you combine them with modus ponens, the resulting term diverges.
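Spelled out as a sketch (writing $\bot$ for the empty type):

$$a : A \quad\text{normal}, \qquad f : A \to \bot \quad\text{normal}, \qquad \text{but } f\,a : \bot \text{ has no normal form,}$$

since nothing normal inhabits $\bot$, so reduction of $f\,a$ must diverge.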
Another cool thing is that type-in-type is inconsistent, but has models. In particular, Palmgren constructed a model, based on domain theory, of Martin-Löf type theory with type-in-type.
wow that's really cool, I wonder what the essential difference is between type-in-type and other classic "nice" inconsistencies in type theories. It seems like it's "almost" consistent.
Type theories usually come with a reduction relation on terms. It computes things like β-reduction for functions (substituting arguments into function bodies), projections out of tuples, etc.
A normal term is one where no further reductions can be made. Without type-in-type you can prove that all terms can be reduced to a normal form. With type-in-type, some terms normalise while others diverge.
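A minimal sketch of such a reduction relation, for the untyped lambda calculus in Haskell (de Bruijn indices; all names here are mine): `step` performs one leftmost β-reduction and returns `Nothing` exactly when the term is normal.

```haskell
-- Untyped lambda terms with de Bruijn indices.
data Tm = V Int | L Tm | A Tm Tm deriving (Eq, Show)

-- Shift free variables >= c up by d (standard substitution machinery).
shift :: Int -> Int -> Tm -> Tm
shift d c (V k)   = V (if k >= c then k + d else k)
shift d c (L t)   = L (shift d (c + 1) t)
shift d c (A f x) = A (shift d c f) (shift d c x)

-- Substitute s for variable j in t.
subst :: Int -> Tm -> Tm -> Tm
subst j s (V k)   = if k == j then s else V k
subst j s (L t)   = L (subst (j + 1) (shift 1 0 s) t)
subst j s (A f x) = A (subst j s f) (subst j s x)

-- One step of beta-reduction; Nothing means the term is normal.
step :: Tm -> Maybe Tm
step (A (L b) x) = Just (shift (-1) 0 (subst 0 (shift 1 0 x) b))
step (A f x)     = case step f of
                     Just f' -> Just (A f' x)
                     Nothing -> A f <$> step x
step (L t)       = L <$> step t
step (V _)       = Nothing

-- Repeated stepping; loops forever on diverging terms.
normalize :: Tm -> Tm
normalize t = maybe t normalize (step t)

-- omega = (\x. x x) (\x. x x): step always succeeds, so it never normalises.
omega :: Tm
omega = A (L (A (V 0) (V 0))) (L (A (V 0) (V 0)))
```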
In fact, using type-in-type you can define the fixed point operator and do general recursion!
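Haskell doesn't have type-in-type, but its negative recursive types open the same loophole, so it can stand in for a sketch: once a type may refer to itself like this, the untyped Y combinator becomes typeable and general recursion follows.

```haskell
-- Mu a is a type that takes itself as an argument; in a type-in-type
-- system one can encode such a self-referential type directly.
newtype Mu a = Mu { unMu :: Mu a -> a }

-- The Y combinator, typed: y f unfolds to f (y f), so under strong
-- normalisation it would diverge, exactly the general recursion above.
y :: (a -> a) -> a
y f = g (Mu g) where g h = f (unMu h h)

-- e.g. factorial by general recursion, with no explicit recursion used:
fact :: Integer -> Integer
fact = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```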
Like every theorem prover, it has a logic that you use for stating propositions and proving theorems. That logic is closely related to HoTT.
Can someone explain to me why there are so many category theory posts in HN? To me it's just a pretty obscure branch of mathematics and there are almost no posts about any of the more popular branches of mathematics.
A pretty obscure branch of mathematics? You've got to be kidding me. I don't understand HN's fetish for category theory either, and I find it rather weird - I'll refrain from even less charitable words. But as an actual professional mathematician... no, it's not an obscure branch of mathematics.
You don't believe me? Read the ICM proceedings https://www.mathunion.org/icm/proceedings and count how many times the words "category" or "functor" appears. And notice something very special, too: you'll find articles that are not about category theory per se, and that will still mention categories. And you know what? They don't even explain what they are. You know how that can be? Categories are so pervasive in mathematics. You wouldn't write an article for the proceedings and explain what a group or a vector space is, would you? Well, same for categories. It's a basic (in the original sense of the word) part of mathematics.
They use the word "category" instead of "set", and the word "functor" instead of "homomorphism". Category theory is still very, very obscure, with little to no connection to the rest of mathematics.
I think this is a rather peculiar comment — category theory is interesting precisely _because_ it's closely connected to other branches of mathematics, and can be used as a lingua franca to translate concepts and discoveries between them.
I made the comment (and find yours interesting) because the usual criticism of CT is the opposite: that it provides new language for accomplishments in other fields of mathematics but offers little in the way of novel results itself. A great quote I heard once is that 'category theory is great for telling you what you've done' :)
So I'm curious as to how exactly you disagree: the existence of connections between CT and other fields is objectively and verifiably well established (just read a randomly-selected CT paper!), but perhaps you feel that the categorifications of other fields are incomplete or unrepresentative? Or that the connections are in some sense forced or unnatural?
> So I'm curious as to how exactly you disagree: the existence of connections between CT and other fields is objectively and verifiably well established (just read a randomly-selected CT paper!)
I don't think there are as many connections, or as important ones, as they claim, and I don't think that many fields use them (yet). Sure, CT advocates have huge lists of unifications and connections (e.g. Categories for the Working Mathematician has a lot), but they don't appear on the other side. Maybe they will, but I don't think they do yet. Hence the 'obscure'.
I see: so it seems that what confused me about your comment is that by 'closely connected' you don't mean it in a theoretical sense, i.e. that there's a well-understood mapping between the fields, but in a cultural sense, in which people choose to make active use of category-theoretic concepts when working in a more concrete field?
The internal language of a cartesian closed category is the simply typed lambda calculus, which has expressive power equivalent to combinatory logic, where the Y combinator was originally defined.
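For reference, the Y combinator itself, which only typechecks in the untyped setting (no simple type can be assigned to the self-application $x\,x$):

$$Y = \lambda f.\ (\lambda x.\ f\,(x\,x))\ (\lambda x.\ f\,(x\,x)), \qquad Y\,f =_{\beta} f\,(Y\,f)$$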
Note that the name "Y Combinator" was intended to appeal to a specific kind of hacker. A kind that was likely interested in functional programming, Lisp, and Paul Graham. Which was to say the kind of person that Paul Graham wished to convince to start startups with Y Combinator.
And Hacker News was started as a discussion site to attract that kind of person here to where they would all become aware of each other, aware of Y Combinator, and aware of the opportunities to have from starting startups.
The community of people here expanded out from that core. But category theory is pretty close to that original core, and so it isn't a surprise that it gets more attention here than elsewhere.
I think it is due to the homotopy type theory book. There is an elegant connection between category theory and type theory. I guess most of the people submitting category theory to hacker news (or any graduate math) just do it to flex and seem clever.
Category theory has been extremely influential in computer science, primarily because a (maybe the) prototypical idealised programming language, the simply typed lambda calculus, is at the same time a logic and a convenient syntax for cartesian closed categories: basically the nicest, most well-behaved kind of category, and in turn a foundation of topoi, a generalisation of the essence of all logics.
This triad gives enormous scope for technology transfer between mathematics, logic, and programming.
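To make the correspondence concrete, here is the CCC structure spelled out in the category of Haskell types and functions (a sketch; the names are mine, with Prelude equivalents noted in comments):

```haskell
-- Arrows and composition: the category itself.
identity :: a -> a                           -- Prelude's id
identity x = x

compose :: (b -> c) -> (a -> b) -> (a -> c)  -- Prelude's (.)
compose g f = \x -> g (f x)

-- Products: the pair type (a, b), with fst/snd as projections.
pairing :: (c -> a) -> (c -> b) -> (c -> (a, b))
pairing f g = \x -> (f x, g x)

-- Exponentials: the function type a -> b, with evaluation...
eval :: (a -> b, a) -> b
eval (f, x) = f x                            -- uncurry ($)

-- ...and currying, the universal property that makes it "closed".
curryArrow :: ((c, a) -> b) -> (c -> (a -> b))
curryArrow f = \c a -> f (c, a)              -- Prelude's curry
```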
Any source for that? I don't think that category theory has even a minuscule influence on computer science.
Type theory? Sure. Logic and model theory as well. Set theory? Number theory? Heck, even geometry is used for dozens of algorithms not themselves about geometry (convex optimization in n-dimensional space, Hamming distance, abstract convex geometry, etc.). But category theory? Do you have any influential papers or books in mind?
I would argue that the last 30 years (if you arbitrarily begin at Moggi's paper on monadic computation) have shown ample impact of category theory on programming language theory, and on type theory more generally. Additionally, there is earlier work on the semantics of programming languages, dating to the early '80s, with strong category theory roots. There is a large body of work (I'm on mobile and don't have links) that uses category theory to backstop the development of algorithms and generic code for all kinds of algebraic data types and container types, for instance.
There is a thriving 'school' of computer science that views category theory as the third leg of the category theory, type theory, proof theory triangle that forms the basis for all of computer science. This is very evident in the CS department at Carnegie Mellon. If you are interested, I'd recommend checking the backlog of lecture videos and presentations from the Oregon Programming Languages Summer School.
Turing Award winner Dana Scott already mentions categories in his work on models of the untyped lambda calculus, for example in his 1971 monograph on continuous lattices, which set the foundations of domain theory. Moggi's monads came about, in part, as a way of understanding domain-theoretic constructions better.
The Haskell community was, quite clearly, heavily influenced by category theory. Monads (as a programming tool) fell out of that, and I suspect this directly influenced the treatment of async in JavaScript (and elsewhere).
I do think "extremely influential" is overstating it. Outside of that, plus some niches of niches of academic CS, I can't really think of any other places where category theory is particularly important.
I am not sure that it influenced async in JavaScript. But the earliest monad paper I found in ~2 minutes of googling [0] even uses natural transformations, which in my opinion counts as using CT.
Async/await comes originally from F#, brainchild of the Cambridge MSR lab that also maintain(s|ed) GHC. Async/await is just a limited monad syntax, and its originators were well aware of that fact. Insofar as monads are a categorical vocabulary for effects, then, async/await in JavaScript and all the other languages that now implement it all ultimately stem from category theory :)
More vaguely, but also more sweepingly: I think the general approach, now standard in language design, of taking a pure language as a base and then adding effects to it was established thanks to Moggi's work on monadic effects, which makes essentially all modern programming languages heavily influenced by CT (at a couple of steps' removal).
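To make the "limited monad syntax" point concrete: a JS sequence like `const user = await fetchUser(id); const posts = await fetchPosts(user); return render(posts);` is a chain of monadic binds. A Haskell sketch, with IO standing in for the promise monad and hypothetical `fetchUser`/`fetchPosts`/`render` stubs:

```haskell
newtype User = User String

-- Hypothetical async operations; IO plays the role of Promise here.
fetchUser :: Int -> IO User
fetchUser n = pure (User ("user" ++ show n))

fetchPosts :: User -> IO [String]
fetchPosts (User u) = pure [u ++ "'s first post"]

render :: [String] -> String
render = unlines

-- Each "await" is one (>>=); async/await is syntax for exactly this.
program :: Int -> IO String
program uid =
  fetchUser uid   >>= \user  ->
  fetchPosts user >>= \posts ->
  pure (render posts)
```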
You could look at the papers being published in conferences like POPL, LICS, PLDI and ICFP.
The theory of (Moggi) monads and monad transformers has been influencing modern programming (and libraries) very heavily (e.g. all of Haskell, Scala's ZIO vs Cats, Rust's approach to returning errors). Most modern programming language research engages in some form or other with linear types (and their relatives, like affine types), which come from Girard's linear logic. Both (Moggi) monads and linear logic are heavily influenced by their inventors' knowledge of category theory. So I'd say, whenever you program in a modern language or use modern library design, you (indirectly) stand on the shoulders of many giants. Some of those giants were category theorists.
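As one concrete instance: Rust's `Result` plus the `?` operator behaves like the `Either` monad's bind, where each step short-circuits on the first error. A minimal Haskell sketch (names are mine):

```haskell
-- Errors as ordinary return values, Rust-Result style.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- Each <- is a bind that returns early on Left, like ? in Rust.
calc :: Int -> Int -> Int -> Either String Int
calc a b c = do
  q1 <- safeDiv a b
  q2 <- safeDiv q1 c
  pure (q2 + 1)
-- calc 12 3 2 == Right 3;  calc 1 0 2 == Left "division by zero"
```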
Interestingly, what I'm beginning to detect is an influence of computer science on category theory, if only because we want to verify abstract maths in automated tooling.
While technically there's a relationship between category theory and substructural logics (of course there is, category theory is general enough that there's a relationship between it and everything), they don't really strike me as having that much to do with it directly - they're a fairly straightforward extension (restriction) of typical presentations of constructive logic. And I wasn't aware that e.g. Rust picked up its type system via exposure through category theory. Is there a source for that narrative?
Girard mentions the connection with categories all the time. For example, in "Proofs and Types" he proves various theorems along the lines of: the subcategory of coherence spaces and stable maps (one of the main models that LL was developed for) induced by some LL fragment is a cartesian closed category (IIRC). I think he developed LL in part by fine-tuning it until all the categories induced as models of fragments of LL have nice categorical properties (to the extent that is possible).
When I was a PhD student, my supervisor suggested that I learn category theory to understand linear logic. (Not sure, in retrospect, that was the best course of action, but that was my trajectory)
Rust's types evolved over many years; Rust used to have "typestate", for example. I had discussions with Graydon Hoare around 2011-ish about session types (which are linear), and it struck me that Hoare knew exactly what I meant by the term. More generally, linear typing was just "in the air" in the early 2000s: you could not have been serious about programming language design without being aware of linearity. It was all over the research literature. Hoare was clearly very knowledgeable about programming language research.
HN is dominated by computer science people. Category theory is related to functional programming (e.g. Lisp), which academic computer scientists tend to like. Proof assistants are related to verifying correctness of software, which many computer scientists (especially the nerds, counting myself) like because it lets you dream of a world that's not so messy (or at least of conquering the mess in a small closed domain).
citation please? there's one guy in my department (one of the best theory departments in the US) who contributes to SML, and that's about it as far as functional programming goes.
a less charitable/more accurate response to the GP's question is: HN has a fetish for both functional programming and cat theory. there's even another response here that captures the sentiment beautifully:
> I have studied math and was in some lectures about category theory. I still don't get what the project is about and that fact intrigues me.
- Definitions you can comprehend without a math background
- Extremely vague applications so that there's no area in which it definitely doesn't apply.
I think the hype, which has lasted for a decade now, is a result of there being a lot of smart people in different fields who are interested in math research, but not so interested they're going to catch up to whatever the Langlands program is about.
> It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category.
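One way to see "a variety of interpretations" in working code is the finally-tagless style: a single typed program written against an interface, then interpreted once as evaluation and once as pretty-printing (a sketch; every name here is mine, not from the quoted source):

```haskell
-- One typed "program" per instance of this interface.
class Sym r where
  lit :: Int -> r Int
  add :: r Int -> r Int -> r Int
  lam :: (r a -> r b) -> r (a -> b)
  app :: r (a -> b) -> r a -> r b

-- Interpretation 1: ordinary evaluation.
newtype Eval a = Eval { runEval :: a }
instance Sym Eval where
  lit n                 = Eval n
  add (Eval x) (Eval y) = Eval (x + y)
  lam f                 = Eval (runEval . f . Eval)
  app (Eval f) (Eval x) = Eval (f x)

-- Interpretation 2: pretty-printing (a purely syntactic reading).
newtype Pretty a = Pretty { runPretty :: Int -> String }
instance Sym Pretty where
  lit n   = Pretty (\_ -> show n)
  add x y = Pretty (\d -> "(" ++ runPretty x d ++ " + " ++ runPretty y d ++ ")")
  lam f   = Pretty (\d -> let v = "x" ++ show d
                          in "(\\" ++ v ++ ". "
                             ++ runPretty (f (Pretty (const v))) (d + 1) ++ ")")
  app f x = Pretty (\d -> "(" ++ runPretty f d ++ " " ++ runPretty x d ++ ")")

-- The same term, two meanings:
example :: Sym r => r Int
example = app (lam (\x -> add x (lit 1))) (lit 41)
-- runEval example == 42
-- runPretty example 0 == "((\\x0. (x0 + 1)) 41)"
```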
My theory is that it's a kind of math that LLMs can do. Not that LLMs are necessarily doing it, but the same thing that makes it easy for LLMs also makes it easy for certain people. They can repeat words and phrases that they hear, and sometimes even rearrange them into new forms! One time some category theory guys were struggling with something and it was equivalent to topological sort in computer science. But the way they were failing at solving it sounded much more impressive!
if i pull the API for django or react or ...anything else, how much should i be expected to understand without any prior exposure? how is "domain-specific gibberish" any different? you make up words to represent groups of other words in order to capture concepts. it's literally just how language works.
let me translate for you:
proof assistant: program that checks your work (writing a proof) and helps you find new proofs.
synthetic: math where the basic objects are undefined terms given meaning by rules and axioms, instead of being constructed inside some other theory
1-category: a collection with objects and functions between them
2-category: a collection with objects, functions between them, and functions between those functions (called homotopies)
k-category: you get the idea
∞-category: the limit as k -> ∞
second-order abstract syntax which is great for lambda extraction: syntax specifically suited for this kind of stuff
I'm not a mathematician, but I feel the "synthetic"/"analytic" distinction in mathematics is an interesting and useful concept.
"In modern mathematics, an analytic theory is one whose basic objects are defined in some other theory, whereas a synthetic theory is one whose basic objects are undefined terms given meaning by rules and axioms"—Michael Shulman
In programming terms, I guess "synthetic" mathematics feels a bit like programming to an abstract interface.
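A sketch of that analogy in Haskell: the class plays the role of the axioms, and anything proved against it holds in every instance, i.e. in every "model" (names are mine; the laws live in comments, not checked by the compiler):

```haskell
-- A Monoid-like interface: the "synthetic" axioms.
class Monoid' m where
  unit  :: m
  (<+>) :: m -> m -> m
  -- laws: unit <+> x == x;  x <+> unit == x
  --       (x <+> y) <+> z == x <+> (y <+> z)

-- "Theorems" written against the interface hold in every model:
double :: Monoid' m => m -> m
double x = x <+> x

-- Two different models of the same axioms:
instance Monoid' [a] where unit = []; (<+>) = (++)

newtype Sum = Sum Int deriving Show
instance Monoid' Sum where unit = Sum 0; Sum a <+> Sum b = Sum (a + b)
```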
I didn't read it as a complaint, more as a humorous observation. Either way, there's no need to get aggressive with someone who hasn't figured it out yet.