A formalization of category theory in Coq (github.com)
109 points by noch on Aug 4, 2017 | 33 comments



It's actually not a very interesting formulation of category theory. You can't express a category of categories with isomorphic categories considered equal in this way. This can partially be worked around if we state that everything important preserves equivalence (i.e. if A ~ A' and B ~ B', then A x B ~ A' x B'). Though if you want to express this in full generality, you will need Homotopy Type Theory.

But, all in all, even formulating all this and proving theorems was a good exercise. Good job!


I agree that doing category theory without homotopy type theory is working with one hand tied behind your back, but you have to realize that it is very different from textbook category theory. For instance, in HoTT you will find that the category of U-small categories is not a 1-category; it's a 2-category (since equality of categories is equivalence, not isomorphism). This is correct and gets rid of a lot of confusion surrounding Cat, but it means that you have to deviate from the textbook definitions a lot.

There are at least two formalizations of HoTT categories, if anybody wants to know more. There's one formalization in HoTT Coq and another as part of the UniMath project. Both work heavily with precategories as well as categories in order to formulate some textbook notions without changing all the definitions...


> You can't express a category of categories, with isomorphic categories considered equal in this way.

Not sure why you think that. It's a straightforward construction within this framework and even part of the development[1].

[1] https://github.com/jwiegley/category-theory/blob/master/Inst...


You are right. I didn't notice compose_respects.


Concerning "everything important": it seems that it's enough for equality to respect only composition, so the category of categories is basically expressible there. But HoTT would give you much more power.
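
For reference, here's a rough sketch of what I mean (not the repo's actual definitions, just the general shape): a category record whose hom-types carry an equivalence relation that composition is required to respect.

    Record Category := {
      obj     : Type;
      hom     : obj -> obj -> Type;
      (* setoid-style equality on morphisms *)
      equiv   : forall (a b : obj), hom a b -> hom a b -> Prop;
      compose : forall (a b c : obj), hom b c -> hom a b -> hom a c;
      (* the role played by compose_respects: composition preserves equiv *)
      compose_respects :
        forall (a b c : obj) (f f' : hom b c) (g g' : hom a b),
          equiv b c f f' -> equiv a b g g' ->
          equiv a c (compose a b c f g) (compose a b c f' g')
      (* identities, associativity, and the equivalence laws are omitted *)
    }.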


I believe this is what John Wiegley talked about in this Haskell Cast episode[1]; as I understand it, it should allow translating Haskell into Coq.

I’m just beginning to understand all of this category stuff, but as far as I can see, a Haskell function of type

    f :: a -> b
should be equivalent to a Coq function of type

    f :: a -> Either SomeException b
to account for the Hask category’s “bottom” value.

[1] http://www.haskellcast.com/episode/013-john-wiegley-on-categ... (skip to “compiling to categories”)


In a constructive metatheory, partiality is both a richer and more subtle concept than just adding option types everywhere. It's possible to model partiality correctly using e.g. quotient inductive types or countable choice (https://link.springer.com/chapter/10.1007/978-3-662-54458-7_...) or by working with setoids throughout.

None of these is a particularly satisfying solution if you want to reason about programs, which typically have richer notions of effects. My own recommendation is to embed programs via weakest preconditions (or strongest postconditions, as in Chargueraud's characteristic formulas: http://www.chargueraud.org/softs/cfml/). This does not allow you to directly evaluate potentially diverging programs, but it is far more flexible when reasoning about programs. For example, you can use separation logic to deal with state and concurrency.

If you really want to extract to categories for some reason, then I would recommend restricting to concrete categories of presheaves or sheaves - I know of no useful application that doesn't already fit into this framework. Using categories requires you to encode everything into a first-order language, which is unnecessarily complicated when working with higher-order functions.


How can one automatically translate any Haskell containing unbounded recursion into Coq?


Some sort of partiality monad, I presume.
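
For concreteness, here's a minimal sketch of one standard choice, Capretta's coinductive delay monad, in Coq (the names are mine, not from any particular library):

    CoInductive delay (A : Type) : Type :=
    | Now   : A -> delay A
    | Later : delay A -> delay A.

    Arguments Now {A} _.
    Arguments Later {A} _.

    (* A computation that never terminates. *)
    CoFixpoint never {A : Type} : delay A := Later never.

    (* Unbounded recursion becomes productive corecursion: each unfolding
       of the search is guarded by a Later constructor. *)
    CoFixpoint find_from (p : nat -> bool) (n : nat) : delay nat :=
      if p n then Now n else Later (find_from p (S n)).

The monadic bind just pushes the Laters through, and "running" a value of type delay A amounts to peeling off Laters for as long as you're willing to wait.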


Is there a good lecture series on Coq? Or maybe introductory project?

I've seen a video intro [0] and poked a little bit around the reference manual [1], but I'm not sure what's "between" basic understanding of the commands and being able to extract a complex verified program.

It all seems to be either the theory (which I mostly grok) or completed projects, not a lot about the engineering details.

...Or does that just not exist?

[0] https://www.youtube.com/watch?v=ngM2N98ppQE

[1] https://coq.inria.fr/refman/


Concerning books on the topic, I highly recommend this: https://www.amazon.com/Verified-Functional-Programming-Agda-... If you want to get into the math-related aspects, read this: https://homotopytypetheory.org/book/

There's also an annual summer school on the topic which has downloadable videos available: https://www.cs.uoregon.edu/research/summerschool/summer17/


This year's OPLSS did not really engage with Coq. Adam Chlipala did give a lecture series on Coq at OPLSS'16 though: https://www.cs.uoregon.edu/research/summerschool/summer15/cu...

The DeepSpec summer school, however, did lead with a Coq intensive, which was recorded and included in the uploads at https://www.youtube.com/channel/UC5yB0ZRgc4A99ttkwer-dDw .


It's going to be hard to learn one way or another. Chlipala's Certified Programming with Dependent Types is free online. Then there's Software Foundations, which others mentioned. Starting with lightweight stuff like Alloy or TLA+ might help. Also, The Little Prover book teaches how provers work by letting the reader build a small one. It has had some decent reviews.


Chlipala's book is a decent one, but it's oriented towards software verification, Coq, and the use of tactics (which I would consider an anti-pattern).

Software Foundations is a good introduction to Coq and proofs, but overall it's quite basic, and it also hides many details of how the type theory works, so it's more of a hands-on tutorial.

TLA+ is just a tool which basically does its job by exhaustively searching a space of possible solutions trying to find counterexamples. It works in some limited cases but doesn't work generally for any mathematical theorem.


> TLA+ is just a tool which basically does its job by exhaustively searching a space of possible solutions trying to find counterexamples. It works in some limited cases but doesn't work generally for any mathematical theorem.

Nope. TLA+ is a rich mathematical formalism that includes a full-fledged proof assistant. It is, however, focused on verifying software rather than arbitrary mathematical theorems. There is also a model checker for TLA+ that can check your theorems (about finite cases of programs) at the push of a button. In practice, model checkers work in more cases than manual proofs, as manual proofs "fail" much more often, simply by being far too laborious; a model checker can verify in days what might take you months or more to prove manually.


I bring up TLA+ because many people have quit trying to learn Coq because it was so hard with little perceived benefit. TLA+ has a high ratio of benefit for work put in. It also shows that abstract modeling can knock out deeper errors than regular tests or sometimes review. That makes it a nice, intermediate step toward stuff like Coq. Just a hunch a few of us have. :)


Alloy was really helpful for me to grasp the ideas behind checking a system design. But it is not a tool to write verifiable code. Rather it is a modelling tool to find bugs in the design before implementing things in code.


The Software Foundations book [1] covers many case studies. One of the authors has a lecture course based around the book as well.

[1] https://softwarefoundations.cis.upenn.edu/current/index.html


Indeed: Benjamin Pierce's lectures on "Software Foundations in Coq" https://www.youtube.com/playlist?list=PLGCr8P_YncjUT7gXUVJWS...


I can recommend SF; I just started it a week ago without prior Coq experience. I'm using the VSCode integration, which helps; CoqIDE is a bit clunky.


In "Coq to Rust Program Extraction" (https://news.ycombinator.com/item?id=12183095), I saw some useful comments about its foundations.

(And for a bit more about Coq to Rust, see "Program extraction from Coq": https://github.com/rust-lang/rfcs/issues/667 )


I have found that Type-Driven Development with Idris [1] is a nice introduction to verifiable programming. The book starts with basic functional programming (Idris is what a strict Haskell could look like) and gradually adds dependent types and working with proofs.

[1] https://www.manning.com/books/type-driven-development-with-i...


To really learn Coq, you need to learn the type theory, which is hidden behind the tactics. IMO, Coq isn't the best way to do so; I highly recommend starting with Agda instead.


Lean hits a really good middle ground, I think. There are probably more resources for Agda. But I'm very fond of Lean ;)


Lean is very interesting, but Agda and especially Coq have more learning resources, so for beginners it's probably better to start with those, even though Lean is, in some sense, more modern.


I have the impression that for someone coming from a programming background, Idris is also a pretty good introduction to the world of dependently typed programming and proof assistants.


A question for those of you here who are into functional programming:

Is category theory worth studying if I (coming from a C++ background) am looking to get into functional languages like Haskell? How common is it actually among Haskell or Lisp programmers to know and utilize concepts from category theory in their programming?


It depends on what you’re interested in about programming. Do you like the research aspect of programming languages (inventing new language stuff) or are you into application programming (e.g. writing an app)?

Category theory is the lowest level of functional programming. At this low level you can do some pretty ground-breaking stuff (like the Haskell z3cat[1] library, which can apply an SMT solver (Z3) to a regular Haskell function), but it's not required, in any way, in order to write great Haskell code and make use of Haskell as a tool to write applications with very few runtime exceptions.

[1] https://github.com/jwiegley/z3cat


A lot of the ideas used in everyday Haskell originated in category theory (similarly to how all our programming languages eventually come from Turing machines, the lambda calculus, and so on), but command over the mathematical ideas is absolutely not required to "get into Haskell" or even to use it productively and fluently. It is certainly something you should look into if you want to

a) know why some things are done the way they are (in case you're not satisfied with "this works!"), or

b) if you want to find inspiration to write a certain kind of library: Haskell is one of the few languages that has, as I like to call it, "over 50 years of documentation".

--

To elaborate: a category is a bunch of objects, and functions (morphisms) between the objects. The types in a language like Haskell form a category (called Hask in this case) where the objects are the types and morphisms from one type to another are the functions that go from the former to the latter.
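
As a sketch in Coq (bearing in mind that Coq's functions are total, so this only approximates Hask), the "types as objects, functions as morphisms" picture looks like this:

    (* Composition of functions: the morphism composition of this category. *)
    Definition comp {A B C : Type} (g : B -> C) (f : A -> B) : A -> C :=
      fun x => g (f x).

    (* Associativity holds pointwise, by computation. *)
    Lemma comp_assoc :
      forall (A B C D : Type) (h : C -> D) (g : B -> C) (f : A -> B) (x : A),
        comp h (comp g f) x = comp (comp h g) f x.
    Proof. intros; reflexivity. Qed.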

A lot of things done in Haskell are instances of general constructions from category theory applied to Hask. Comfort with these general constructions certainly helps to illuminate many ideas.

For example, the great trifecta of Haskell abstractions comprises three interfaces ("typeclasses") called Functor, Applicative, and Monad, which are used everywhere. Each of these corresponds to a concept from category theory: for instance, what a Haskeller calls a "Functor" is what is categorically called an "endofunctor on Hask". Among users of the language who know any category theory at all, most have some familiarity with things at the level of what I've just said, but not much more. A knowledge of category theory at any higher level (like in the linked repo) is not at all common, statistically.
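
To make that concrete, here is a small sketch (not from the linked repo) of how the Haskell Functor interface is typically rendered as a Coq typeclass; the functor laws are left out:

    Class Functor (F : Type -> Type) : Type := {
      fmap : forall {A B : Type}, (A -> B) -> F A -> F B
    }.

    (* The option type plays the role of Haskell's Maybe. *)
    Instance option_Functor : Functor option := {
      fmap := fun A B f x => match x with
                             | Some a => Some (f a)
                             | None   => None
                             end
    }.

A general formalization like the one linked goes further, of course: it works with functors between arbitrary categories rather than just endofunctors on the category of types.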

Also, not many of the common libraries are based on deep categorical ideas (lens [0] is an important exception) but even then, knowledge of the mathematics is useful if you want to dig into how the ideas are implemented, not if you're happy just using them. Learn as little or as much as you'd like.

[0] https://github.com/ekmett/lens/wiki/Examples


Thank you, that was very illuminating.


You're welcome! http://haskellbook.com/ is the currently recommended book and is worth looking at when you start learning.


"looking to get into..." -- No, not at all. It will lead you down a long rabbit hole that doesn't actually wind up anywhere too close to functional programming.

CT is great for "looking to get really good at" FP. It lays out a parallel universe where certain things are easier and certain things are harder, but the same core pillars of FP are intact. In this parallel universe there are myriad examples of constructions, techniques, and conceptual frameworks which can be "ported" back to FP. Likewise, some FP concepts have interesting "ported" versions in CT.

Learn CT when you want to build intuition for how to construct functional architecture. Don't learn it to start and don't even learn it when you run into Functor/Applicative/Monad. It's _not_ useful for understanding them in practical terms.

Go learn it when you start composing higher level constructions together comfortably and start running into questions about the most efficient ways to lay things out. It's _great_ for that.


And I thought he only did Emacs



