(Disclaimer: I'm a mathematician working in HoTT and (higher) category theory.)
Like with most advanced concepts in any field, there are lots of misunderstandings pertaining to HoTT. To me, the underlying insight is that the "right" abstract setting for a lot of classical homotopy theory is that of an infinity-topos (whose precise definition is an open question, but we have candidates). Theorems proven in HoTT hold in any infinity-topos, and HoTT is (conjecturally) the internal language of these.
For anything outside of homotopy theory, HoTT isn't (immediately) interesting. It's certainly not trying to provide foundations for set-based mathematics, but for (categorical) homotopy theory, sure.
Some mathematicians take issue with the logic being intuitionistic; e.g., neither the law of excluded middle nor double-negation elimination holds in HoTT. This is not about constructivity (which, I agree, mainstream mathematicians will reject), but about the spaces being modeled. For example, a mainstream mathematician will say that a space is either empty or has a point. However, in the category of bundles over the circle, points are sections; so the double cover, for example, doesn't have any point. Neither is it empty.
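To make the double-cover example concrete, here is a sketch in standard covering-space notation (nothing beyond the usual setup, with $S^1$ the unit complex numbers):

```latex
\[
  p \colon S^1 \to S^1, \qquad p(z) = z^2
\]
A ``point'' of this bundle, internally, is a global section: a continuous
$s \colon S^1 \to S^1$ with $p \circ s = \mathrm{id}$, i.e.\ $s(z)^2 = z$
for all $z$. No such $s$ exists, since tracking a square root once around
the circle swaps the two branches, so $s$ cannot be continuous. Yet every
fiber $p^{-1}(z)$ has two elements, so the bundle is not empty either:
internally, ``has a point or is empty'' fails.
```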
Even in the world of pure mathematics, this is a backwater. As I understand it, the two main selling points are providing a foundation for mathematics and a framework for automated theorem proving, but:
1) We already have a perfectly good foundation of mathematics. It's called ZFC, and it does everything we need it to do. If all HoTT does is give a theory that's mutually interpretable with ZFC, that's not interesting (at least to mainstream workers in the foundations of mathematics, who would rather tackle substantive problems).
There are also people like Per Martin-Löf who want to use HoTT as a vehicle for doing mathematics with intuitionistic logic, and in particular denying the law of excluded middle, but at that point you've let the mask slip and you're advocating something that interests approximately zero mainstream mathematicians.
2) It's not at all clear that HoTT, or type theory in general, is the best solution for formalizing mathematics. It might be, or it might not, but this is ultimately an empirical question. Some comments on the state of the field are given in the linked post.
I think this is the basis for the enthusiasm, almost entirely. There's a growing perception that a lot of mathematical abstractions that are purely navel-gazing have arisen from ZFC, and the intuitionists and finitists are gaining followers amongst amateur and up-and-coming mathematicians. I don't have data to back this, just a feeling that I get.
And I think in particular this is attractive to programmers because this skews closer to what we've learned about computability and program design, and I think there's even some hope of moving forward from obscure notation and single-letter variable names towards a syntax for doing mathematics that can be more expressive and easier to verify.
A great example of a mathematics popularizer along these lines is Norman Wildberger, who has an astonishing number of YouTube videos on an amazing range of mathematical topics. Here’s an interview with him: https://youtu.be/E_dGqavx5AU .
My personal feeling is that a lot of beautiful and interesting mathematics will arise as a result of an intuitionist, finitist, computationalist rethinking of the foundations of mathematics.
Just kidding. But seriously, ZFC is probably the easiest foundation you can have for proper mathematics. Anything else is more complicated (if you think otherwise, you probably just don't understand it).
The Lisp code in this book is truly not doing anything for me. Seems like a kind of direct numerical computation/translation. I actually think it would be better to do it in Haskell which seems like it would at least let you identify some more generic structures (oh we can actually implement this idea x as a functor/monad blah blah).
Intuitionists don't deny the law of excluded middle, they just don't admit it as an axiom. From a classical perspective this looks like denial, since accepting the law means that you have to accept or deny everything, but from an intuitionistic perspective that kind of reasoning looks like begging the question.
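This point can be made precise in any intuitionistic proof assistant. As a sketch in Lean 4 syntax (assuming nothing beyond the core logic; `not_not_em` is a name I've chosen for illustration):

```lean
-- The double negation of excluded middle is provable constructively:
-- intuitionistic logic cannot *refute* LEM, it just doesn't assert it.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))

-- By contrast, `P ∨ ¬P` itself is only available via an axiom
-- (`Classical.em` in Lean), which one is free to assume or not.
```

Accepting the axiom collapses the distinction; declining it leaves both options open, which is exactly the "not denial" being described.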
"If all python does is give a theory that's mutually interpretable with x86 assembly, that's not interesting."
The point is to have better language support for expressing the things you want to reason about. I don't find it so far-fetched to be interested in better ways of reasoning about equality of types (both as a programmer and as a mathematician).
Mainstream mathematicians are not being prevented from proving theorems because they lack "better ways of reasoning about equality of types."
The importance of ZFC is that it ensures our common language reasoning isn't incoherent (among other things). Once you've done that, you don't need to do it again.
> Mainstream mathematicians are not being prevented from proving theorems because they lack "better ways of reasoning about equality of types."
How do you know? I've made the mistake of assuming that the definition of a certain morphism is natural with respect to a certain object when it wasn't (coevaluation in a rigid monoidal category). Better language for equality and naturality are precisely what HoTT provides. More importantly, the hope is that type theory will not only provide better language, but relieve us of the tedium of checking proofs by hand.
> The importance of ZFC is that it ensures our common language reasoning isn't incoherent.
I don't think this is true. AFAIK, no one has written down the definition of (say) a scheme from first principles in ZFC. In fact, if you care about coherence/verification, you should care even more about type theory. IIRC this is what drew Voevodsky to type theory in the first place -- https://mathoverflow.net/questions/234492/what-is-the-mistak...
If HoTT helps you prove theorems, great. In particular I could see it being of interest to people working in higher category theory (and such people have responded in other comments here!). My remarks were meant for the other 99% of mathematicians – most of whom go their entire lives without ever writing the word "category" in a paper.
> it's not clear to me why you necessarily want the language you use to rigorously investigate the foundations of mathematics to be the same one you use to do proof verification.
Among other things, I want theorems about foundations to be useful to non-foundational mathematicians. Working on type theory can improve, at the very least, formal verification tools. In my view, traditional ZFC-based mathematical logic has a much smaller likelihood of being useful for regular mathematicians. For example, the vast majority of mathematicians will never explicitly assume the existence of a large cardinal in a paper. It seems to me that the difference in usefulness is probably due to their relative distance from common discourse.
More generally, it's not clear to me why you necessarily want the language you use to rigorously investigate the foundations of mathematics to be the same one you use to do proof verification. ZFC – and set-theoretic approaches more broadly – facilitate proving things about mathematics (meta-mathematical theorems), and for that they are very useful. E.g., the strength of a theory is equivalent to the size of the cardinals it assumes, loosely speaking.
How is this not partial evidence that ZFC is trash?
Mainstream mathematicians do run the risk of being siloed in narrow specialties, because abstractions are needed to distill all the definitions of remote areas of mathematics.
Saying HoTT is a failure because people do informal proofs is rather moving the goalposts; people won't do formal proofs until the tooling is really good. But none of the up-and-coming tools are classic first-order logic + ZFC (please don't mention TLA+), and there's a reason for that.
> The importance of ZFC is that it ensures our common language reasoning isn't incoherent (among other things). Once you've done that, you don't need to do it again.
No, you need formal proofs to check that there aren't errors along the way. And the lack of formal proofs slows down new, weird stuff like the ABC conjecture proof candidate.
I don't expect this to be solved now (tooling, as per above), but mathematicians should learn more category theory now, as that works just fine with pencil, paper, and brain. When the type-theoretic tooling is ready, they will be ready.
ZFC was never intended as a language for practically doing mathematics in, so that's a weird criticism.
> Mainstream mathematicians do run the risk of being siloed in narrow specialties, because abstractions are needed to distill all the definitions of remote areas of mathematics.
Mathematicians are siloed in narrow specialties because they focus on solving hard problems that require a great deal of specific subject matter expertise. I strongly doubt that "distill[ing] all the definitions of remote areas of mathematics" is even possible, or a fruitful project to attempt. Could you give an example of how definitions from, say, probability theory and algebraic geometry might be merged in a useful way?
> No, you need formal proofs to check that there aren't errors along the way. And the lack of formal proofs slows down new, weird stuff like the ABC conjecture proof candidate.
I think this comment misunderstands the problem with the ABC "proof," which is (speaking roughly) that a certain lemma asserted something that was crucial to the argument, but which was never actually proved. Peter Scholze and others spotted this pretty quickly. Formalization wouldn't really have helped at all.
Isn't that just the point? Currently we do metamathematics in an esoteric language where your normal mathematical tools won't always have the meaning you expect them to have - HoTT is about enabling us to do it in something much closer to normal mathematics.
> Could you give an example of how definitions from, say, probability theory and algebraic geometry might be merged in a useful way?
A lot of category theory is "this theorem from this context can be translated to this theorem in this other context". Do I know any specific equivalences between probability distributions and multivariate polynomials? No. But I wouldn't be at all surprised if we found some. At one point the idea that arithmetic could tell us something about geometry was a surprising and novel one.
> I think this comment misunderstands the problem with the ABC "proof," which is (speaking roughly) that a certain lemma asserted something that was crucial to the argument, but which was never actually proved. Peter Scholze and others spotted this pretty quickly. Formalization wouldn't really have helped at all.
The fact that it was disputed for at least two years whether that lemma was valid is exactly why formalisation would have been helpful.
What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
> I don't expect this to be solved now (tooling, as per above), but mathematicians should learn more category theory now as that works just fine pencil paper and brain. When the type theoretic tooling is ready they will be ready.
The abstractions of category theory are useless in many areas of mathematics. Prime example: PDEs.
Laypeople think that category theory is the 'ultimate math' because they hear that it provides bridges or analogies between different areas of math.
Perhaps programmers are especially prone to this because category theory does have some applications to programming.
The thing is, almost all of pure math is itself is a bridge between different areas of math. Some of these areas are bridged by category theory, some are bridged by other kinds of math, which have less catchy names.
Yes it is. Turing machine models are very limited, and a programme to let us achieve the things we can do with Turing machines (mainly runtime analysis) with a better model (i.e. a lambda-calculus style model) is a very good idea.
> The thing is, almost all of pure math is itself is a bridge between different areas of math. Some of these areas are bridged by category theory, some are bridged by other kinds of math, which have less catchy names.
I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't). But I'm not aware of any such competing effort.
What you wrote is a different justification for why turing tapes are worse than lambda calculus. It has nothing to do with the number of people programming on turing tapes, which is the argument that I was responding to.
It doesn't mean that the lambda calculus is trash.
Likewise, most mathematicians don't work directly with ZFC. Doesn't mean ZFC is trash.
> I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't).
My point is that almost all pure math (e.g., linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness', but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet Gödel's incompleteness theorem is 'metamath'?
It has everything to do with it: the reason for wanting a lambda-calculus-like model is that lambda-calculus-like models are what working programmers actually program in. If programmers actually used languages that looked like turing tapes then turing tapes would be a good model for talking about programming in.
Haskell has been described as essentially a typed lambda calculus. You're treating this as a binary distinction when it isn't: there's a lot of value in the model that we can do formal program analysis with being close to the models that we like to program in, whether the models are exactly identical is a lot less significant than the degree of similarity. Likewise the problem that "mathematicians don't work in ZFC" isn't just that mathematicians are doing something slightly different day-to-day, it's that it's a very different paradigm from normal mathematics.
> My point is that almost all pure math (e.g., linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness', but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
> For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet Gödel's incompleteness theorem is 'metamath'?
I'd argue that the irrationality of sqrt(2) is applicable outside of a mathematical context - it's a fact about something we're modelling rather than solely a fact about our models ("2" and "sqrt" are of course abstract models, but they can be applied to model a variety of concrete things that we care about, and you can carry over the irrationality of sqrt(2) into at least some of those contexts, where it will translate into something meaningful and useful). Whereas Gödel's incompleteness theorem is a map for which there is no territory; it's a fact about abstract models that could never correspond to anything that wasn't an abstract model.
But if you want to regard number theory as a subset of metamathematics then I don't mind. When I say I want to be able to do metamathematics, I mean I want to be able to do all metamathematics; in particular I want to be able to talk about proofs in general. You could argue that the irrationality of sqrt(2) is a statement about proofs, but it's certainly not in a context that allows you to reason about general proofs, and number theory does not give you a first-class way to work with proofs in general (of course the Gödel encoding exists, but it's extremely tedious and not useful for practical work). Likewise, as far as I know, there's no way to really talk about (general) proofs directly in terms of group theory or linear algebra.
ZFC should be more like the machine code we actually use than the Turing tape concept, which we don't. The fact that no one uses something higher-level that compiles down to ZFC is disheartening.
As the FOM mailing list demonstrates, it's really about goal posts here.
For one camp, the goalposts are such that ZFC or many other things are equally good. ZFC by sheer age has the "large cardinal" advantage, in that people have been grinding away at it longer.
For the other camp, large-cardinal-type research agendas aren't very interesting, and the goal posts are dramatically different.
I still think ZFC is trash, but I will admit my mistake in thinking others share my goalposts.
> The abstractions of category theory are useless in many areas of mathematics. For example, PDEs.
At the moment that is true.
But I think this is due more to human concerns than to the actual math. Until statistics overtook it, differential equations were the most-applied branch of mathematics, which definitely influenced the culture around it. There is also the general cultural divergence between algebraists and analysts.
I look forward to the day when the computer tools are so good they are used in those fields too. That should bridge the culture gap, and then we shall see what the math holds.
It is useful for my own work, and thus I study it, why isn't this as simple as that?
There are many intelligent and curious people reading HN, who might be tempted to spend time understanding it because its somewhat frequent appearance here gives the (false) impression that it plays some important role in modern mathematics.
I regret spending the (large) amount of time I did learning what it is about, only to realize that there's not much actual content there, and I would hope my post saves other people that trouble. There is a lot of far more interesting, elegant, and useful mathematics out there, imo.
My background - I’m a programmer and like building programming languages. I’m interested in HoTT because I’m interested in type theory generally and also formalization of mathematics.
Most mathematicians don’t care much about these things, which is fine. Like most weird stuff, it’s good that some folks are doing it.
Lastly, you might enjoy this thread about why formalization is interesting to another mathematician:
I want to clarify that I think formalizing mathematics is interesting, too! It's just entirely unclear to me that HoTT has any special advantages here (see link from my original post).
The supposed selling point of HoTT with respect to formalizing mathematics is higher-level proofs that are more general and less work to express, due to the richer notion of equivalence in the type theory.
I think the post you linked is pretty good... it makes me wonder - at this point, what is holding back more formalization? Is it a cultural thing? (None of the people who are doing “interesting” math care about formalizing It) Is it a technical thing? (Proof assistants and their type theories aren’t good enough?) Are we just missing a big enough “standard library” of mathematics so mathematicians can get to the interesting parts sooner? It’s probably all of the above.
What I think might happen is it reaches a tipping point where enough of the pieces are in place to where it rapidly becomes normal or expected to formalize mathematics and we have awesome tools and “standard libraries” to make it good.
This is a bizarre thing to say. Learning any mathematics is a complete waste of time.
> gives the (false) impression that it plays some important role in modern mathematics.
What are you talking about?
HoTT is broadly regarded as an exciting and important new area of mathematics; your view is absolutely the minority view. I mean I don't think I'm going to be able to convince you otherwise if you genuinely think that there's "not much content" in HoTT, but there are loads of well-regarded and prominent mathematicians who hold the opposite view to yours.
Your claim that HoTT is "broadly regarded as an exciting and important new area of mathematics" is simply wrong. I suggest talking to a wider variety of mathematicians.
In particular, concerning "well-regarded and prominent mathematicians," Steve Simpson and Harvey Friedman are on record on the Foundations of Mathematics mailing list saying, essentially, that HoTT is a waste of time. The other set theory experts I know agree with them but haven't said so publicly. My impression is that this is a fairly common view in the set theory and foundations community.
Further, Jacob Lurie, who is perhaps the most talented mathematician currently working in higher category theory, has also written that he doesn't think highly of it.
I can provide links if you want, but if you search the archive these arguments are easy to find.
View the comments on this blog post: https://mathematicswithoutapologies.wordpress.com/2015/05/13...
The set theory and foundations community, i.e. the small minority of mathematicians who manage to do effective work on mathematical foundations in ZFC, are precisely the people who we'd expect to be most anti HoTT. It's not necessarily wrong, but it's like concluding that OOP is pointless because you've talked to leading C programming experts.
You have missed my point. We generally don't study only the "useful" things; if we did most of the most important mathematics we have today would have gone unstudied.
> Your claim that HoTT is "broadly regarded as an exciting and important new area of mathematics" is simply wrong.
According to whom? You? You're making an argument about consensus regarding the importance of a given area, but I have seen nothing to indicate that the consensus among mathematicians is in line with your opinion.
> Steve Simpson and Harvey Friedman
They are in the minority. Again, if you're arguing about consensus you can't just point to one or two individuals and say "look! They agree with me!" For goodness' sake, the thread you're referring to is well-known partially because it was controversial.
> Jacob Lurie [...] doesn't think highly of it
The content of Lurie's comment and yours are quite different.
> According to whom?
I've talked to a lot of very good mathematicians, including people who've studied and collaborated with people in the HoTT orbit, and people from many other subfields. I'm sorry I don't have exact quantitative data for you.
> They are in the minority. Again, if you're arguing about consensus you can't just point to one or two individuals and say "look! They agree with me!" For goodness' sake, the thread you're referring to is well-known partially because it was controversial.
Again, I don't think they're in the minority. Also, there's way more than one relevant thread. That particular fight was litigated over years.
> The content of Lurie's comment and yours are quite different.
He makes many comments on that post. I invite you to read them all. Certainly, I'm not just regurgitating what he says there, but it does provide an example of a top mathematician expressing skepticism about the entire enterprise.
Yes, I was being facetious. My point is that by whatever measure pure maths is worth studying (i.e. those measure beyond "practical" applications) also obviously applies to HoTT.
> I've talked to a lot of very good mathematicians
You're an anonymous person on the internet, this is such a strange thing to fall back on. There are pretty standard ways of talking about how well-regarded/important certain subfields are in mathematics, "I've asked around" isn't one of them.
> I'm not just regurgitating what he says there
His comments express a substantially different point to yours.
He does not say that HoTT is a waste of time, or that it "doesn't have much content".
Not all branches of pure mathematics are equally worthy of study. A few are essentially dead for various (good) reasons, for example.
> There are pretty standard ways of talking about how well-regarded/important certain subfields are in mathematics, "I've asked around" isn't one of them.
How should one figure out how well-regarded a subfield of mathematics is if not by asking members of the mathematical community?
> His comments express a substantially different point to yours. He does not say that HoTT is a waste of time, or that it "doesn't have much content".
His comments support the point I intended them to support, which is that Lurie is not a huge fan of HoTT. You made a remark about well-regarded mathematicians, and I gave you comments from a well-regarded mathematician.
Ok? What does this have to do with the point I was making?
My point was, pretty simply, that "usefulness" is not the only thing we consider when deciding whether something is worthy of study. (If it was we wouldn't have touched most of modern pure mathematics.) As a result, your notion that studying HoTT would be a waste of time because it's not "useful" is nonsense.
> How should one figure out how well-regarded a subfield of mathematics is if not by asking members of the mathematical community?
Oh come on, you really don't understand the point I'm making?
When talking about consensus it's not good enough to say "I've talked to a few people and they agree with me". Work on HoTT, measured by any kind of actual metric rather than "people I've spoken to (who I can't name by the way) say it's bad", is broadly well-regarded and seen as important. In terms of research funding, citation/publication metrics, or the opinion of most mathematicians.
> His comments support the point I intended them to support
No they do not. My objection is to you saying:
* That HoTT is a waste of time.
* That HoTT doesn't have much content.
* That HoTT is not broadly regarded by mathematicians as important or exciting.
Please do not move the goalposts.
> You made a remark about well-regarded mathematicians, and I gave you comments from a well-regarded mathematician.
What? I "made a remark" about well-regarded mathematicians? I said, specifically, "there are loads of well-regarded and prominent mathematicians who hold the opposite view to yours".
I also tried to stress that:
> if you're arguing about consensus you can't just point to one or two individuals
Please bear that in mind.
I've talked to more than "a few people" about this, and I think it's clear I've thought more about the sociology of this issue than just chatting with Tom Scanlon or whoever over a beer one night. I stand by what I said about that earlier, along with my comment that you ought to talk to a wider variety of mathematicians if you think there's some broad consensus that HoTT is important. (Start with the PDE and numerical analysis people, for example.)
Anyway, sticking purely to objective metrics, how many young mathematicians (postdocs) who work primarily on HoTT have ever been hired by R1 universities to tenure-track positions? Surely if this were seen as an important area, departments would want to snatch people in that field up, right? (I'd even accept examples of postdocs at these schools working on HoTT, though this represents a far less serious commitment by the department.)
This is (in principle – you'd have to collect it) publicly available information, which anyone reading this exchange can check on. (Look for assistant profs at schools listed as R1 on Wikipedia.) Maybe I'm forgetting someone, or not knowledgeable enough, but I can't really think of any.
(ER at Johns Hopkins was the first name that popped into my head, but she doesn't count because she has only a single HoTT paper among dozens, and it was written after she was hired.)
Another metric: how many HoTT papers are there in top mathematics journals? Annals, Inventiones, etc.?
- Mathematics is a very large field.
- For beginners, it is necessary to have wiser people than yourself put up some signposts.
I found the comment you're referring to very illuminating. I'm not naively assuming the author's view is the only one, or "correct", but they certainly seem to have some familiarity with graduate/research-level mathematics in relevant areas. So it was very helpful to see, at least from the point of view of some mathematicians, that this is not considered an important area.
I was left wondering whether the author of that comment understood the importance of this stuff in computer science (i.e. in academic approaches to programming languages, might be the right term, I don't know, I'm pretty ignorant of these areas myself), or whether they were only able to criticize it from the point of view of pure mathematics.
You should not assume this about any anonymous person on the internet.
Sorry, but this is plainly false.
Per does not want to use HoTT. My impression is that he finds HoTT mildly interesting, and has not really used or advocated it. And you are implicitly calling him and Voevodsky "not mainstream mathematicians", which is rather offensive considering their contributions to statistics (Per) and homotopy theory (Vladimir).
Your comment comes across as very reactionary. There is room for a lot of both HoTT (whether classical logic is used or not) and set theory in modern mathematics. In fact, doing set theory in HoTT is a very pleasant experience!
I also don't think it's inaccurate or offensive to say Martin-Lof no longer works in mainstream mathematics. He hasn't done statistics since the 70s.
Voevodsky was of course a great mathematician, but it seems quite clear that his work in type theory was not mainstream mathematics and not of interest to most mathematicians (quantifying over, say, people who hold jobs as postdocs or professors in math departments worldwide). Whether the work is valuable is a separate question, but the sociological point is, imo, pretty inarguable.
Intuitionism in HoTT isn't about constructivity, which I agree is rejected by mainstream mathematicians. It's about the models of the theory. For example, LEM doesn't hold in the category of bundles over the circle (see my other comment on here). If you think of the axiom of choice as "every surjection has a section", then this doesn't hold for topological spaces. (Take the double cover of the circle; there's no continuous section.) I disagree that these examples are uninteresting to mainstream mathematicians.
2. What are these "substantive problems"? I don't have any experience in foundations, so I'm genuinely curious.
3. Sure, intuitionistic mathematics has very little popularity, but a lot of that popularity is among computer scientists, for fairly obvious reasons.
4. The HoTT book and the nLab are two of the most out-in-the-open developments in math I've seen --- the HoTT book was even written in a public GitHub repo iirc. Naturally that has a lot of appeal for HN. If you have anything similar (blogs, wikis, etc.) for mainstream foundations, I'd love to read them.
5. Yeah, personally, I'm not really convinced that HoTT is a good idea, but I guess we'll just have to wait and see.
2. Whether V equals Ultimate L, for example. There's "pop math" coverage available, as well as more technical introductions by Woodin if you Google for them.
3. I'm not sure what reasons you're referring to; can you enlighten me? I can guess, but the ones I'm guessing are not very good, and I'd rather not attribute weak arguments to you or them.
4. The Stacks Project, for one. It covers most of the foundations of modern algebraic geometry, is of extremely high quality, and is regularly cited in research papers.
My "empirical" evidence for this is most working mathematicians today do not care about foundations at all, whereas with something better, they might.
Everything I've just said is completely uncontroversial among mathematicians, for what it's worth. The comment I just wrote has probably been expressed on r/math dozens of times at this point.
Really "foundations" is a word that emphasizes the ZFC "learn and ignore" approach. The type theory / category theory approach might better be called "framing", in that it begins in the pilings below the basement but also goes all the way to the roof.
For example, if I have, say, a group, and I prove something about it, you would think that the proof would hold for all isomorphic groups. Under ZFC this isn't true, or at least we don't get it for free.
Under HoTT, we get this for free.
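To make "for free" concrete, here is a sketch in Coq syntax of what univalence buys you. This is purely illustrative: `Iso`, `ua`, and `transport` are names I'm making up, and Coq itself does not validate univalence, so `ua` has to be postulated here.

```coq
(* A minimal notion of isomorphism between types. *)
Record Iso (A B : Type) : Type := {
  fwd : A -> B;
  bwd : B -> A;
  fwd_bwd : forall b, fwd (bwd b) = b;
  bwd_fwd : forall a, bwd (fwd a) = a
}.

(* Univalence-style principle, postulated (sketch only): isomorphic
   types are equal. *)
Axiom ua : forall A B : Type, Iso A B -> A = B.

(* Once isomorphic types are equal, any property proven of one
   transports to the other with no extra work: *)
Definition transport (P : Type -> Prop) (A B : Type)
  (e : Iso A B) (h : P A) : P B :=
  match ua A B e in _ = X return P X with
  | eq_refl => h
  end.
```

So a theorem proved about one group carries over to any isomorphic group by transporting along the isomorphism, rather than by re-proving it.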
This is completely missing a core point of category theory and type theory (which is about categories equipped with certain extra structure), which is to work in settings that can be flexibly tightened or loosened in order to make constructions as general as possible. A theorem in HoTT is far more widely applicable than a theorem in ZFC: HoTT can be interpreted in any higher Grothendieck topos, while ZFC can only be interpreted in a certain class of 1-toposes. In HoTT, we can simply assume choice and LEM to internally reproduce ZFC. In ZFC, univalence is false and cannot be assumed.
> It's not at all clear HoTT, or type theory in general, is the best solution for formalizing mathematics
The Xena project indeed isn't too hot on HoTT, but they will tell you that type theory in general is absolutely the best solution for formalizing mathematics.
> in particular denying the law of excluded middle
MLTT does not deny LEM, and neither does HoTT. The point is not to deny LEM, but to work in a more general system which does not necessarily assume it.
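To illustrate the difference between denying LEM and simply not assuming it, here is a sketch in Coq (the names `LEM` and `dne` are my own; this is illustrative, not any particular library's API):

```coq
(* LEM is not provable in the core intuitionistic logic, but nothing
   stops you from taking it as a hypothesis exactly where a classical
   argument needs it: *)
Definition LEM := forall P : Prop, P \/ ~ P.

(* Example: double-negation elimination, conditional on LEM. *)
Theorem dne (lem : LEM) (P : Prop) (h : ~ ~ P) : P.
Proof. destruct (lem P) as [hp | hnp]; [exact hp | contradiction]. Qed.
```

Theorems proved this way hold in every model; theorems that hypothesize LEM hold in every model where LEM happens to be valid. Nothing is denied.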
Besides the immediate usefulness: if "all HoTT does is give a theory that's mutually interpretable with ZFC", then holy shit what an accomplishment!
You seem to have a fundamental misunderstanding about the nature of mathematical research. Most fields of research do nothing but "give a theory that's mutually interpretable" with some other field, or at least did so initially.
They are rigorous, not in the sense that anything is ever proven about their formal systems, but in the sense that they are interpreted rigorously as to their formal consequences by computers. This is really different to how I as a physicist might use equations to express ideas to other physicists, and if I can't exactly formalize what the equation means that isn't always relevant.
This is of course an unorthodox way to frame what programmers do, but I think it's relevant to your last comment. What is a good foundation for formal systems is an empirical question, and empirically, type theory is extremely important to programmers. Set theory not at all.
Sure, that's very much a niche and we can't be sure anything practical will come out of it, but for some of us it's interesting in its own right. :)
I understand how this isn't interesting to folks working on pure math, just as category theory likely isn't, but it's the fact that these mathematical abstractions map so well onto (certain kinds of) programming that makes them such recurrent topics on HN.
MLTT, even without HoTT/CTT, already features canonicity, no? (At least for positive types.)
I thought the main advantage of HoTT is that you can define non-trivial equalities, and the advantage of CTT in particular is that univalence can be derived in it (rather than introducing it as a non-computational axiom).
I don't have any dog in this fight, but I also find it super interesting that this topic gets to the 1st page so often.
TBH I think answering that question is more interesting than litigating foundations.
Nah the math people are fine with assembly, even though they don't actually use it in their work.
Right now I'm mostly studying linear algebra and some combinatorics. Very interested in linear operators and the relationships between the differential/finite-difference operators, falling/rising factorials, Stirling numbers, Bernoulli numbers, etc.
I'm hoping math will get me a more interesting job than java programmer but I enjoy the math for its own sake anyways.
Software Foundations is a well-known book / series of Coq programs for theoretical computer science. 
Lectures on the Curry-Howard Isomorphism is my favorite (graduate-level) introduction to type theory and the lambda calculus, including the “lambda cube” and pure type systems.
Finally, while Type-Driven Development With Idris is not at all “theoretical,” it is still an excellent introduction to dependent types and the idea of proof-as-program. It’s also the only book about a specific programming language I’ve ever read and actually enjoyed :)
 Available for free here (1 MB pdf) https://disi.unitn.it/~bernardi/RSISE11/Papers/curry-howard....
I have background in standard undergrad math (linear algebra, calculus, real analysis, etc) and some basic undergrad CS, and I work as a software engineer (backend web stuff). However I've never really studied formal logic, or theoretical CS, or formal methods / verification, or type theory, and I was thinking of doing so. I enjoy studying math/theoretical things for its own sake, but I am of course also interested in techniques which might touch upon software engineering in practice.
Do you have any opinions on whether I am proposing something at all sensible for myself, and if so what sort of order to take things in?
Any recommended sources of exercises to work on (with solutions? seeing as I might well be self-studying)
Or lecture courses, or anything available online (even if it costs money)
Online communities you'd recommend for asking for help, e.g. with problems encountered in self-study?
And any more recommended books/lecture notes of course.
Unfortunately I don’t have many more suggestions that would be good pedagogically. Benjamin Pierce (who wrote Software Foundations) has another very famous intro book, Types and Programming Languages: https://www.cis.upenn.edu/~bcpierce/tapl/ which is regarded as a “Bible” of type theory for computer scientists. Note: I haven’t personally read this, but I have read the sequel, Advanced Topics in Types and Programming Languages. Both books have exercises and most exercises have solutions.
Simon Peyton Jones wrote a very good book, The Implementation of Functional Programming Languages. It is old (1987), but free, a good read, and covers most of the “important” topics without assuming too much background: https://www.microsoft.com/en-us/research/publication/the-imp...
For formal verification - there are definitely better-qualified people than me. I found a lot of the source code to CompCert C (a verified C compiler) illuminating: https://github.com/AbsInt/CompCert However, it will be difficult to understand without doing Software Foundations first (I personally still find tactic-style proofs in Coq confusing, and don’t like Coq as much as Idris or Agda).
I am very shy and can’t help you with the online communities :) The type theory subreddit is pretty active and they seem nice.
If you want to learn some functional programming along with abstractions actually used in FP, that is one thing, but that is not homotopy type theory, which is squarely in the domain of purely abstract mathematics; it would be interesting from a pure mathematics perspective, not a programming perspective.
The paper does appear to include some proofs that utilize Coq, but unless you're interested in software that checks the validity of mathematical proofs, that is not going to help you get a more interesting programming job.
By the way, if you're interested in categories, you might like this post about using monoids for GPU parsing: https://raphlinus.github.io/gpu/2020/09/05/stack-monoid.html
(Monoids are the simplest examples of categories)
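As a toy illustration of why monoids matter for that kind of parsing (this is my own sketch in the spirit of the linked post, not its actual GPU algorithm): summaries of chunks of input form a monoid, so they can be combined in any grouping, including in parallel.

```python
from functools import reduce

# Each chunk of text is summarized as (unmatched closers, unmatched openers).
def summarize(chunk):
    closers = openers = 0
    for ch in chunk:
        if ch == '(':
            openers += 1
        elif ch == ')':
            if openers:
                openers -= 1
            else:
                closers += 1
    return (closers, openers)

# The associative monoid operation: openers from the left chunk cancel
# closers from the right chunk. The identity element is (0, 0).
def combine(a, b):
    matched = min(a[1], b[0])
    return (a[0] + b[0] - matched, a[1] + b[1] - matched)

# Because `combine` is associative, the per-chunk summaries could just
# as well be reduced in parallel (e.g. as a tree reduction on a GPU).
def balanced(text, chunks=4):
    n = max(1, len(text) // chunks)
    parts = [summarize(text[i:i + n]) for i in range(0, len(text), n)]
    return reduce(combine, parts, (0, 0)) == (0, 0)
```

The point is that associativity is exactly the license to split work across processors; the sequential and parallel groupings are guaranteed to agree.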
As an example, theorem provers are a really interesting topic. I heard that homotopy type theory was used in some theorem provers, but I couldn't find whether it led to any significant improvements.
As an AGI researcher, should I learn HoTT?
The most popular theorem provers (at least among computer scientists) are based on some form of Martin-Löf type theory. In these theories, some very helpful proof techniques (often used in classical mathematics) like function extensionality and quotients don't compute. While they can be added as axioms, this breaks the computational property of the proof. What HoTT brings is a type theory with an improved notion of equality in which such things do compute, based on the notion of transport along equivalences.
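Concretely, "added as an axiom" looks like the following sketch in Coq (`funext_ax` is my own name for it; Coq's standard library provides a real version in Coq.Logic.FunctionalExtensionality):

```coq
(* Function extensionality as a postulate: *)
Axiom funext_ax : forall (A B : Type) (f g : A -> B),
  (forall x, f x = g x) -> f = g.

(* Any closed proof whose head is funext_ax is a stuck term: evaluation
   has no rule for reducing an axiom, which is what "doesn't compute"
   means above. Cubical type theories instead *prove* such principles
   from their rules, so proofs using them keep reducing. *)
```
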
If I have two types:
Inductive nat : Set :=
| O : nat
| S : nat -> nat.
Inductive natural_number : Set :=
| Zero : natural_number
| Successor : natural_number -> natural_number.
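The two definitions are certainly isomorphic: the evident back-and-forth maps invert each other (a sketch, with `to_nn`/`from_nn` as hypothetical names). In plain MLTT that isomorphism is all you get; under univalence it upgrades to an actual equality of types.

```coq
Fixpoint to_nn (n : nat) : natural_number :=
  match n with
  | O => Zero
  | S m => Successor (to_nn m)
  end.

Fixpoint from_nn (m : natural_number) : nat :=
  match m with
  | Zero => O
  | Successor k => S (from_nn k)
  end.

(* One of the two round-trip laws; the other is symmetric. *)
Lemma round_trip : forall n, from_nn (to_nn n) = n.
Proof. induction n; simpl; congruence. Qed.
```
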
Are they? Judging by posts on HN, I 100% agree. But judging by the number of users and the amount of verified mathematics and code, I'm not so sure.
Maybe things have changed recently, but I believe that systems based on simple type theory (HOL and Isabelle) and systems based on set theory (Mizar, Metamath) have much larger repositories of verified mathematics and verified programs than those based on dependent type theories.
HoTT, and type theories in general, do have some apparent advantages when it comes to making it practical for humans to express proofs in a way which computers can understand, but the entire practice of that is not something which could be said to be extremely well-developed thus far, so everything is to some degree a big experiment.
As a synthetic approach to homotopy theory, it's a beautiful piece of work, the latest approach in a line of approaches to that. You could say that HoTT is to traditional homotopy theory what axiomatic geometry (where lines are objects unto themselves) is to Cartesian geometry (where lines are collections of points). Homotopy theorists have built various abstractions for capturing that, and HoTT gives a very satisfying one.
As an informal foundation for thinking about mathematics, it's intriguing and certainly has some new ideas about how to approach the definitions of various familiar mathematical objects. Especially the sorts of things that are normally defined using quotients come out looking quite a bit different: rather than forming equivalence classes in the usual way, we can construct "higher inductive types" which guarantee certain "paths" or "proofs of equality" between particular points/terms. These come equipped with an "induction principle" for defining maps out of the type (or, what is the same activity, proving properties about it), which insists that we say where those paths are sent in the destination, basically guaranteeing that certain equalities-made-tangible are conserved.
Type theories generalise the propositions of logic and the sets of set theory into what they call "types", and those types give each mathematical statement an interpretation as a sort of collection of its own proofs, which was formerly usually thought of as a set. HoTT adds a kind of geometry to this collection, putting paths between proofs that are in some sense "equal"; that construction can be iterated to give paths-between-paths (surfaces), and so on to higher dimensions.
HoTT shows us that we can divide the types of type theory into levels of which the first few are propositions, sets, and (categorical) groupoids.
I would say most of HoTT grew out of the consequences of noticing this interpretation of what was already a very beautiful and "natural" type theory (Martin-Löf type theory, MLTT), whose original intent was merely to serve as a foundation for mathematics. It gives that theory a more general model than it previously had, and turns what from a purely logical perspective might have been seen as quirky idiosyncrasies to be put up with in exchange for the system's other very nice theoretical properties, into things which are geometrically obvious from the perspective of thinking about homotopy types of spaces.
Unfortunately, plain Martin-Löf type theory isn't quite adequate on its own. There is a fairly natural axiom, the univalence axiom, that falls out as an expectation from this interpretation of types as homotopy types. It gives us the nice property that types which are in some sense equivalent (or isomorphic) have these "paths" between them, making them "equal" in a containing universe type. This immediately gives us the ability to transport theorems about one construction to another, which is formally not something that can be done in first-order logic and set theory. Formally, you would have to prove the theorem again for each isomorphic thing, because first-order logic has no way of knowing that mathematicians only allow properties that respect isomorphism in the first place.
The univalence axiom is also a problem, however. Axioms spoil the computational properties of a type system, since they are stuck terms under evaluation. But univalence is only just barely not a theorem. So there are now a bunch of new type theories (cubical type theories) which aim to restructure the rules of the type theory so that univalence becomes a result obtainable directly from the logical rules, rather than an axiom. I'm not sure a clear winner amongst those has fallen out yet, but there's a lot of promising work, and fairly practical systems, at least from the perspective of a type theorist or computer scientist, if not the average mathematician.
For the average mathematician, there is some reasonable hope that research along these lines might provide a formal system that can actually be used, rather than merely thought about. Most proofs of modern mathematics are not formalised in the sense that a machine could check them, they're written in such a way where the intent is to convince other human mathematicians that a proof is formalisable in principle. To do otherwise while working under something like ZFC, or what is perhaps the real culprit, first order logic, would be not only 99.9% boring but also incredibly time-consuming. But at the same time, there's a lot of frustration that's happening with mathematics branching out into smaller and smaller areas, with the set of possible referees for some papers getting to the point that it can be hard to know whether to believe that things have been sufficiently checked.
There's some hope though that the fact that you don't have to repeat the proofs over and over for isomorphic things in HoTT, and that type theories in general are far more usable to begin with, might mean that an actually-ergonomic formal system for the average mathematician is within reach.
Will any of that affect how we think about artificial intelligence? Well, if the project to turn mathematicians on to computerised proofs is successful, it might help turn a larger fraction of mathematics into something that is more amenable to applying AI-based search to, and provide such AI systems with lots of worked examples of human mathematics.
Let me break it down even more basically:
1. There's a sense in many corners of academia that there's too much new but ephemeral work, and not enough librarian work codifying and polishing existing ideas for better presentation. (See https://hapgood.us/2015/10/17/the-garden-and-the-stream-a-te... for a good digression on that.)
2. Category theory, and related fields, emerged as a conceptual framework to organize many disparate parts of mathematics into a tighter whole, addressing such concerns (though it predates the modern form of those concerns).
Category theory however still came about as "regular" math with pen, paper, thinking, and informal natural language proofs.
3. A bunch of computer scientists studying programming languages become interested in category theory and type theory. They pretty quickly become the main people pursuing both of those fields. They also decide the two fit together quite well.
(If you are a Platonist, this is not surprising, as both are very general. If you are a Conwayist, this is not surprising because if the same people are interested in two things, either those things will become increasingly intertwined, or, if they really are incompatible, the group of people will disaggregate.)
4. Programming language theorists are not content with just a common mental framework for all math, they also want computer-codified proofs and computer-aided workflows. (See https://en.wikipedia.org/wiki/Automath for the first major attempt.) Type theory, being far more mechanistic/syntactic than category theory, is their obvious go-to.
5. Enter Vladimir Voevodsky, a Fields medalist, i.e. a bona fide exceptional mathematician by mainstream standards who can finally lend some legitimacy to the efforts of these mathematician-computer scientists, who often raise eyebrows. Voevodsky's specialty is (within) topology, which always had a friendlier relationship with category theory, so he picks everything up easily; an experience with a bad proof of his that no one else caught leads him to think formal methods are needed.
6. HoTT was sparked by some insight of Voevodsky's. The intent is actually two-fold: both to fix some annoyances with automated theorem provers, and to make some progress in advanced topology. It just so happens that the same tricks looked promising for both problems.
So HoTT in particular is useful for cutting-edge topology research (that's few of us) or better human-computer interaction (that's far more of us). For the latter, just getting on board with category theory and type theory before the HoTT special sauce is most of the mental work. HoTT is not a big additional step if others are doing the heavy lifting to figure out what it is in the first place.
7. During all of this, the US military decided that cybersecurity is a problem, and that programming language theorists had a compelling story on how to do defense more aggressively than traditional security types. That has injected a bunch of money into this stuff whereas before there was hardly any (by computer scientist standards).
EDIT: Should've read the first page, whoops; still, the book provides a lot more material, so if you're interested, give it a read next.