Not a mathematician, but knowing how proof checkers work, I don't expect them to become practical without some significant future advancement. They certainly aren't right now.

The main problem is that, in order to work, they have to be connected to the formal definitions of the mathematical objects you are using in your proof. But nobody thinks that way when writing (or reading!) a proof: you usually have a high-level, informal idea of what you are "morally" doing, and then you fill in the formal details as needed.

Computer code (kinda) works because the formal semantics of the language is much closer to your mental model, so much so that in many cases it's possible to give an operational semantics. Math, generally speaking, has different objectives.
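
To make the gap concrete, here is a toy Lean 4 sketch (statement and names invented for illustration): informally, "the composite of two injective functions is injective" is a one-liner, but the checker only accepts a proof phrased against the exact formal definition of injectivity.

    def Injective {α β : Type} (f : α → β) : Prop :=
      ∀ {a b : α}, f a = f b → a = b

    theorem comp_injective {α β γ : Type} {f : β → γ} {g : α → β}
        (hf : Injective f) (hg : Injective g) : Injective (f ∘ g) := by
      intro a b h      -- h : (f ∘ g) a = (f ∘ g) b
      exact hg (hf h)  -- unfold ∘, apply injectivity of f, then of g

Even here, every step has to go through the definition of Injective; nothing is accepted because it's "obviously true".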




You're commenting on a thread about an interview where an actual Fields medalist explains how proof assistants are actually used to do meaningful mathematical work, today. The issues you raise are all discussed in the interview.


practical = practical for routine mathematical work, which they aren't right now, as Tao says in the article itself.

Never have I ever said or implied, here or anywhere else, that proof checkers are useless and bad and stupid.


> practical = practical for routine mathematical work, which they aren't right now, as Tao says in the article itself.

I see, I guess it would have been nice to be clearer from the start.

> Never have I ever said or implied, here or anywhere else, that proof checkers are useless and bad and stupid.

Did I say you did?


They = LLMs.

As I’ve interpreted it, they mean that LLMs and similar tech isn’t particularly helpful to theorem provers at the moment.


You yourself identified some problems that a modern AI guru would salivate over: formal specifications from natural language, "filling in" formal details, etc.

I'll add another one: NN-based speedups to constraint solvers, which usually boil down to something like branch-and-bound. This is (I've heard) a very active area of research.
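
To make that concrete, here is a hypothetical sketch (all names mine, not any real solver's API) of a best-first branch-and-bound loop where a learned model replaces the hand-written branching heuristic:

    import heapq
    import itertools

    def branch_and_bound(root, lower_bound, upper_bound,
                         branch_candidates, score_candidates, is_feasible):
        """Best-first branch-and-bound for a minimization problem.

        score_candidates is the learned part: e.g. a neural net that,
        given a node and its candidate branching variables, returns one
        score per candidate, replacing a classic heuristic like
        "branch on the most fractional variable".
        """
        best_value, best_node = float("inf"), None
        tie = itertools.count()  # tie-breaker so the heap never compares nodes
        frontier = [(lower_bound(root), next(tie), root)]
        while frontier:
            bound, _, node = heapq.heappop(frontier)
            if bound >= best_value:
                continue  # prune: this subtree cannot beat the incumbent
            if is_feasible(node):
                value = upper_bound(node)
                if value < best_value:
                    best_value, best_node = value, node
                continue
            candidates = branch_candidates(node)
            scores = score_candidates(node, candidates)  # NN inference here
            var = max(zip(candidates, scores), key=lambda cs: cs[1])[0]
            for child in node.branch_on(var):  # hypothetical node API
                child_bound = lower_bound(child)
                if child_bound < best_value:
                    heapq.heappush(frontier, (child_bound, next(tie), child))
        return best_value, best_node

Note that correctness never depends on the net being right: a bad score only means more nodes get explored, since pruning still uses the sound bounds. That's what makes this a safe place to bolt on an NN.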


The whole article is about Terence Tao predicting that this will become practical without a significant breakthrough.

>> I don’t think mathematics will become solved. If there was another major breakthrough in AI, it’s possible, but I would say that in three years you will see notable progress, and it will become more and more manageable to actually use AI.

He thinks you need another breakthrough to solve mathematics, i.e. to make mathematicians obsolete. But even if a major breakthrough doesn't happen, AI will be useful for mathematicians within three years.

Of course you can still disagree with Terence Tao. Being one of the best mathematicians doesn't make him an oracle.

> But nobody thinks that way when writing (or reading!) a proof

He even very specifically addressed this issue with:

>> Let’s say an AI supplies an incomprehensible, ugly proof. Then you can work with it, and you can analyze it. Suppose this proof uses 10 hypotheses to get one conclusion—if I delete one hypothesis, does the proof still work?

He clearly believes an ugly, incomprehensible proof is much better than no proof at all.
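
In a proof assistant this experiment is mechanical. A toy Lean 4 illustration (statement invented for the example): delete a hypothesis and re-run the checker; if it still compiles, the hypothesis was never needed.

    -- Imagine an AI-produced lemma carrying an extra hypothesis h2:
    theorem one_le (n : Nat) (h1 : 0 < n) (h2 : n ≠ 5) : 1 ≤ n := h1

    -- Delete h2 and re-check; it still compiles, so h2 was superfluous
    -- and the statement holds in stronger generality:
    theorem one_le' (n : Nat) (h1 : 0 < n) : 1 ≤ n := h1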


Actually, it is possible to do a high-level outline of a proof development and fill in the details later. You do this by starting out with axioms: don't define the basic types, but state that they exist as axioms, and write down their basic properties as axioms too. Then, once you have some development going, fill in the blanks by turning the axioms into definitions.
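
A minimal Lean 4 sketch of that style, using a toy group-like signature (purely illustrative):

    -- Phase 1: postulate the objects and their properties, no definitions yet.
    axiom G : Type
    axiom mul : G → G → G
    axiom one : G
    axiom one_mul : ∀ a : G, mul one a = a

    -- Phase 2: develop the high-level theory on top of the axioms.
    theorem one_mul_one : mul one one = one := one_mul one

    -- Phase 3 (later): replace each `axiom` with a real `def`/`theorem`
    -- of the same name and type, and the development above keeps checking.

The point is that downstream proofs depend only on the names and types, so swapping an axiom for a definition with the same signature doesn't disturb them, as long as nothing relied on definitional unfolding.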



