
I was just thinking about this. I already posted a comment here, but I will say, as a mathematician (PhD in number theory), that for me AI significantly takes away the beauty of doing mathematics in any realm where it is used.

The best part of math (again, just for me) was that it was a journey done by hand, with only the human intellect, that computers didn't understand. The beauty of the subject was precisely that it was a journey of human intellect.

As I said elsewhere, my friends used to ask me why something was true and it was fun to explain it to them, or ask them and have them explain it to me. Now most will just use some AI.

Soulless, in my opinion. Pure mathematics should be about the art of the thing, not producing results on an assembly line like it will be with AI. Of course, the best mathematicians are going into this because it helps their current careers, not because it helps the future of the subject. Math done with AI will be a lot like Olympic running done with performance-enhancing drugs.

Yes, we will get a few more results, faster. But the results will be entirely boring.




There are many similarities in your comment to how grandmasters discuss engines. I have a hunch the arc of AI in math will be very similar to the arc of engines in chess.

https://www.wired.com/story/defeated-chess-champ-garry-kaspa...


I agree with that, in the sense that math will become more about who can use AI the fastest to generate the most theories, which sort of side-steps the whole point of math.


As a chess aficionado and a former tournament player, who didn’t get very far, I can see pros & cons. They helped me train and get significantly better than I would’ve gotten without them. On the other hand, so did the competition. :) The average level of the game is so much higher than when I was a kid (30+ years ago) and new ways of playing that were unthinkable before are possible now. On the other hand cheating (online anyway) is rampant and all the memorization required to begin to be competitive can be daunting, and that sucks.


Hey I play chess too. Not a very good player though. But to be honest, I enjoy playing with people who are not serious because I do think an overabundance of knowledge makes the game too mechanical. Just my personal experience, but I think the risk of cheaters who use programs and the overmechanization of chess is not worth becoming a better player. (And in fact, I think MOST people can gain satisfaction by improving just by studying books and playing. But I do think that a few who don't have access to opponents benefit from a chess-playing computer).


Presumably people who get into math going forward will feel differently.

For myself, chasing lemmas was always boring — and there’s little interest in doing the busywork of fleshing out a theory. For me, LLMs are a great way to do the fun parts (conceptual architecture) without the boring parts.

And I expect we'll see much the same change as with physics: computers increase the complexity of the objects we study, which tend to be rather simple when done by hand — eg, people don't investigate patterns in the diagrams of group(oids) because drawing million-element diagrams isn't tractable by hand. And you only notice the patterns in them when you see examples of the diagrams at scale.


Just a counterpoint, but I wonder how much you'll really understand if you can't even prove the whole thing yourself. Personally, I learn by proving but I guess everyone is different.


My hunch is it won't be much different, even when we can simply ask a machine that doesn't have a cached proof, "prove the Riemann hypothesis," and it thinks for ten seconds and spits out a fully correct proof.

As Erdos (I think?) said, great math is not about the answers, it's about the questions. Or maybe it was someone else, and maybe "great mathematicians" rather than "great math". But the gist is the same.

"What happens when you invent a thing that makes a function continuous (aka limit point)"? "What happens when you split the area under a curve into infinitesimal pieces and sum them up"? "What happens when you take the middle third out of an interval recursively"? "Can we define a set of axioms that underlie all mathematics"? "Is the graph of how many repetitions it takes for a complex number to diverge interesting"? I have a hard time imagining computers would ever have a strong enough understanding of the human experience with mathematics to even begin pondering such questions unprompted, let alone answer them and grok the implications.

Ultimately the truths of mathematics, the answers, soon to be proved primarily by computers, already exist. Proving a truth does not create the truth; the truth exists independent of whether it has been proved or not. So fundamentally math is closer to archeology than it may appear. As such, AI is just a tool to help us dig with greater efficiency. But it should not be considered or feared as a replacement for mathematicians. AI can never take away the enlightenment of discovering something new, even if it does all the hard work itself.


> I have a hard time imagining computers would ever have a strong enough understanding of the human experience with mathematics to even begin pondering such questions unprompted, let alone answer them and grok the implications.

The key is that good questions come from hard-won experience, not from lazily querying an AI.


Even current people will feel differently. I don't bemoan the fact that Lean/Mathlib has `simp` and `linarith` to automate trivial computations. A "copilot for Lean" that can turn "by induction, X" or "evidently Y" into a formal proof sounds great.

The trick is teaching the thing how high-powered its theorems should be, or how much detail to factor out, depending on the user's level of understanding. We'll have to find a pedagogical balance (e.g. you don't give `linarith` to someone practicing basic proofs), but I'm sure it will be a great tool to aid human understanding.

A tool to help translate natural language to formal propositions/types also sounds great, and could help more people to use more formal methods, which could make for more robust software.
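For readers who haven't used these tactics, here is a minimal sketch (Lean 4 with Mathlib, assuming `import Mathlib.Tactic`) of the kind of trivial goals `simp` and `linarith` close automatically — exactly the busywork the comment above suggests delegating:

```lean
import Mathlib.Tactic

-- `linarith` closes goals that follow from linear arithmetic over the hypotheses.
example (x y : ℝ) (h1 : x ≤ 3) (h2 : y ≤ 2) : x + y ≤ 5 := by
  linarith

-- `simp` rewrites with a library of simplification lemmas until the goal is trivial.
example (n : ℕ) (l : List ℕ) : (n :: l).length = l.length + 1 := by
  simp
```

A "copilot" in the sense described would sit one level up: turning an informal step like "by induction on n" into a chain of such tactic invocations.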


If you think the purpose of pure math is to provide employment and entertainment to mathematicians, this is a dark day.

If you believe the purpose of pure math is to shed light on patterns in nature, pave the way for the sciences, etc., this is fantastic news.


Well, 99% of pure math will never leave the domain of pure math so I'm really not sure what you are talking about.


We also seem to be suffering from automation delusions right now.

I could see how AI could assist me with learning pure math but the idea AI is going to do pure math for me is just absurd.

Not only would I not know how to start, more importantly I have no interest in pure math. There will still be a huge time investment to get up to speed with doing anything with AI and pure math.

You have to know what questions to ask. People with domain knowledge seem to really be selling themselves short. I am not going to randomly stumble on a pure math problem prompt when I have no idea what I am doing.


I agree wholeheartedly about the beauty of doing mathematics. I will add though that the author of this article, Kevin Buzzard, doesn't need to do this for his career and from what I know of him is somebody who very much cares about mathematics and the future of the subject. The fact that a mathematician of that calibre is interested in this makes me more interested.


> Now most will just use some AI.

Do people with PhD in math really ask AI to explain math concepts to them?


They will, when it becomes good enough to prove tricky things.


The parent comment said:

> Now most will just use some AI.

I'm genuinely wondering whether it's true. The "now" and "most" parts.


I think it will become apparent how bad they are at it. They're algorithms, not sentient beings. They do not think of themselves or their place in the world, and do not fathom the contents of the minds of others. They do not care what others think of them.

Whatever they write only happens to contain some truth by virtue of the model and the training data. An algorithm doesn’t know what truth is or why we value it. It’s a bullshitter of the highest calibre.

Then comes the question: will they write proofs that we will consider beautiful and elegant, that we will remember and pass down?

Or will they generate what they’ve been asked to and nothing less? That would be utterly boring to read.




