
The computer did find the answers itself. I.e., it found "even integers" for P1, "{1,1}" for P2, and "2" for P6. It then also provided a Lean proof in each case.



It would make a lot of sense for the Lean formalisations of the problems, as written by the researchers and fed to the AI, to be provided. I'm not assuming bad intent in not providing them, but it would help in understanding the results better.


The formal definition of the first theorem already contains the answer to the problem: "{α : ℝ | ∃ k : ℤ, Even k ∧ α = k}" (which means the set of real numbers that are even integers). If they say they translated the first problem into a formal definition, then it is very interesting how they initially formalized the problem without including the answer in it.


(You're talking to one of the people who was part of the project, which is why I took @ocfnash's answer as authoritative: they did not cheat.)


If they're talking to the people who are part of the project, I'd hope the answer would contain detail and not simply expect to be taken as authoritative.


[flagged]


[flagged]


That wasn't very nice. Are you curious about anything? Happy to help. I'd proactively do it, but I don't want to guess at what's in your mind. My initial guess is you think I think that engaging with the public is an infinite loop. I don't!


I would expect that in the data they train AlphaProof on, they have some concept of a "vague problem", which could just look like

{Formal description of the set in question} = ?

And then AlphaProof has to find candidate descriptions of this set and prove a theorem that they are equal to the above (rough Lean sketch below).

I doubt they would claim to solve the problem if they provided half of the answer.
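
For concreteness, here's a rough Lean sketch of what a "determine"-style statement could look like: an abstract answer set the system has to fill in, plus an equality it then has to prove. This is just my guess at the shape (assuming a recent Mathlib), not the formalisation they actually used, and the proof is left as sorry:

    import Mathlib

    -- Hypothetical sketch, not DeepMind's actual training format. The answer is
    -- abstracted into `solutionSet`; the system must supply it and prove it correct.

    /-- Candidate answer the system has to come up with: the reals that are
    even integers, i.e. the set quoted upthread. -/
    def solutionSet : Set ℝ := {α : ℝ | ∃ k : ℤ, Even k ∧ α = k}

    /-- The IMO 2024 P1 condition: n divides ⌊α⌋ + ⌊2α⌋ + ⋯ + ⌊nα⌋ for every
    positive integer n. -/
    def P1Condition (α : ℝ) : Prop :=
      ∀ n : ℕ, 0 < n → (n : ℤ) ∣ ∑ i ∈ Finset.Icc 1 n, ⌊(i : ℝ) * α⌋

    /-- The "determine" form: the set of α satisfying the condition is exactly
    the proposed answer. Proof omitted in this sketch. -/
    theorem imo2024_p1_sketch : {α : ℝ | P1Condition α} = solutionSet := by
      sorry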


> I doubt they would claim to solve the problem if they provided half of the answer.

Stranger things have happened


> I doubt they would claim to solve the problem if they provided half of the answer.

This falls under "extraordinary claims require extraordinary proof", and we have seen nothing of the sort.


To be fair, that isn't half the answer; it's more like 99% of the answer.

They clarified above that it provided the full answer though.


The DeepMind team has a history of being misleading. The great StarCraft 2 strategist bot is still fresh in my mind.


What’s the story with that bot? Always thought it was cool. Was that all smoke and mirrors?


I think maybe the parent comment is referring to it essentially just employing a Zerg rush, but with the speed and reaction time of an AI? Not 100% sure... Unrelated, but iirc the StarCraft work was an early example of generalizing a pretrained NN, AlphaGo, and showing that it could adapt to learn and defeat games across strategic domains, especially after it had learned so much strategy from the most difficult, widely played, and most strategically varied physical game available.


Exactly: a problem and its answer are just different ways of describing the same object. Every step of a proof is a transformation/translation of the same object. It would be disingenuous to say that some heavy lifting isn't done in formalizing a problem, but it seems that step is also performed by a machine:

"We established a bridge between these two complementary spheres by fine-tuning a Gemini model to automatically translate natural language problem statements into formal statements, creating a large library of formal problems of varying difficulty."

I'm confused: is the formalization done by Gemini or "manually"? Which is it?


Come up with many possible answers, formalize them all, and then try to prove or disprove each of them.
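
Something shaped like the toy loop below, except that the real system would have to find actual Lean proofs or disproofs; here the "check" is just a finite numeric test over integer alpha, so surviving it proves nothing. All names and the candidate space are made up for illustration:

    # Toy guess-and-check for IMO 2024 P1 restricted to integer alpha (so the
    # floors are exact). A real prover would have to prove/refute each guess.

    def condition(alpha: int) -> bool:
        """n divides alpha + 2*alpha + ... + n*alpha, checked for n up to 50."""
        return all(
            sum(i * alpha for i in range(1, n + 1)) % n == 0
            for n in range(1, 51)
        )

    # Candidate descriptions of the solution set (restricted to integers).
    candidates = {
        "all integers": lambda a: True,
        "even integers": lambda a: a % 2 == 0,
        "multiples of 4": lambda a: a % 4 == 0,
    }

    # A candidate survives only if it matches the condition exactly on the test
    # range; here only "even integers" does.
    for name, predicate in candidates.items():
        ok = all(predicate(a) == condition(a) for a in range(-20, 21))
        print(f"{name}: {'consistent with tests' if ok else 'refuted'}")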


This is probably partially what they did; idk why it's downvoted lol


It's not clear if the theorem is the actual input formal definition, or if the formal definition was in a different form.


Can you elaborate on how it makes guesses like this? Does it run experiments first? Is it a raw LLM? Is there a feedback loop based on partial progress?


"AlphaProof is a system that trains itself to prove mathematical statements in the formal language Lean. It couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself how to master the games of chess, shogi and Go."


Huh, so MCTS to find the ‘best’ token using a (relatively) small, quick language model? Sounds like an interesting approach to small model text generation too…


Yeah, I'm not clear on the degree to which this system and LLMs are related. Are they related? Or is AlphaProof a complete tangent to ChatGPT and its ilk?


It's not an English LLM (Large Language Model).

It's a math Language Model. Not even sure it's a Large Language Model. (Maybe shares a foundational model with an English LLM; I don't know)

It learns mathematical statements, and generates new mathematical statements, then uses search techniques to continue. Similar to Alpha Go's neural network, what makes it new and interesting is how the NN/LLM part makes smart guesses that drastically prune the search tree, before the brute-force search part.

(This is also what humans do to solve math problems. But humans are really, really slow at brute-force search, so we rely almost entirely on the NN pattern-matching, analogy-making part.)
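
To make the "smart guesses prune the search" idea concrete, here's a toy sketch. It is nothing like the real system (which reportedly runs AlphaZero-style MCTS over Lean tactics with a neural policy); here the "policy" is a hand-written score, the "tactics" are single characters, the "verifier" checks bracket balance, and the search is a plain priority queue rather than MCTS:

    import heapq

    TOKENS = "()"   # toy "tactic" alphabet
    GOAL_LEN = 6    # look for a balanced string of this length

    def is_valid_prefix(s: str) -> bool:
        """Verifier for partial 'proofs': never close more than we opened."""
        depth = 0
        for c in s:
            depth += 1 if c == "(" else -1
            if depth < 0:
                return False
        return True

    def is_complete_proof(s: str) -> bool:
        """Verifier for finished 'proofs': balanced and of the target length."""
        return len(s) == GOAL_LEN and is_valid_prefix(s) and s.count("(") == s.count(")")

    def policy_score(prefix: str, token: str) -> float:
        """Stand-in for the neural policy: prefer closing when deep, opening when shallow."""
        depth = prefix.count("(") - prefix.count(")")
        return 1.0 if (token == ")" and depth > 0) or (token == "(" and depth <= 1) else 0.1

    def guided_search() -> str | None:
        """Best-first search in which the 'policy' orders expansions and the
        verifier discards invalid branches early."""
        frontier = [(-1.0, "")]   # (negated score, partial proof)
        while frontier:
            _, prefix = heapq.heappop(frontier)
            if is_complete_proof(prefix):
                return prefix
            if len(prefix) >= GOAL_LEN:
                continue
            for token in TOKENS:
                candidate = prefix + token
                if is_valid_prefix(candidate):
                    heapq.heappush(frontier, (-policy_score(prefix, token), candidate))
        return None

    print(guided_search())   # prints some balanced string of length 6, e.g. "(()())"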


These kinds of LLMs are also very interesting for software engineering. It's just a matter of replacing Lean with something that is more oriented towards proving software properties.

For example, write a formal specification of a function in Dafny or Liquid Haskell and get the LLM to produce code that is formally guaranteed to be correct. Logic-based + probability-based ML.

All GOFAI ideas are still very useful.


You can also verify software like compilers in Lean:

https://aws.amazon.com/blogs/opensource/lean-into-verified-s...
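
For a tiny, self-contained flavour of the idea (nothing to do with the post above, just plain Lean 4): prove that an accumulator-based reverse meets its naive specification.

    variable {α : Type}

    /-- "Optimised" implementation: tail-recursive reverse with an accumulator. -/
    def reverseAcc (acc : List α) : List α → List α
      | []      => acc
      | x :: xs => reverseAcc (x :: acc) xs

    /-- "Specification": the obvious quadratic reverse. -/
    def reverseSpec : List α → List α
      | []      => []
      | x :: xs => reverseSpec xs ++ [x]

    /-- The implementation meets the spec (generalised over the accumulator). -/
    theorem reverseAcc_eq_spec (xs acc : List α) :
        reverseAcc acc xs = reverseSpec xs ++ acc := by
      induction xs generalizing acc with
      | nil => simp [reverseAcc, reverseSpec]
      | cons x xs ih => simp [reverseAcc, reverseSpec, ih, List.append_assoc]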


Sure, but Lean has very little support for software problems compared to Isabelle, Coq or Dafny right now.

Those three also have a lot of existing training data. Hoping Lean gets more support, as it is very friendly.

As a basic learning resource focused on software engineering, there's [1], but I'm not aware of anything more advanced.

[1] The Hitchhiker's Guide to Logical Verification. https://cs.brown.edu/courses/cs1951x/static_files/main.pdf


My reading of it is that it uses the same architecture as one of the Gemini models but does not share any weights with it (i.e., it's not just a fine-tune).


This is really interesting. I would have expected the understanding to be that humans make a guess, test it, and learn from what did or did not work. The lessons learned from the prior tests would impact future guesses.

Do you know if a system like the OP's is learning from failed tests to guide future tests, or is it truly a brute-force search, as if it were trying to mine bitcoin?


This quote from the article sounds like it learns from failed tests:

>We trained AlphaProof for the IMO by proving or disproving millions of problems, covering a wide range of difficulties and mathematical topic areas over a period of weeks leading up to the competition. The training loop was also applied during the contest, reinforcing proofs of self-generated variations of the contest problems until a full solution could be found.
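
Reading that very loosely, the loop is shaped something like this toy, where every piece is a stand-in I made up: the "prover" is a trivial check, the "policy" is a table of weights, and the "problem variations" are one-liners that happen to share the target's answer:

    import random

    random.seed(0)

    def check(problem, answer):
        """Stand-in verifier: the 'proof' succeeds iff answer solves a + x = b."""
        a, b = problem
        return a + answer == b

    def generate_variations(problem, k=5):
        """Self-generated nearby variants of a problem (all with the same answer)."""
        a, b = problem
        return [(a + d, b + d) for d in range(-k, k + 1)]

    # "Policy": weights over a small answer space, reinforced on success.
    weights = {ans: 1.0 for ans in range(0, 10)}

    def sample_answer():
        total = sum(weights.values())
        r = random.uniform(0, total)
        for ans, w in weights.items():
            r -= w
            if r <= 0:
                return ans
        return max(weights)

    target = (3, 7)   # "contest problem": find x with 3 + x = 7
    for step in range(200):
        problem = random.choice(generate_variations(target))
        answer = sample_answer()
        if check(problem, answer):
            weights[answer] += 1.0    # reinforce what worked

    print(max(weights, key=weights.get))   # the policy ends up heavily favouring 4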


Reading between the lines a bit, that does answer the question I had, though I don't think I clarified it very well.

I read that to say the model's token weights are adjusted as it goes, so in an LLM sense it is kind of learning. It isn't reasoning through an answer in the way a human does though. Meaning, the model is still just statistically predicting what an answer may be and checking if it worked.

I wouldn't chalk that up to learning at all. An AI solving complex math doesn't even seem too impressive to me with the predictive-loop approach. Computers are well adept at math; throwing enough compute hardware at it to brute-force an answer isn't surprising. I'd be really impressed if it could reliably get there with a similar number of failed attempts as a human; that could indicate that it really learned and reasoned rather than rammed through a mountain of failed guesses.


>with a similar number of failed attempts as a human

It'd be hard to know how many failed attempts the human made. Humans are constantly thinking of ideas and eliminating them quickly. Possibly too fast to count.


I've never competed in math competitions at this level, but I would have expected it to be pretty clear to the human when they tested a different solution. As complex as the proofs are, is it really feasible that they are testing out a full proof in their head without realizing it?


Hmm, I think it comes down to the definitions of "testing" and "attempt". A human will generate many ideas and eliminate them without creating full proofs, just by seeing that an idea is going in the wrong direction.

It sounds like AlphaProof will doggedly create full proofs for each idea.

Is what the human is doing testing attempts?


Computers are good at arithmetic, not math...

There's definitely an aspect of this that is 'airplanes, not birds.' Just because the wings don't flap doesn't mean it can't fly, though.


That's totally fair, though wouldn't the algorithm here have to reduce the math proofs to arithmetic that can be computed in silico?


Yeah, but it doesn't understand the exact syntax on an absolute level, does it...? I understood this to be the same as any language model applied to programming languages (aka formal languages). Is that mistaken?


As far as I understand, and I may be wrong here, the system is composed of two networks: Gemini and AlphaZero. Gemini, being an ordinary LLM with some fine-tunes, only does translation from natural to formal language. Then, AlphaZero solves the problem. AlphaZero, unburdened with natural language and only dealing with "playing a game in the proof space" (where the "moves" are commands to the Lean theorem prover), does not hallucinate in the same way an LLM does because it is nothing like an LLM.


Yes, but the problem space means that invalid outputs can be quickly identified - whereas general programming isn’t necessarily amenable to rapid checks.


I mean, aren't you just describing formal language syntax? It seems like a fundamentally similar situation: the computer can automatically flag any syntax errors in a millisecond by checking them against the generating grammar for that language. That's what makes a formal language in the first place, I think!

I do think this language is considerably more robust than the typical programming language, which means a sound program is more likely to end up being valid/“correct”. But still, that’s a difference of degree, not kind, IMO


I don't mean syntax errors - I mean the difficulty of validating code that contains side effects (like HTTP requests, database access, etc.).

Validating a math proof either terminates in a reasonable time (in which case it’s useful for training), or does not (in which case the AI should be discouraged from using that approach).



