
TIL Math Olympiad problems are simple grammar exercises.



They do way more than correct grammar, but to be fair, they did make something like 10,000 submissions to the math Olympiad to get that score.

It’s not like it’ll do it consistently.

Just a marketing stunt.


You’re talking about the informatics Olympiad and o1. As for Google DeepMind’s model and the math Olympiad, it didn’t make 10,000 submissions. It did, however, generate a bunch of different solutions, but the process was fully automatic (and consistent). We’re getting there.


I wouldn’t really put AlphaProof in the same category as o1, Claude, or Llama.

It was trained to generate text in the Lean language (https://www.lean-lang.org/), which is specifically used for formal proofs.

It’s not a natural language model.
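
For context, here’s a minimal sketch of what a Lean proof looks like (a toy example, not actual AlphaProof output; the theorem name is made up for illustration):

    -- a trivial theorem in Lean 4, proved by definitional reduction (rfl)
    theorem two_plus_two : 2 + 2 = 4 := rfl

The point is that output in this language is machine-checkable by the Lean proof checker, unlike free-form natural-language reasoning.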

Source: https://deepmind.google/discover/blog/ai-solves-imo-problems...

Google seems to mainly be playing the game of building more specialized models (AlphaGo, AlphaProof) with general training methods (AlphaZero).

I do think it’s kind of funny that they mention AGI in that article, but the model is specifically not general.



