
I agree that in and of itself it's not enough to be alarmed. Also, I have to say I don't really know what grade school mathematics means here (multiplication? Proving triangles are congruent?). But I think the question is whether the breakthrough is an algorithmic change in reasoning. If it is, then it could challenge all 4 of your limitations. Again, this article is low on details, so really we are arguing over our best guesses. But I wouldn't be so confident that an algorithmic improvement on simple math problems couldn't have huge implications.

Also, do you remember what Go players said when AlphaGo beat Fan Hui? Change can come quickly.




I think maybe I didn't make myself quite clear here. There are already algorithms which can solve advanced mathematical problems 100% reliably (i.e. prove theorems). There are even algorithms which, given enough time, can prove any correct theorem that can be stated in a certain logical language, and there are real systems in which these algorithms have actually been implemented.
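For a concrete illustration, here is roughly what that looks like in Lean 4, a proof assistant where grade school facts are dispatched automatically. This is a minimal sketch; it assumes a recent Lean 4 toolchain, and the examples are mine, not from the article:

    -- A grade school fact, checked by evaluation of a decidable proposition.
    example : 2 + 2 = 4 := by decide

    -- Commutativity of addition over the naturals, via an existing library lemma.
    example (a b : Nat) : a + b = b + a := Nat.add_comm a b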

My point is that no technology which can solve grade school maths problems would be viewed as a breakthrough by anyone who understood the problem. The fundamental problems which need to be solved are not problems you encounter in grade school mathematics. The article is just ill-informed.


>no technology which can solve grade school maths problems would be viewed as a breakthrough ...

Perhaps not in the sense of making mathematicians redundant, but it seems like a breakthrough for ChatGPT-type programs.

You've got to remember these things have gone from kind of rubbish a year or so ago to beating most students at law exams now, and by the sounds of it they'll beat students at math tests shortly. At that rate of progress they'd be competing with the experts before very long.


The article suggests the way Q* solves basic math problems matters more than the difficulty of the problems themselves. Either way, I think judging the claims made remains premature without seeing the supporting documentation.


“Given enough time” makes that a useless statement. Every kid in college learns this.

The ability to eventually prove a given theorem isn't interesting, especially if the time required is longer than the time left in the universe.

It's far more interesting to see whether an AI, given an arbitrarily stated problem, can make clear progress quickly.
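To make the "longer than the time left in the universe" point concrete, here is a back-of-the-envelope sketch in Python; the branching factor, proof length, and checking rate are invented illustrative numbers, not measurements of any real prover:

    # Back-of-the-envelope: exhaustive proof search blows up exponentially.
    # All figures below are assumed, illustrative values only.
    branching = 10          # candidate inference steps considered at each point
    proof_len = 40          # length of the proof we happen to need
    rate = 1e12             # candidate proofs checked per second (generous)

    candidates = branching ** proof_len      # ~1e40 proof attempts
    seconds = candidates / rate              # ~1e28 seconds
    universe_age = 4.35e17                   # seconds since the Big Bang, roughly

    print(f"{seconds:.1e} s, about {seconds / universe_age:.1e} times the age of the universe")

Even with generous assumptions, blind "given enough time" search is hopeless; the interesting question is whether the search can be guided well enough to find proofs quickly.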



