
> computers [AI] should be able to understand numbers, and run analysis on numbers.

That's not how any of this works!

"Human brains are made of neurons, so humans must be experts on neurons."

Large language models are notoriously bad at simple arithmetic ("numbers") for much the same reason humans are. We cheat and use calculators to shore up our numeracy, but LLMs are trained on human text, not on the method used to generate that text.

They can see (and learn from) the output we've generated with calculators, but not the step-by-step process the calculators use internally to multiply and add. Even if they could observe those steps and learn from them, the resulting efficiency would be hideously bad and the error rate unacceptably high. Adding up the numbers in even a small spreadsheet would cost about $1 if run through GPT-4, but a tiny fraction of a cent if run through Excel.
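A back-of-envelope version of that cost claim can be sketched as below. The token counts and per-token prices are illustrative assumptions (roughly GPT-4-class API rates at the time), not quoted figures:

```python
# Back-of-envelope cost comparison: summing a small spreadsheet column
# via an LLM API vs. computing it directly. All prices and token
# counts here are illustrative assumptions, not current rates.

rows = 100                      # assumed spreadsheet size
tokens_per_row = 8              # assumed tokens per cell, e.g. "A17: 1,234.56\n"
prompt_tokens = rows * tokens_per_row
completion_tokens = 50          # assumed short answer from the model

# Assumed per-token prices in dollars (hypothetical GPT-4-class rates:
# $0.03 / 1K prompt tokens, $0.06 / 1K completion tokens)
price_prompt = 0.03 / 1000
price_completion = 0.06 / 1000

llm_cost = prompt_tokens * price_prompt + completion_tokens * price_completion
print(f"LLM cost per pass: ${llm_cost:.4f}")

# The same sum computed directly costs effectively nothing:
values = [1234.56] * rows
total = sum(values)
print(f"Direct sum: {total:.2f}")
```

Even under generous assumptions the LLM pass costs real money per query, while the native arithmetic is too cheap to meter.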

There have been attempts to give LLMs access to calculator plugins such as Wolfram Alpha, but it's early days, and LLMs are still worse at using such tools than people are.
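The plugin pattern itself is simple to sketch: the model is prompted to emit a structured tool call instead of computing the answer from learned text patterns, and the host executes the actual arithmetic. The tool-call format below is hypothetical; real systems (OpenAI function calling, the Wolfram Alpha plugin) differ in detail:

```python
# Minimal sketch of the "calculator plugin" pattern. The model's output
# format and the dispatch logic are assumptions for illustration.
import ast
import operator

# Safe evaluator for simple arithmetic expressions (no eval()).
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def calculator(expr: str) -> float:
    """Evaluate +, -, *, / over numeric literals."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval").body)

def handle_model_output(output: dict) -> str:
    # Instead of guessing "347 * 1289 = ..." token by token, the model
    # emits a tool request; the host runs it and returns the exact result.
    if output.get("tool") == "calculator":
        return str(calculator(output["input"]))
    return output.get("text", "")

# Stand-in for a model response that requests the tool:
print(handle_model_output({"tool": "calculator", "input": "347 * 1289"}))
```

The hard part in practice isn't the dispatch loop; it's getting the model to reliably recognize when it should reach for the tool instead of pattern-matching an answer.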
