Researchers upend AI status quo by eliminating matrix multiplication in LLMs (arstechnica.com)
81 points by disillusioned1 3 months ago | 12 comments



The relevant paper: https://arxiv.org/abs/2406.02528

In summary, they forced the model to process data in a ternary system and then built a custom FPGA accelerator to process the data more efficiently. Tested to be "comparable" to small models (3B params), theoretically scales to 70B, unknown for SOTA models (>100B params).
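
For the curious, the ternary part boils down to rounding each weight to {-1, 0, +1} after scaling. A minimal numpy sketch, not the authors' code; the absmean-style scaling is my assumption, borrowed from similar BitNet-type schemes:

    import numpy as np

    def ternary_quantize(w, eps=1e-8):
        # Scale by the mean absolute value, then round every weight
        # to the nearest of {-1, 0, +1}. (Assumed absmean-style scheme.)
        scale = np.mean(np.abs(w)) + eps
        q = np.clip(np.round(w / scale), -1, 1)
        return q.astype(np.int8), scale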

We have always known custom chips are more efficient, especially for tasks like these where it is basically approximating an analog process (i.e. the brain). What is impressive is how fast it is progressing. These 3B-param models would demolish GPT-2, which was, what, 4-5 years old? And they would be pure sci-fi tech 10 years ago.

Now they can run on your phone.

A machine, running locally on your phone, that can listen and respond to anything a human may say. Who could have confidently claimed this 10 years ago?


I was confused by the claim in the headline, but it seems that this is really the meat of the paper: they're looking for an architecture that is more efficient to implement and run in hardware. It is interesting. By analogy with the human brain, we know that computers must be wasting huge amounts of compute somewhere, and researchers will figure out where sooner or later.


With a ternary system, would we expect 1/3 of the elements to be zero? I kind of wonder about using a sparse MM; then they wouldn't have to represent 0 and they could just use one bit to represent 1 or -1. 66% density is not really very sparse at all, though.
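
If zeros really do make up about 1/3 of the entries, a sparse layout only needs an index per nonzero plus a single sign bit, something like this (illustrative numpy sketch, not from the paper):

    import numpy as np

    def to_sparse_ternary(q):
        # q holds values in {-1, 0, +1}; store only the nonzero
        # positions, plus one packed sign bit each.
        idx = np.flatnonzero(q)
        signs = q.ravel()[idx] > 0       # True = +1, False = -1
        return idx.astype(np.int32), np.packbits(signs)

As the comment notes, though, at ~66% density the index arrays probably cost more than simply packing two bits per weight in a dense layout.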


Note that the architecture does use matmuls. They just define ternary matmuls as not being 'real' matrix multiplication. I mean... it is certainly a good thing for power consumption to be wrangling fewer bits, but from a semantic standpoint, it is matrix multiplication.
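
To the semantic point: the add/subtract trick gives exactly the same answer as an ordinary matmul whenever the entries are restricted to {-1, 0, +1}. A quick self-contained check with made-up values:

    import numpy as np

    W = np.array([[1, -1, 0], [0, 1, 1]])    # ternary weights
    x = np.array([2.0, 3.0, 5.0])
    # +1 entries add x[j], -1 entries subtract it, 0 entries are skipped
    add_sub = np.array([x[W[i] == 1].sum() - x[W[i] == -1].sum()
                        for i in range(W.shape[0])])
    assert np.allclose(add_sub, W @ x)       # identical to a real matmul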


"Call my broker, tell him to sell all my NVDA!"

Combined with the earlier paper this year that claimed LLMs work fine (and faster) with ternary numbers (rather than floats or long ints?), the idea of running a quick LLM locally is looking better and better.


This is the same paper (or an extension) — using ternary weights means you can replace multiplication with addition/subtraction.
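
In sketch form, a matvec with weights constrained to {-1, 0, +1} becomes pure accumulation; roughly (illustrative numpy, not the paper's kernel):

    import numpy as np

    def ternary_matvec(W, x):
        # W has entries in {-1, 0, +1}: +1 adds x[j], -1 subtracts it,
        # and 0 skips it entirely; no multiplications required.
        y = np.empty(W.shape[0], dtype=x.dtype)
        for i in range(W.shape[0]):
            y[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
        return y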


[dupe]

Some more discussion a few weeks ago: https://news.ycombinator.com/item?id=40620955


Noooooooo

The whole point of AI was to sell premium GEMMs and come up with funky low precision accelerators.


There's additional discussion on the same research in an earlier thread [1].

[1] https://news.ycombinator.com/item?id=40787349



These quantizations throw away an advantage of analog computers: the ability to handle imprecise "floats".


Heh, Nvidia may want to take steps to bury this. Will likely be a humongous loss for them if it pans out.




