On: LLaMA now goes faster on CPUs

taneq | 10 months ago
Is four years really 'long tail' these days? Our VM host box is from 2010 (and I had to rebuild llama.cpp locally without AVX to get it working :P )
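
(Context for the AVX remark: AVX first shipped with Intel's Sandy Bridge parts in 2011, so a 2010-era host has no AVX at all, and a default llama.cpp build that assumes it will die with an illegal-instruction error on such a machine. A minimal sketch for checking what the box actually supports is below, using the GCC/Clang __builtin_cpu_supports probe; the exact CMake toggles for compiling the AVX paths out of llama.cpp, historically along the lines of LLAMA_AVX=OFF / LLAMA_AVX2=OFF, have changed names across versions, so treat those as approximate.)

    /* Minimal sketch (GCC/Clang on x86): query CPUID for the vector
       extensions llama.cpp's fast CPU kernels rely on. A pre-Sandy-Bridge
       CPU (2010 or earlier) prints 0 for AVX, which is why the AVX code
       paths have to be compiled out for a build to run there. */
    #include <stdio.h>

    int main(void) {
        printf("AVX:     %d\n", __builtin_cpu_supports("avx"));
        printf("AVX2:    %d\n", __builtin_cpu_supports("avx2"));
        printf("FMA:     %d\n", __builtin_cpu_supports("fma"));
        printf("AVX512F: %d\n", __builtin_cpu_supports("avx512f"));
        return 0;
    }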

yjftsjthsd-h | 10 months ago
For cutting-edge LLM work, probably? I mean, I run mine on older hardware than that, but I'm a total hobbyist...

d416 | 10 months ago
It should be noted that while the HP ProDesk was released in 2020, the CPU’s Skylake architecture was designed in 2014. Architecture is a significant factor in this style of engineering gymnastics to squeeze the most out of silicon.

refulgentis | 10 months ago
For LLMs... yeah. I imagine you're measuring in tokens/minute with that setup. So it's possible, but... do you use it much? :)