Hacker News
nullc | 10 months ago | on: Qualcomm works with Meta to enable on-device AI ap...
There are 'desktop' (well, server) CPUs with 64GB of HBM memory per socket now. And big LLMs can be run on lower-memory-bandwidth systems (like Zen 4 chips with 12 DDR5 channels per socket) at lower performance, but where 1-2TB of RAM is no big deal.
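The bandwidth trade-off the comment points at can be sketched with back-of-envelope arithmetic: during autoregressive decoding, each generated token requires streaming roughly the entire set of model weights from memory, so token rate is bounded by memory bandwidth divided by model size. The figures below (a 70B-parameter model at 8 bits per weight, ~460 GB/s for a 12-channel DDR5 socket, ~1 TB/s for an HBM-class socket) are illustrative assumptions, not numbers from the comment:

```python
def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on decode speed when the model doesn't fit in
    cache: every token must stream all weights from memory once."""
    return bandwidth_gb_s / model_size_gb

# Assumed example: 70B parameters at 8 bits/weight ~= 70 GB of weights.
MODEL_GB = 70.0

# ~12-channel DDR5 per socket: on the order of 460 GB/s theoretical.
print(f"DDR5 socket: ~{tokens_per_second(460.0, MODEL_GB):.1f} tok/s")

# HBM-equipped socket: on the order of 1 TB/s.
print(f"HBM socket:  ~{tokens_per_second(1000.0, MODEL_GB):.1f} tok/s")
```

This ignores KV-cache traffic, compute limits, and batching (which amortizes weight streaming across many requests), but it shows why capacity-rich DDR5 boxes run big models slowly yet comfortably, while HBM buys raw single-stream speed.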