tgma | 6 months ago | on: LLaMA now goes faster on CPUs
Either https://lmstudio.ai (a desktop app with a nice GUI) or https://ollama.com (a command-line tool, more like a Docker container, which you can also hook up to a web UI via https://openwebui.com) should be super straightforward to get running.
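For reference, a minimal sketch of the ollama route on an Apple Silicon Mac. The model tag and port numbers here are illustrative assumptions; check the ollama and Open WebUI docs for current commands:

```shell
# Install ollama (macOS: download from https://ollama.com, or via Homebrew)
brew install ollama

# Pull a model and chat with it in the terminal
# ("llama3" is an example model tag; any model from the ollama library works)
ollama run llama3

# Optionally, hook it up to Open WebUI running in Docker
# (ollama serves its API on localhost:11434 by default)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000 and select the model
```

LM Studio needs no terminal at all: download the app, pick a model in its built-in browser, and chat from the GUI.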
tchvil | 6 months ago
Thank you for letting me know it was possible on an M1. I'll try all this now.