thiag0 on June 27, 2023 | on: Locally hosted LLM use cases?
May I ask how fast gpt4all runs on your laptop? I've tried running it locally on my desktop and it worked really well; however, the response time is pretty slow.
thensome on June 28, 2023
Yeah, it's still pretty slow and can't handle large inputs very well. I'd say it takes 30 seconds to a minute to process a 200-300 word selection.
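
If anyone wants to measure this on their own machine, here's a rough sketch using the gpt4all Python bindings (pip install gpt4all). The model filename is just an example; swap in whatever model you've actually downloaded, and expect the first run to fetch it.

    import time
    from gpt4all import GPT4All

    # Example model file; any locally available GGUF model works.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    # Paste your own 200-300 word selection here.
    prompt = "Summarize the following passage: ..."

    start = time.perf_counter()
    with model.chat_session():
        reply = model.generate(prompt, max_tokens=256)
    elapsed = time.perf_counter() - start

    print(reply)
    print(f"Response time: {elapsed:.1f}s")

Timings will vary a lot with CPU vs. GPU, model size, and quantization, so treat the 30s-to-a-minute figure above as one data point rather than a benchmark.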