Hacker News
haolez | 33 days ago | on: Run llama3 locally with 1M token context
Can we expect the 1M version to match the vanilla version's intelligence, across the whole 1M-token context, without degradation? What are the trade-offs?
haolez | 33 days ago
Well, according to r/LocalLLaMA, it's surprisingly good:
https://www.reddit.com/r/LocalLLaMA/s/ypFd2Xlnvf