CuriouslyC on May 25, 2023 | on: How to Finetune GPT-Like Large Language Models on ...
Fine-tuning + context will outperform context alone, and it's cheaper to burn cycles fine-tuning and then use a smaller context in production than to serve every request with a larger context.
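A back-of-envelope sketch of that cost claim: the fine-tuning cost is paid once, while the context cost is paid on every request, so a smaller prompt wins at volume. All prices, token counts, and request volumes below are illustrative assumptions, not real vendor pricing.

```python
# Back-of-envelope comparison: one-time fine-tuning + small prompts
# vs. context-only serving with large prompts.
# All numbers are assumptions for illustration, not real pricing.

FINETUNE_COST = 500.0        # one-time fine-tuning cost, USD (assumed)
PRICE_PER_1K_TOKENS = 0.002  # inference price per 1K prompt tokens (assumed)

def serving_cost(requests: int, context_tokens: int) -> float:
    """Total inference cost for a given request volume and prompt size."""
    return requests * context_tokens / 1000 * PRICE_PER_1K_TOKENS

requests = 1_000_000
context_only = serving_cost(requests, 4000)                 # large prompt every call
fine_tuned = FINETUNE_COST + serving_cost(requests, 500)    # small prompt every call

# The one-time fine-tuning cost amortizes across requests;
# per-request prompt savings dominate at volume.
print(f"context-only: ${context_only:.0f}, fine-tuned: ${fine_tuned:.0f}")
# → context-only: $8000, fine-tuned: $1500
```

Note this only models cost; whether the fine-tuned model with a smaller context matches the quality of the larger context is exactly what the reply below disputes.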
Guillaume86 on May 25, 2023
Fine-tuning + the same context will probably outperform context alone, but switching to a smaller context does not seem to work that well, as the GP stated.