I've only done two experiments with it myself - training a tagging model on my blog's content and using it to suggest tags for untagged entries - and I found the results very unimpressive with both a cheaper model and the most expensive one.
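For context, here's a minimal sketch of what that kind of fine-tuning experiment looks like using the legacy (pre-1.0) openai Python library. The file name, prompt format, and example entries here are assumptions for illustration, not my actual training data:

```python
# A sketch of fine-tuning GPT-3 for tag suggestion, using the legacy
# openai library (pip install "openai<1.0"). Assumes OPENAI_API_KEY
# is set in the environment.
import json
import openai

# Training data: one prompt/completion pair per tagged blog entry.
# The "\n\n###\n\n" separator and trailing " END" stop sequence follow
# OpenAI's guidance for completion-style fine-tunes. These records
# are invented examples.
examples = [
    {
        "prompt": "Title: Weeknotes: Datasette improvements\n\n###\n\n",
        "completion": " datasette, projects END",
    },
    # ... one record per tagged entry ...
]
with open("tags.jsonl", "w") as fp:
    for example in examples:
        fp.write(json.dumps(example) + "\n")

# Upload the training file, then kick off fine-tunes against both a
# cheap base model (ada) and the most expensive one (davinci).
training_file = openai.File.create(
    file=open("tags.jsonl", "rb"), purpose="fine-tune"
)
for base_model in ("ada", "davinci"):
    openai.FineTune.create(training_file=training_file.id, model=base_model)
```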
I've seen a few other people suggest that fine-tuning GPT-3 is unlikely to give better results than just feeding the regular model a few examples in a regular prompt.
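That few-shot alternative looks something like this - a sketch against the legacy Completion API, again with invented example entries:

```python
# Instead of fine-tuning, paste a handful of tagged entries into the
# prompt and ask the stock model to continue the pattern.
import openai  # legacy pre-1.0 library

prompt = """Suggest tags for blog entries.

Title: Weeknotes: Datasette improvements
Tags: datasette, projects

Title: Joining CSV files in your browser using Datasette Lite
Tags: datasette, sqlite, webassembly

Title: Exploring the training data behind Stable Diffusion
Tags:"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=32,
    temperature=0,
    stop=["\n"],  # stop at the end of the suggested tag list
)
print(response["choices"][0]["text"].strip())
```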
I've yet to see anyone talk about a GPT-3 fine-tuning project that went really well for them. Maybe I haven't looked in the right places.