
Ok, so you're a library developer and create a greenfield API.

What do you do so that ChatGPT can pick up your library and its patterns? The obstacles I see in this scenario:

* Base models take months and millions of dollars to train

* RLHF can supposedly add knowledge, but that's disputed; many argue it mostly "changes style"

* What incentive will OpenAI have to include your particular library's documentation?

I imagine that if the library becomes really popular, a lot of other code will include examples of how to use it. But what about before that?

Incorporating new knowledge always lags by a few months (are there two GPT updates per year? Maybe up to four, but not significantly more), so what about a fast-moving, agile greenfield project? It could frustrate LLM users (I know I've already been bitten a lot by some Python library changes).

It seems it's just another tool in the box for humans to use. In the far, far future, maybe, when we somehow get around those millions of dollars for fine-tuning (doubtful) and/or libraries simply stop changing.

But still, put any non-trivial codebase into a 120k-token context and see how easily both GPT and Claude Opus trip over themselves. It's amazing when it works, but for now it's still a roll of the dice.



