The lovely thing about LLMs is that they can handle poorly worded prompts and well-worded prompts alike. On the engineering side, we'll certainly see more rigor and best practices. For your average user? They can keep throwing whatever they like at it.
Exactly. I have been using OpenAI for taking transcriptions and finding keywords/phrases that belong to particular categories. There are existing tools/services that do this – but I would need to learn their API.
With OpenAI, I described it in English, provided sample JSON in the shape I wanted, ran some tests, adjusted, and then I was ready.
There was no manual to read, it is in my format, and the language is natural.
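To make that workflow concrete, here's a minimal sketch of the sort of thing I mean. The category names, prompt wording, and JSON shape are all made up for illustration; the actual API call is left out, since the point is just the describe-it-in-English, show-sample-JSON pattern:

```python
import json

# Hypothetical categories for a transcript-keywording task.
CATEGORIES = ["pricing", "support", "features"]

def build_prompt(transcript: str) -> str:
    """Describe the task in plain English and show the JSON we want back."""
    sample = json.dumps({c: ["example phrase"] for c in CATEGORIES})
    return (
        "Find keywords or phrases in the transcript below that belong to "
        f"these categories: {', '.join(CATEGORIES)}.\n"
        f"Reply only with JSON shaped like: {sample}\n\n"
        f"Transcript:\n{transcript}"
    )

def parse_reply(reply: str) -> dict:
    """Validate the model's JSON reply, filling in any missing categories."""
    data = json.loads(reply)
    return {cat: data.get(cat, []) for cat in CATEGORIES}
```

Run some tests on real transcripts, tweak the prompt or the sample JSON when the model drifts, and you're done -- no vendor-specific query language to learn.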
And that is what I like about all this -- it puts power in the hands of folks with limited technical skills.
Have you used the OpenAI embeddings API? It is used to find closely related pieces of text. You could split the target text into sentences or even words and run each chunk through it. That'll be 5x cheaper (per token) than gpt-3.5-turbo and might be faster too, especially if you submit each chunk in parallel (asynchronously! Ask GPT for the code). The rate limits are per-token.
Not sure if it's suitable for your use-case on its own, but it could at least work as a pre-filtering step if your costs are high.
(The asynchronous speedup trick works for gpt-3 too of course.)
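A rough sketch of both ideas together -- fan out one request per chunk with asyncio, then rank chunks by cosine similarity to a query vector. `fake_embed` is a stand-in for a real embeddings call (it returns canned 2-d vectors so the example is self-contained); everything else is the actual pattern:

```python
import asyncio
import math

async def fake_embed(text: str) -> list[float]:
    # Stand-in: a real version would await the embeddings endpoint here.
    vecs = {"cat": [1.0, 0.0], "kitten": [0.9, 0.1], "car": [0.0, 1.0]}
    return vecs[text]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

async def rank(query: str, chunks: list[str]) -> list[tuple[str, float]]:
    # One task per chunk, all in flight at once -- the speedup trick above.
    q, *vecs = await asyncio.gather(fake_embed(query), *map(fake_embed, chunks))
    scored = [(c, cosine(q, v)) for c, v in zip(chunks, vecs)]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

Used as a pre-filter, you'd keep only the top-scoring chunks and send just those to the chat model.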