Practice. Keep notes on what works for you. Pay attention to what other people do and take the best ideas.
What is certain is that the field is changing and the tricks that work today will not necessarily work tomorrow.
For instance, there was a version of DALL-E Mini that had an original character in it named "Darts Vader" who was exactly what you think. They rebuilt the model, "Darts Vader" went away, and we never saw him again.
Most importantly, ChatGPT's greatest strength is getting people to give it credit for things it almost did right. Being a better prompt engineer is not going to overcome the structural deficiencies that cause ChatGPT's output to be factually incorrect; a better prompt engineer is going to help ChatGPT maximize its bullshitting capability, and will in fact have some of the skills the Emperor had in this story.
Past results are no indication of future performance... But I can't recommend strongly enough against specializing in using someone else's product. Take AI classes, learn about neural networks, build something with AI - but don't spend weeks learning how to make ChatGPT return something interesting. That's a bit like spending weeks learning how to maintain IBM mainframes - it might be a lucrative job for a while, but it may also vanish more or less overnight in favor of something else. Microsoft, just like IBM, is more than happy for you to be a Microsoft engineer or an IBM engineer. I'd suggest just being a computer engineer instead.
Prompt Engineering isn't a thing - it should be called "amateur ChatGPT enthusiast".
> against specializing in using someone else's product
I'm assuming you're a software engineer of some sort. Do you have an issue with using computers even though they're made by someone else?
I'd assume the answer "no", but you'd probably be uncomfortable if you only knew how to use a MacBook Pro (or whatever).
Prompt engineering, if it does take off as a real profession, will be the same. The skillset should be applicable whether you use Dall-E or an open-source generative model.
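To make the "transferable skillset" claim concrete, here is a minimal sketch of what a model-agnostic prompting habit might look like in code. The function name and the two rendering styles are hypothetical - the point is just that the structured intent (task plus constraints) is kept separate from any one vendor's input format.

```python
# Hypothetical sketch: separate the prompt's structure (task, constraints)
# from the surface format any particular model expects.

def render_prompt(task, constraints, style="instruction"):
    """Build a prompt from structured parts instead of a hand-tuned string."""
    lines = [task]
    lines += [f"Constraint: {c}" for c in constraints]
    if style == "chat":
        # Chat-tuned models often respond better to a conversational frame.
        return "User: " + " ".join(lines)
    return "\n".join(lines)

# The same structured intent, rendered for two (hypothetical) model styles:
completion_prompt = render_prompt(
    "Summarize the article below in three sentences.",
    ["Plain language", "No bullet points"],
)
chat_prompt = render_prompt(
    "Summarize the article below in three sentences.",
    ["Plain language", "No bullet points"],
    style="chat",
)
```

If the skill transfers, it lives in the structured part; only the last rendering step changes per model.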
Sure - I tell folks who work in IT or make iOS apps the exact same thing - specializing in someone else's technology is dangerous.
"Computers are made by someone else" sidesteps the point. You can build your own computer at home. You cannot build your own ChatGPT at home, because the exact way in which it was constructed has not been disclosed.
If the author is learning how to work with open-source AI tooling, that's amazing and I bet they'll do very well. If they're learning how to make a MegaCorp's product dance a bit, then they've chosen a career that can disappear in a moment.
It's possible the prompts are generic enough that they work well against all AIs - but then you'd just be a normal author, no? If you take a peek at the "prompt engineering" guides out there, it's mostly tricks, like early-Google SEO was - another industry that probably paid well for years before disappearing overnight.
> It's possible the prompts are generic enough that they work well against all AIs
The skill here is to be able to systematically understand language of different AIs and how to instruct them. The various "tips and tricks" will eventually get synthesized into higher-level observations as time goes on.
> SEO
Still a huge industry, just not in the way we expected back then. I'm sure at least some of the people that were doing early-google SEO are still doing SEO (just from the other direction). And knowing the quirks of early Google probably helps them a lot! Maybe you'd consider it a different skill, but I think there's a lot to learn just by trying to squeeze the best results out of these AI models, even if they vastly change over the next few years.
Or that it will or should be a thing. If you are not talking about embeddings, fine-tuning, retraining, or changing or making models yourself, prompt ‘engineering’ is a trivial, boring, often time-consuming exercise available to anyone who can, you know, use natural language. If you read enough ‘ways’ a specific model (version/training/fine-tuning) can be ‘told’ what to do, and then spend enough time subtly changing your writing and testing prompts for your goal, you can achieve good-to-excellent results for that model, mostly (models like ChatGPT, even for the ‘experts in prompting’, quite often do things the prompt explicitly told them not to do).
There is no engineering (a scientific process of building something) involved, and besides being articulate, there is mostly no skill involved beyond patience.
Soon there'll be prompt engineering evangelists, prompt engineering frameworks, prompt engineering <X> specialists, prompt engineering wikis, prompt engineering conventions, prompt engineering courses and schools, etc. And probably also a crypto version of all that. /s
Just... try to enjoy this fun ride through clown world.
Agreed. Now we wait for the get-rich-quick prompt-prompt writers: ‘be a prompt engineer for a low $19.99/month; AI writes your prompts’. Which is in fact trivial, as GPT-3/ChatGPT are already used to write prompts for other systems AND for themselves, with good results (proving what a nonsense ‘job’ this already is).
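The "AI writes your prompts" pattern can be sketched in a few lines. Everything here is illustrative: `ask_model` is a stand-in for whatever real text-generation API you would call, and the returned string is a hard-coded stub, not actual model output.

```python
# Hypothetical sketch of meta-prompting: ask one model to write a prompt
# for another system (e.g. a text-to-image model).

def ask_model(prompt):
    # Stub: a real implementation would call an LLM API here.
    return ("A watercolor painting of a lighthouse at dusk, "
            "soft light, muted palette, high detail")

def make_image_prompt(subject, mood):
    meta_prompt = (
        f"Write a detailed text-to-image prompt depicting {subject}. "
        f"The mood should be {mood}. Mention medium, lighting, and palette."
    )
    return ask_model(meta_prompt)

print(make_image_prompt("a lighthouse at dusk", "calm"))
```

The meta-prompt itself is just more natural language - which is rather the commenter's point about how little "engineering" is involved.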
Do recruiters ask you how good your Googling skills are now? They don’t, and you need to have really high expectations of the “LLM+chat” paradigm to think that it will be as useful and prevalent as search is now.
Google is an end-user product; LLMs are not inherently end-user products. You're talking specifically about ChatGPT, but GPT-3 isn't meant for end-users and can be used in many other products besides ChatGPT.
I'm still not necessarily sold on it as its own profession, since I think the long-term goal of these models is to interpret natural language well enough that anyone can use them, but I still think this is TBD.
Set goals for results you want to get from it. And then just practice. With different systems: SD, MJ, ChatGPT, GPT-3, GPT-J, etc. Because they differ, and they change because of retraining, fine-tuning, or prompts that result in information (text, images, etc.) their creators did not want them to give out.
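One way to make "set goals and practice" systematic is to define a checkable goal, run several prompt variants, and record which ones meet it. The model below is a stub that only illustrates the loop; with a real system you would swap in an actual API call, and the goal check would be whatever success criterion you set.

```python
# Hypothetical sketch of a prompt-practice loop: try variants, check a goal.

def fake_model(prompt):
    # Stub model: gives a short answer only when the prompt asks explicitly.
    if "in one sentence" in prompt:
        return "Paris is the capital of France."
    return ("France is a country in Western Europe with a long history... "
            "Its capital is Paris, a major cultural center.")

def meets_goal(output):
    # Example goal: a concise answer, under 60 characters.
    return len(output) < 60

variants = [
    "Tell me about the capital of France.",
    "What is the capital of France? Answer in one sentence.",
]

results = {p: meets_goal(fake_model(p)) for p in variants}
for prompt, ok in results.items():
    print(("PASS" if ok else "FAIL"), "-", prompt)
```

Keeping notes on which variants pass, per model and per model version, is essentially the practice regimen described above.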
https://etc.usf.edu/lit2go/68/fairy-tales-and-other-traditio...