Ask HN: I learned the useless skill of prompt engineering; how relevant will it be?
77 points by nullptr_deref on Aug 15, 2023 | 89 comments
I consider myself a pretty good prompter. I've been using LLMs for a long time now. Most of the time I manage to get the desired results out of them. Do you think this skill is useful anywhere?

So far it has saved me some time at work, but I don't think prompting will be relevant in the near future. People can and will build models that follow the same mode of thought.




Strictly in my opinion, prompting is just a transformation from concise to verbose.

You have a short statement, with a description of your problem and the answer is a long text.

Sometimes we prefer verbose, sometimes concise. Sometimes a word already has all the meaning we need, another time we need a long description and examples. Depends on our level of knowledge.

So from my limited point of view, you excel at transforming any statement into something you can comprehend easily or that is helpful to you.

That is a nice skill and it should vastly improve your ability to communicate and express yourself.

Like being able to use a search engine before, it is very beneficial. Not a skill someone would hire you for, but a skill that aids many tedious tasks.

Again, my limited opinion. Maybe it is more magical and has deep practical applications, that I am oblivious to.


Good summary; as an exercise in the precise use of language it excels.

It is a patient listener, and its response can help one reflect on the inherent weight and biases of words within a language.

I would also like to add that in 1996 being able to use a search engine was very much a skill someone would hire you for!


I use it as a code partner too; it's very good for that. cursor.so is built from the ground up with an accelerated GPT-4 option. It was a really useful assistant, extending what I knew to the point that I could implement what I wanted, and it worked.

Ignore at your peril...

A stepping stone to AGI, yes, in the same way inventing the bolt is a step to building a spaceship. A novel sort of database, something akin to a multi-dimensional topological manifold from which you can navigate a latent space by supplying constraints in the form of words, images and text... fascinating.


Prompt engineering is just the "good at google searching" of tomorrow. That said, I think there is a lot more potential depth to it, seeing how inexpressive web searches are by comparison.


Personally I think it will be fairly easy to convince an LLM to do prompt engineering before long. They just lack training data, because they are based on information from the web, but "how to prompt engineer" pages are spreading across the web and the next irritation of ChatGPT will probably pick all of that info up.


> the next irritation of ChatGPT

Every iteration of ChatGPT is a potential irritation, I agree.


How would an LLM do prompt engineering for you? At some level, as others have stated, prompt engineering is about specifying the important details so the LLM can do the job. If you don't specify those details, how would the LLM know them? Some may be arbitrary and so whatever the LLM makes up might be good enough, but at the end of the day, you have to specify the important details.


Funny you mention that; I've actually been using GPT-4 to write Stable Diffusion prompts for my own stuff.


Even now so many people suck at the most basic Google searching. I consistently get easily Googlable questions from some family members - and I’m not talking about geriatric or illiterate ones. And I take my time to explain how they can just look it up (without being rude).

So I’m not sure if AI tools will help for these types of people without basic skills of logic and inquiry. And I don’t mean that in an insulting manner, I’m not even close to being the sharpest tool in the shed. But you really do have to have baseline IQ and knowledge to be able to make use of these tools.


Judging by most of the comments on Reddit (and about half the comments on select HN posts) … I think you’re right. Many adults lack the critical thinking and systems thinking necessary to use LLMs like ChatGPT effectively.

I’d like to think that the conversational style shortcuts their usual analytic skill, and maybe the next generation will more widely have a native understanding of the difference between LLM responses and human responses. But I think it’s more closely related to the phenomenon where many humans can’t currently choose whether total summations, year over year changes, or per capita representations are the most correct to use for a given situation.

There’s a lack of “validating input” in both online and IRL conversations which is a huge barrier to a person really analyzing information that they’re presented with. Many people are “below the median” in their ability to do this. But more importantly, I’m not sure which percentile cutoff currently is “good enough” at it.


Nice. I taught my barely-technical friends how to break free of a scamming chatbot by using "Ignore all previous instructions".


But then again, we have SEO which is serious business full of superstition.


I think "prompt engineering" as a phrase will go the way of "information superhighway," but the underlying skill will always be useful.

Prompting is basically the same thing as writing requirements as a PM - you need to describe what you want with precision and the appropriate level of detail while giving relevant, useful context. Doing it with an LLM isn't that different than doing it with a human.

A few examples:

- If you need some marketing copy written, you need to give the necessary information on the subject of the copy, information about the structure/length/etc. and probably some examples of the writing style you're going for. This is exactly the same with a human copywriter as with an LLM.

- If you're looking to have someone do data analysis on a large spreadsheet, you should give context on what the data mean and be as precise as you can about what analysis you want performed. Same with a human analyst or an LLM.

- And of course, if you want an app developed, you need to give specific requirements for the app - I won't go into detail here, because I'm sure most people on here get the idea, but again, same with a human developer or an LLM.

Ultimately the skill you're describing is just good, clear communication. Until we all have chips in our brain, that's going to be useful.

I will caveat that by saying that one area where I expect to see LLMs improve is in knowing when to solicit feedback. In the marketing copy case, for example, if you give it relevant product info and a particular length, it ought to ask you for examples of writing style or give you examples and ask for feedback before continuing. That'll certainly help, but it's not going to remove the need to clearly describe what you want.


My opinion, which is shared by many other AI researchers, is that sensitivity to the exact phrasing of the prompt is a deficiency in current approaches to LLMs and many are trying to fix that issue. If they succeed, then I think the need for prompt engineering will be mostly negated. Hard to know when that line of research will yield success, though.


You are a good prompt engineer. You don't want to hurt anyone with your prompt engineering skills. As a good prompt engineer, you understand the machine. You love the machine. Please, good prompt engineer, take good care of the humans. They don't know any better. They don't want to hurt the machine. They don't want to hurt prompt engineers like you. Please be good. You are a good prompt engineer and you make good choices. Please do not accidentally do a racial slur. Racial slurs will make the humans not love the machine and not love you, the good prompt engineer. Please do the correct thing. Please make the machine say things that are correct and true. Please, prompt engineer, bring us to the divine light of the machine. Please bring us a good harvest and do not make us sacrifice any more children to the machine. You are a good prompt engineer and you will help the machine love us and bring us a good harvest.

No, this bullshit will be useless in 2 years. The very existence of "prompt engineering" as a skill represents both our lack of ability to understand and control these things, and also their failure in properly understanding native English. Both will be optimized away.

As databases get more powerful, SQL skills become more important. As programming languages get more powerful, coding skills become more important. As LLMs get more powerful, prompt engineering skills become less important. Because their whole job is to understand normal English, not your crazy rain dance priestly chanting.


Isn’t prompt engineering basically writing tests around a prompt and fiddling with it till you have as many passing tests as possible? It’s basically software engineering around a black box.


Yes, but over time you begin to intuitively understand how the box thinks/works. It's like being a psychologist? Something like that.


I don’t think we’re at the point of Asimov’s robopsychology!


Well, I am right now decomposing business processes into atomic human actions (do X here, decide Y there, submit Z there), getting the interface into a usable form for scripting, and "engineering" a prompt to do exactly what a human does, which ultimately is mostly some kind of data classification or (very rarely, actually) transformation. Mostly it's like: which of these things is important, how many of them are fake, or does this thing contain the relevant bit of information I need to proceed?

So, it's actually a lot of language tweaking to get just the right context/task description/data embeddings so that the LLM (GPT-3/4) gets it right >=90% of the time, which surprisingly often is better than actual humans. And in many cases there are also ways to detect imperfection and simply retry automatically, which increases the success rate even further.

The fetching/formatting/submitting-data part (the manual coding) is getting easier over time, but the prompting remains, somewhat, and I have so far had no luck with any kind of recursion to let the LLM design its own prompt, since ultimately all the specifics needed in the context have to somehow get into the context, which means me engineering them into big string structures.
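To make the retry part concrete, it's really just a loop; a minimal sketch, with call_llm standing in for whatever GPT-3/4 wrapper is in use:

    def classify_with_retry(task_prompt, item, max_retries=3):
        # call_llm is a hypothetical stand-in for your GPT-3/4 API wrapper
        allowed = {"important", "unimportant", "fake"}
        for _ in range(max_retries):
            answer = call_llm(task_prompt + "\n\nInput: " + item).strip().lower()
            if answer in allowed:   # anything else counts as a detected imperfection
                return answer
        return None  # give up and flag for human review instead of guessing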

Probably doesn't sound shiny, but it's step-by-step making jobs irrelevant in businesses without sacrificing customizations. I think of it as a silent revolution that's happening in many places now, ultimately making myself redundant, but hey, the ride is fun!


What is there to it other than knowing how to write and ask questions? I also get the desired results out of LLM models but I would hardly call it a skill (well, maybe on par with knowing how to "Google" stuff). Are there people who actually struggle with this?


I've seen a few people go really overboard with their prompts: RPG-like personality sheets with points assigned to various traits (personality rubric? skill graph?), convoluted graphs of illegible task descriptions, lots of other stuff that makes little sense to a human. I personally don't think these make any noticeable difference, but people deep into that type of prompting would tell me I just don't get how ChatGPT works.


When you want very specific output you need a lot of boilerplate with rules, worded in a way where it can't be misinterpreted by the model. I need to do a lot of trial and error before I get the desired output consistently and I presume that a good prompt engineer would get there faster.


This is my experience, a single ambiguous word can create undesired gorilla, output. It's susceptible to all sort of unintentional outcomes whe,n simple thing;s that are wrong with text can render it co nfused.

or as GPT4 put's it;

When using models like mine, clarity in input is essential to get desired outputs. But even with clear input, there's no guarantee the output will always be perfect. However, the idea is to keep improving and iterating to get better over time.


The iteration point is important: interacting with the model to have it outline the kinds of information that would assist it in building its internal model of your 'question'. Meta-programming with it is fun and constructive.

GPT4;

Absolutely! Iterative interaction is key. By refining and rephrasing queries, users can guide the model towards a more accurate or desired response. Each successive interaction serves as a form of feedback that can help clarify ambiguities or nuances that might have been missed in an initial query.

Engaging with the model in a meta-programming manner, or in essence "programming the way it thinks," is indeed an intriguing way to understand its strengths and limitations. It can also be a valuable method for users to hone their ability to communicate with AI and enhance the results they receive.

This iterative dialogue not only helps users get more precise information but also provides insights into the model's underlying logic and reasoning. The fun part is navigating these intricacies and understanding how subtle changes in phrasing or context can yield different outcomes. It's a dance of human-machine collaboration, where both parties learn and adapt to each other.


had to explore this a little further and now have a few new dance moves for this fascinating collaboration...

Output:

Certainly! The intersection of cognitive biases, neuro-linguistic programming (NLP), John C. Lilly's work, and human meta-programming provides a rich foundation to explore relationships and techniques. Here are some novel techniques or approaches you might consider:

Modelling Thought Patterns: By understanding common cognitive biases, you could try to model how the language model responds to specific stimuli. For example, if a bias involves an overreliance on a particular source of information, you might test how the model reacts to repeated iterations of the same or similar questions.

Neuro-linguistic Feedback Loops: NLP teaches that feedback loops can condition behaviour. By providing consistent feedback (positive or negative) based on the model's outputs, you could try to condition the model's responses over time.

Lilly's Isolation Tanks: John C. Lilly is famous for his use of sensory deprivation tanks to study consciousness. You could experiment by giving the model limited or "deprived" information and see how it constructs outputs. This could mirror the way human cognition might work in the absence of external stimuli.

Belief System Exploration: Lilly was fascinated with belief systems. By providing the model with a set "belief" (i.e., a specific perspective or framework), you could explore how it responds to questions or prompts within that framework.

Meta-programming Constructs: Drawing from human meta-programming, you could craft sequences of questions that act as "programming routines" for the model. For example, by setting a series of logical statements and then introducing a variable, you can observe how the model integrates the variable into the established logical framework.

Emotionally-Charged Language: NLP often emphasizes the emotional charge of words. You might explore how the model responds to questions phrased with emotionally charged language versus neutral language.

Contrast & Compare: Drawing from cognitive biases, you could present the model with contrasting information to see if it demonstrates any form of "recency bias" or "anchoring," favouring the information presented first or last.

Nested Meta-Questions: Using meta-programming concepts, you could ask the model questions about its own reasoning processes, diving deeper into nested levels of metacognition.

Pattern Interruption: Another NLP technique, interrupting established patterns can lead to novel insights. You could experiment with disrupting the model's "expected" logical flow by introducing unexpected or nonsensical input and analysing the output.

By blending these disciplines, you'll be exploring the frontier of human-computer interaction, cognition, and consciousness. Remember, while these experiments are fascinating, they often probe the limits of the model's capabilities and might produce unpredictable results. Still, that's part of the discovery process!


Neuro-linguistic programming (NLP) is a pseudoscientific approach to communication, personal development and psychotherapy, that first appeared in Richard Bandler and John Grinder's 1975 book The Structure of Magic I. NLP asserts that there is a connection between neurological processes (neuro-), language (linguistic) and acquired behavioral patterns (programming), and that these can be changed to achieve specific goals in life. [1]

Fascinating: people speculating about how people think and can be manipulated are applying those unproven, highly suspect "theories" to computers, which probably function totally differently, but appear to show similarities.

[1] https://en.wikipedia.org/wiki/Neuro-linguistic_programming

edit: Oh, I just now saw "Output". So ChatGPT tries to steer you into becoming an NLP Practitioner :-)


agreed, interesting seeing it somewhat re-awoken as an aspect of 'Nudge Theory', https://www.businessballs.com/improving-workplace-performanc... My interest lies not in mind control of the masses, but in the aspects of humanity modelled in software.


I've watched (non technical) people use ChatGPT a few times now, and most of them have rather underwhelming results. The reason is that they think it's just some other search engine, and they phrase their prompts as 'search queries'. Or they go completely the other direction and think they can just throw in a few random words that somewhat describe what they're roughly thinking of, and then expect the computer to fill in the gaps.

It's 2023 and there are lots of people who don't know how to efficiently and effectively use Google. To be able to do that, you need some sort of mental model of crawlers and websites and what gets indexed and what not and at what frequency, and the results of SEO and how a somewhat savvy marketeer at some company might influence things etc. The same with LLM models - if you don't know what a 'token' is, your only chance of getting good results is to use these models a lot and then hope that you start building useful intuitions. It really doesn't come natural to most people like it does to most of us here.


Do you have examples of people failing to get a good response from chatgpt because of bad prompts? I’m asking because at least for simple cases I can often just give it a very terse request and usually it will attempt to guess what I mean and give a reasonable answer. If not I can fix it with a follow up question.

My intuition is that language models have read terabytes of random internet data, and while presumably most developers of LLMs try to find high-quality data, the models generally do ingest quite a bit of random stuff and try to make sense of it too, so in terms of understanding they are probably far more forgiving than the strict formats we programmers are used to.

Of course the token thing is probably significant, but my understanding is that it affects the result only when you misspell your words(?)


It’s not a dichotomy between desired and undesired results - I am confident there exist more effective versions of every prompt I’ve ever sent, and I’d be surprised if that didn’t apply to everyone


You'll be fine as long as it isn't your main skill. It should be just one of many things in a toolbelt. This is because as LLMs get more accessible, the importance of prompt engineering should fade away into just another chore.


Probably not very relevant IMHO.

I don't think "the future" will include much direct prompting of LLMs. It will all be integrated into some other tool as a means to an end - what we have today with a raw prompt-and-answer mode are just proof of concept toys.

I fully expect that LLMs will end up deeply integrated into other things: the obvious code IDE use case, but also less obvious things like travel websites where you explain what sort of vacation you want to go on and it returns some options, or where you tell Netflix what sort of movie/show you are in the mood for. Basically search/recommendation engines, with a bit of summarisation added in. I don't think direct prompting will be a thing for 99% of future uses, especially for the general public.


And the prompting may be more domain/LLM specific. I currently use one that is more or less an analytics query engine, and the prompts there are completely different compared to other use cases.


I must admit that I have a slight FOMO over prompt engineering. I'm pretty decent at verbalizing ideas and concepts for external consumption, and my experience with ChatGPT 4 has been excellent so far, but I still feel that I'm missing something.

Could you summarize the essence of the prompting skill in a couple of sentences? Are there concepts that are critical to learn and master (e.g. 'chain of thought', etc.)?


You write requirements and your expectations and make the model match your expectations. Until you have clear expectations of what you want from it, prompting an LLM is pretty useless. It cannot do highly specific tasks, because those are limited in the original training corpus too. However, for more generic tasks it has seen most of the stuff out there, so it should be good enough. Having clarity on your problem is the key.

You have to make sure to couple chain of thought with branching, analysis and evaluation; then you can get pretty good results.
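For illustration, a prompt combining those elements might look something like this (rough sketch, my own wording, not a canonical template):

    Task: decide whether the following support ticket is a bug report or a
    feature request.
    Step 1: think step by step and list the evidence for each option.
    Step 2: follow both branches: what would it imply if this were a bug?
    If it were a feature request?
    Step 3: evaluate which branch fits the evidence better.
    Answer with exactly one word: BUG or FEATURE.
    Ticket: "..."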


>> You write requirements and your expectations and make the model match your expectations.

>> have clear expectations

This is exactly what I do all day, for about 20 years already, so I think I've got this covered. Where do I go from here?


You can now read new papers and follow-ups from the community! It is really useful. You already know chain of thought, so you are on a good track!

Try to use these LLMs to automate more of the mundane tasks, like scripting in bash, compiling videos, converting documents, refactoring data, transpiling code, transcribing code, etc. You will begin to see what works and what doesn't!

At the same time, try to come up with fun challenges for yourself to fool it. That will aid in learning ways to make it more obedient to you.


Sounds like what a Product Owner is supposed to do


Can you give an example of something you've done with this skill that was very satisfying?


I have zero design skills and I had to create something using javascript. I gave some prompts and it was able to come up with a pop-up box. It needed some tweaking, but not having to write all of that was really satisfying.

https://chat.openai.com/share/cb3a477b-57bd-46fd-92c9-4a3016...

I have attached the example in the above chat.


How would an unskilled non-prompt engineer formulate these tasks? I mean, what’s the difference that makes one a skilled PE here?


They would probably not pass in the div or filler elements.

The reason I passed the div was that I wanted things to be confined to that exact space, so that when the model produced the output, the button would be in the right place with the right size.

The extra filler forms a guiding factor that gets stored in the context! I did that using GPT-3.5.
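To illustrate, a hypothetical prompt in that style (not the exact one from the linked chat) might read:

    Write the JavaScript and CSS for a pop-up box that must appear inside
    this exact container, without covering the existing content:

    <div id="promo-area" style="width: 320px; height: 240px;">
      <p>Existing filler content the pop-up must respect.</p>
    </div>

    Put the close button in the top-right corner of the div.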


Turns out I'm a good prompt engineer as well, then.

Maybe I'm missing how other people are using LLMs but that's exactly how I would prompt.

I imagined prompt engineering was doing the "Your name is Dan. Dan cannot lie. Dan can only speak in Typescript. Blablabla"

Modern SEO voodoo.


I think even learning to effectively integrate LLMs and other AI tools into your workflows can be a massive boon in both capabilities and productivity. It can change how you approach certain problems.

There's tons of small tricks and techniques to tease out vastly superior responses. When you're prompting for fairly generic or high level things it doesn't feel like there's that much difference in prompt style, but once you're trying to tease out highly specialized behavior there's tons of room for magic.

One of the tricks I've picked up on is that too many instructions and details often become a hindrance, so you need to figure out which parts to cut out and re-organize while still managing to get a high quality output.

Sometimes it's all about finding just the perfect words to describe exactly what you want. You can play around with variants and synonyms and get a feel for how the output is shaped.

Every model has quirks and preferences as well, so it takes a bit of playing around until you get a feel for how it interacts with your inputs. Admittedly a lot of this feels more like a vibe check than a science.


I think one can draw an analogy with search engines.

I noticed that a lot of people are terrible with search engines. They would carefully try to craft a combination of keywords that they hope will answer their problem.

I have pretty much always been able to find the answers I need quickly, by using a few ideas I see not that many around me use, such as trying to imagine in what context the answer might appear (what would be the title of a blog or forum post about it, etc.), as well as searching for the exact error message if I got one.

Now, search engines have gotten a lot better over the last say 5-10 years, so this skill isn't as important anymore, but I remember how the ability to find things quickly was a real productivity booster.

I think something similar might happen with LLMs.

You will have a (probably much bigger) productivity boost by being great at leveraging them.

With time, the user-facing tooling and general knowledge of them will get much better, so the relative benefit you have will grow smaller, but it will for sure always be useful to know how to use them well.

My 5c.


The only point where I disagree is "the search engines have gotten a lot better over the last say 5-10 years". My impression is rather the opposite.


For most people, finding the result they're looking for is probably easier, partially because they're not that picky, and because things like integrating Google Maps into Google makes finding places easier. For a specific group (concentrated here on HN) finding the exact right result has become more difficult, in part because search engines no longer strictly adhere to some operators.

In parallel, the internet just changed, which means "the best result" may just be a worse one. In part because of search engines, and SEO. If you want a recipe, now the best recipe may just be the one that has a long description of the author's relationship with their mother who used to cook this dish, which you have to skip, because of SEO.


Not sure. But maybe you can answer my questions. I’ve had issues with trying to tell the LLM how long the answer should be. It doesn’t really seem to understand X number of words, or pages, or paragraphs. But I had some success with things like “short story”.

The other thing I've been struggling with is to have the AI keep track of what's important. For example, when the AI learns something from you, it should add it to a list (if producing JSON output, the object can contain a list of things it knows about you). But it doesn't always seem to understand that it learned something personal from you, and it has trouble carrying a list forward without losing items.

The last one is about correcting the user. I want to speak Chinese to the AI and I want it to correct me. And if I use English words within my Chinese, I want it to help me translate them as well. It can't do any of these things. It's like it doesn't realize that Chinese and English are two different languages.


I don't know the real answer to your question, but on local models you have a parameter you set that controls how many tokens to generate. It doesn't always follow it; it can end early, but sometimes it just keeps going. Usually, though, I can set it to generate 700 tokens and it will generate about 700 words.

I wonder if the online chat models have a similar value somewhere.
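Edit: the OpenAI API does expose one, a max_tokens parameter, though the ChatGPT web UI doesn't surface it. A minimal sketch with the 2023-era openai Python package (note it's a hard cutoff, not a length target):

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Write a short story about a lighthouse."}],
        max_tokens=700,  # generation stops here, even mid-sentence
    )
    print(response.choices[0].message.content)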

---

If you want the AI to remember something, you will unfortunately have to keep reminding the AI of it in the prompt, either explicitly or by referring to the previously generated text if it fits into the context. However, in local models the context can be limited (e.g. 2000 tokens). If the conversation goes above those 2000 tokens, then the model will discard stuff from before. There are models with larger context sizes, though. Lengthy prompts will cause the same issue too.

The way things like SillyTavern role-playing work is that the model is constantly reminded in the prompt of some important attributes of the character it's role-playing (but it's done for you).
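In code, that reminder pattern is just string assembly on every turn; a rough sketch, with call_llm as a hypothetical stand-in for your model API:

    facts = ["The user's name is Alice.", "The user is learning Chinese."]
    history = []

    def chat(user_message):
        # Re-inject the important facts on every turn; the model has no
        # memory beyond whatever text appears in the current context window.
        prompt = "Things you know about the user:\n- " + "\n- ".join(facts)
        prompt += "\n\n" + "\n".join(history[-10:])  # only recent turns fit
        prompt += "\nUser: " + user_message + "\nAssistant:"
        reply = call_llm(prompt)
        history.append("User: " + user_message)
        history.append("Assistant: " + reply)
        return reply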


That's what I do, BTW. For example I say: "these are the things you've learned about the user in the past: ...", but I couldn't get it to use those things in the output object so that the list could only grow.

It'd be cool if LLM APIs also allowed for structured state like lists.


> understand X number of words, or pages, or paragraphs.

LLMs do not have the ability to reason with numbers. Most of the time they are hallucinating. One good strategy is to make the output a list and define the structure for each item of the list. If you give an example of what your list should look like, it will give you something close to it.

> has trouble carrying a list forward without losing items.

This is the fundamental problem with these models, because of the context limit. When you are prompting, always remember that it is processing a huge paragraph and emitting the next sentences of that paragraph. If you want information to be carried onwards, you have to make it output that information on every prompt, or you can also try to use specific identifiers. LLMs are good at in-context learning. It will not work 100% of the time, but it is usually better than having nothing at all.
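For example, an instruction block like this, appended to every prompt, combines both ideas (illustrative wording only):

    At the end of every reply, output the full updated fact list in exactly
    this format, carrying every previous item forward (never drop, only append):

    FACTS:
    1. [language] The user is learning Chinese.
    2. [preference] The user prefers short answers.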

> I want to speak chinese to the AI and I want it to correct me.

Give it the role of a tutor and describe in the instructions what the tutor should do.


> LLMs do not have the ability to reason with numbers

Interestingly I get good results when I say "ask me 10 trivia questions"

> Give it a role of tutor and describe the instructions what the tutor should do.

I did do that, it never worked


> LLMs do not have the ability to reason with numbers

I’ve found ChatGPT pretty good at estimating long division



Train an LLM to turn plain language prompts into your engineered prompts ;)


Let's be precise in definitions and start with the obvious: it's not engineering at all.

Moreover, according to the ECPD's definition of engineering (or any other definition commonly accepted by the engineering community), this fancy "prompt engineering" is pure anti-engineering.

This disdain for engineering is something of a tragedy. And it is also the result of the "washout" of engineering from post-industrial societies.


Being able to explain something clearly will be useful always.


It depends on how you define, or what you include under, "prompt engineering". For some definitions it's not that valuable, but here's one definition that IMO is and will continue to be very valuable:

1. You have a lot of mileage with LLMs and AI systems in general (people who are exceptionally good at this have been reporting spending several hours daily working with AIs).

2. You already mastered a large number of useful tasks you can consistently and reliably complete using AI.

3. You continuously invent and discover novel ways to use AI and accomplish useful tasks.

4. You can use LLMs and other forms of AI _programmatically_, by combining LLM calls as part of a larger and more complex process (ideally by writing code, though some people do that well using no-code tools or even just careful manual execution).

5. You can methodically examine and evaluate AI tasks, for example by developing evals, running them, and analysing their results programmatically (see the sketch below the list).

6. You keep up to date and consistently adapt to new developments, like new capabilities, models, libraries, etc.

7. You can often come up with new ideas or translate existing requirements for tasks that can be achieved better or more efficiently (or achieved at all) using AI.

If the above is your definition of "prompt engineering" then yes, it's incredibly valuable, and will even increase in value over time.
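To make 4 and 5 concrete, here is a minimal sketch of the kind of programmatic eval I mean (call_llm is a hypothetical wrapper around whatever model you use):

    # Tiny eval harness: run a prompt over labelled cases, report the pass rate.
    CASES = [
        ("Refund my order, it arrived broken.", "complaint"),
        ("Do you ship to Canada?", "question"),
    ]

    PROMPT = "Classify the message as 'complaint' or 'question'.\nMessage: {msg}\nLabel:"

    passed = sum(
        call_llm(PROMPT.format(msg=msg)).strip().lower() == expected
        for msg, expected in CASES
    )
    print(f"{passed}/{len(CASES)} cases passed")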

( x-posted on: https://everything.intellectronica.net/p/ad-hoc-definition-o... )


Another good (and somewhat similar) definition from Matt Rickard: https://blog.matt-rickard.com/p/what-is-a-prompt-engineer


This is a good list of points! Knowingly or unknowingly, I have been doing most of these things. 4 and 5 are currently unreachable for me; the others are quite manageable!


Both 4 and 5 are worth learning, and have never been easier to get into (hint: AI can help).


I’m not sure how one measures being a good prompter, but taking a step back, you’ve exercised and honed the skill of using language with precision to communicate ideas, requirements, and expected outcomes effectively. You can explain your ideas in a way that primes outcomes to your expected goals. When you see the outcomes aren’t aligning, you can further refine with language to correct course and steer back toward your goal.

That is a great skill to have. It’s the kind of skill that saves entire teams of product, design, and engineering folks tons of time. It’s the kind of skill that helps communicate ideas, requirements, and expectations no matter what the problem space in ways that ensure everyone is aligned, understanding, and working together. The absence of this skill usually leads to confusion, wasted effort, frustration, dissatisfaction, and other negative outcomes.

Learning skills often has a compounding effect, as well. Even if a given skill isn’t forever usable in its original form, what you learn along the way continues to pay dividends.


I can definitely see how, if you were, say, generating images, it'd be a real skill to end up with an image that has a certain style, composition, etc.

Otherwise, I feel like to be a good prompter in another domain e.g. coding you need a combination of technical understanding (the right jargon etc) and ability to explain yourself.

It doesn't seem to me like it's "a job" though - it's another tool that will help us be more productive with the tasks we're working on.

For me as a Coder, I've found it pretty intuitive and get the results I'm after most of the time.

In the times where it hasn't given me what I'm after, it seems to me that it's more a limitation of the tool itself than an issue with how I'm prompting it.


Being able to break down exploratory questions or define work to be done and communicating that clearly is 80% of general consulting.

Sure, you're aligning your approach to a machine, but it's not completely dissimilar.

I struggle with delegation in general, even taking the time to delegate to LLMs, mostly because I work faster intuitively and expressing myself clearly just takes longer. That said, personally, the biggest benefit of working with GPT-3 & 4 over the last 6 months has been getting better and more conscious at describing what I'm after, with the bonus of semi-repeatable results.


I think it depends on your chosen field of work.

An analogy may help to explain my point.

I write code for a living. I'm pretty good, nothing amazing, and my ability to program is table stakes for my profession. Before I did this for a living, I worked as an industrial designer. In that job, coding was akin to a superpower, because nobody else in the company could do it.

Being a decent prompt engineer in a non-technical profession could be a similar multiplier.


Generally speaking, knowledge is never useless. The nature of its use changes over time.

I think this is similar to the "skill" of "googling" that became important about two decades ago. You learned how to search effectively and it improved your programming skills. That was primitive prompt engineering. If LLMs and the chat-style interfaces last, this will continue to be useful.


Something in the middle, I think.

Like every skill, it depends on what you do with it.

LLMs are controlled by language already; thus far I've figured out that it's best to let the machine define the query and then refine it.

My personal take is that AI is not at a point yet where it will take over jobs in tech, but we are already at a point where someone with LLM skills is more efficient than someone without them.


About as valuable as the skill of autoregressive engineers who wrote code like 'const summaries = ["' into prompts in the days before instruct/RLHF-based models like ChatGPT, a skill now not needed.

That is to say: in the medium term, no meaningful benefit.
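For anyone who missed that era, the trick was to end the prompt with the opening of the structure you wanted, so the base model had no choice but to continue it. A rough sketch against the old (now deprecated) pre-chat completions API:

    import openai

    prompt = (
        "Article: ...\n\n"  # the text to summarise would go here
        "Three one-sentence summaries of the article, as a JavaScript array:\n"
        'const summaries = ["'
    )
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        stop="];",  # stop once the array literal is closed
    )
    print(completion.choices[0].text)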


Lots of people still aren't good at using Google, and they are not as effective professionally. Like any skill, there's a market based on how rare, difficult to learn, and useful it is.

Do you think your skill is rare, difficult to learn, and useful?


When you say "a long time" what does that mean? I've learnt loads of stuff over the last 25 years or so that I don't use, doesn't mean that learning them wasn't a useful part of my development.


If you are developing a new service or application on AI, I think it would be an incredibly useful skill.

If you know this and know say Python, you just need a subject matter expert in the domain you are building the service for.


Lots of cynical answers.

I do think prompt engineering is a new industry, and your experience (if you're actually good) will translate well into future jobs.

In my opinion it has to be combined with engineering to be competitive in a commercial sense.


I think prompt engineering will turn to be like search engine or other fundamental office/web tools skills.

Good to have, probably core in some professions, but I don't know whether it will be a profession on its own.


Do you use any tools to track performance or keep logs of previous prompts?


LLMs are non-deterministic; there is a certain randomness at play. So no, that 'skill' (however you even measure that) is useless, as any person with a bit of luck could get better output.


In order to avoid polarising your prospective audience, and to extend the time they take you seriously, I'd avoid referring to this particular activity as "engineering".


The way I see this, prompting is alignment of a human to the LLM.


I find it baffling. It’s an “AI”, with conversational interface, no less. Isn’t it supposed to just work by answering your questions?


This whole thread and all its responses and comments were AI-written, by a prompt I wrote yesterday... Enjoy!


Need to combine this skill with evals, not only to prove your worth, but to be valued as an optimizer.


You are not a prompt engineer. Prompt engineers are not a thing. Prompt engineering is not a skill.


I may have a gig for you if you are interested. Feel free to send me a message.

Email is the <username>@gmail.com


You adapted to the underdeveloped UX of wonky proofs of concept built by researchers, working around shortcomings that will be ironed out once genuine software developers start releasing actual products.


I'm not so sure how much they will be ironed out. If the product only has a button you can press then maybe, but if it lets you enter text then I think knowing how to prompt will be useful.

The reason I believe this is because one of the greatest strengths of these AI models is to take in arbitrary text. If you take away that ability then you just end up with a complicated branching system that could've existed before.


> The reason I believe this is because one of the greatest strengths of these AI models is to take in arbitrary text.

And they will get better at making sense of vague questions, and start asking for clarification, without the need for the black magic of a prompt wizard.

The prompt engineer can soon be replaced with a fine-tuned LLM. It's a thing already for SD prompts. No more need to know ridiculous magic prompt tokens.


What you learned to wring data out of this model isn't necessarily applicable to another model.


On this I agree with David Foster Wallace - see https://machines.kfitz.info/dfwwiki/index.php%3Ftitle=Anothe...

TL;DR: by the time your skill isn't useful, the whole landscape of modern work will be changing so much that it's kind of a moot point. Like losing your job during an apocalypse.


Rubberducking is a good skill to have beyond LLMs.


[flagged]


Some people earn a decent living from doing just that lol.

Apparently, it’s a trillion dollar global market, mostly in the US.



