Ask HN: Is prompt engineering just snake oil?
199 points by wg0 10 months ago | 196 comments
Is prompt engineering snake oil? Engineering, as the word has been used throughout history, is about observing the clear limitations and capabilities of something and then "engineering" things around them. "Engineering a compiler", for example, is all about knowing the underlying processor, memory, and other characteristics, and then engineering a solution that converts a text notation into a stream of instructions that strike a balanced tradeoff.

But with LLMs, no one knows their inner workings once they are trained. The other day, regarding the Bark model[0] for text to speech, the team itself had the following to say about the details:

> Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on Discord! [laughter], [laughs], [sighs]....

So that's the team itself not knowing what their model is capable of. How, then, is prompt engineering any kind of engineering at all?

[0] https://github.com/suno-ai/bark




Think of it more as "prompt-fu", akin to "google-fu" - or how to google. You do have to know how to talk to an LLM, and it will be important. But it's not so much engineering as we've traditionally come to think of it in software. I wouldn't get hung up on the name, but just see it for what it is - a bad name for something we'll need to learn to be effective. It won't go away entirely, but it will become less important as LLMs and AI get better.


LLMs are very quirky. They need the correct context to produce something good, and it's often surprising what that needed context is. OP, just play around with Midjourney and you'll soon realize that it is a very real thing.


100%. People who have not tried to interact with these models don't understand there is actually quite a lot of skill in navigating the latent space.


It really should be just an intermediate phase. A proper, solid control system must be created. First you have Stable Diffusion, then you create ControlNet - and that is just the beginning.

This is also one of the reasons why the problem of transparency ("but why and how does it work?") is so important in Machine Learning: allowing control.


This sounds like a joke I heard decades ago when I first worked with LISP programmers. When I commented on some unexpected result, one of them said "ah, you're looking for the DWIM command..." "What?"... "The Do What I Mean command - they're still working on it..."

Decades later, and it looks like everyone is still working on the DWIM command...


Some things will be easier, but it's still a lot like programming. You often need to do a lot of setup to accomplish a task. For some things, there's just no way around creating a specific context.

In many ways, it's the difference between using a library like lodash and writing the methods yourself. You still need to understand what's being done; a library just handles more things automatically for you.


Adding a working control system doesn't take away the fact it requires skill though.

It seems like people think that one day they will be able to give a vague prompt and generate exactly what they were looking for, which I don't believe will ever happen. It feels like there will always be some skill in navigating the model.


> there will always be some skill

There is a world of difference, in terms of the control required, between just having to write "Those decorations where I marked the area, make them a bit more Hans Holbein as opposed to Klimt", and knowing patching tricks so that when you ask the engine to draw a 'T' you are not served a 'B'.

This latter example - ask 'T', get 'B' - I saw from Midjourney only a few hours ago, in the public showcase. I have examined several prompts and results, and very often they overlap only partially - the engine is like a wild horse.

That this is not intrinsic, but just a current stage, is only part of the story: the user will want a tool that allows the production of exactly what the artist had in mind (not just "anything nice as /inspired/ by our suggestions"). That is the normal form of the product in its "ready for shipment" state.


> That this is not intrinsic, but just a current stage, is only part of the story: the user will want a tool that allows the production of exactly what the artist had in mind

Communicating to another entity (even a human) “exactly what you had in mind” (or, more to the point, so as to get them to behave so as to produce “exactly what you had in mind”) is a skill which involves understanding of the audience, understanding of the mode of communication, and understanding of the field about which you are communicating. “Prompt engineering” is just a fancy name for the particular application of that set of skills in the domain of getting desired results out of an LLM. Whether or not the terminology ends up being durable, the skill isn’t going away, as long as we are using language to communicate with models.

(Once we get the ability to just plug our brain into an AI, will there be a skill of focusing properly so that the mind-reading tech reads the right thoughts? Probably, and that will be the new "prompt engineering".)


I disagree it will ever be that easy for people to generate specifically what they’re looking for.

I believe there will always be some amount of navigation of the latent space required, and people who don’t understand how to navigate it will struggle.

To use your analogy, even if the horse is tame you still need to know how to ride it.


> that easy

Can you code? Can you illustrate? Can you build? Piece by piece, you sketch, implement, hone, finetune, improve, correct, retry, go into detail, expand... The ease of the process is relative to the tools adopted.

Of course there has to be knowledge behind it. Of course you have to know your tools. But "wild horse" and "diligent consultant", axe and scalpel, require different degrees, areas, and modalities of competence.

A well built tool will build on the competences already natural for the professional (e.g. a graphic designer will be able to sketch and to instruct the machine in a reasonably standard language). Of course the better you know your tool, the greater the effectiveness. But currently, the systems can be strongly unreliable - they do not respect the request.

The user will have to learn to deal with a machine taking natural language as an input just like any manager has to know how to speak to personnel to obtain what is wanted. On the other hand, the implementation of "virtual personnel" which is reliable is a responsibility of the tool developers.


My point is that even with improved tooling I don't think it will be accessible to a layman. You will need to develop some skill to wield the tool.

It seems like you agree based on parts of your comment like "A well built tool will build on the competences already natural for the professional".


If by "some skill" you mean the basic ability to type vaguely coherent English, than I agree.

If anything, the recent generative art advancements coming out (Firefly, MJ v5) would seem to refute your point. There was a time when diffusion models required some level of knowledge and skill to use. To get high quality output, it was in your best interest to learn about the different types of samplers, upscalers, hypernetworks, textual inversions, understanding denoising, etc.

Now? I'd say the millions of active users on MJ would seem to prove you wrong.

As far as language models like GPT, we've already seen the average level of skill needed come down with each successive release, ChatGPT is easier to use than GPT-3 which is easier to use than GPT-2.

Furthermore, devs in the space (Stability, OpenAI, etc.) could refine/train pre-processing LLM models that transform more accessible amateur prompts into more professional prompts in real time.
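Something like this, roughly (a sketch only - the model name and system prompt are placeholders, and it assumes the openai chat-completions Python client):

    import openai

    def refine_prompt(amateur_prompt: str) -> str:
        """Rewrite a casual image request in the detailed style that
        diffusion models tend to respond to best."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # placeholder; any instruct model works
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's image request as a detailed "
                            "art prompt: subject, style, lighting, composition."},
                {"role": "user", "content": amateur_prompt},
            ],
        )
        return resp["choices"][0]["message"]["content"]

    # refine_prompt("a cool castle") -> something like
    # "A gothic castle on a sea cliff at dusk, volumetric lighting, ..."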


I've used MJ quite a lot. It's easy to create something. It's quite difficult to create a specific thing.


> pre-processing LLM models that transform more accessible amateur prompts

Dangerous! It looks like instead of driving directly, you are tapping on the shoulders of the one at the steering wheel.

Which is useful for children, and frustrating for drivers.


Just like you wouldn't be able to drive, let alone start, a Ford Model T.


Especially with image generation and the like, I can believe that there is a certain amount of learning and practice involved in getting exactly the results you want and in understanding how different inputs affect the output. And then also in picking the right model for the right job.

Not that it is engineering, but there are a lot of uses where that label doesn't fit either...


A great example of this hit me last week when a demo for prompting pixel art pointed out you need to date the video game you’re referencing. You might know you want “final fantasy 6 pixel art style” but there are many remakes of it.


Prompt engineering has been fascinating to follow at my job. People believe that they will be using "prompt engineering" as a new way to verbally interact with peers. As if they have now found the Rosetta stone for their own quirks.


Next you're going to say social engineering isn't real engineering :P


I’m saving this one, thanks.


Google-fu no longer works. It's closer to prompt engineering.


When software first came out, we electrical engineers didn't think it was "engineering" either; basically it was all people creating imaginary machinery for other people operating in their imaginary world.


It's okay; software engineering isn't real either.

(In many countries, the word "engineer" is regulated -- you can't call yourself an engineer without professional qualifications and oversight.)

https://en.m.wikipedia.org/wiki/Regulation_and_licensure_in_...


Yeah, pretty much this. Only a subset of the actual activity that falls under the umbrella of "software engineering" could be called engineering in the traditional sense.

The engineers working on aviation software and other low-level, real-time, performance critical systems probably need to use quite a bit of maths and are closest to doing what we traditionally consider engineering.

All those teams working on web apps and advertising...yeah not really. This isn't to say what they do isn't complicated or difficult, but it's not really engineering in the classical sense insofar as it doesn't (typically) require the use of continuous mathematics and deep systems theory. If you're primarily writing systems design docs and performing calculations, you're probably doing some kind of engineering. If you're mostly writing code to spec you're just programming. It's sort of akin to a carpenter calling themselves an engineer.

But hey, this is all symptomatic of a general trend here in the states of the trivialization and devaluing of expertise; people think it's fair to identify as whatever their heart desires regardless of qualification, knowledge, or experience. A world in which people regularly accept advice from total randos on social media is not one that has any strong sense of value around knowledge or critical capacity.


Thanks for reminding us that there are only a few real engineering professions. Reminds me of the Hemingway quote that there are only three real sports - motor racing, mountaineering, and bullfighting - the rest are just games.

No seriously that kind of gate keeping is just ridiculous.


Is it? Are you legally culpable in the case that your software kills or maims somebody? Because a civil engineer is, if the bridge they built falls down. Is there a license you need to practice writing software, that can be revoked and prevent you from writing more software if you're caught writing software with gross negligence, leading to it getting hacked? Because other engineering disciplines are held to those kinds of standards.

Words mean things, and if we want to consider "software engineer" as anything more than title inflation, it should mean more than just a programmer who's full of themselves.

If I shit out a webapp that crashes if you look at it funny, that shouldn't be considered software engineering, because it's a pile of shit that barely works.

If I make a webapp that actually gives useful error messages to users and is robust in the face of failure and is maintainable and someone else would want to touch my code, then we're starting to get somewhere.


A society that elects to apply an existing term to cases in which it doesn't quite fit is a linguistically bankrupt one. I'd argue that, instead of calling every profession "engineering", we ought to come up with more precise distinctions and designations.

Let's be honest, the main reason so many people glom on to the term "engineer" is because (as others have mentioned) the rigorous licensing processes of other engineering disciplines keep supply low, which attaches an inherent market value to the term. People then adopt the term without justification in the hope it nets them a pay raise. We ought to accept that it's OK not to be an engineer. Other professions are also valuable. It's also important to "gatekeep" the meanings of words to a certain extent, lest we accept a language in which any word can mean anything.


It’s also a totally borked analogy. In software the spec is usually the carpentry equivalent of “build a house. Must have door”.

Building a house takes engineering (e.g. structural). It also takes carpenters and plumbers and so on. What’s fun about software is you get to do all of it on one project.


Out of curiosity, what is your definition of "engineering" and "engineer"?

Genuine question, I am not an engineer nor in any sort of role that could plausibly be considered engineering.


I refer to myself as a software developer, but also intensely dislike the gatekeeping around the term “engineer”.


I disagree, maybe if the software industry self regulated like more traditional forms of engineering such as mechanical and electrical, we could guarantee some minimal level of competency in the field and wouldn't have to resort to weeklong interview processes and hours and hours of leetcode and whiteboarding.

With all due respect, if your "engineering title" is preceded by the word salesforce or prompt, you're not an engineer.


It's only after these works that I've found "engineering" is worth a little gatekeeping:

  * The New Plague
  * Have Fun at Work
  * Friends in High Places
  * Design for Prevention
In short, some cynicism is warranted, but not complete and utter disillusionment.


I call myself a programmer. Nothing more, nothing less. While I don't necessarily go out of my way to gatekeep, I find 'engineer' pretty cringy, and 'architect' makes my eyes roll out of my head.

What's next, Software Physician?


They do give PhDs in Computer Science. And they require a similar amount of time (10-years-ish) to get as an MD, plus the salaries are similarly generous.


Lol yes, the people writing spaghetti code at boeing are real engineers and the people writing complex distributed systems at Google aren’t.


In aviation, engineering is basically trying to get 5-6 sigma safety.


Isn't real?

You write about people who sling PHP or Javascript spaghetti?

I have an engineering degree in computer science (I think it is mostly a thing in Eastern Europe: you study a bunch of electronics, math, and computer architecture), and I worked on automotive software, which is, I have to say, fairly regulated - you have to care about things like ISO or MISRA standards in your code.

I don't mind people calling themselves "software engineer" even if they whip up front-ends - but I do mind when someone claims "software engineering isn't real".

;)


In some jurisdictions, Professional Engineer (P.E.) is a protected title just like Doctor of Medicine (M.D.). You are licensed and can be sued for malpractice and lose your license to practice.


Within a lot of projects, quite a bit of engineering needs to be done before starting to code. Engineering is making first models that mock reality, trying them, refining them, making them more performant, and so on. To me, the title "software engineer" summarizes these skills nicely.


> Isn't real?

> I have engineering degree in computer science

For the record, I also have bad news about our shared degree's use of the word "science".


Changing a bunch of bits on a magnetic drive or other media is not like building an airplane that flies or a car that runs! Software engineering is closer to a priest who supposedly can communicate with god than it is to someone who builds a car!

I have a CS degree too and people without it are sometimes much better than me at slinging code

Actually forget people, you should realize how far this is from engineering when ChatGPT can do it better than me :) Ask ChatGPT to build a car though and we will see how far it gets.


>on a magnetic drive or other media is not like building an airplane that flies or a car that runs

I ain't sure what century you live in, but it's not the current one...

If I program some bits on my hard drive and now a stepper motor punches through the side of the mount it's bolted to, or if I say "set the flaps to this pitch in this condition" and that doesn't happen, then something very real, and very bad, happens in the reality we both exist in.

Almost everything complicated that exists today uses electronic control in one way or another. Screwing up the software in that ECU is just as much of an error as using the wrong metal in its manufacture.


Yeah, honestly, maybe I am feeling a bit jaded by a career in software development with Jira and other BS. It doesn't feel like engineering to me :)

If you're doing anything that physically moves something, like a stepper motor, I have a lot of respect for that.


You can absolutely be a "software engineer" in the legal sense, just not in the US.

Here in France, the engineer diploma is a regulated thing, and there absolutely are software-centered engineering schools that are accredited to deliver it, which is how I got mine.

It's in the "Grandes écoles" cursus, which usually follows a more classical STEM-centric cursus called "classes prépa".

As to whether "Software engineering" is a real thing, well on one hand most of it isn't very process-oriented, on the other hand there are such thing as ISO certification for software security, so it's less the individual which is regulated than it is the project in itself when it matters (like critical data-center, or handling of medical data), so it's not like there aren't any regulatory framework in the software world.


> As to whether "software engineering" is a real thing: well, on one hand most of it isn't very process-oriented

Part of the reason for that is that the cost of iteration is much lower, so software engineers can afford a less rigorous process. And the tooling is getting better, with formal specification languages, type systems, and static analyzers.

Rapid prototyping, microcontrollers and simulation tools brought some of the same possibilities in mechanical & electrical engineering. Even the traditionally very conservative space industry is embracing faster iteration cycles (at least, SpaceX is).

"Software on a microcontroller isn't engineering, it's just coding, but replace said software by TTL chips and PROMs and suddenly it's EE".


An engineer is generally liable for what they sign off on. Software engineering isn't anything like that and all products are filled with legalese to restrict liability.


> all products are filled with legalese to restrict liability.

This doesn't seem to apply in the realm I've been working in. We've had to do the exact opposite thing.

In small, B2B software companies you may find that accepting an uncomfortable amount of liability is usually the only way to get your foot in the door. Certain promises have to be made (very carefully!) or no business would occur at all.

For example, if we told our banking clients that "if your front-line app crashes we might have it fixed in 5-7 days" just to be safe, they likely would have little-to-no interest in utilizing our solution. We have to say things like "We will respond within 1 hour and send a polite apology letter to your CTO every time the server makes a weird noise". And, we have to "engineer" our solution so that we have a minuscule chance of hitting that promised target.

Consider that in a <10 person company there is effectively going to be some person who has to own all of the pain that comes along with these decisions. Is that person not an "engineer" by some arbitrary definition? Sure. But for the purposes of "a professional weighing all of the pros/cons and making a decision whose consequences they intend to accept" - I think it's the same effective practice.

Ultimately, "software engineering" is a very broad space with varying degrees of professionalism. At some ends, you will find things that approximate actual engineering. At others, you will find things that appear more academic or artistic. The context is what defines the title of "engineer" for me.


"B2B software companies you may find that accepting an uncomfortable amount of liability is usually the only way to get your foot in the door"

The company would have to accept it. No legal trouble for the developer.


This is a contractual SLA, which is certainly not the same thing as legal liability, and your contract might (obviously I haven't seen it) shield yourself from financial liability on top of your lack of legal liability. (Saying that you will respond within one hour is not the same thing as saying that they can sue you for every hour you're down.)


I don't think a country charging fees to bless a word makes a profession more or less real.


You should rethink that whenever you drive under a bridge or get on a boat. The reason they collapse or sink so infrequently is largely because engineers are accredited and held responsible.


Causality will be tough here. I could also credit the certification regulations just as much. And truth be told, construction techniques are vital, yet we don't value the construction workers in this way.

That is, it is a whole system, and it largely comes down to costs and known building techniques. After all, certified engineers built "Galloping Gertie."

(And I didn't even get into maintenance costs and application.)


Construction techniques are vital, but a civil engineer is employed to conduct the correct QA/QC on those techniques.

This doesn't devalue the effort of the work crew, but it does recognize you need accredited professionals at certain parts of the process to ensure public safety.

To be a bit more pointed: causality isn't difficult here. This legislation and professional practice were deliberately designed as a result of the numerous failures observed throughout the 1900s. The cause: multiple fatalities and unreliable infrastructure led to the regulation of the engineering profession. The effect: we are safer.


My point is that it is all of the regulation involved. Not just the single line worker that is the civil engineer. I know more than a few of those folks, and they are not some magical being. And it is more than the civil engineer. It is also the auditor. The inspector. Even the manager making sure they are all working together.


It's true that lives hang in the balance in those forms of engineering, but there are a lot of forms of engineering that are low-stakes, like mechanical engineering of HVAC systems in an office, or audio engineering of a musical recording, where the stakes are not far off from that of writing software, or materials engineering of new stretch fabrics.

Whenever this question of engineering comes up it's always bridges, airplanes, and so on, but that's such a tiny percentage of engineering. Things don't have to be high-stakes to be engineering.


You have quite a dim view of humanity if you believe the only reason a bridge stays up or boat floats is due to engineers being anointed and threatened by some entity.


Whoever built the old bridges in my city is lost to history. There was probably no regulation then, and yet, they haven't collapsed.


That's survivorship bias. The ones that didn't collapse are still there. You don't know about the ones that did collapse because they aren't there to see.


It does however give it better control, a fixed threshold in terms of training that applicants should pass, and more oversight.

Those are the important parts, being real or not is irrelevant. Snake-oil selling is also a very real profession.


Truth be told, engineering license requirements have a bigger effect in gatekeeping the profession and keeping supply restricted, much like in any other regulated profession. Gate keeping has some benefits, but is easily circumvented by bad actors all the time, such as all the sham degrees that just churn out the bare minimum to pass licensing tests.

If all the licensing to architects, doctors, engineers, lawyers, and so on, really had a big impact on quality, the world we live in would be very different and better. But that doesn't happen because licensing a profession only works if quality is upheld on all steps of the professional education chain.

A lot of software companies circumvent all of these downsides of bad quality in education by raising the bar in the hiring pipeline with a bunch of tests and so on. Does it always work? No, just like licensing doesn't. But at least it keeps the profession accessible to those who are willing to study algorithms, data structures, and so on.

There's no perfect system in our world, but I am glad we don't have license requirements to develop software. I am a statistician that became a dev through self studying, training and learning on the job, and I also know many others (even real engineers) that turned to this profession without graduating in the field and contribute a ton with their specific background knowledge.

Should everyone be called engineers? Probably not, as it alludes to licensing of the profession; but not calling yourself one dramatically decreases your chances of finding another job.

A true tragedy tbh; I'd rather everyone called themselves devs, scientists, or something less loaded than engineer.


>Gate keeping has some benefits, but is easily circumvented by bad actors all the time, such as all the sham degrees that just churn out the bare minimum to pass licensing tests.

Depends on the country and the educational system. In some countries no such sham degree would be accepted for an engineering title.


I have yet to see a country where licensing equates to high quality in all the services. The examples I've used might not work in some places, but people find flaws in licensing systems all the time.

Sometimes licensing is so strict that things get done anyway, ignoring license requirements, because there is far more demand for some services than there is supply of licensed professionals. In such cases, there are so many people violating the regulation that policing becomes ineffective. That's the case in some Latin American countries.

And then you get to the problem that the license is ineffective and doesn't add to anything.

There's no perfect system to uphold quality of labor across a profession, we pick our poison and deal with it.


Good points. OP is arguing for regulations. Regulations failed egregiously all over in the Turkey earthquake and illustrate that they are not a silver bullet.

Setting a minimum threshold for competence makes sense, but when it becomes encoded into a legal system the result frequently becomes a way to enrich incumbents.


Engineering is a practice. Like medicine, but we’re building machines that run in a virtual space. We architect and build houses in our minds. We send those projects to production to make the business money.

That’s engineering.


Eh yes and no. "Traditional" engineering professions, ones where you need to get licensed, are all grounded in physics. Software engineering isn't (ignore the fact that bits running on silicon is physics, it's irrelevant). Software engineering is based on best practices and algorithms that are based on...some guy writing them, saying they're good, and others trusting that they're good. There's a million different ways to make something. Million different opinions. Million different cargo cults. It's why making a FE exam for software would be impossible; there's just too many ways to do something and often very little evidence why it's good.


Engineering = Applied Physics.

Software Engineering = Applied Math.

Both are about modelling, so you have a good idea if a solution works before you build it.

If you're gluing stuff together and hoping it's not going to fall apart too often you're not doing either.


> If you're gluing stuff together and hoping it's not going to fall apart too often you're not doing either.

That's precisely what the majority of "software engineers" are doing nowadays. That's what web development is. Those that are creating tools other developers use and build things using algorithms/data structures in novel ways (essentially applied math as you said), they are the real software engineers in my eyes.


And the last piece of the equation:

Applied Physics = Applied Math.


I suppose technically the timing of traffic lights or deciding whether to use a roundabout involves physics in that if you get it wrong, cars bounce off each other, but I'm not sure civil engineering is any more grounded in physics than software engineering is.


"There's a million different ways to make something. Million different opinions. Million different cargo cults."

So... different tradeoff? Sounds like engineering to me :).


And in Canada, if you call yourself an engineer, you wear an iron ring to remember your responsibility to society.

https://en.m.wikipedia.org/wiki/Iron_Ring


That doesn't make much sense at all. This is not like being a lawyer or a doctor where the meaning is "you've completed a degree". The term "engineer" is meaningless. It's more like being a writer or a musician. There is no objective criteria to fulfill.

Furthermore, I don't understand why some people have the need to regulate everything. That's not how the real world works.


> It's okay; software engineering isn't real either.

> (In many countries, the word "engineer" is regulated -- you can't call yourself an engineer without professional qualifications and oversight.)

Those countries (France, Switzerland, and Canada at least) also have officially accredited Software Engineering degrees.

I know because I hired people who graduated from these programs!


I believe that there can be real software engineering. And in many places it has uses.

But most software, as it is actually done, is not really engineered.

And I think we might do better if we were more rigorous with what is being built - probably slower, but better off - and a lot of it could be replicated for close-enough use cases.


Yes it is. It’s a protected term in Canada and they have software engineers.


Are you saying that software engineers in Canada are P. Eng licensed?

(If not, then your example is one where software engineers are misusing the protected term, but as they usually do not set up shop and mislead the general public into thinking they are certified, which would be the activity the protection is designed to thwart, the protection is rarely enforced.)


It's protected. And they've started going after people in recent years, and winning, for using the term, "software engineer."

https://engineerscanada.ca/sites/default/files/2022-08/2022-...

There are many folks in certain provinces arguing against regulation as, they claim, it makes it difficult for workers to compete in a global market where the term is common.


> Are you saying that software engineers in Canada are P. Eng licensed?

That's my understanding. They don't have a PE exam like here in the US. To be P. Eng licensed, they have to graduate from an accredited engineering program and register with some board. And they do have Software Engineering programs (I know; I hired folks from these).


This always depends on context. And often the results aren't as good as you think it will be. Sets up an obvious target for regulatory capture and can stagnate things.

I say all of this as a pro regulation person.


When a building collapses you can’t just reboot it

But in software you can simulate and test all you want before connecting it to the real world


To be fair, when the building collapses in the CAD simulation, you just "reboot" it with different parameters. Civil engineers don't actually lay bricks or weld metal.


Problem is that CAD is just an approximation of the real world and will never simulate it 100% faithfully.

Software on the other hand can technically be simulated 100% accurately, barring hardware or interpreter bugs.


No amount of testing will prevent all errors, and certain bugs can cost human lives.


Or you get a diploma for that, and you are a bona fide engineer.


I personally do not think software engineering is a thing. There are opinions on how to structure software even within the same language and framework, and there's little objective procedure to rate one way as better than another - and where there is one, that kind of performance gain is usually negligible.

But prompt engineering? Based on what knowledge exactly?


Software engineering is definitely a thing. There are numerous ways to code robustly and to prove the software will do what it's supposed to. They're much used in mission-critical applications where failure is a Very Bad Thing.

But most developers do none of these things, and many developers don't even know they exist.

But prompt engineering is just search++. You might save some time if you know a little about the underlying technology, and there's an element of creativity which is unusual for search.

But it's still not rocket surgery.


> But most developers do none of these things

When they should; failures cost time and money.


Are you saying the definition of engineering means there is only one correct way to do things? What is this comment even trying to say?


In Canada, it’s a licensed profession


The “engineering” in “prompt engineering“ is more like in “social engineering”. It’s a secondary, metaphorical meaning.

For example, Google defines the second meaning of "engineering" as:

2. the action of working _artfully_ to bring something about. "if not for his shrewd engineering, the election would have been lost"

(https://www.google.com/search?q=define%3AEngineering)

Merriam-Webster has:

3 : calculated manipulation or direction (as of behavior), giving the example of “social engineering”

(https://www.merriam-webster.com/dictionary/engineering)

Random House has:

3. skillful or artful contrivance; maneuvering

(https://www.collinsdictionary.com/dictionary/english/enginee...)

Webster's has:

The act of maneuvering or managing.

(https://www.yourdictionary.com/engineering)

This is a well-established, nontechnical meaning of “engineering”.


I like this framing by Mitchell Hashimoto on prompt engineering:

https://mitchellh.com/writing/prompt-engineering-vs-blind-pr...

Most of what we see on Twitter or YouTube is Blind Prompting. However, it is possible to apply an engineering mindset to prompting and that is what we should call prompt engineering. Check out the article for a much more detailed framing.
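To make that concrete, a minimal sketch of the loop the article describes - treating prompts as testable artifacts instead of eyeballing output. The `ask_llm` callable, the templates, and the test cases below are all illustrative, not taken from the article:

    CANDIDATE_TEMPLATES = [
        "Extract the date from this text as YYYY-MM-DD: {text}",
        "Text: {text}\nReply with only the date, formatted YYYY-MM-DD:",
    ]
    TEST_CASES = [
        ("The meeting moved to March 5th, 2023.", "2023-03-05"),
        ("Deadline: 2022-12-01, no extensions.", "2022-12-01"),
    ]

    def accuracy(template: str, ask_llm) -> float:
        """Score a prompt template against a labeled test set."""
        hits = sum(
            ask_llm(template.format(text=text)).strip() == expected
            for text, expected in TEST_CASES
        )
        return hits / len(TEST_CASES)

    # Keep the template that scores best, and re-run the suite whenever
    # the model or the prompt changes - that's the "engineering" part.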

Dair AI also has some nice info and resources (with academic papers) about prompt engineering.


Prompt testing, especially for q/a pairs where there are multiple right answers, has been bugging me a lot.

The article is reasonable, but it also shows a big gap in tooling, as the techniques there feel closer to linting & typing than testing once you do more interesting prompts. They don't check the interesting parts.


> The article seems reasonable but ... closer to linting than testing... they don't check the interesting parts

can you elaborate a bit more on what those interesting parts are?

It could just be a limitation of computation.


We are helping our users with qa tasks involving code generation, where the answers may be either JSON, executable code, or markdown discussions involving the same. We are tuning for a bunch of tools following that pattern so our users don't have to.

It's easy to make a labeled training set for grading our homework (catching regressions, ...) in the case of classifiers, and that's basically what the blog post showed.

What about for the above qa tasks? We can ask GPT4 whether a generated A was a good answer for a Q, but that's asking it to grade itself. Likewise, in the code case, we can write unit tests for the answers. (Trick: we use the former to more quickly do the latter.) But I feel like there has to be better ways
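For the code case, the unit-test grading can at least be automated so the model never grades itself. A rough sketch (assuming the generated snippet defines a `solve(x)` function - a convention made up for this example, not our actual setup):

    import json, subprocess, sys, tempfile

    HARNESS = (
        "\nimport sys, json\n"
        "print(json.dumps([solve(x) for x in json.loads(sys.argv[1])]))\n"
    )

    def passes_tests(generated_code: str, cases) -> bool:
        """Run generated code in a separate process and compare its
        outputs against known-good (input, expected) pairs."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(generated_code + HARNESS)
            path = f.name
        inputs = json.dumps([x for x, _ in cases])
        try:
            run = subprocess.run([sys.executable, path, inputs],
                                 capture_output=True, text=True, timeout=10)
        except subprocess.TimeoutExpired:
            return False          # hung: automatic fail
        if run.returncode != 0:   # crashed: automatic fail
            return False
        try:
            return json.loads(run.stdout) == [y for _, y in cases]
        except ValueError:
            return False          # printed something that isn't JSON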

Another: OpenAI always updates models based on use, so we have to be sure our tests are real holdout sets that never get back to them...


I don't think LLMs are going to be able to solve that. There are a number of things that are assumed to be true but may not necessarily be true. This can potentially lead to multiple possible answers (outputs) given the same inputs.

For example, determinism in code: it's required for computation and it's a system property, but generalizing a test for it is really hard. By knowing whether it holds, you can make inferences about whether a system maintains its properties; but most of this is abstracted away at lower levels, and since the context can't ever be fully shared with an LLM for evaluation, nor can it automatically switch contexts when evaluation fails, this most likely will never be solvable by computers when there exists a single input that produces two separate (different) outputs - at least from what I know about automata theory and computability.

It's generally considered a class of problems that can't be solved by Turing machines.

https://en.wikipedia.org/wiki/Theory_of_computation

https://medium.com/@tarcisioma/limits-of-computation-231bf28... (overview)

https://en.wikipedia.org/wiki/Undecidable_problem (crux of the problem)


Professional engineers are typically:

* Regulated by a profession and associated legislation

* Work first to ensure the safety and welfare of the public

* Perform only within their area of competence

* Act as faithful agents or trustees for their clients

* Avoid deception and represent matters in an objective and truthful manner

* Invest in continuous professional development

* Can be subject to penalties for malpractice: removal of licence, fines, or jail

It's hard to see any of these applying to LLM input entry people. It's even a stretch to consider them non-professional engineers.


Hard to apply those to software engineering as well.


In most jurisdictions, a professional engineer is essentially a specific legal thing.

In most jurisdictions I am aware of, software engineers are not professional engineers. I am sure there are many exceptions.

That said, I think all of the above are rather easy to apply to SWE aside from, arguably, the first bullet point.


In the UK, my computer science master's was a Master of Engineering, and I got some kind of engineering qualification (which I have since lost and have never used).


And I can list counter examples for each.

Civil engineers and architects don’t “move fast and break things” or “fake it til you make it” after a 3 month bootcamp.


There are software engineering domains, like aerospace, where the process is very stringent with rigorous planning and testing.

There are others, like consumer app development, where there's simply no need for that kind of process, so it doesn't exist. HNers might complain, but average people don't care if their social media app crashes once in a while and they have to restart it — so why should developers care? Better to spend their time on other stuff. Whereas people care a lot if their airplane crashes.

I don't think it suggests anything about the relative value of working in those domains.


Completely agree. It reminds me of 'They Write the Right Stuff' (1996) [1], how a group of engineers developed the software for the Space Shuttle.

[1] https://www.fastcompany.com/28121/they-write-right-stuff


But those consumer apps that use or sell personal data against that person's interests as a business model, or those that leak or lose personal information through negligence, are very harmful to society. Why should they get a pass?


That's a totally unrelated argument. Plenty of "real" engineers work on weapons systems. Architects work on ultra luxury skyscrapers for Saudi businessmen. Civil engineers work on new highways that destroy poor neighborhoods. No one is talking about ethics here.


How is it unrelated if we’re talking about software engineering as a profession? Yes there are some SWEs working in engineering fields, but the majority are not classically “engineers”


Maybe you could explain how it's related? I feel like I did a good job of explaining how ethics makes no sense in this discussion in my above comment.


My original comment was that this list of engineering standards is not followed by most software engineering. You seem to be saying there are some it can be applied to, and I'm saying those are in the minority.


Right — standards, not ethics. You still haven't explained how ethics matters here. Like I said, plenty of engineers that follow standards rigorously do it in the service of unethical goals.


I disagree, I think 1-2 are not really applicable, but SWE has some stuff other engineering fields don't have, either.


You might be interested in this three-part article series: https://www.hillelwayne.com/post/are-we-really-engineers/


It’s “engineering” the same way we joke about “social engineering.” Both are ways of using language to subtly manipulate another. But the term should be understood as tongue-in-cheek and I’d run from anyone trying to sell you something to do with it.


I think the proper term for the job should be Prompt Alchemist.

We've seen this before. Back when Google was a search engine, you'd eventually get a feel for how things worked, and build a vocabulary of "spells" that give the results you want, without using the common words with many overloaded meanings. (Example: "annotation" instead of "mark up", when trying to research the marking up of hyperlinked texts.)

Similar things proved to be true of Stable Diffusion and the image generators. It seems quite reasonable to conclude the same will be true of GPT4 and its kin.


Prompt engineering is an 'oil' alright. It's a lubricant to make models work better.

Engineering being backed by solid mathematical understanding is a relatively recent, post-calculus idea. For the longest time, we built bridges a certain way because those are the bridges that stood. We were flying planes long before we understood flight.

At the end of the day, engineering is the cycle of "identify a pain point -> launch experiments -> observe the delta -> apply the solution". Prompt engineering meets all of those requirements and therefore, is engineering.

It is funny to hear this, because 20 years ago "is software engineering really engineering?" was a rather common question around STEM circles.


"We were flying planes long before we understood flight."

This isn't true. Sir George Cayley elucidated several aerodynamic principles significantly in advance of the Wright Brothers' first flights. The Wrights took advantage of this knowledge in designing their propellers and planes. We have since learned a lot more about how flight works, but the guiding principles of lift, drag, and thrust predate the mechanical engineering needed to realize a heavier-than-air aircraft.


Software engineering is also snake-oil.

Yes, software can be engineered, but that's not what the vast majority of so-called software engineers are actually doing at their jobs. The title mostly exists to inflate the importance of a programmer with years of experience under their belt. In reality, most of them couldn't explain to you what engineering itself is, and their job primarily consists of duct-taping and building features expediently. It's like taking a carpenter whose job is to crank out barely adequate sheds for a shed company and calling them an architect.

Don't even get me started on "computer science."

Prompt engineering is a legitimate area of study, and is obviously a practice demanded by LLMs, but you gotta just ignore the "engineering" part. It's the same skill as being a good communicator. Take a room full of "software engineers", tell them "build me an app that will let me sell gadgets", and they'll do their best to build one, but chances are it won't do what you want unless you communicate with greater specificity. It's hilarious how many people think LLMs suck just because they don't do the right thing given a single shitty sentence.


Most SWE is snake oil, take one look around and you'll see what I mean...

React..? Golang? GUIs?! We stepped too far from The Truth (vim, ghc) and now we are all Oilers.


SWE snake oil is like "Extreme Programming", "whiteboard interviews", "design patterns", "microservice architecture"...


You just know you're in a Bad Shop when you see "unit tests" come out the bag...


if we are the Oilers, who are the Flames?


What’s your beef with computer science?


Calling yourself a Prompt Engineer makes me think you don’t know college math and computer science. Prompt Engineer sounds like a Subway employee calling themselves a sandwich artist.


Welcome to Clown World! Where things that should not be are, and things that should are not. I first noticed CW when someone reached out to me during COVID and their title was "Professional LinkedIn Specialist". In hindsight, it was probably when people were making gobs of funny money from running their GPU 24x7

My advice? Don't fight it, accept it and embrace the change. Don't sell yourself short, you only need to put in like 3 months of work to earn that Principal Prompt Engineer title!


Yeah people are always trying to celebrate laziness and mediocrity.


I've wondered if some of the value of being a skilled prompt engineer will be eroded as the LLMs get better, especially as they can reply to ask clarifying questions or ask if you want XYZ tweaks (e.g., in the voice of Author X, or from the perspective of an ant, with backlighting).

I'm sure there will always be some people who are better at this than others, but if the interface is sufficiently chatty and 'smart', that could significantly reduce the gap between newbies and seasoned prompters.


Any form of engineering involves a certain degree of reliance on abstractions, e.g. a civil engineer really doesn't need to know everything about the molecular structure of steel and concrete to successfully design and build a structure such as a bridge.

What does matter is having stable, predictable abstractions that you can rely on, so in software engineering it would be a mistake to rely on system-dependent undefined behavior when using a language like C++. You also don't necessarily need to know what kind of low-level optimizations the compiler and linker are using to generate your executable binary, though in some cases it could be important.

With LLMs, however, it's not really clear if the same prompt always gives the same kind of output, and you can do 'regenerate response' with something like ChatGPT to see this in action.
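You can make the non-determinism concrete with a couple of API calls (a sketch assuming the openai Python client of the ChatCompletion era; the model name is illustrative):

    import openai

    def ask(prompt: str, temperature: float = 1.0) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return resp["choices"][0]["message"]["content"]

    a = ask("Name one famous bridge and one fact about it.")
    b = ask("Name one famous bridge and one fact about it.")
    print(a == b)  # frequently False; even temperature=0 narrows the
                   # variance but is not a hard determinism guarantee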

In some sense, 'prompt engineering' can be more like 'interviewing an engineer', in that LLMs seem to provide better outputs if you start with a broad, general question and then narrow down to your specific interest over a series of questions. Helping the LLM out by defining context, asking it to expand on a specific output it generated, pointing out where it may be hallucinating, etc. all appears to improve the quality of output, but I don't know if this kind of iterative process is really 'prompt engineering'; maybe 'prompt optimization' is a better term for it?

In other words, writing a series of prompts to an LLM that gets you the answer you need feels quite unlike writing a Python script to automate some task. You can plan the latter out from start to end, but the former is this back-and-forth process.


In essence it's all about getting results, and specialisation might yield better results. Just because people think it sounds cool to call it 'prompt engineering' doesn't mean it's not a specialisation (or optimisation), even if the terminology is based on nothing.

I do think that it's not broad or deep enough (right now) to merit a completely separate area of expertise on its own, but it can be too big to fit within the minds and areas of expertise of people who already have their work cut out for them in their existing workdays. Think about most desktop computer users: they really have no clue how the computer works, and even asking them to change the resolution on a mirrored display might be too far removed from their knowledge base (and that is fine). The same applies to the tokenisation of prompt inputs; it might be too much for someone to simply tack that on to what they already do and know.

This is then where we get job postings, and those need to be easily identified so people can search for them; and this is where we get nonsense terms used to distinguish the desired applicants from the pool of work-seeking people.


In general, any product or service that makes exaggerated or false claims of effectiveness without providing concrete evidence to support those claims could be considered a form of "snake oil."

Just because we don't know all the workings of something doesn't mean we can't effectively use it, or measure how well that approach of use is valuable or repeatable for similar uses.

For example, while we have a good understanding of the basic principles of magnetism, there is still much that we do not fully understand about how magnets work, and ongoing research is focused on unraveling these mysteries and expanding our understanding of this fascinating natural phenomenon.

That's not to say we don't know how to use magnets to make sound, however.

Prompt engineering, in my opinion, is a process of optimized querying of a frozen model. The approach can be augmented by using "hot" data from vector databases or other types of text storage engines. The approach to picking the right content for prompt building is not "snake oil", but based on well known processes including autocomplete, synonyms and related terms, personalization, and query expansion.
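A minimal sketch of that augmentation pattern, with the vector store abstracted away (the `embed` and `search` helpers are assumed, not any particular product):

    def build_prompt(question: str, embed, search, k: int = 3) -> str:
        """Splice the k nearest 'hot' documents from a vector store
        into the prompt handed to the frozen model."""
        hits = search(embed(question), k=k)   # nearest-neighbor lookup
        context = "\n---\n".join(doc.text for doc in hits)
        return (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:"
        )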


Yes, we would all prefer "philosopher of corpus-linguistic preference membranes, and domain-specific knowledge manager" because it's more precise, and doesn't trample on my position/salary/pedigree.


Is this just about gatekeeping the word 'engineer', so that its purity isn't tarnished? I know that software engineers were scoffed at by electrical engineers, who I bet were scoffed at by mechanical engineers, who were probably scoffed at by, you know, people who built siege engines or whatever.

Is prompt engineering 'real' engineering? It seems easy enough to test whether knowledgeable, self-proclaimed prompt engineers can outperform a random person with only moderate experience requesting information from an AI in a reproducible way.

If they can't, then there is probably no engineering involved in prompt engineering, at least at present.

If they can, then it seems like they're probably not doing it with magic, so there is some set of reproducible techniques involved. At that point, would it be fair to call it engineering?


IMHO - software engineering really isn't real engineering. Not in the traditional sense.

But it certainly is based on knowing the inner workings of all or as many and as much of the abstraction layers.

But who's a prompt engineer? Is anyone who understands how Tensors and Transformers work a prompt engineer? Are prompt engineers model-specific? I'm a GPT-3.5 prompt engineer, and we have job openings for Alpaca or StableLM prompt engineers?

Lastly, if these LLMs are so intelligent and have so much deeper understanding as their emergent capability - why is prompt engineering needed in the first place?

You generally don't need prompt engineering even with average IQ humans unless one is trying to swindle someone.


> software engineering really isn't real engineering

Most people who call themselves software engineers have not studied engineering at all, let alone passed an engineering course.

I think there are two broad classes of benefit a real SE can provide.

The first is an understanding of how software can be proven correct, and the limitations of that vis-a-vis the real world - power failures, etc. They can help understand what portion of the software needs what level of performance, understandability, correctness, etc. Your core trade-pricing engine's needs are different than the webserver showing the status pages - both are required but the standards for both are not the same.

Second is an ability (and this is what apprenticing is for in a real engineering career) to judge and design entire systems. We "devs" often get treated as fungible work units and tasked with various little subsystems. A professional engineer entering such a space would be required to obtain a pretty good understanding of the entirety of the company's systems, and a very good one of what they integrate with, such that they can characterize the cost, risk, etc., of the solution space before even beginning to plan the technical solution. Engineers often tell clients that they've envisioned the wrong solution. They're trained to break down silos and integrate solutions where possible. (Not that they all succeed, but that's in the training.)

> if these LLMs are so intelligent and have so much deeper understanding as their emergent capability - why is prompt engineering needed in the first place?

Because they're language models and people are trying to solve things that aren't pure language problems with them. So you need to map the problem to something it can represent and where a solution can be formed, even if that solution may not use an LLM call. A simple example is taking a mathematical word problem and solving it. Currently the LLMs do not model math well so you'd use the LLM to separate out the clauses of the problem and turn it into variables and write an equation from it and then you'd evaluate that snippet of code for the real answer.
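That division of labor might look roughly like this (a sketch; `ask_llm` is a placeholder and the extraction prompt is illustrative):

    def solve_word_problem(problem: str, ask_llm):
        # Use the LLM only for the language part: clause separation
        # and translation into an arithmetic expression.
        expr = ask_llm(
            "Translate this word problem into a single Python arithmetic "
            "expression. Reply with the expression only.\n" + problem
        )
        # Do the math outside the model. Note: eval() on model output is
        # unsafe outside a sandbox; it's used here only to show where
        # the LLM stops and ordinary computation takes over.
        return eval(expr, {"__builtins__": {}})

    # "Ann buys 3 boxes of 12 eggs" -> model emits "3 * 12" -> Python: 36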

And then, depending on what part of what system this was, you may need to verify the parsing and other parts of the pipeline so you would need to build a chain where the results from one call are handed to other systems, maybe just back to the same LLM, with a bunch of examples, for multiple instances of more detailed checking, and then if those answers aren't the same, to another round where it tries to explain the difference and feeds that explanation into a round where you try to reprompt the initial layer and try again. Token limits often constrain how good the instructions and examples can be so you often have a few levels of prompts, nearly bulletproof ones, and shorter ones which are cheaper and allow other use of the token budget but may have more failure cases.

Building with non-deterministic tools isn't impossible, all ropes are different and yet we have rope bridges, but you have to know how they work and how they fail.

> But who's a prompt engineer? [If] I'm a GPT-3.5 prompt engineer do we need to have job openings for Alpaca or StableLM prompt engineers?

No, just like architects learning different materials during their career. But all those LLMs are different and you need to experiment (scientifically) to find their characteristics in your area before building on them.


Communication is a highly valued skill that everyone thinks everyone else is bad at. But some people are clearly better than others. Describing a problem or abstracting a process into a prompt will become a highly valued skill that some people will be better at than others. I also think that it's not as simple as just writing a prompt and calling it engineering. If these jobs become commonplace in a professional capacity, I would assume that developing or being good at the tooling that will be built around LLMs will be part of it. The people that are really good at it will hook in their own systems such as LangChain, Microsoft Semantic Kernel, etc.


It's hacking, not engineering. But what helps is understanding a bit of how the attention mechanism works and how your prompt affects the output. This "prompt engineering" is a vastly different programming paradigm. The notion of time is different: as output is being formed (one token at a time), the attention mechanism can focus more or less on different parts of the original prompt (along with the previous output tokens). This means that your "code" (your original prompt) is essentially getting "revisited" at different spots and different times while the output is being generated, and you are essentially "steering" the output in different directions based on the instructions in the prompt.

This is different from how traditional code works, which is always sequential and top-down. When writing prompts, it's better to think and speak like a human with human thoughts than to try to narrow things down to logic and code-like syntax. The LLM isn't going to process your prompt in a top-down, one-time manner - it's going to keep revisiting it (reweighing different parts of it, emphasizing different parts of it) based on the output tokens chosen, and those weights will help determine the next token chosen, and so on. It's like a whole new world of hacking, and it can be super frustrating but also amazingly effective.
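A toy numpy illustration of that "revisiting" (random vectors, no real model - purely to show the shape of the mechanism):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 16
    prompt_keys = rng.normal(size=(10, d))      # 10 prompt positions

    def attention_over_prompt(query):
        scores = prompt_keys @ query / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        return weights / weights.sum()           # softmax over the prompt

    # Each generation step carries a different query vector, so each
    # step redistributes its attention over the same prompt:
    step1 = attention_over_prompt(rng.normal(size=d))
    step2 = attention_over_prompt(rng.normal(size=d))
    # step1 and step2 peak at different prompt positions - the prompt
    # is effectively "re-read" with new emphasis at every output token.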


It's snake oil as soon as someone starts using the term unironically, which I haven't personally observed.



Prompt engineering made more sense before RLHF/instruct models.

Back then, text just continued onwards, and many papers found ridiculous gains in conforming to the intended output (and therefore the properties of the output) by finding a suitable, widespread online format.

Example: Question/Answer forums, and places like StackOverflow, used a certain format like 'Q:', and prompts in that format worked much better for question content with high-quality answers. One paper, "Ask Me Anything", showed that formulating tasks as question-answer pairs resulted in much more effective prompts: https://arxiv.org/abs/2210.02441 - another, for reasoning, is the "Let's think step by step" example. It was a challenge to get JSON output with more than 3 correct keys unless you finetuned.
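
Roughly what that looked like in practice, for a base completion model rather than a chat one (illustrative prompt):

    # Dress the task up as a format the pretraining corpus is full of
    # (Q:/A: pairs), plus the step-by-step nudge for reasoning.
    prompt = (
        "Q: A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?\n"
        "A: Let's think step by step."
    )
    # The base model then continues the "A:" line instead of rambling,
    # because Q/A forums taught it what follows a question in this shape.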

However, the instruct models and the recent 3.5/conversational RLHF models are just so good at doing tasks that such "engineering" - which supported/improved a wide base of prompts across domains and functionalities - has broadly stopped.

Prompt engineering sounds so cool, though, and the term was also used for specific use cases back then, so now everybody uses it for any prompt written to get a certain output. That, however, is not really the same level of engineering, and to some experts it is reminiscent of the old-school knowledge-system finesse we saw before 2022.


Don't take names so literally. They're just useful labels that help us categorize things and communicate shared meaning. At this point, people know what you mean if you say "prompt engineering".

Prompt engineering feels like quite an appropriate name because there is a lot of experimentation and refinement involved in finding good ways to interact with these models (i.e. good prompts!).

It doesn't perfectly map to "engineering", but it maps pretty well. What else would you call an iterative process of experimentation and refinement?


Prompt Engineer is the new Scrum Master.


That's finally a real use for large language models. Replace the agile coaches. Needless to say, a team is better off replacing them with nothing.


Actually, Prompt Master would be a lot better term.

And it would work in both common meanings in current usage.


I know someone whose PhD was, kind of, studying ML models as black boxes (I don't really know the specifics, so don't ask me), and I did find it interesting that if we're dealing with a sufficiently opaque and complex black box, we have to resort to our old friends: experimentation and empiricism. So maybe it's not engineering but science? :p

A little tongue in cheek but it did get the noodle working


It is pure, unadulterated snake oil. You cannot create a prompt, or sequence of prompts, into which you can substitute some variable data and get responses of predictable quality. GPT-4 isn't promoted as offering that capability. The output of each chat must be evaluated case by case for its quality and suitability.

If you want repeatable results, you get GPT-4 to write you code. Then use that code. If you run into cases it doesn't handle, or have new ones, you can revise the code, perhaps with the help of the AI once again.
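
Concretely: prompt once, review, check the code in, and from then on only the code runs; the model is out of the serving loop. For example (the function below is illustrative of the kind of thing you'd have it write, not actual GPT-4 output):

    # e.g. "write me a function that parses a US price string to cents",
    # reviewed by a human and then reused deterministically forever:
    def price_to_cents(s: str) -> int:
        s = s.strip().lstrip("$").replace(",", "")
        dollars, _, cents = s.partition(".")
        return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

    assert price_to_cents("$1,234.5") == 123450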

The problem is that the bullshit artists out there want repeatable results in areas for which it can't write you working code. For instance, a problem like taking a textual product description and producing a glib advertisement.

They want to be able to write a prompt where you can just substitute different product descriptions and have quality ads come out that nobody has to proofread (so they can be immediately served), just by "turning a crank".


I have seen people do multi-step prompting with SD to get exactly the image they want: textual inversion, merging models, creating LoRAs, manipulating prompt weights, in/outpainting. It's a serious job, like being a graphic designer but with less imagination and more wizarding. Now that the makers of SD have created an open-source LLM, I expect the same to happen there.
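
For the SD side, underneath all that wizarding is a call like this with diffusers (the checkpoint is just one common choice; textual inversion, LoRAs, and in/outpainting layer on top of it):

    # txt2img with a negative prompt and a fixed seed, so each tweak to
    # the prompt can be compared against the previous attempt.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        "portrait of a knight, oil painting, dramatic lighting",
        negative_prompt="blurry, extra fingers, watermark",
        generator=torch.Generator("cuda").manual_seed(42),  # reproducible
    ).images[0]
    image.save("knight.png")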


I prefer to call it "prompt alchemy" for exactly the reasons you described. It can't be engineering if you can't understand what you are trying to engineer with. Alchemists were sort of proto-scientists and engineers because they were trying to get use out of a physical world that they did not fully understand.


> Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on Discord! [laughter], [laughs], [sighs]....

This is likely a reflection of the data used to train the model - TTS models are trained on labeled audio, one form of which is subtitled audio streams. In subtitles / closed captioning, bracketed words are frequently used when there is something audible that is not speech.

Based on this insight, it should be possible to inspect the training data and extract a set of non-speech sounds the model is likely to generate well - but that doesn’t drive engagement of your users like asking them to experiment themselves does ;)

For example, closed captioning adds music notes (♪) when music plays or when people sing. According to the Bark docs, adding ♪ causes the model to output things as music.
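
Going from the repo's README (from memory, so treat the exact calls as approximate):

    # Bracketed cues and ♪ steer Bark's output toward non-speech sounds.
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()
    audio = generate_audio("♪ Hello, my name is Suno ♪ [laughs]")
    write_wav("bark_out.wav", SAMPLE_RATE, audio)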


There are gradations, just as with other kinds of "engineering" like software engineering. I am pretty sure most people don't know the ins and outs of everything that makes code run, from the compiler to the CPU; especially with today's CPUs, I doubt any one person fully understands them. You just know the abstraction layer you manipulate and learn techniques for making it do the things you want. That is quite similar to prompt writing.

You can certainly get better at it by learning the basics of LLM models, as well as finding and applying certain methods that improve the results.

You can actually see quite a lot of difference between someone who has tried out a few things and someone who has spent quite a bit of time trying to make it do non-standard things.

What you call "engineering" is of course up for debate, but it's not snake oil.


I think the word "engineering" implies your own grasp of the subject/task at hand. For example, you can't engineer a prompt that would lead to a result outside the scope of what the model/interface is capable of doing. You can, however, engineer a prompt that will refine the result to get exactly what you want.

In other words, you also have to understand the text that you're getting back from your prompts, and then "engineer" the prompt to fish for better results. AI itself can do this to an extent, but over-engineering is also a thing, as is the fact that AI itself does not actually know what you want.

That's all there is to it. Refine, revise, engineer - all just buzzwords that lead to the same end result: a game of ping-pong between you and the LLM.


The “engineering” aspect will rear its head one fine day when company A, that has bet the house on ‘prompt engineering’ their service or product, resorts to panicked uttering of LLM chants, spells, and whatnot, to make a misbehaving model work again.

But that's a bit further down the line. Today, as in previous hype cycles, opportunistic types will cash in on the money train. And tomorrow you will do whiteboard interviews for them (just like in the last cycles...)

So the relevant question for the thoughtful geek isn’t “Is crafting prompts engineering?”. No. The Q is “should I sit out this hype cycle and then end up doing leet code monkey dance for the opportunistic types who made it big by riding the hype cycle, yet again?”


One definition of "engineer" is "to plan or do something in a skillful way" [1]. Example: "The administration engineered a compromise."

I'd say even in this example, there's more of a sense of searching through a range of different options to achieve specific properties. This is the sense of "prompt engineering": you use skills to create prompts -- in part by searching through various alternatives -- that have specific properties.

[1] https://dictionary.cambridge.org/dictionary/english/engineer


I'm building a startup in this space, between prompting and crafting agents, and for me, using LangChain to create vector embeddings that enhance an LLM's memory and context is a core skill a prompt engineer should have.
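
A minimal sketch of that idea, with the langchain API as it stood in early 2023 (it has moved around since; this assumes an OpenAI key and FAISS installed):

    # Store "memories" as embeddings, then pull the most relevant ones
    # back into the prompt before each LLM call.
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    notes = [
        "2023-04-02: user prefers answers as bullet points",
        "2023-04-09: project is a Flask app targeting Python 3.11",
    ]
    db = FAISS.from_texts(notes, OpenAIEmbeddings())

    relevant = db.similarity_search("how should I format my reply?", k=1)
    print(relevant[0].page_content)  # feeds into the next prompt's context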

I already consider myself a prompt engineer rather than a software engineer: I've been coding for 15 years, and now 80 percent of my code is generated for me, so I really don't think I'm ever going to want to go back to engineering without the aid of AI.

If you're a software engineer and you use chatGPT to help you code, you've switched careers from software to prompting.


I think alchemy is a better word.

Anyway, if you're trying things iteratively, it becomes more of a science. Engineering is more like designing stuff and building it without trying lots of things first.


I believe what you're looking for is: it becomes more of a science when you attempt to reduce it to first principles (which happens iteratively, though not necessarily the other way around).

Engineering takes a first-principles approach: knowing those principles, you can construct abstractly without physically building first.


Prompt engineering is just communication skills. I could tell you which of my coworkers would make good or bad prompt engineers just by looking at their transfer notes on tickets.


Is social engineering just snake oil?


Good question. Is political engineering also engineering?

My question is - for social or political engineering, you can gain definite knowledge (spying, observing, researching, information extraction) and then devise something.

But on what knowledge, exactly, is prompt engineering based? How do you formalize and document that knowledge so that it isn't just intuition and gut feeling, but a learnable and transferable skill?


> Is political engineering also engineering?

Good question, even if I've never heard of "political engineering"

Is social engineering snake oil, too?


Political engineering is pretty common in societies where deep state or establishment has a firm control and wants to engineer results their way.

Most Western democracies are not that.


would you mind answering this question, please?

it's been ignored twice despite being kind of the key point:

> is social engineering snake oil, too?


Already answered above:

>For social or political engineering, you can gain definite knowledge (spying, observing, researching, information extraction) and then devise something.


unfortunately not: that doesn't answer whether social engineering, too, is snake oil, which is a yes or no question

straight answers to straight questions tend to be better


It's worth remembering that actual snake oil had real uses. It was everything else getting labeled as snake oil that caused the problems.

Probably makes it an apt comparison, all told. :)


It's a silly name for a real phenomenon. It's better categorized as a form of rhetoric; instead of persuading people to do what you want, you're persuading LLMs using particular patterns of language. But no one is going to call them "prompt rhetoricians". "Prompt writer" would be better, but "writing" is less of the spirit of the times than engineering.


It's like learning the limits of the tech. For example, one evening I was messing around and asked it to give the philosophy of Conan and what makes life best. It pulled the right quote from the movie and then went off the deep end, telling me how violence is bad and not fit for the real world.

For me, prompt engineering was about getting it to stop moralizing and adding trigger warnings.


I don't know if it is fair to call it "engineering" but it is not snake oil.

Prompt engineering is programming, but with natural language. Even between us humans, being able to communicate clearly is an essential part of understanding each other. It, too, is a form of programming.

Prompt engineering, or whatever you want to call it, will be an important part of how we communicate with machines in the future.


It seems akin to SEO experts identifying as engineers. Can we just call them experts, or wizards if they want to be fantastical?


Prompt engineering is as valid as software engineering (of the non-certified kind). Why is it engineering?

Technical expertise. It is technical in nature, or only one level removed from having to look at code or internals. Unlike customer support--strictly speaking--you have access to additional tools or authorization to investigate the matter.

Domain expertise. You know that some phrases just don't help. You know about temperature and other jargon. You can look at someone's paragraph of prompt and immediately suggest something.

Stakeholder safety. You act in a manner that eliminates or reduces stakeholder harm. You care about your work.

If any of these are missing, it's not "engineering." Not to mention the importance of measuring things, data, etc--but I would hazard that being able to stare at graphs all day does not an engineer make.

At the end of the day, if your job title is "Prompt Engineer," who will object?

Versus professional (certified) engineers: https://news.ycombinator.com/item?id=35669226

Engineering as artful action: https://news.ycombinator.com/item?id=35670444

Tools of the AI Engineer: https://news.ycombinator.com/item?id=35669249


Yes


wait, people are talking about prompt engineering like an actual thing?

it's not snake oil, it's literally just a joke.



It's sort of snake oil. People will write an entire arXiv article presenting "explain step by step" as prompt engineering, but in reality it's just natural language: you say "step by step" if you want it to go step by step.


To begin with, this is definitely not related to engineering... And, yes. Of course, yes.


The model works in a certain way; it just needs to be discovered and interpreted, and you might want to hire someone to help you with that. Not really different from getting a solutions architect to explain an API to you.


Unity has “customer support engineers” iirc. The USA misuses the term endlessly


> The USA misuses the term endlessly

Well, most of corporate America certainly does, but those companies can just as easily be multinationals, so it's not really the USA, other than that they may have registered the entity here.


SEO, Software Engineer, Systems Administrator w/a budget, regex geniuses, obfuscated C challenge, demoscene, etc. - none of it snake oil.

$$ and demand don't change the skills?

I mean, Anakin _WAS_ good with machines.


Probably closer to SEO.

Works for the time being while there's low hanging fruit, but researchers will use those exploits to help build more robust systems.

Eventually there'll be no point in doing it anymore.


It's not a durable skill. A year after learning to give perfect prompts for your favourite AI, the model will get updated and you'll have to learn it all again.


It's new and it's different.

Don't dismiss it as "snake oil".

Approach it as any scientific topic - with curiosity and open mind.

Be rational with your skepticism.


I mean, what is engineering, though, but trying to understand the unknown and coming up with rules of thumb that work in practice?


Yes.

> So that's the team themselves not knowing what their model is capable of then how come prompt engineering is any engineering at all?

This is why.


I think the integration of LLMs into software interfaces to accept natural language input is the real "prompt engineering".


Isn't it another tech-influencer LinkedIn fad BS job title to make those who are related to it feel important?


I'm sure one can develop great skill in their arcane incantations, but what we call it is a matter of taste.


Has anyone solved a real business problem with LLMs that they couldn't solve as efficiently before?


Funny. I used to ask the same question about blockchains.


Yes. It requires much less engineering than a Google Search, and there are no Google Search Engineers.


Prompt engineer is like search bar engineer

sure you can be good at googling, and it helps, but it isn’t a profession


I've always considered it engineering in the "social engineering" sense of the word.


Prompting feels like it might be measured, in part, the way social engineering skills are.


I just see it as proper thinking, but yeah, "prompt engineering" sounds catchier.


Prompt wrangling, prompt whispering?


I thought it was a joke?


10000000%


Purely.


Yes



