The Rise of the AI Engineer (latent.space)
219 points by swyx on June 30, 2023 | 152 comments



There's a bit of snake oil in all of that.

ML is undoubtedly a very specialized subfield, where you need a solid math foundation and a deep understanding of the science behind it all.

But I struggle to see the same thing for AI. If by "AI engineers" you mean someone who builds an LLM*, then it's very much just ML. If you mean someone who integrates with an LLM someone else built, then it's very much just backend work. Sure, you might need a few days to understand a few concepts, but I've integrated with PayPal in the past and that doesn't make me a payments engineer.

* won't even get into the argument of labelling LLMs as AI


If you have React engineers, why couldn't you have AI engineers?

I think the post is right to try to delineate the roles of "builds LLMs or other ML models" and "uses those models", since (as you noted) the skills required are very different (whereas "creates a JS framework" and "uses a JS framework" draw on the same skills, just at another level).

"AI engineer" is, as you say, inaccurate, but as swyx says, it's least cringy of the alternatives.

> Sure, you might need a few days to understand a few concepts, but I've integrated with PayPal in the past and that doesn't make me a payments engineer.

No, but the post explicitly calls out the breadth of knowledge that the future AI engineers will have to have. It's not something you can pick up in a few days, even now, if you want to be on top of things.

I can look up some docs and hook up static file hosting on S3, and that doesn't make me an AWS engineer. But there are people who work full time on providing solutions using AWS-everything and have built their entire careers (and companies) on top of that specialization.

The difference in scale is the difference in job description.


> If you have React engineers, why couldn't you have AI engineers?

We most definitely could. That's one way to prop up a cottage industry, stay relevant, command hefty salaries, and maintain that charade of importance through complexity by creating Frankenstein features in search of problems and promotions.


like the canned messages that are increasingly finding their way into any app where you can send messages to people, be it email, WhatsApp, Tinder, whatever

"Hey John! I see you like traveling, I love traveling too! What are some of the most memorable trips you've had?"

like, who would want to receive such an obviously canned message lol?

soon, it will just be the apps talking to each other, no humans required


And that's how you get Black Mirror: Hang the DJ


Because AI as terminology is vague and can mean anything.

Graph search is AI, pathfinding is AI, decision trees are AI, fuzzy logic is AI, expert systems are AI, machine learning is AI.

Just say LLM analyst or whatever.


Yea I agree, though maybe calling them LLM Engineers as opposed to AI Engineers would go down better with folks? ML enables LLMs that enable AI, so it's definitely semantics


Maybe the term "engineer" did the concept a disservice, but prompt engineering has a lot of parallels to the field of UX.

Currently there's a lot of intuition involved, so it can come across as made up, but there are novel concepts which meaningfully affect the end quality of what you make, and it takes time to learn and/or discover them.

As time goes on, I expect our understanding of what underlies "good prompts" will start to bridge the gap from intuition to science, much like how UX bridged into neuroscience and psychology. If you understand things like attention and logits, that's already kind of happening: you can use that knowledge to identify gaps in the abilities of LLMs and start to bridge those gaps.

-

People are convinced that future LLMs will obsolete prompt engineering. To me that'd be like people from the 90s thinking computers are going to obsolete UX because more powerful computers will be better at making user interfaces, and in turn anyone will be able to do it.

In some ways they'd be right: Today you don't need a UX expert at PARC to integrate a WYSIWYG interface into your product. Computers got so powerful that in milliseconds we can download libraries that implement the interface and render it across any form factor you can imagine. So now a WYSIWYG on your contact form is nothing.

But as computers got more powerful they could do new things, so UX advanced onto improving how we interface with those new things. Things like the Vision Pro will unlock new areas of UX based on the novel capabilities they possess.

I think people are making a similar mistake with LLMs: they're focused on this idea that we'll just do the current things, but better, with more powerful models. But the more powerful models will be something we can "prompt engineer" into use cases we haven't even considered yet. (I also built notionsmith.ai and I'd argue it fits into that bucket a bit)


> where you need a solid math foundation and a deep understanding of the science behind it all.

I __REALLY__ really wish this were true. But I'll be honest, I know quite a number of researchers at high-level institutions (FAANG and top 10 unis) that don't understand things like probability distributions or the difference between likelihood and probability. There's a lot of "interpretability" left on the table simply through not understanding some basic mathematics, let alone advanced topics (high-dimensional statistics, differential geometry, set theory, etc). AI engineering often "needs" even less of that understanding.

But I don't think this is a good thing. I specifically have been vocal about how this is going to cause real-world harm. Forget AGI; just look at how people are using models today without any understanding. How people think you can synthesize new data without considering the diversity of that data[0], can create "self healing code" that will generate high-quality, good code[1,2], how people think LLMs understand causality[3,4], or just how fucking hard evaluation really is[5] (I really cannot stress this last one enough).

There is a serious crisis in ML right now, and it is also the thing that made it explode in funding: hype. I don't think this is a bubble in the sense that AI will go away, but I think if we aren't careful with how we deal with this, it isn't unlikely that we'll see heavy governmental restrictions placed on these things. Plus, a lot of us are pretty confident that just learning from data is not enough to get to AGI. It just isn't a high enough level of abstraction, besides being a pain (see the semantic deduplication comments about generation).

But academia is even railroaded into SOTA chasing because that's what conferences like. NLP as an entire field right now is almost entirely composed of people just tuning big models instead of developing novel architectures (if you don't win the benchmark, you struggle to get published despite the differing factors). We let big labs spend massive amounts of compute and compare against little labs who can get similar performance with a hundredth of it, but we don't publish those works. It is the curse of benchmarkism and it is maddening.

Honestly, a lot of times I feel like a crazy person for bringing this up. Because when I say "ML needs a solid math basis and a deep understanding of the science behind it," everyone agrees, but when the rubber hits the road and I suggest mathematical solutions to resolve these issues, I'm laughed at or told it is unnecessary.

[0] https://news.ycombinator.com/item?id=36509816

[1] https://news.ycombinator.com/item?id=36297867

[2] https://news.ycombinator.com/item?id=35806152

[3] https://news.ycombinator.com/item?id=36036859

[4] https://www.cs.helsinki.fi/u/ahyvarin/papers/NN99.pdf

[5] https://news.ycombinator.com/item?id=36116939


Well, and frontend work. ChatGPT wouldn't have been anywhere near the product it is without the web interface.

As far as the PayPal integration goes: no, but it makes you the subject matter expert (SME) in the room, above a roomful of people who aren't, and maybe aren't even developers.


You could have just read the article to see what the author meant by AI Engineers. You also would have seen that the entire article was addressing a point you tried to make, and you could have responded to the arguments made in the article instead of, ya know, just sounding off based on having read nothing but the headline


I have read the article (though I will admit, quite quickly). The author's definition of AI is by no means the only one out there, as they outlined themselves. I disagree with the premise that this will be the job of the decade, and I disagree with the premise that it is inherently different from the rest of software as a service. This is a tool that can be used to solve certain problems; I doubt there is a useful differentiation. That's my point of view, it is antagonistic to what is presented there, and I don't see why you think it departs from usefully commenting on the article.


I can see a world where an "ML Engineer" (or similar) is someone that's hired to solve a known problem (whether it be with classifiers, LLMs, neural nets, etc), whereas an "AI Engineer" (or whatever the title) is hired to figure out how the hell to capitalize on the AI hype, without a specific problem to solve.

IMO right now we're entering the "Peak of Inflated Expectations" in Gartner's hype cycle model. https://en.wikipedia.org/wiki/Gartner_hype_cycle#/media/File...

Lots of companies want to jump on the AI bandwagon, but they don't really know what to do or how to leverage it.

What the LLM community needs now is for companies to leverage and productize these LLMs for truly game-changing use cases. If that doesn't happen soon, the hype will start to fade and lose momentum ("trough of disillusionment").

I'd really love to see some killer use cases emerge soon.


(author here) There are at least 4 killer apps ($100m/yr revenue potential) so far:

1. Generative Text for writing - Jasper AI going 0 to $75m ARR in 2 years

2. Generative Art for non-artists - Midjourney has by some accounts $80m ARR

3. Copilot for knowledge workers - GitHub’s Copilot has roughly $50-80m ARR as well

4. Conversational AI UX - ChatGPT probably has >$100m ARR by now, Bing Chat has brought $m's worth of attention to Bing

(More on agents as the immature #5: https://www.latent.space/p/agents)

what else do you need to see to believe? (genuine question)


The tech problems will eventually get sorted out, I'm sure; the most uncertain problems are legal. The world's copyright law is not even up to date enough to deal with the internet without crappy patchwork workarounds (like YouTube's Content ID nonsense), much less LLMs.

Steam just banned AI art and text assets and built-in generators because they don't want the liability of hosting it. Midjourney is getting sued by Getty, Copilot breaks GPL licenses and was even facing class action lawsuits, OpenAI trained on copyrighted and even pirated content, LLaMA models are unavailable for commercial use, AGI alarmists want to ban the whole thing altogether, etc.

A lot of end usage depends on how all of this plays out in courts over the next few years. Why would people dump capital into something that will be declared illegal?


If people get used to LLMs governing their lives, those copyright laws will "just" be changed. People will not accept a legal challenge taking away their toys. Granted, we're not at that point yet. But the window for the legal stuff really making a difference is slowly closing. Meanwhile, the legal system is pretty slow.


Because if they don't, their competitors will, even if those competitors are overseas in a more lax environment.

Either IP laws will catch up, or the countries with better IP laws will become more competitively successful.


75+80+80+100 is $335m per year in revenue. For a point of comparison, that's approximately 11 hours of Google's revenue.


Consider the possibility that there may be a point in a technology's evolution where you can "look at what is being done with it" and conclude that it's useful from that, rather than just comparing revenue figures across domains.


Yeah, the dollar values here are a distraction. I use Copilot every day at my Data Science job. It's useful!

What we are not considering is integrating a custom trained LLM into our work because the tech just isn’t there yet.


LLMs today are great for certain use cases, which is amazing when it suits your needs. Need to extract a hotel name, city, address and confirmation number from emails and return it as JSON? Not a problem. Need it to validate postal codes in Canada? It's the wrong tool.
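
To make that split concrete, here is a minimal sketch, assuming the openai Python package (0.x-era ChatCompletion API); the prompt wording, field names, and model choice are illustrative placeholders, not a recommendation:

    import json
    import re
    import openai  # pip install openai (0.x-era API shown; OPENAI_API_KEY set in the environment)

    EXTRACTION_PROMPT = """Extract the hotel name, city, address, and confirmation number
    from the email below. Respond with only a JSON object using the keys
    "hotel_name", "city", "address", "confirmation_number".

    Email:
    {email}"""

    def extract_booking(email_text: str) -> dict:
        # The LLM handles the messy, free-form part: pulling fields out of prose.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(email=email_text)}],
            temperature=0,
        )
        # In practice the model can still return extra text, so guard this parse.
        return json.loads(response["choices"][0]["message"]["content"])

    def is_valid_canadian_postal_code(code: str) -> bool:
        # The deterministic part stays deterministic: a regex beats an LLM here.
        pattern = r"[ABCEGHJ-NPRSTVXY]\d[ABCEGHJ-NPRSTV-Z] ?\d[ABCEGHJ-NPRSTV-Z]\d"
        return bool(re.fullmatch(pattern, code.strip().upper()))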


I find myself using Bing Chat more and more instead of googling specific questions. And no, hallucinations are not a problem, because the questions are concrete and the answers immediately verifiable.


The innovator's dilemma on display.


but the rate of growth is likely 100-1000 times higher


On a smaller time scale, is rate of growth that meaningful?

I guess I'm just thinking, until we see how this all pans out over the next decade, we don't really know if the current rate of growth will hold, or if it'll plateau/slow down.


Yes, it’s meaningful. If Google doesn’t fix search in the next decade then Bing (or some other winner) will be the service earning Google’s ARR in 11 hours.


> On a smaller time scale, is rate of growth that meaningful?

revenue in the initial stage of extreme growth is much less meaningful.


> what else do you need to see to believe? (genuine question)

It's not that I don't believe in LLMs.

It's that the level of media attention AI is getting isn't backed by the same level of real world use cases, yet. (All of the use cases you listed are awesome, no denying that, but I don't think those alone justify the amount of hype in mainstream media)


i mean, there's nothing i can do about mainstream media hype, but i guess my main point is this is a growing field with real money and utility behind it, and so it will professionalize. if I am correct on that then AI Engineer will be a thing (because it is the Least Bad title for the thing)


AI Engineers are just software engineers who use specific tools. If we want to use a fancy title then that's fine, I guess. But let's not pretend that a typical dev can't easily learn vector DBs, data pre-processing, fine tuning, etc.

None of these things require specialized knowledge in the way that say being an AI researcher would.


You could say the same about things like SREs or Devops. In fact, many of them transitioned from regular "software engineer" to these roles simply by learning closely related skills.

That won't stop the industry from inventing new, useful titles.


You are actively contributing to the hype cycle lol.


I'd add these use cases:

5. automated processing of unstructured paperwork and ingestion into ERP systems. Basically, upload any kind of bill and get all of the information into the system, not just "find out the total amount". That can save so much in accounting it's not even funny any more.

6. related to this, something that sorts incoming emails. Classify stuff into "look into it now" vs "look into it later" vs "yet another bullshit marketing email".


My problem with this is that AI at the moment is a kind of 80% thing.

And you want to plug that into ERPs and accounting?!?

We don't even really understand its failure modes.


The thing is it can be used to get rid of the tedious and error-prone busywork. Instead of having highly paid accountants manually type monetary amounts into SAP, the accountants can now just look over and check if the AI was accurate in transcription.


Why AI, though? Why not OCR and regular software? What does AI bring to this table?

Except for VC money.


OCR can't grasp context, it needs incoming bills to be formatted in exactly the same way or at least in somewhat consistent area patterns.

AI doesn't have that problem, so you can use it to detect areas of interest and then follow up on that with classic OCR.


Yea, but humans manually retyping things is also an 80% thing.


The longer HN continues to doubt in AI's deliverables, the bigger our revenues and our moats can grow. Don't tell them.

(It's not that hard to hit $1M ARR with a good AI product. So many classes of new products and solutions have opened up.)


I’m sure I could find a similar comment about cryptocurrency or NFTs. There’s always snake oil salesmen.


I remember thinking this about Tesla too. They built a fairly massive automaker while everyone kept saying their whole approach was fundamentally doomed.


I’ve been thinking the leading edge is already in the trough of disillusionment! At least on HN etc.

We see lots of reports of limitations on HN. Those in the know don't trust it nearly as much as the public and all the CEOs of businesses drooling to install AI and have the money go up. We're going to have to dig our way through those issues to get real value.


Well you have to consider the inherent bias on HN, in that the trough of disillusionment is HN's default state of mind regarding literally every topic. It's really rare to see a thread where most of the comments aren't negative.


Ha, when GPT-4 launched, the default HN post seemed to be saying LLMs were about to exceed human intelligence and the singularity was upon us.

The disillusionment seemed to happen pretty fast.


The right way to leverage AI is to use it to do ye olde engineering, faster. Actually, most AI use cases are in fact that! Ye olde engineering - data pipelines, automation. Stuff we've been doing since before I was born.

Everything else - transformative tech, will hit the open domain fairly quickly, is my bet. Similar to databases.


We are indeed at the "Peak of Inflated Expectations" in the Gartner hype cycle. Everyone is screaming that we are out of the AI winter and throwing LLMs at every problem.

> What the LLM community needs now is for companies to leverage and productize these LLMs for truly game-changing use cases.

The one serious killer use case has always been summarization of existing text. That's it. Everything else is a constant flow of creative bullshitting from a black-box AI, requiring triple-checking everything before using the output, meaning that the output can't be trusted.

> If that doesn't happen soon, the hype will start to fade and lose momentum ("trough of disillusionment").

I think they will realize that there is more to "AI" than LLMs, just like the hype with CNNs and the like.


Not just summarisation, information extraction in general. LLMs are great data normalisers.


No. No they are not. Do not keep proliferating this idea, as there are real consequences. Your input only informs the likelihood of the output. There is no model actually extrapolating rules from the information, so the premise of LLMs being extractive is 100% verifiably false. They summarize well due to being handed a roughly correct arrangement of tokens to mimic the order of. This should not be confused with extraction.

To put it simply, keep falling for that, and it'll bite you in the ass.


You are talking from a top-down point of view. Of course they can make an error any time, after all they stochastically sample the output. All approaches to information extraction are upper bounded at 90-95% accuracy, I have extensive experience with this task and run many evaluations on invoices, receipts, forms and other doctypes. Human in the loop is still required.

But in practice you can rely on good copying and an excellent ability to parse names, addresses and various values as well as or better than other approaches. LLMs reduce the number of corrections, and they are easier to deploy - instead of labeling hundreds of documents you can just query with the field name.


I'm talking about an "at scale" point of view. A 90-95% success rate would be at worst 9:1 odds and at best 19:1. That means your best-case scenario would be a likely failure on every 20th inference. This may be passable on an ad hoc individual basis, but why do this when a deterministic solution can achieve beyond 99%, with proper error handling? Data normalization tools are rigid as a feature - LLMs are not, even at a temp of 0.


Thanks for the info z3c0. Have you implemented open source solutions? Do you mind sharing some deterministic solutions to detect entities without huge dictionaries (NER)? Or maybe extractive QA if you used that instead?


Deterministic? Not anything that is "one size fits all". If your documents are of an unpredictable shape, then ML is your best bet. For both NER and QA, BERT models are very capable.

To your first question, most of my ML work is proprietary, unfortunately. I am hoping to change that in the near future.

Edit: spaCy is a great library for NER, if you're hoping for an open-source solution that can have you hitting the ground quickly. SQuAD for QA.
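
For anyone who wants to try that route, here's a minimal spaCy NER sketch (assumes the en_core_web_sm model has been downloaded; the sample sentence is made up):

    import spacy  # pip install spacy && python -m spacy download en_core_web_sm

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Acme Corp. opened a new office in Toronto on March 3rd for $2 million.")

    # Each entity comes back as a span with a label such as ORG, GPE, DATE, or MONEY.
    for ent in doc.ents:
        print(ent.text, ent.label_)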


I'm a natural skeptic, and I believe we're still on the rising edge of the "AI" hype cycle. Five years ago, it was "blockchain", and everyone was trying to ram blockchain into everything, attracting lots of VC and media attention, etc. It seems that blockchain is beyond the honeymoon phase: I haven't seen an NFT or even a Bitcoin headline in HN for a while.

So I'm trying to wrap my head around what an "AI Engineer" is. As I see it, it's all about calling a function that takes some text as an input, and getting some text as output. That function, of course, is being run on some big hardware that in most cases, you don't own. So is the "engineering" part of this finessing the input and massaging the output? Do most "AI Engineers" actually understand what's going on in that function, beyond what they learned in the "LLM 101" videos and articles that have been flooding the web over the past year?

While an application developer doesn't need to know all of the ins and outs of the underlying operating systems they run on, those developers that do have a deep understanding of the OS are the ones who write more performant code, and can really get to the bottom of issues that arise due to how their code uses the OS. Can the same be said of "AI Engineers"?

All of a sudden, everyone's an AI Engineer. Where were these experts hiding five years ago?

https://trends.google.com/trends/explore?date=today%205-y&ge...


> So is the "engineering" part of this finessing the input and massaging the output?

I don't know if I'll ever use the phrase "AI Engineer" myself, but there's plenty of meaningful engineering work in that space that strays pretty far from just calling some provider's APIs. A few that come to mind just for LLMs:

- Custom fine-tuning of foundational models both in the classic sense and with more modern strategies like PEFT/QLoRA

- Data preprocessing pipelines to help automate fine-tuning, vectorization, etc

- Continuous integration suites to evaluate models on standard benchmarks as they change over time

- Vector db / semantic search engineering to help decorate context windows effectively (a minimal sketch follows this list)

- Architecting ensemble model infrastructure to accommodate more complex task processing
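
To make the vector-search bullet concrete, here's a minimal in-memory sketch using sentence-transformers plus cosine similarity; the model name and document snippets are placeholders, and a real system would swap the numpy search for an actual vector database:

    import numpy as np
    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

    # Knowledge base; in production this would live in a vector DB (pgvector, Pinecone, etc.)
    docs = [
        "Refunds are processed within 5 business days.",
        "Our support line is open 9am-5pm EST.",
        "Enterprise plans include SSO and audit logs.",
    ]
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def retrieve_context(query, k=2):
        # Cosine similarity reduces to a dot product on normalized vectors.
        q = model.encode([query], normalize_embeddings=True)[0]
        top = np.argsort(doc_vecs @ q)[::-1][:k]
        return [docs[i] for i in top]

    # The retrieved snippets get prepended to the prompt, i.e. "decorating" the context window.
    context = "\n".join(retrieve_context("How long do refunds take?"))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"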

I think many of those probably go into what folks are calling the "MLOps" bucket, but I think it's more broadly a combination of research, application engineering, and operations engineering.

Edit: for clarity, my position is that the line in the article between AI Engineer and ML Engineer need not be that bright. Just like software engineers today that write/operate their own devops tooling to deploy and manage the apps they build.


of all the things mentioned in this whole thread, this resonates the most. From my perspective, everyone here describes AI engineering as just plugging an LLM (i.e. OpenAI) into every software application. However, I believe there is a nice burgeoning "AI Engineer" role that involves more of the data processing and specialized AI fine-tuning that engineers have a role in, but of course it requires more specialization than just learning LangChain and calling it a day.


I think comparing AI to blockchain is not fair.

Sure, it was a hyped technology a few years back, but only in tech environments.

My mom has never heard of blockchain, I bet. AI is in the mainstream media all the time.

But ultimately it's a matter of scope. AI has the potential of, at the very least, transforming lots of jobs. Blockchain never had that potential.

The company I work for, a non-tech one, never ever mentioned blockchain. But they're trying to get AI everywhere.


A lot of people are trying to refute you by claiming the latest AI hype has actual value. The new models people are raving about do have value way above what blockchain had, but I'd argue the AI hype is also way higher. Overall I'd say the gap between AI hype and AI value is bigger than it was for blockchain, despite blockchain's value being basically zero if you don't count transactions for illicit goods.


ChatGPT was quite literally optimized to be convincing to humans thanks to RLHF. I think it's going to be a huge negative externality thanks to the amount of AI noise it's going to be pumping onto the internet for the next couple of years. It's going to get harder and harder to connect with real humans and get information that we know is true, rather than just proliferated by an automated system that knows nothing about quality. That negative impact might be even worse than blockchain if we see fewer and fewer people participating in creative skills, or even being able to make a living, thanks to the lies the AI industry has sold to MBAs.


> Where were these experts hiding five years ago?

I'll bet that many of them were trying to ram blockchain into everything!

Personally, I'm curiously watching the emerging prompt pen-testing scene. It brings a tear of cyberpunk joy to my eye to consider that we've built important, powerful systems advanced enough to be vulnerable to social engineering attacks... and pretty dumb ones, too.


What we're seeing now is a frantic attempt by companies to ram "AI" into everything. Some force (Wall Street?) is expecting companies to say "We're using AI." What do you need it for? "We don't know, but damn we've got to use it for something!"

A few years ago, few were talking about AI. Today, if it's not somehow crammed into your product roadmap, you might as well start looking for another job.


On the cyberpunk side, the LLM demoscene is also incredibly fascinating. Folks are beating out the megacorps on model hacks like NKT and hecking crazy model names like TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-SuperHOT-8K-GGML


I'm also a skeptic, but for a slightly different reason. There are currently two types of business use cases that seem to be the focal point of this generation of AI.

1) Tooling. This one I think will probably bear fruit. It will likely result in huge productivity gains (I mean, it already has for me). But I don't know if it will result in a paradigm shift.

2) Agents. This is where most of the hype is focused. The idea is that you can cut humans out of the loop. If doable, this would be revolutionary. But from what I've seen, the current technology is not likely to do this. The reason is that agents that are required to perform a long series of independent actions without human intervention can fall victim to an accumulation of small errors that occur in each step. This kind of "snowball effect" will often result in complete garbage being produced.

As a result of point two, I think a lot of the new AI products are likely to fall flat. And as this happens, the hype/funding will dry up pretty quickly. All the same, I think this will have more positive economic outcome than all the crypto stuff did.


I disagree completely with your last point. Flooding the internet with automated bullshit and people locking down their data because they're worried they'll get scraped is going to be a hugely negative consequence, to say nothing of what this will do to people like artists and writers because of the perception of this technology that the people paying them are going to have.


I actually agree with you. I think the societal impacts of this are going to be huge. I was speaking more to the monetization potential of AI. And I guess I should have specified that this refers to legal monetization, since I'm guessing that for a time, illegal bot operators will do quite well.

In terms of societal impact, I suspect that this will ultimately result in the death of all open platforms on the internet and a withdrawal from online spaces in general.


It really wasn't blockchain 5 years ago.

It was blockchain 6 years ago (though to a much lesser degree, and focused on different iterations of the distributed ledger idea). Then it was really blockchain 2 years ago (but focused on protocols built on smart-contract-enabled blockchains).

Interestingly, those two targets are completely different. 6 years ago the application was the ledger, and the language was usually C, 2 years ago the application was "a financial product built to run on a resource-constrained virtual machine that executes a custom language (or Rust)".

The demand follows the money, so if the whole space experiences another "bull run" there will be lots of demand for "blockchain" again, and I'm sure the actual associated skills expected will be different as well


ChatGPT is useful in a way that block chain never was. There may be inflated expectations, but we are already on “the plateau of productivity.” I don’t think LLMs are overhyped relative to the normal background levels of tech hype.


When your task is too nuanced to be described in a prompt and a few demonstrations, you need to use one of the fine-tuning scripts to bake much more supervised data into the model. That's still doable for AI Engineers: prepare data, call a LoRA fine-tuning script, deploy the model.

The problem is having a good supervised training set. But recently it can be generated with LLMs + plugins and language chains, basically amplified LLMs. In other cases you need to collect human preference data and apply a different kind of fine-tuning. Still mostly dataset curation and iterative model building, something an AI Engineer should be able to do.
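
As a rough sketch of what "call a LoRA fine-tuning script" can look like with the Hugging Face peft library; the base model, hyperparameters, and train.jsonl dataset are placeholders, not recommendations:

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "EleutherAI/pythia-410m"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Freeze the base model and train only small low-rank adapter matrices.
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                                             task_type="CAUSAL_LM"))

    # The curated supervised set: one "text" field per example, as described above.
    data = load_dataset("json", data_files="train.jsonl")["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                               per_device_train_batch_size=4, learning_rate=2e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

    model.save_pretrained("lora-out")  # saves only the adapter weights, not the full model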

Working with LLMs is very different from building neural nets like in 2017. Tons of those old skills are not needed anymore, so many of the old tasks are basically solved at human level. And a whole new set of problems appeared.


I've been encountering machine learning on and off for the last 8 years or so. The idea that people are generating data with LLMs to then turn around and train LLMs without a hint of irony is hilarious and astonishing to me.


> Where were these experts hiding five years ago?

You could use GPT-3 in 2020 but it was expensive and difficult to make it behave. Iterations of GPT-3 starting in 2021-2022 allowed it to obey commands (InstructGPT) and made it more feasible to "engineer" with it.

The true inflection point was due to free, accessible and good-enough-quality AI generation in the form of Midjourney and ChatGPT.


Bitcoin is probably a better comparison - https://trends.google.com/trends/explore?date=today%205-y&ge...


> Do most "AI Engineers" actually understand what's going on in that function, beyond what they learned in the "LLM 101" videos and articles that have been flooding the web over the past year?

Do they need to, to be effective at their jobs?


> Where were these experts hiding five years ago?

Prompt engineering has been around for a few years already, and there are a lot of existing ML engineers who have been able to quickly learn how to adapt their skills.


AI Engineer here.

To start with, I share some of your skepticism about AI and hype (but I love these problems, so I'm happy to take the risk of overhyping to try to solve these challenges). But there is a lot of real work going on in this space. Though most of these are basically the same answers as you'd get in the article.

> it's all about calling a function that takes some text as an input, and getting some text as output.

Real world AI applications involve non-trivial prompts that are often composed of many different components and dynamically changed based on user interaction with the environment. So it's not quite as simple in practice as just calling an API.

> So is the "engineering" part of this finessing the input and massaging the output?

You could make this claim about all software engineering at the end of the day.

If you want to understand whether or not any of the billion companies shipping "AI" products right now are really doing AI, the big term to ask about is "evaluations". It is not trivial to evaluate the performance of LLM output across a broad range of tasks. However if you're not doing this, then you can't possibly know how your efforts are doing. The companies that are slapping "AI" stickers on old products are largely ignoring this issue.
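
As a trivial illustration of what "doing evaluations" means at a minimum, here's a sketch of a regression-style eval loop; the cases, the scoring rule, and the call_model function are hypothetical placeholders, and real suites score far more than keyword containment:

    # Hypothetical harness: call_model() stands in for whatever LLM call your app makes.
    EVAL_SET = [
        {"prompt": "Extract the city from: 'Flight AC123 departs Toronto at 9am.'", "expected": "Toronto"},
        {"prompt": "Extract the city from: 'Meet me in Lyon next week.'", "expected": "Lyon"},
    ]

    def run_evals(call_model):
        passed = 0
        for case in EVAL_SET:
            output = call_model(case["prompt"])
            # Crude containment check; real evals also score format, safety, latency, cost...
            if case["expected"].lower() in output.lower():
                passed += 1
        return passed / len(EVAL_SET)

    # Track this score in CI so prompt or model changes that regress quality get caught.
    # score = run_evals(my_llm_call)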

The next challenge is "how do you improve bad outputs?" Prompt engineering is one solution, but there are potentially many other engineering solutions to recovering from a bad state. None of these are trivial.

A rapidly growing part of this space is working with "agents", that is you have multiple LLMs that are capable of interacting with each other. This area is changing rapidly.

Vector databases are also becoming a very important part of the work, as not all LLM/AI work is just throwing around prompts; often it means working with embeddings.

> All of a sudden, everyone's an AI Engineer. Where were these experts hiding five years ago?

It's not that mysterious. Everyone I know working in this space right now was either a very engineering focused data scientist in their last role, or an ML engineer working near this space. In either case they're people that have been interested in this space before that have all the skills necessary to change roles.

> Can the same be said of "AI Engineers"?

At least in my circle, everyone doing this work right now has a long history of working in machine learning and quantitative problem solving. Of course, that used to be true of ML engineers as well (and I've met far too many MLEs that don't understand gradient descent).


Imagine comparing blockchain and AI.


LLMs and GenAI are already useful at scale but the current hype that they will lead to infinitely generalizable models, myriad groundbreaking applications in all fields and industries, or even AGI could be overblown - let's at least admit that as a possibility.


If AGI reached "human level" capabilities and took off from there how in the world could that possibly be overblown? That would change everything, comparing it to discovering fire would probably be apt, and maybe the first time in recorded history that making that statement isn't overly dramatic.


I would submit that we don't yet have enough evidence to say whether the comparison is apt or inapt. I personally think what's going on with these "generative" models seems like a bigger deal than blockchain, but it's fiendishly difficult to know what is or isn't hype while embedded within a hype cycle.


Seriously? The only comparison you can make is that both were hyped. Digging even a millimeter under the surface reveals they’re completely different.

Blockchain was and still is rife with scams and “you just don’t understand the technology bro” hype men. Just check out Dirty Bubble Media. Blockchain was rarely if ever a product that solved a problem, the whole point is The Line Goes Up. That’s why no one uses blockchain in industry, and crypto bros were constantly finding themselves proposing silly use cases like ticket sales and property deeds. These are people who have apparently never heard of a relational database.

The hype around AI is due to increased attention on things that have already existed and have already been studied, used, and improved for decades now. There was never much R&D into blockchain tech because the tech isn’t the point. For ML, there are researchers who have worked on these problems for decades. It doesn’t need to justify its own existence, the justification is that it can solve real problems.


Again, I do tend to agree that there is a lot more "there" there with generative AI. But I think it's also true that it's too early to be sure.

You're comparing the two technologies at totally different points in their hype cycles. The comparison point to where AI is right now is to Bitcoin / very early Ethereum in the late 2000s to early 2010s. Nobody knew where it was all going, some people saw endless potential, other people saw nonsense and scams. The explosion of bitcoin into mainstream consciousness in the early 2010s is akin to the explosion of ChatGPT over the past 6 to 9 months.

But what's next? That's what matters. The early 2010s bitcoin boom now pretty clearly looks like a fad, in hindsight. Was ChatGPT also mostly a fad, or is it going to be a lasting fixture of productivity and/or entertainment moving forward? I think it's the latter - it has already changed my habits at work in ways that I think will be permanent - but I just think it's too early to say for sure.

(And to be clear, I'm not talking about machine learning as an academic discipline; I totally agree with you that there is definitely enough evidence to say there is a lot more "there" there than research into chained hashing to solve double-spend-like problems.)


Well, if you narrowly define AI to be ChatGPT and other generative LLMs, I think I agree to some extent. Unlike blockchain they do have use cases, but it remains to be seen if those use cases can justify the money being thrown at them. How much is code completion really worth?

However, I disagree insofar as the outcome truly depends on an unknown technology. Blockchain was never going to revolutionize finance or any of its other grand claims. At best (and that’s if it worked), it would be a new database type that all of the existing financial systems would plug into. It was a libertarian pipe dream, naive about how the world actually works.

For any AI application, the world is different. If we simply replace AI with “automated system” we can see why. Pretty much every company would like to replace their workers with machines. And maybe machines can do things that humans would never be able to do (for example, search the entire internet for a very specific topic).


Yes that's what I'm talking about because that's what the article is talking about! The article is explicitly not about the ML / AI academic research. I agree that's well established.

What the article is about is the current hype cycle of people trying to take the newest generation of "AI" tools, of which GPT-4 is the leading edge and most widely known, and make useful products with them. And whether that is going to be a big deal or a fad is, as yet, unproven.

It is super easy to say, in 2023, that "blockchain was never going to revolutionize finance". But in 2013, that was an unknown. For what it's worth, you could go back to my commenting history in that period of time to find me saying "bitcoin is never going to revolutionize finance"; I was a skeptic then. But that doesn't mean I was definitely going to be right, I was just educated-guessing, just like the people on the other side of the conversation. That guess looks to have been prescient with the benefit of hindsight, but I've been wrong about lots of stuff too - I thought the iPad was stupid, I hated "Web 2.0", I thought the Facebook IPO was doomed, the list goes on and on.

My best guess is that building products on top of "generative AI" is going to prove to be a big deal, but I don't know that, and it's hard not to be influenced by an ongoing hype cycle, is all I'm saying.

> For any AI application, the world is different. If we simply replace AI with “automated system” we can see why. Pretty much every company would like to replace their workers with machines. And maybe machines can do things that humans would never be able to do (for example, search the entire internet for a very specific topic).

Sure, but again, we just don't know yet if the "AI Engineering" thing this article is talking about is going to, in any way, turn into any of that, or if it's going to be more of a bust.


Unfortunately, web3 bros pivoted to AI, which just adds even more noise to the space.


For me that's absolutely the strongest ground for comparison.

In recent decades, we've had two big waves of tech advance: the Web and mobile. A lot of people have lived through them both, giving them an expectation that another such wave should be along soon. You could see that in the decade of blockchain/ICO/DAO/NFT/web3 hype, where people, many with shaky credentials, touted the transformation soon to come, taking in a lot of cash.

In retrospect, from Mt Gox to FTX we can see that it was all horseshit. The main real advance was decentralizing not finance or property or computing, but the Ponzi scheme.

Despite this failure, and echoing The Great Disappointment [1], we see a lot of the same hype and even the same people. Is it possible that this is different, that there's more substance here? Or is this going to be another Groupon or Metaverse? That does not matter to the hypesters. They're going to run the same routine that worked before. They're going to take in a lot of money, which is the primary goal. It's possible that some of them will, by luck or accident, latch on to something that isn't a total fraud. Surely most of them won't.

But we should never forget that this is basically irrelevant to a lot of people in the early stages of a cycle. And not just for the fraudsters, but for anybody who makes their living on the upswing of the hype cycle, including a notable fraction of investors, "experts", and journalists. However it turns out, they'll get paid just fine.

[1] https://en.wikipedia.org/wiki/Great_Disappointment


Fraudsters will do what fraudsters do. If it wasn’t crypto it would be scamming retirees or hoarding PS5s. They’re a largely insignificant part of the economy. At a macro scale private capital has a much larger impact.

The issue is there's a lot of dumb VC money floating around looking for a quick billion instead of investing in long-term fundamental research (boring!) that may produce results later on. It's a fundamental issue with the economy, because what Capital wants is not to do what's best for humanity, or even to build a sustainable widget factory. Capital wants a money printer. It's a big inefficiency in the economy, because ideally they'd prefer sustainable growth instead of 1000x unicorns.


I don't think there's as clear a line here as you think. It's not like crypto was just some small-time grifters who would otherwise be running crooked pop-the-balloon games at carnivals. Notable portions of the "real" economy got in on the game. And it's hardly just crypto where people have been hoovering up credulous money claiming they had the next Google all ready to go.


Web3 / Metaverse was a solution in search of a problem.

AI is solving problems today.


You're responding as if I said they were the same, but I was pretty careful to say the opposite. My point is explicitly setting the utility of "AI" aside.


It'll be fun to look back on these comments in a few years. It will be like looking back on the internet skeptics of the 90s. Most people have forgotten about those.

Of course, there was a big boom and bust cycle back then too, but just like then, this cycle is nowhere near its peak.


That's what they said about the coins and the tokens too.


I still feel a bit strange about calling someone an “AI engineer” and I think there are a few reasons.

1. AI is poorly defined. This is a fundamental problem behind almost every conversation on the “topic”. Depending on the context, AI can mean anything from a decision tree to deep neural networks to science fiction.

2. Engineering implies a deeper level of understanding. If you want to engineer a system, you need a deeper understanding of how each of the components work. Using a tool does not make one a tool engineer. It makes them a 21st century blacksmith. Does calling an LLM API make me an AI engineer? If so, calling a weather API makes me a weatherman.

3. This is a relatively new area, and make no mistake, it is a new area. 2 years ago most of the tooling around ML work meant you had to get your hands dirty. To use BERT, you pretty much needed to learn about tokenization and attention masks and CUDA. Not so much with GPT-3. So it seems premature to even circumscribe the role at the moment.

What I am not saying: I don’t think one needs to take a class in linear algebra to work with this stuff. I also never believed in calculus for CS students, which may be a minority opinion too.


Tl;dr: a bunch of know-nothing plumbers swerve their digital cars from last year's buzzword (blockchain) towards AI, narrowly missing a collision with a crowd of VCs. Upon arriving at the AI convention they switch hats to AI, where they list themselves as experts (they can use the ChatGPT API). How any of this works or what problems may arise is a job for the cleanup crew (paid less, they aren't calling themselves experts).


This is unnecessarily combative though it’s also probable I’m reading too much into this since I’m one of those swervers - from cloud computing guru to AI aficionado.

Personally, my mental bent is towards tinkering - I like wiring things up to make them work. I used to do that with wires and screwdrivers as a kid, now I read api docs and do the same thing.

The point being, it’s likely that I’ll continue connecting things to each other just to see how they work and offer that as a service until the day I die.

You need people who invent the algos, people to wire up algos to each other, people to sell this service to people, people to want to pay for that service, and so on. None is better than the other. They’re all necessary for people to put food on the table and live the life they want. Sneering at them is pointless.


I'm not sure I agree with their reasoning. There are a lot of generalist backend/infra engineers who are working in the AI space. What is the special skillset that distinguishes them from all others? If you say that they are "AI engineers" because they are working on AI as a product, then should we also have "advertising engineers", "billing engineers", "API engineers"?

The reality is that tons of people just build a trivial app calling an OpenAI API and put "AI engineer" on their resume to capitalize on the hype.


Tech companies ARE doing that.

I have met "Customer Success Engineers" and other managerial or marketing related roles with the moniker Engineer. People want to call themselves engineers.


Oh God, this is going to be like DevOps all over again isn't it? Where there's people constantly gluing together things from different companies.

I foresee layers and layers of abstraction dependent not on technology spec but on some company's poorly maintained docs and apis that you get by calling some service rep.


I have found data engineering to feel that way as well. Just throwing darts at a board of vendor solutions, trying to figure out how to cobble them together into something useful. I am worried this is going to be that as well.

I'd like to learn more about how to self-host systems that are large enough to be useful, rather than doing this cobbling together proprietary APIs thing.


AI/ML Engineers will be our modern day variant of the Mystic. They'll whisper sweet nothings into the ears of their models, looking at the goat entrails of what their models output and tell us of their predictions.

It's going to be a shit show.


This stuff is going to be so vague that an AI will be able to do it.


Hilarious!


Hey Shawn! Always enjoy your writing.

I think you've done well laying out what a lot of people want "AI Engineer" to mean at this moment in time. My concern (coming fresh out of the absolute semantic nightmare that was the "serverless" community) is that the term AI is so hopelessly overloaded, and has been such a moving target over the years, that it's unlikely that a plurality of people will ever share your mental model of what an "AI Engineer" is/ does / knows / is paid.

Personally I've been using Generative Engineering / GenEng [0] to describe the professional practice of building stuff with AI as your pair programmer. I recognize some are pulling away from the term "generative", but to me it feels like a better anchor into the specific flavor of AI we're talking about.

[0] https://cloud.google.com/blog/products/ai-machine-learning/t...


Hmm, I think "the professional practice of building stuff with AI as your pair programmer" is just "software engineering". That is, it's just one more tool to use to do the existing work. We never had "Search Engineering" or "StackOverflow Engineering" to describe the practice of building stuff using web search and stack overflow as tools...


Imagine calling yourself a compiler engineer because you use a compiler.


Maybe Text Processing Engineer or Text Engineer for short could express well that this person handles both text analysis and synthesis. It is analogous to the term Data Engineer, which describes someone that handles data processing and pipelines in general.


Generative isn't perfect for similar reasons, perhaps; I consider procedural generation adjacent to machine-learning-based generation, so it would need to encompass both to make sense to me.


hey Forrest! thanks so much!

agree with the semantic overload risk, but i think at this point Worse is Better is applying here. like I said in the piece I'm not starting the trend, just calling out that it's already under way.


> "the fundamental gatekeeping that still persists in the market"

This person just called me a gatekeeper when our definitions of what an AI Engineer is are simply different... all while discussing the ambiguities of the term "AI Engineer".

Well, that's the internet for you!

My intention was never to gatekeep; our definitions of the term are just wildly different. I laid out the beginning of a roadmap for a person who wants to solve problems that exist in the real world with Deep Learning - by choosing how to structure data, training models with existing architectures (and optimizers, loss functions, activation functions, and so on), creating new ones from scratch, and deploying them to solve the problem in the physical space.

Although I have not spent hours on what the term "AI Engineer" should mean, it is not prompt engineering to me. If there is a consensus in the future on the term meaning prompt engineering, then I will not give any advice on that, because I don't know much. Neither will I fight the definition.

My definition is still the person who develops new kinds of microwaves, as opposed to the author's definition, which is that of a cook who uses the microwave to create new recipes and dishes. That's fine, but I didn't like being called a gatekeeper.


apologies for that - it came from a friend who was reviewing that HN thread and used that word - but it was my choice to repeat that word in my writeup. I didn't take your feelings into account when using what I knew to be a loaded word, and in retrospect it did nothing to further the strength of my argument at all.

I'm sorry. I've removed it.


Thank you so much.

As I mentioned, my definition was just different. And I didn't even mean to gatekeep. My spirit was: "if I can do it, many more people definitely can, too". So I went on and shared an outline of my journey. That's all.

No hard feelings.


When you see the taxonomy of AI and ML you will learn that saying "AI engineer" is vague and essentially useless.

Do you use depth first search at work? Congratulations, you are an AI engineer.

Do you use any form of search or information retrieval at work? Congratulations again.

Do you have a system that makes decisions based on a decision tree? Again, congratulations.

In fact, stop using "AI" at all. Your washing machine is an AI agent, it uses fuzzy logic.

Are you using LLMs? then just say you work with LLMs.

Search engineers don't call themselves AI engineers.


If you are someone who was working in LLMs/Generative AI before it got "cool", The market has fundamentally changed and changed extremely in your favor.

Talent is extremely scarce and expensive right now in this space. Ask for well above FAANG compensation. Try to shop around for offers. I had 4 offers and a bidding war for my talent (with no leetcode - but I have an extensive github and publications about LLMs), when previously these same companies would have brutally leetcoded me.

I thought that there would be a ton of people getting really good really fast with Generative AI. It turns out that most people are terrible at using the tools, and there's even research about this right now - https://dl.acm.org/doi/10.1145/3544548.3581388


I'm someone who would like to get "really good really fast", but have found that the on-ramps remain pretty weak. For instance, I have yet to find a good book on the subject. There are a ton of tutorials and articles, but it's maddening to try to cobble together any depth of understanding from those little nuggets. And there are tons of good papers on how the systems actually work, but these are not very useful for people who are new to this set of tools to figure out how to use them to create useful things.

But I think this hole will close up incredibly quickly over the next six months. (And it's yet another really good opportunity for people like you to be involved in that gold rush!)

Edit to add: For instance, the article lightly lambasts people's recommendations to read up on AI / ML fundamentals, and contains this:

> In the near future, nobody will recommend starting in AI Engineering by reading Attention is All You Need, just like you do not start driving by reading the schematics for the Ford Model T.

But, amazingly, it doesn't actually suggest an alternative starting point. There is this call to action about the conference, but presumably that will mostly be people who have already figured this out to some degree. But (to carry on the author's analogy) what is the recommendation for driver's ed?


we haven't launched it widely yet but if you peek at the top level nav you'll find the course we are working on :) https://www.latent.space/s/university


How about that! :)

One interesting thing is that there do seem to be courses available for this, but I still haven't come across any books. Maybe this is just because I'm a dinosaur, but I really feel like what I'm missing is a book about this, with a good Introduction and Chapter 1 motivating the subject and giving a lay of the land. I'm sure every techie publisher will have one of these by the end of the year, but so far I really haven't seen what I think I'm looking for in this space.

(But having said that, you can bet I'll check out your course.)


thank you! yeah i guess it's easier to iterate on a course than a book, but ofc we are also effectively writing and market-testing the book contents. having written my own book before i'm not particularly keen on doing that again but am trying to work with partners to do this :)


Ha, totally get it.


It turns out that most people are terrible at using the tools, and there's even research about this right now

Thanks – this was a surprisingly fun paper to read. The authors are all CS researchers (and good ones – Björn Hartmann does a lot of great work in this area in particular) but I'd love to see some behavioral scientists tackle it at some point too.

I haven't got a citation but the concept initially reminded me of some studies from ~20 years ago where academics were wondering if the public would ever be able to use search engines effectively and how people's mental models could adapt to do so. All these years later, I'm not entirely sure if the public ever really did adapt, or if it was just the search engines that did..


In my experience people who make dismissive comments about AI (in its current form) and AI Engineering as a discipline tend to have very little experience with and superficial understanding of AI. Once you start using it seriously as part of your engineering stack it quickly becomes clear that there’s a lot of detail and complexity that more than justify specialisation.


Heavily disagree, and this just seems like posturing from the other side of things. I understand AI rather deeply, and have been a part of the scene since far before the explosion of even the first GPT model.

Frankly, I've been quite irked by all the people who call themselves "ML engineers" when most of what they do is glue Hugging Face models together. They're doing none of the work required to make models, which is quite extensive. They understand very little about the models they wield.

This new wave I find even more irksome. "Prompt engineers" and "AI engineers" aren't much more than pseudoscientists who finagle API inputs until they get what they want. This is in stark contrast to all the maths I do to understand a model's performance.


Strong agreement. What we have right now is a lot of people doing light glue code and pretending it is engineering.


They're building slapdash, code by night bullshit apps that barely fucking work and calling it engineering. When I see those "AI landscape" images with logos from 200 different companies, I know we're in a bubble. A lot of these guys are going to take VC money then crash and burn when someone else builds a marginally better version of their GPT api wrapper. The real winners will be companies like Nvidia cashing in on this hype cycle.


I think you've just proven my point.


On Reddit, this might seem like a burn, but here you need to explain that claim, not just hope somebody rides with it. How does tinkering with an API demonstrate an actual understanding of AI? Please actually explain.


"AI engineer" is a hype term for someone who can incorporate an LLM to solve an engineering problem. It isn't that hard to use the OpenAI APIs; LLMs are like super-abstractions. You tell them what you want in natural language, as opposed to a specific programming language. It's still a very narrow field. The hype is real, but it's getting a bit frothy.


The rise of the monkey king of infinite monkeys

I'm using LLMs for many things, but let's not pretend it's engineering


For this conference, I think it would also be great to have a presentation about job requirements at various levels, then continue to cite that presentation often.

It's important to have an on-ramp for people interested in this space with well-known requirements to aim for.

This also makes it easy for companies to know what they're getting with this title. The ambiguity of the "full stack engineer" title meant it became a dumping ground for responsibilities.


(author here) one of my research ideas for this blogpost was to actually ask a few employer friends who want to hire AI Engineers for a job description, and then put out a generic JD that people can use and clone (kind of a SAFE for AI Engineers, if you think about it). however I feel like it is still too early to prescribe something. I'll maybe put up a strawman for the conference and would love feedback then.


It feels far too early to me for there to be any clarity on job requirements.

If I was hiring for an "AI engineer", I'd want to see examples of things they had built.


With the rise of LLMs there is definitely a new skillset needed in an engineer's bag - knowing how to align/instruct LLMs towards an objective. This new role is essentially a mix of standard programming logic plus the ability to instruct LLMs using language (natural language programming). That's been the primary difference I have seen in this emergent field of designing agents.

Most agents need some form of formal instructions tied to carefully designed prompts, which are a mix of formal programming and natural language. This needs a slightly different skill set from the traditional backend or services engineer. It is not an ML or data role either.

Rather than just a prompt engineer (for which a case could be made, like a specialist designer), this role may be more akin to a full-stack developer who knows how to align LLMs or diffusion models towards a multi-step goal.
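A rough sketch of that mix, assuming the OpenAI Python client (circa 0.27) and a made-up action schema - the shape matters more than the specifics:

    import json
    import openai  # assumes the openai 0.27-style client

    # Formal part: a constrained contract the model must follow.
    SYSTEM = (
        "You are an inventory assistant. Respond ONLY with JSON of the form "
        '{"action": "search" | "answer", "argument": "<string>"}'
    )

    def step(user_goal):
        # Natural-language part: the objective, stated in plain English.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_goal},
            ],
        )
        # Formal part again: parse and validate before acting on the reply.
        # (In practice you also need retry logic for malformed output.)
        return json.loads(resp.choices[0].message.content)

Most of the actual work ends up in the validation and retry logic around that parse, not in the API call itself.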


I think it's all just becoming more accessible for people like me to integrate AI into applications. With that said, the barrier to entry has also lowered substantially in the last few years, to the point where implementing your own neural network from scratch with PyTorch is something you can do in a few hours (took me a weekend) coming in blind, just by reading the docs and scrolling through some existing repositories.

I remember a decade ago taking a stab at it and being very much in over my head. Not sure if I'm getting older and there's less magic going on around it, or if it's the ecosystem/docs/examples/pytorch/jax/hours of youtube content/etc, but somewhere in there, at least for me personally, it's gotten much more accessible.
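For anyone curious, the "weekend from scratch" version really is about this much code - a minimal sketch assuming a recent PyTorch, with arbitrary sizes and random data just to show the training loop:

    import torch
    import torch.nn as nn

    # A tiny multilayer perceptron: two linear layers with a ReLU in between.
    class MLP(nn.Module):
        def __init__(self, n_in=10, n_hidden=32, n_out=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in, n_hidden),
                nn.ReLU(),
                nn.Linear(n_hidden, n_out),
            )

        def forward(self, x):
            return self.net(x)

    model = MLP()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # One training step on random data, just to show the loop.
    x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()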


Personally I think it's a combination of

a) much better resources being available than in the past, which explain concepts clearly and in relatively simple terms, and

b) much better tooling existing now than ever before, so many things that you used to need to do "by hand" are now taken care of by relatively standardized tooling. Even automatic differentiation engines alone are a huge deal: not having to backprop "by hand" (the first release of TensorFlow was 7 years ago, so if you looked at this a decade ago I'm assuming you had to implement backprop yourself). Beyond that, another jump was from TensorFlow's delayed execution/compilation model (it really was a headache to work around its APIs and set up the computation graph), plus its generally ugly API, to PyTorch's "literal" setup, where it feels like you're just writing regular Python code and performing operations "normally".
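A concrete illustration of that last jump: in PyTorch the gradient just falls out of ordinary-looking code (a minimal sketch):

    import torch

    # Eager execution: this runs line by line, like normal Python.
    w = torch.tensor(3.0, requires_grad=True)
    x = torch.tensor(2.0)
    loss = (w * x - 1.0) ** 2      # loss = (w*x - 1)^2

    # No hand-derived backprop: autograd computes d(loss)/dw for you.
    loss.backward()
    print(w.grad)                  # 2 * (w*x - 1) * x = 20.0

In the old graph-based TensorFlow you would first build the computation graph and then run it inside a session; here there is no separate "compile" step at all.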


I would never be able to be hired as an "AI Engineer" or "Prompt Engineer" despite my extensive AI portfolio because my job title is Data Scientist and the discrepancy would confuse most hiring managers.

Job titles are moving faster than career ladders.


There are a couple of types of roles that sound really interesting to me. One would be taking some proprietary data and training an LLM on it in a format it could use. One company I know has a database of cars. They want to train their LLM with inventory facts like "We have a Ford Mustang on the lot whose VIN is ABC123 and it has the following features...". And then another role would be writing the prompts for API calls to the LLM: "Write a report on all cars currently on the lot which are available for sale and have the following features...."


This doesn’t seem like a thing any business should need AI for.

Filtering a known list for a report takes a few seconds at most, and is much more reliable than LLMs.


> They want to train their LLM with some inventory facts like...

So are they actually intending to retrain their LLM every time the inventory changes? Because, otherwise, how is it going to "know" the current state of the inventory? This is useless after a single sale or a single new delivery without retraining. (And it's likely useless before that anyways.)

And if they already have a database of inventory data with all this then they could just generate a report the "old fashioned" way that's worked for decades.


I would expect the solution is to take the NL question and get GPT to transform it into a SQL (or similar) statement to extract the data. Then another call (or set of calls) to generate "reports" summarizing the data returned by the DB query.
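Roughly this shape, sketched with the OpenAI Python client (circa 0.27); the schema, prompts, and run_query helper are all made up for illustration:

    import openai

    SCHEMA = "TVehicles(TV_ID, VIN, Make, Model), TVehFeatures(TV_ID, FName)"

    def ask(prompt):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Step 1: turn the natural-language question into SQL.
    question = "Which cars on the lot have a rear camera?"
    sql = ask(f"Given the schema {SCHEMA}, write a SQL query for: {question}")

    # Step 2: run the generated SQL against the real database yourself,
    # then have the model summarize the rows as a report.
    rows = run_query(sql)  # hypothetical stand-in for your own DB call
    report = ask(f"Summarize these rows as a short report: {rows}")

The inventory stays in the database, so there's no retraining when it changes - the model only ever sees the question, the schema, and the query results.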


At which point, maybe a "GUI-interface" will be cheaper to build and maintain in the long run.

Now if AI could automagically update inventory data with what's actually physically happening on the lot, that would be cool.


That's a very generous take. But that application would be far more useful than just car inventories (the limited application described) and not trained in the manner described (on inventory data). It would be trained on transforming natural language to SQL (or other) query languages, and the application of that is exactly what we're seeing with code generation applications of LLMs (to the extent they're presently useful).


Existing LLMs are already pretty good at this, no? The tricky part is mapping however the NL question refers to the various types of data to the actual column names, which is where I'd imagine some prompt engineering (or pretraining) would be necessary.


BTW I tried it with ChatGPT 3.5 - with a prompt that roughly described the database schema and a question "I need to know the manufacturer for the vehicle with VIN X7820-A and to confirm whether it has the feature 'rear camera' installed", it came back with

    SELECT TVehicles.Make,
           CASE
             WHEN TVehFeatures.FName = 'rear camera' THEN 'Installed'
             ELSE 'Not Installed'
           END AS RearCameraStatus
    FROM TVehicles
    JOIN TVehFeatures ON TVehicles.TV_ID = TVehFeatures.TV_ID
    WHERE TVehicles.VIN = 'X7820-A';

One interesting thing to note - I didn't tell it that "Make" and "Manufacturer" are the same thing.

I even went the next level and asked it to write me code to execute the query and generate appropriate HTML output from the results. It didn't quite manage to handle every possible SQL query (remembering that the query itself has been dynamically generated), but it wasn't far off. My description of how the output should look was simply "sleek and modern", and it came up with CSS that could reasonably be said to fill that brief.


What I have done for a similar use case is to have ChatGPT, via the OpenAI API, generate SQL (or KQL) based on the user request, then run that query and display the results (with some prose if appropriate). This works fairly well even with GPT-3.5-turbo; GPT-4 can handle more complex requests (though it's slower). It could even create a custom Chart.js chart on the fly if requested.

To me this demonstrates that there is a specific job here, even if you don't want to call it "engineering". Which I would argue is the correct category of job at least.

The above project was presented as "let's put a table of data in a vector database and then search it using the embedding of the user query". Here you were suggesting fine-tuning an LLM with the structured data. Again, it makes more sense to just generate the SQL and leave it in the relational database.
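For contrast, the vector-search version that was originally proposed looks roughly like this (a sketch assuming the OpenAI embeddings endpoint; fine for fuzzy text search, a poor fit for exact inventory questions):

    import numpy as np
    import openai

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    # Each inventory row becomes a chunk of text with an embedding vector.
    rows = ["Ford Mustang, VIN ABC123, rear camera", "Honda Civic, VIN XYZ789"]
    row_vecs = embed(rows)

    # Search by cosine similarity (ada-002 vectors are roughly unit length).
    query_vec = embed(["which cars have a rear camera?"])[0]
    scores = row_vecs @ query_vec
    print(rows[int(np.argmax(scores))])

It works, but you lose exact filtering, counting, and freshness - which is why generating SQL against the existing database is usually the better fit for structured data.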

So there are a few basic things about how this stuff works that are not obvious and require some specialization. Even for programmers.

Right now I think it's fair enough to put it in its own job category since there are plenty of software engineers that just don't have any experience with generative AI. But within a few years, I think knowing how to integrate generative AI into a product will be considered core knowledge for a software engineer. So using LLMs or Stable Diffusion will become bullet points on a job requirements list.


My thought on what an AI engineer is: an individual who uses AI and machine learning techniques to develop applications and systems that can help organizations increase efficiency, reduce costs, increase profits, and make better business decisions.

AI engineers play a crucial role in helping enterprises leverage the capabilities of large language models (LLMs) like GPT-3 and beyond. This means that they will:

“Develop Domain-Specific Models”: AI engineers can fine-tune LLMs to create domain-specific models, ensuring that the data and the business process align to provide more context. For example, a model fine-tuned on medical literature can assist doctors in diagnosing diseases or answering patient queries, etc.

“Data Preparation and Management”: By this, I presume they will be involved in cleaning the data, dealing with missing or inconsistent data, and ensuring the data is representative of the task the model will be performing.

“Integration with Existing Systems”: They will help integrate these AI models into the existing IT infrastructure of an enterprise. This can involve developing APIs, designing user interfaces, and ensuring the model's outputs can be used by other systems or processes.

Etc. I believe they enable organizations to transform data into knowledge and ultimately into wisdom - the highest level of data maturity. This is particularly true when dealing with domain-specific models, which can provide highly targeted and context-specific insights. When you can reach a level of data maturity that enables acting on data, that is where AI will drive the change that is just starting. It's very exciting to watch it unfold, not because of the possibility of creating a sentient machine, but because of the possibility that it will drive new fields of discovery that have been under our noses, and these tools will help us make sense of it all, IMHO.
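To make the "domain-specific models" and "data preparation" points concrete, most of the work on that path is getting domain data into clean training pairs - a minimal sketch, with made-up records, in roughly the prompt/completion JSONL format that OpenAI's original fine-tuning endpoint used:

    import json

    # Hypothetical cleaned domain records -> prompt/completion pairs.
    records = [
        {"question": "What are common symptoms of iron deficiency?",
         "answer": "Fatigue, pale skin, shortness of breath, and headaches."},
    ]

    with open("train.jsonl", "w") as f:
        for r in records:
            f.write(json.dumps({
                # Separator and stop-sequence conventions per the older guide.
                "prompt": r["question"] + "\n\n###\n\n",
                "completion": " " + r["answer"] + " END",
            }) + "\n")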


I mean, I consider myself an effective "AI Integration Engineer", but I _have_ done Andrew Ng's ML Coursera course and also built an MLP in C++ from scratch. But none of that really matters when it comes to applying something like GPT or Stable Diffusion to a particular application. You just send appropriate text to a model via an API call.

I think "AI Integration Engineer" is a bit more of an accurate title because usually it's about integrating AI into existing products or domains. And "AI Engineer" by itself sounds a little bit like you might be claiming to be a PhD. But just a bit. I think it's fair enough to shorten it to AI Engineer though so we don't all have to type out "integration" over and over.


Recently saw a demo app built on top of GPT-3. It used a REST API for prompts. The backend was hooked up to a corpus of PDFs and a SQL database with financial data.

What changed was that the queries were in English/natural language. The query language has changed - that's the R in CRUD. I wonder if the C, U, and D will also change.

While this is one of many types of AI, it means commands will be in natural English. This might be a big deal for UI builders, because how we ask questions has changed to natural language.


I'm curious about this sort of use case - how long does it take for a GPT-based system to process a bunch of documents it's never seen before so that you can perform NL searches on them? Assuming something like a million total pages of text, are we talking minutes? Hours? Days?

And is it at all feasible to ensure whatever factual information is returned is only sourced from said documents, vs being "hallucinated" by virtue of whatever weights exist based on the core training corpus?


hi HN! am writing this up as a recap of the enhanced role of code in LLM applications, and the emerging professionalization that will happen as a result. would welcome any and all feedback!

I'm also soft launching the conference I am planning for Oct. Join us if you are in SF (will be streamed) https://ai.engineer/


Has "robopsychologist" shown up in job ads yet?


AI engineer is the new web developer.


I predict Cognitive Engineer / Cognitive Architect will become a thing.


And it will have nothing to do with actual cognitive science, just like neural networks, artificial intelligence, and machine learning.


It's not the AI Engineer it's the AI Operator that everyone is looking for.


"Operator" feels right to me as well here.


Nice. One step closer to having Smooth Operator be a real title



