
Not really. It's just a higher level programming language. You still need to know how to decompose a problem and structure the solution. On some level, also systems thinking as this stuff will get integrated together.



It's not higher level programming, because it's imprecise. We don't know how input will impact output, or to what extent, or why. If prompt engineering is programming, so is poetry. Which, I suppose you could make an argument for (calling forth subjective imagery in the reader's mind), but at that point you're sufficiently stretching definitions to the point they mean nothing.


I have been using ChatGPT to test its ability to create JSON on the fly, just with a natural language description of the exact data format, and a natural language description of what I want to go in that object. It even escapes the string properly if it contains special characters, from what I noticed.

Other than complicated requests like “if object is of type A, include fields ABC but not D. If object is of type B, include only D but not other fields”, it gets this right 99% of the time.

It also works for CSV, but it’s trickier. It seems like it “knows” how JSON works to a much better extent.

And as for parsing JSON? I’ve not truly pushed it to its limits yet, but so far it’s had no issues understanding any of it.

It’s mind-boggling. Yes, it’s inefficient, but it can basically parse, generate and process valid JSON with just a brief set of instructions for what you want to do. For exploring ad-hoc data structures or quick mocking of API backends, this is great.
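For the record, the cheap way to sanity-check that kind of output is to just run it through a JSON parser. A minimal sketch, with the model's reply hard-coded here as a hypothetical string (in practice it would come back from the API):

```python
import json

# Hypothetical reply from a "describe the data, get JSON back" prompt;
# note the model escaped the embedded quotes on its own.
model_reply = '{"type": "A", "name": "say \\"hi\\"", "fields": ["a", "b", "c"]}'

# json.loads is the sanity check: if the model mis-escaped anything,
# this raises ValueError instead of silently passing junk downstream.
obj = json.loads(model_reply)
assert obj["name"] == 'say "hi"'
assert obj["fields"] == ["a", "b", "c"]
```

If the parse fails, you can even feed the error message back to the model and ask it to fix its own output.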


Try evaling completed Python dicts! Works like a champ…
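Though rather than bare eval(), ast.literal_eval is the safer way to do this, since it parses literals without executing arbitrary code. A sketch with a made-up completion:

```python
import ast

# A dict literal as a model might complete it; literal_eval parses
# Python literals (dicts, lists, strings, numbers) but refuses to
# execute function calls or other arbitrary code, unlike eval().
completed = "{'user': 'ada', 'age': 36, 'tags': ['math', 'engines']}"
data = ast.literal_eval(completed)
assert data["age"] == 36
```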


> It's not higher level programming, because it's imprecise.

This argument is weak. Undefined behavior does exist, and "high level programming language" is a moving target.


Not only that but throwing away precision is an intrinsic part of becoming higher-level. This is just a different way of doing it.


Is “imprecision” that the previous commenter is reacting to maybe more specifically the strong potential for nondeterministic behavior exhibited by LLMs? That would seem to stretch the practical experience of programming vs. “lower-level” tools like Python or C. (Also, a world where I am calling Python “lower-level” is wild).


Prompt generation feels like it's going to reach a point where it becomes more similar to legalese — which in itself feels more similar to a programming language than natural speech.


why can't the model do those things? As someone said, it's like google-fu. It used to be the case that you had to give Google some context to disambiguate certain queries, but now it is really good, so I don't have to write anything crazy like site:wikipedia.com etc.


My experience with Google is the exact opposite. It is so poor at interpreting "what I really want" that for it to be really useful, I need to lean harder on google-fu than in the beforetimes. But Google has removed, weakened, or broken many of the operators that were used to do this, so google-fu isn't as possible as it used to be.


Because most people are so unbelievably boring, so statistically predictable and common, that taking the edge off queries leads to higher click-through rates overall. It's the same pages all the time, and most people don't even bother to scroll to page 2. Of course, people who know what they want, and are best served with diversity and breadth - like you ;) - lose out.


I am absolutely not affiliated but you should seriously consider giving Kagi a try.

I was increasingly frustrated with all the NLP-ness and operator deprecation in Google, which has been accelerating since at least the 2010s.

But with Kagi it really makes me feel like I am back at the wheel. I think for product search it still has some way to go, but for technical queries it is on a whole other level of SNR and actually respects my query keywords.


Huh, my recollection is the exact opposite. I remember the good old days when I could use inurl: and link: and explore a website's contents fully, drilling down further if necessary, compared to now, where Google always seems to think it knows better than you what you are looking for. If you are not happy with the initial results it gave you, you are pretty much out of options; good luck trying to drill down to some specific thing.


Large language models will probably never be reliable computers. The solution is either providing translation examples (aka, in-context learning, few-shot) or fine tuning a model to return some symbolic method to tell an associated computing environment to evaluate a given bit of math or code.
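The symbolic-handoff idea can be sketched in a few lines. Everything here is made up for illustration (the model's output format, the "calc" tool name); the point is just that the model emits a structured request and a real evaluator does the arithmetic:

```python
import ast
import json
import operator

# Hypothetical model output: instead of guessing the arithmetic
# itself, the model emits a symbolic request for the environment.
model_output = '{"tool": "calc", "expr": "1234 * 5678"}'

# A restricted evaluator: walk the AST and allow only basic
# arithmetic on numeric constants, nothing else.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval").body)

call = json.loads(model_output)
result = safe_eval(call["expr"]) if call["tool"] == "calc" else None
assert result == 1234 * 5678
```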


In many ways this is sort of what humans do. They have something they don't yet know how to do and they go away and learn how to do it.


It definitely feels like I have to work to maintain the state of each algorithmic step when I'm multiplying two three digit numbers in my head. It's a lot easier if I can maintain that state on a piece of paper.

I'm definitely not calculating the path of a projectile in a similar manner when I catch a ball.

I'm definitely not "computing" a sentence when I read it in the same way that I compute the multiplication of those two three digit numbers!


>I'm definitely not calculating the path of a projectile in a similar manner when I catch a ball.

You're not calculating it in a traditional sense, but there's definitely some systems of partial differential equations being solved in real-time.


Because prompts are too general to solve most problems.

Prompt: "Calculate the average price of Milk"

This is far too vague to be useful.

Prompt: "Calculate the average Price of Milk between the years 2020 and 2022."

A little better but still vague

Prompt: "Calculate the average Price in US Dollars of 1 Gallon of Whole Milk in the US between the years 2020 and 2022."

Is pretty good.

For more complex tasks you obviously need much more complicated prompts and may even include one-shot learning examples to get the desired output.
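A one-shot version of the milk prompt might be assembled like this; the example Q/A pair is entirely invented here (note the null price, to avoid pretending we know the number), and real prompts would use a verified example:

```python
# Sketch of assembling a one-shot prompt: one worked example pair,
# then the real task, so the model mimics the answer format.
task = ("Calculate the average price in US dollars of 1 gallon of "
        "whole milk in the US between the years 2020 and 2022.")
example_q = "Average price in US dollars of 1 dozen large eggs in the US in 2021?"
example_a = '{"item": "large eggs, 1 dozen", "period": "2021", "avg_usd": null}'

prompt = (
    "Answer as JSON with keys item, period, avg_usd.\n"
    f"Q: {example_q}\nA: {example_a}\n"
    f"Q: {task}\nA:"
)
assert prompt.endswith("A:")
```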


> why can't the model do those things?

Isn't this just an intrinsic problem with the ambiguity of language?

Reminds me of this: https://i.imgur.com/PqHUASF.jpeg

edit: especially the 1st and last panels


Like an advancing army, chatgpt will soon write prompts for you better than any of this new crop of nontechnical wannabe shamans. This little wave of prompt gurus trading for attention their insights into how to game systems they can't comprehend has about as much to do with programming as does slinging shitcoin tips.


ChatGPT has certainly helped me refine some prompts. It’s 50/50 whether I’d make the same changes more quickly just rewriting the prompt by myself, but it helped me notice some blind spots in the prompt and get me thinking about how to fill them in.



