
I have tried these examples with GPT-4 and it didn't need any prompt engineering. You can type in a very loose way and it still works. Prompt engineering was much more needed in GPT-3.5, I think; GPT-4 itself is pretty good without any. And if improvements continue, I think prompt engineering will have no significance for almost all use cases, which is what matters.



Depending on what you need, you'll still have to refine the prompt at least a bit.

Let's say you're building a system that performs actions based on what the LLM gives back. Then adding "Reply with a JSON object with the keys 'action' and 'parameters'" will make it return something actionable. And that is what people call "prompt engineering". Obviously, you're not gonna need to do something like that unless you have a somewhat more advanced use case, but there are use cases where some prompts are better than others.
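A minimal sketch of what that looks like in practice, assuming a hypothetical `set_temperature` action and a reply shaped the way the prompt asks for (the model call itself is omitted; only the prompt and the parsing side are shown):

```python
import json

# Hypothetical system prompt instructing the model to return actionable JSON.
SYSTEM_PROMPT = (
    "You are a home-automation assistant. "
    "Reply with a JSON object with the keys 'action' and 'parameters'."
)

def dispatch(llm_reply: str) -> str:
    """Parse the model's JSON reply and route it to a handler."""
    msg = json.loads(llm_reply)
    action, params = msg["action"], msg["parameters"]
    if action == "set_temperature":
        return f"Setting thermostat to {params['celsius']} C"
    raise ValueError(f"Unknown action: {action}")

# The kind of reply that prompt tends to elicit:
reply = '{"action": "set_temperature", "parameters": {"celsius": 21}}'
print(dispatch(reply))  # Setting thermostat to 21 C
```

Without the JSON instruction you'd get free-form prose back, which is exactly what you can't feed into a dispatcher like this.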



