I do this as well, but usually the response starts with the chatbot generating a paragraph about how it will comply with the request, which makes the prompt moot.



It is probably an internalized "prompt engineering" trick from GPT-3.5 times, when you could get near GPT-4 performance using tricks like that. "Rephrase the question and plan your answer" was at the top of the list.
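A minimal sketch of that trick, assuming the OpenAI Python client; the model name and system prompt wording here are illustrative, not from the original comment:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The "rephrase and plan" trick: have the model restate the question and
    # outline its answer before answering, so it spends more tokens "thinking".
    PLAN_FIRST_PROMPT = (
        "First, rephrase the user's question in your own words. "
        "Then write a short plan for your answer. "
        "Finally, give the answer itself."
    )

    def ask_with_plan(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model works
            messages=[
                {"role": "system", "content": PLAN_FIRST_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask_with_plan("Why does asking an LLM to be succinct hurt answer quality?"))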


Keep in mind that tokens are an LLM's units of thought; the only time the model does any computation is when it generates tokens. Therefore, asking it to be succinct effectively means dumbing it down.



