It is probably an internalized "prompt engineering" trick from the GPT-3.5 era, when you could get near-GPT-4 performance using tricks like that. "Rephrase the question and plan your answer" was at the top of the list.
Keep in mind that tokens are an LLM's units of thought; the only time the model does any computation is while generating tokens. Asking it to be succinct therefore effectively dumbs it down, because it has fewer tokens to compute with before committing to an answer.
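A minimal sketch of that trick as a prompt scaffold (the wrapper function and exact wording are illustrative, not from any particular guide):

```python
def scaffolded_prompt(question: str) -> str:
    """Wrap a question in a 'rephrase and plan' scaffold.

    The instruction gives the model extra tokens to think in
    before it commits to a final answer.
    """
    return (
        "Before answering, rephrase the question in your own words, "
        "then outline a step-by-step plan. Only then give your answer.\n\n"
        f"Question: {question}"
    )


print(scaffolded_prompt("Why is the sky blue?"))
```

The opposite instruction, "answer in one word", removes that thinking room entirely, which is the point being made above.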