DeepSeek R1 wraps its reasoning in <think>...</think> tags, and you can see "wait" in the thinking tokens as it second-guesses itself. How does the model know when to wait?
These reasoning models feed into OP's last point about NVidia and OpenAI data centers not being wasted, since reasoning models require more tokens and faster tps.
From playing around, they seem to 'wait' when there's a contradiction in their logic.
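For example, here's a rough sketch of pulling those self-correction markers out of a transcript, assuming R1's <think>...</think> delimiters (the sample output string is made up for illustration):

```python
import re

# Hypothetical R1-style output: reasoning inside <think>...</think>,
# final answer after the closing tag.
sample = (
    "<think>The answer is 12. Wait, 3 * 5 is 15, not 12. "
    "Wait, the question asked for 3 + 5, so it's 8.</think>"
    "The answer is 8."
)

def split_reasoning(text):
    """Separate the chain-of-thought block from the final answer."""
    m = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    thinking = m.group(1) if m else ""
    answer = text[m.end():] if m else text
    return thinking, answer

def count_self_corrections(thinking):
    """Count 'wait' markers, a rough proxy for second-guessing."""
    return len(re.findall(r"\bwait\b", thinking, re.IGNORECASE))

thinking, answer = split_reasoning(sample)
print(count_self_corrections(thinking))  # 2
print(answer)  # The answer is 8.
```

Counting the "wait"s this way is what makes the contradiction-driven backtracking visible when you eyeball the thinking tokens.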
And I think the second point is due to the market thinking there is no need to spend ever-increasing amounts of compute to get to the next level of AI overlordship.
Of course the Jevons paradox is also all over the news these days.