
DeepSeek R1 uses <think> tags, and you can see it second-guessing itself in the thinking tokens with "Wait". How does the model know when to wait?
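A minimal sketch of what that looks like in practice: pulling the chain-of-thought out of an R1-style completion and counting the "Wait" tokens. This assumes the reasoning is wrapped in <think>...</think> tags as described above; the sample completion is made up for illustration.

```python
import re

def split_reasoning(output: str):
    """Split an R1-style completion into chain-of-thought and final answer.

    Assumes reasoning is wrapped in <think>...</think> tags
    (the sample text below is invented, not real model output).
    """
    m = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    thought = m.group(1).strip() if m else ""
    answer = output[m.end():].strip() if m else output.strip()
    return thought, answer

sample = "<think>2+2 is 5. Wait, that's wrong; it's 4.</think>The answer is 4."
thought, answer = split_reasoning(sample)
# Count how often the model second-guessed itself
waits = len(re.findall(r"\bWait\b", thought))
```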

These reasoning models also feed OP's last point about Nvidia and OpenAI data centers not being wasted, since reasoning models require more tokens and faster tokens-per-second.

From playing around with it, they seem to "wait" when there's a contradiction in their logic.

And I think the second point is due to the market thinking there's no need to spend ever-increasing amounts of compute to get to the next level of AI overlordship.

Of course, Jevons paradox is also all over the news these days.


Probably when it would expect a human to second-guess themselves, as shown in literature and maybe other sources.


