
Games haven't really been pushing CPUs very hard for the past few years. All the mucking about with overclocking has been purely a matter of culture recently, as you can play top tier games in 4K with a mid-range CPU, as long as it can handle enough PCIe lanes for your monster GPU setup.



Depends on the game, really. That may be true for some, but far from all.


Take a look at the game sections from the recent Kaby Lake reviews. Here's Anandtech's for example: http://www.anandtech.com/show/10968/the-intel-core-i7-7700k-...

Most of them look pretty much like that, with very little difference until you get down to really low-end or old chips. Games are mostly GPU-limited these days, and the ones that are CPU-limited mostly lean on one or two cores, so they haven't gained much from recent generations.


You're arguing that because benchmarks of recent top-of-the-line chips (which differ by at most a few tens of percent) don't show large performance differences, games aren't CPU-limited; yet the same tests show that a drop in single-thread performance immediately causes large performance differences (see the AMD chips).

If anything, this shows that games aren't strongly multi-threaded (which would come with severe development and debugging costs), or that they aren't built to scale beyond the highest-end chips consumers are likely to own.
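
To make that concrete, here's a minimal, hypothetical C++ sketch (the Entity struct and update_physics are invented for illustration, not taken from any real engine) of why frames tend to serialize: the stages of a frame feed into each other, so extra cores only help where the work within a single stage can be split safely, and doing that splitting is where the development and debugging pain comes in.

    // Hypothetical sketch: a frame's stages depend on each other, so extra
    // cores only help where work *within* a stage can be split.
    #include <algorithm>
    #include <cstdio>
    #include <future>
    #include <vector>

    struct Entity { float x = 0, vx = 1; };

    // One stage that parallelizes cleanly: each entity updates independently.
    void update_physics(std::vector<Entity>& entities, float dt) {
        const size_t workers = 4;  // pretend we have 4 worker threads
        const size_t per = (entities.size() + workers - 1) / workers;
        std::vector<std::future<void>> jobs;
        for (size_t w = 0; w < workers; ++w) {
            const size_t begin = w * per;
            const size_t end = std::min(entities.size(), begin + per);
            jobs.push_back(std::async(std::launch::async, [&, begin, end] {
                for (size_t i = begin; i < end; ++i)
                    entities[i].x += entities[i].vx * dt;
            }));
        }
        for (auto& j : jobs) j.get();  // frame can't continue until physics is done
    }

    int main() {
        std::vector<Entity> entities(1000);
        for (int frame = 0; frame < 3; ++frame) {
            // Input -> AI -> physics -> render: each stage reads the previous
            // stage's output, so the stages themselves still run one after another.
            update_physics(entities, 1.0f / 60.0f);
            std::printf("frame %d: entity 0 at x=%.3f\n", frame, entities[0].x);
        }
    }

Stages with lots of cross-entity interaction (AI, gameplay logic) are much harder to split this way, which is part of why single-thread performance still dominates those benchmarks.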

Lastly, consider that while a game's graphics and physics effects can be scaled down, scaling down the AI would in many cases cause balancing issues.


Since when is a $168 i3 chip "top of the line"?



