
Most, yes. But things like databases, video/data compression, and compute / deep learning workloads _are_ negatively affected by the fact that those "cores" are hyperthreads rather than full physical cores. Basically anything that actually uses the CPU to an appreciable extent will be affected by that. Add to that the hyperthreading-specific CVEs as well.
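
If you want to see this on your own instances: on a Linux guest the kernel exposes the SMT topology under sysfs, so you can count how many of the advertised vCPUs are actually hyperthread siblings sharing a physical core. A minimal sketch, assuming a Linux guest with sysfs mounted (not any particular cloud's API):

    # Count how many logical CPUs (vCPUs) map onto distinct physical cores
    # by reading the kernel's SMT topology from sysfs. On an instance
    # without SMT the two numbers come out equal.
    import glob

    sibling_groups = set()
    cpu_dirs = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology")
    for topo in cpu_dirs:
        # Each CPU lists the logical CPUs it shares a physical core with,
        # e.g. "0,4"; identical lists collapse into one physical core.
        with open(f"{topo}/thread_siblings_list") as f:
            sibling_groups.add(f.read().strip())

    print(f"{len(cpu_dirs)} logical CPUs backed by {len(sibling_groups)} physical cores")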



I'm confident that those working on applications where CPU performance is an important issue, as well as the reliability of said performance, are not actually running their critical applications on virtualized infrastructure. The cloud is good for "good enough" solutions where an architecture that leverages horizontal scaling does the job well. In those applications, how a CPU is used or how well it performs falls on the wrong side of the line into micro-optimization. In the cloud world, the only things that matter are whether an application's infrastructure needs to scale and what the financial impact of that is on operational cost.


Your confidence is misplaced. Netflix is a well known user of cloud services for encoding, and just about everyone runs at least part of their deep learning workloads on AWS or Google Cloud. Not to mention databases.


Encoding is an embarrassingly parallel problem that scales trivially by launching new instances. That use case fits precisely the scenario where raw CPU performance is not the important issue and the cloud is already good enough.
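
To make the "embarrassingly parallel" point concrete, here's a rough sketch of the pattern: cut the source into fixed-length segments and encode each one independently, whether as worker processes on one box or as one instance per segment. The file name, segment length, and ffmpeg settings are placeholder assumptions, not anyone's actual pipeline:

    # Encode fixed-length segments of a source file independently.
    # The same fan-out works across local processes or across many
    # cheap instances, which is why the workload scales horizontally.
    # "input.mp4", SEGMENT_SECONDS, and the codec are illustrative.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    SEGMENT_SECONDS = 60

    def encode_segment(index: int) -> str:
        out = f"segment_{index:04d}.mp4"
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(index * SEGMENT_SECONDS), "-t", str(SEGMENT_SECONDS),
             "-i", "input.mp4",
             "-c:v", "libx264", out],
            check=True,  # fail loudly if a segment's encode fails
        )
        return out

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(encode_segment, range(10))))

Stitching the pieces back together (e.g. with ffmpeg's concat demuxer) is cheap compared to the encode itself, so throughput really does scale with how many segments you run at once.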


That's fine at small instance counts. When you're taking a 20-50% efficiency hit for this, that's a good-sized difference in the bill.
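
Back-of-the-envelope version of that bill difference, with made-up numbers just to show the shape of it:

    # Illustrative only: if each vCPU does 20-50% less useful work, you need
    # proportionally more instances for the same throughput, and the bill
    # scales with instance count.
    baseline_instances = 100  # hypothetical fleet size at full per-core efficiency
    for loss in (0.20, 0.50):
        needed = baseline_instances / (1 - loss)
        print(f"{loss:.0%} efficiency loss -> {needed:.0f} instances "
              f"({needed / baseline_instances - 1:.0%} more spend)")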


Performance is not tied to price, only to instance count and the amount of computational resources assigned to each virtual instance.



