Hacker News

I've seen this claim made for routers and other low-intensity, low-latency workloads.

That would make sense. My understanding is that with a 100% pegged CPU, hyperthreading won't be very beneficial, since hyperthreads aren't real cores, just smarter scheduling. You can't really schedule a 100% load any better. For latency-sensitive applications it makes more sense, though: the CPU isn't pegged, you just want a faster response.

> You can't really schedule 100% load better

Sure you can. One hyperthread can do math while another is waiting on memory. Sometimes the core can even multiplex multiple ALUs, or one hyperthread can do integer work while another does floating point.

It's actually under high multithreaded load that HT shines, especially if that load is heterogeneous or memory-latency bound.
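A toy model makes the "do math while another thread waits on memory" point concrete. None of this is how a real core works: the single issue slot per cycle, the in-order streams, and the made-up `MISS_LATENCY` constant are all illustrative simplifications. It just shows that interleaving a load-stalled stream with an ALU-heavy stream finishes sooner than running them back to back:

```python
# Toy in-order core model. Each instruction is "alu" (1 cycle) or "load"
# (1 issue cycle plus MISS_LATENCY stall cycles before that thread can
# issue again). MISS_LATENCY is a made-up number for illustration.
MISS_LATENCY = 4

def run_single(stream):
    """Cycles to finish one instruction stream running alone."""
    cycles = 0
    for op in stream:
        cycles += 1 + (MISS_LATENCY if op == "load" else 0)
    return cycles

def run_smt(a, b):
    """Two streams sharing one core, SMT-style: each cycle, the first
    thread that is not stalled gets the single issue slot, so one thread's
    ALU work fills cycles the other spends waiting on memory."""
    streams = [list(a), list(b)]
    ready = [0, 0]  # earliest cycle at which each thread may issue again
    cycles = 0
    while streams[0] or streams[1]:
        for t in (0, 1):
            if streams[t] and ready[t] <= cycles:
                op = streams[t].pop(0)
                ready[t] = cycles + 1 + (MISS_LATENCY if op == "load" else 0)
                break  # one issue slot per cycle
        cycles += 1
    return max(cycles, ready[0], ready[1])

a = ["load", "alu"]          # memory-latency-bound stream
b = ["alu", "alu", "alu", "alu"]  # compute-bound stream
print(run_single(a) + run_single(b))  # back to back: 10 cycles
print(run_smt(a, b))                  # interleaved:  6 cycles
```

The interleaved run wins exactly because thread b's ALU ops issue during the cycles thread a spends stalled on its load, which is the effect "smarter scheduling" in software can't give you at 100% load.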

I too was once under the misapprehension that HT was "just smarter scheduling", until I took a university course in microarchitecture that explained how Simultaneous Multithreading actually works in terms of maximising utilisation of various types of execution units. I wonder why "smarter scheduling" became a common understanding.

Wouldn't hyperthreading also be more power-efficient compared to running a second core?
