
10-core i9-10900F desktop CPU lags behind 8-core Ryzen 9 4900HS mobile APU - DeathArrow
https://www.notebookcheck.net/10-core-Intel-Core-i9-10900F-desktop-CPU-does-well-in-Geekbench-multi-core-test-but-then-gets-put-to-shame-by-8-core-AMD-Ryzen-9-4900HS-mobile-APU.461162.0.html
======
sharken
The 65 W figure is a bit misleading. According to
[https://www.techpowerup.com/265695/intel-core-i9-10900f-can-...](https://www.techpowerup.com/265695/intel-core-i9-10900f-can-allegedly-pull-up-to-224-w) and
[https://wccftech.com/intel-core-i9-10900f-10-core-desktop-cp...](https://wccftech.com/intel-core-i9-10900f-10-core-desktop-cpu-crushed-by-amd-ryzen-9-4900hs-8-core-cpu/),
the chip is allowed to draw 170 W in PL1 mode and up to 224 W(!) in PL2 mode.

Those numbers should feature prominently alongside the 65W figure.

It does not look good for Intel at the moment.
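
Those PL1/PL2 numbers are easier to reason about with a toy model. A sketch (illustrative only: real silicon enforces PL1 via an exponentially weighted moving average, and board vendors frequently raise or disable the turbo window tau entirely):

```python
def allowed_power_w(t_seconds, pl1=170.0, pl2=224.0, tau=56.0):
    """Toy model of Intel package power limits: the chip may draw up
    to PL2 watts for the first `tau` seconds of a sustained load,
    then must fall back to the PL1 sustained limit. The 65 W
    "marketing TDP" only describes base-clock behavior."""
    return pl2 if t_seconds < tau else pl1

# A benchmark run shorter than tau never sees anything but PL2:
burst = allowed_power_w(10)       # 224.0
sustained = allowed_power_w(120)  # 170.0
```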

~~~
baybal2
Yes, Intel's "marketing TDP" is a complete sham.

You get completely different numbers once you see the full engineering
datasheets under NDA.

Intel's 5 W "ultraportable" CPUs, for example, go up to 17 W.

The 15 W ones can boost well above 40 W.

~~~
SomeoneFromCA
Interestingly, it wasn't like that previously. I'd say it slowly started with
Haswell; Sandy Bridge and Ivy Bridge were often less power-hungry than their
TDP suggested.

~~~
darkteflon
I don’t think this is about power efficiency, but thermal envelope.

~~~
TazeTSchnitzel
Is there much difference?

~~~
darkteflon
Yeah, I believe the point was max performance, not performance per watt.

------
briffle
Does this new Intel CPU have fixes for issues like Spectre and Meltdown, where
you were supposed to turn off hyper-threading for better security? I'm just
curious whether the fixes for those (and other similar vulnerabilities) are
starting to make their way into silicon, and also whether the recommended
mitigations still slow CPUs down quite a bit.

~~~
kilo_bravo_3
Despite the protestations of l33t Cyberhaxxing Z3r0 Cools everywhere, you only
have to turn off hyper-threading if you are sharing a machine with someone you
don't trust.

If someone is on your machine and you don't know it, they don't have to use
side-channel attacks.

They can just use any of the thousands of other privilege-escalation
techniques to read the super-secret information you have stored in RAM, or
find it when it is written to disk.

The OpenBSD folks disabled SMT by default because of TLBleed.

A non-datacenter/cloud user disabling SMT to avoid TLBleed is like a normal
person carrying a fireman's rescue saw around with them 24 hours a day in case
they get into a situation they have to saw themselves out of.

~~~
theevilsharpie
> Despite the protestations of l33t Cyberhaxxing Z3r0 Cools everywhere, you
> only have to turn off hyper-threading if you are sharing a machine with
> someone you don't trust.

If you're reading this reply, you probably automatically executed `hn.js`. Are
you _sure_ you know what it does?

Unless you're browsing the web with Javascript completely disabled, and you
also don't have any applications that automatically update themselves from a
remote source (are you _sure_ you know what those various auto-updaters are
downloading?), you're running untrusted code on your machine.

Perhaps you don't particularly care about that risk, or you don't feel the
risk is severe enough to warrant the performance hit, which is fair, but the
risk is there nonetheless.

~~~
flatiron
Are you aware of any Spectre PoC in JavaScript that can break out of the
sandbox and do anything worthwhile?

~~~
bcrosby95
Yeah, my understanding is that browsers added mitigations for this sort of
stuff - so I'm curious about it too.

~~~
flatiron
I believe the browsers fuzz your timings now, which is what Spectre heavily
relied on. I think people expecting JS+Spectre in the wild are misled,
especially since 99% of people run with mitigations enabled; why would anyone
try to exploit it through a browser?
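
For reference, the coarsening half of that mitigation is simple to picture. A minimal sketch of timer clamping (the real browsers also add random jitter on top of this and gated off SharedArrayBuffer-based high-resolution timers):

```python
def coarsen_us(t_us, resolution_us=100.0):
    """Clamp a timestamp down to the timer's resolution, the way
    browsers degraded performance.now() after Spectre: any event
    shorter than the resolution becomes invisible."""
    return (t_us // resolution_us) * resolution_us

# A cache hit and a cache miss a fraction of a microsecond apart
# now read back as the same timestamp:
hit, miss = coarsen_us(40.1), coarsen_us(40.3)  # both 0.0
```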

------
heelix
I've got a 4800HS 8-core laptop that should be showing up on my doorstep later
today. My kid is doing some heavy video/graphics work in Adobe for her
undergraduate program, and we're going to try this as a potential
desktop/workstation replacement since she is still a bit of a nomad as a
student. Her current laptop just can't handle the work without trying to cook
itself, but I'll have some apples-to-apples rendering comparisons based on her
homework soon.

The crazy thing is the cost: about $1,000 for a base unit. I'll be doing some
comparisons against my desktop Threadrippers just to see where it stacks up.

~~~
formercoder
Nice - assuming it has USB-C? The external USB-C SSDs are great for source
media.

~~~
1996
Can you recommend a NUC-like computer?

Something easy to carry around, to LAN, ideally with PCIe slots or an NVIDIA
GPU

~~~
elabajaba
Intel is reportedly going back to their old ways and incentivizing NUC vendors
to either delay or cancel AMD versions.

[https://twitter.com/IanCutress/status/1249819270537715719?s=...](https://twitter.com/IanCutress/status/1249819270537715719?s=19)

------
nicoburns
I'm really hoping Apple puts these AMD mobile chips in their next MacBook Pro.
At the moment I'm not seeing much reason to upgrade from my 2015 model, but if
I could have 8 cores, that would be a different matter...

~~~
vijaybritto
I think Apple is already in the process of making its own chips. I'm guessing
they'll release an ARM chip for laptops; they're already blazing ahead on the
mobile side.

~~~
Krasnol
Yeah, and we already know it won't be any better, but it will cost you a
five-digit sum, and it will look sooo nice.

~~~
Krasnol
@downvotes: you know I'm right and your anger just shows how guilty you feel
because of it.

------
nottorp
Hmm I need some schooling please:

"The TDP for the latter part is 35 W while the i9-10900F is listed at 65 W,
but being an Intel desktop processor that just reflects the TDP for the base
clock, with much higher energy demands required for higher clocks (e.g.
maximum PL1 has been recorded at 170 W)."

So where do I find the actual maximum power consumption on recent desktop CPUs
then? Do I set a limit in the BIOS? Do I read each and every review to see
what they measured?

Pointers to relevant links very much appreciated, thankee sai.

~~~
leeter
> So where do I find the actual maximum power consumption on recent desktop
> CPUs then?

Unfortunately, you do this by putting it on a motherboard and generally
putting an ammeter on the 12 V EPS rail. Either that, or you measure power
from the wall. The CPU vendors have worked really hard to hide this, mostly
because for most people it's largely irrelevant these days: the CPU will
scale to the available cooling and power delivery. There are some limits,
though.
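
On Linux you can also get a software estimate without an ammeter from the RAPL energy counters, with the caveat that RAPL reports what the CPU itself accounts for, not true wall draw. A sketch (assumes an Intel CPU exposing `/sys/class/powercap/intel-rapl:0`; the exact path and permissions vary by kernel):

```python
import time

# Cumulative package energy counter, in microjoules.
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL_ENERGY):
    with open(path) as f:
        return int(f.read())

def average_watts(e1_uj, e2_uj, dt_s):
    """Average package power between two cumulative energy readings."""
    return (e2_uj - e1_uj) / 1e6 / dt_s

# Usage on real hardware (may require root; the counter wraps eventually):
# e1 = read_energy_uj(); time.sleep(1.0); e2 = read_energy_uj()
# print(f"{average_watts(e1, e2, 1.0):.1f} W package power")
```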

~~~
derefr
You'd think "power consumption for prolonged 100% all-cores workload" would be
the _most_ relevant thing when selling server chips specifically, though,
since it translates directly into OpEx. And yet even on those data sheets it's
nowhere to be found.

~~~
leeter
For servers it's "easier", because you can check the VRM spec for the socket.
That's the maximum draw the manufacturer expects any CPU for that socket to
ever pull. Most of the higher-end parts will then vary clock speed versus
active cores, etc., to stay on that line.

This works because server motherboard manufacturers only ever build to that
spec and not one iota beyond it, for cost reasons.

But yeah... average expected draw is going to be interesting and most
manufacturers won't give a number to avoid getting sued.

~~~
jandrese
Yeah, but the mobo manufacturer has to plan on you putting the thirstiest
chips in there, when you might be using lower-end chips because your workload
is mostly memory- or PCIe-channel-constrained and you're looking to max out
the density in the rack.

Of course you also have to consider the cooling capacity of your chassis. 170W
in a 1U form factor needs a hell of a lot of airflow to avoid thermal
throttling, especially on a dual CPU motherboard. Plus you gotta fit the power
supply and everything else in there.

------
Havoc
I really appreciate this entire race frankly.

For a desktop computer, even a mid-tier 3700X is now "good enough", even for
someone with aspirations toward serious gaming.

On the home-use front, CPUs seem to have outpaced their use cases.

~~~
_bxg1
For gaming in particular, the GPU is almost always the bottleneck. It's much
easier to throw in more polygons and texture pixels than it is to come up with
bigger and more complex gameplay simulations. I'm still running a quad-core i5
from 2014 in my gaming desktop and it rarely has an impact.

------
leetcrew
This doesn't quite pass the smell test. I can't tell whether there's a way to
filter out results from overclocked parts in Geekbench, but these results for
the i9-10900F look significantly lower than typical results for a stock
i9-9900K. It's hard to believe they would do worse on the same process with
more cores.

~~~
AnotherGoodName
It also doesn't pass the smell test in that the i9-10900F's single-core
performance is higher than the Ryzen 9 4900HS's, and it has both more cores
and more threads, yet it is far behind on multicore performance?

Likely some thread affinity issue that just needs a patch.

~~~
throwaway2048
Multithreaded applications tend to share a lot of data between threads, and we
already know Intel's mesh fabric core interconnect[1] seems to be inferior to
AMD's chiplet+io die in terms of performance and access latency uniformity.

[1] Used in all 10-core+ Intel CPUs.

~~~
lliamander
AMD's 4000-series APUs are monolithic and don't use separate chiplets and an
IO die. The chiplet + Infinity Fabric design has higher latency than Intel's,
but that's not a problem for the APUs.

------
3fe9a03ccd14ca5
These mobile APUs are amazing, especially when combined with a GPU. It would
be really nice to be able to use them in a standard ATX motherboard, because
they would make the perfect home-server CPU.

~~~
1996
I share your interest in a silent yet powerful home server.

Unfortunately, there don't seem to be any desktop motherboards for mobile
Zen 2 yet.

Too bad. I would even settle for a NUC.

~~~
lostlogin
A common use case includes video transcoding and the ability to offload this
to hardware makes the NUC a good choice.

I’d love a good AMD option, but for the size and price, I can’t find anything
that will transcode as efficiently.

The new Ghost Canyon NUCs look excellent, but I assume pricing will be
eye-watering. They are also much larger.
[https://www.anandtech.com/show/15720/intel-ghost-canyon-nuc9...](https://www.anandtech.com/show/15720/intel-ghost-canyon-nuc9i9qnx-review)

------
websg-x
The leaked benchmark was run with a single memory channel instead of dual.
The multicore score is 30% lower because of that. Somebody is trying to
mislead here.

------
_ph_
Is there a 4900HS laptop around already that can be recommended for Linux
usage?

~~~
papermachete
No, Nvidia Optimus is still an unsolved problem in Linux. Vegas run fine tho.

~~~
jakogut
Unsolved, but with exciting new progress. I've gotten PRIME running on my new
ROG Zephyrus G14, and all I have to do to run applications on the dGPU is use
"prime-run". Dynamic power management also shuts down the GPU when it's not in
use.
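
For anyone curious, prime-run is essentially a thin wrapper that sets NVIDIA's documented render-offload environment variables before launching the program. A sketch of the equivalent (assumes the proprietary driver with PRIME render offload support, roughly 435+):

```python
import os
import subprocess

# The variables prime-run effectively exports (per NVIDIA's PRIME render
# offload documentation); __VK_LAYER_NV_optimus covers Vulkan apps.
PRIME_OFFLOAD_ENV = {
    "__NV_PRIME_RENDER_OFFLOAD": "1",
    "__GLX_VENDOR_LIBRARY_NAME": "nvidia",
    "__VK_LAYER_NV_optimus": "NVIDIA_only",
}

def prime_env():
    """Copy of the current environment with the offload variables added."""
    env = dict(os.environ)
    env.update(PRIME_OFFLOAD_ENV)
    return env

# Usage on a machine with the dGPU available:
# subprocess.run(["glxinfo"], env=prime_env())
```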

~~~
papermachete
That's highly inconvenient. The best solution I've found is optimus-manager or
prime-switcher.

------
paypalcust83
Btw, does anyone know how to buy boxed (PIB/WOF) AMD Rome EPYCs at reasonable
(below full MSRP) market prices rather than OEM/tray (non-WOF) ones?

I recently ordered WOF EPYCs from a small business that fronts one of the
largest distributors of PC components, and they/the distributor sent the
no-warranty OEM/tray ones instead. (A 30-day warranty + 15% restocking fee is
no deal at all.)

------
TazeTSchnitzel
LinusTechTips recently took a look at a gaming laptop with the same APU and
were super impressed with its Intel-scorching performance:
[https://www.youtube.com/watch?v=ZYqG31V4qtA](https://www.youtube.com/watch?v=ZYqG31V4qtA)

~~~
stagger87
The laptop from LTT and the one in this article are the same.

------
all_blue_chucks
Geekbench results are self-reported, unverified information. Anyone could
post arbitrary benchmark results for any chip.

