
Linus Torvalds has switched to AMD - kdrag0n
https://lkml.org/lkml/2020/5/24/407
======
Someone1234
I just upgraded from a 2015 i7 4/8 core/thread CPU to a Ryzen 7 3700X 8/16.

When I first started using the new CPU, the most striking thing was how /few/
of the cores were under load during my compilations, due to several bottlenecks
in my build I didn't realize I had. Over half of my cores were near idle.

I was able to reduce compilation times by 75% (~126 sec to under ~31 sec),
just by allowing several processes to run concurrently, and changing the order
of a few others, so they weren't fighting over file system locks.

I went back and tested it on the old i7 machine, and still got a ~30%
improvement. My point is: Upgrade away, but make sure your tooling and scripts
are designed for that type of concurrency, otherwise you'll be wasting a lot of
the potential. Mine weren't.
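
Roughly, the idea in a minimal Python sketch (the make targets here are
made-up placeholders, not my actual build):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Steps with no dependencies between them can run at the same time.
    # These targets are placeholders for whatever your build actually does.
    independent_steps = [
        ["make", "-C", "lib_a"],
        ["make", "-C", "lib_b"],
        ["make", "-C", "tools"],
    ]

    with ThreadPoolExecutor() as pool:
        # Each step is its own process, so threads only do the dispatching.
        for result in pool.map(subprocess.run, independent_steps):
            result.check_returncode()

    # Steps that touch the same files (and their locks) still run in order,
    # after everything they depend on has finished.
    subprocess.run(["make", "link"], check=True)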

~~~
keyle
Are you saying that the AMD stuff doesn't do hyperthreading?

~~~
411111111111111
... Considering hyperthreading is a proprietary Intel system: no, AMD doesn't
do it.

Not sure how that would be in any way relevant to the parent's point though.

~~~
gen3
It should be noted that AMD has SMT, which is pretty much the same thing.

[https://en.m.wikipedia.org/wiki/Zen_(microarchitecture)](https://en.m.wikipedia.org/wiki/Zen_\(microarchitecture\))

~~~
willis936
While shoring up nomenclature: I believe “simultaneous multi-threading” (SMT)
is the generic term and “Hyperthreading” is Intel’s branding of their SMT
implementation.

Calling AMD’s SMT “hyperthreading” is like calling all tissues Kleenex. It’s
fine imo, but it doesn’t hurt to know these things.

------
graton
I guess Greg Kroah-Hartman is also switching or has switched to AMD.

A video from the person (Level1Techs) who built him a new machine.

Building a Whisper-Quiet Threadripper PC For Greg Kroah-Hartman:
[https://www.youtube.com/watch?v=37RP9I3_TBo](https://www.youtube.com/watch?v=37RP9I3_TBo)

------
m0zg
I switched to the same CPU (+128GB of RAM) a few months back. Amazing value,
IMO. Truly a fire-breathing workstation. Costs less than the base config of a
Mac Pro, as well. In addition, while the first-gen Threadripper boards were
picky AF with respect to memory choice, the new TRX40 board took 128GB in 4
DIMMs like a champ, and is 100% stable at the memory's "XMP" settings. I'm
pretty impressed with this and don't regret spending $1800 on the CPU. It's
really a no-brainer for anyone who does deep learning or works with C/C++, or
both, especially if you can write it off as a business expense.

~~~
rvz
> I'm pretty impressed with this and don't regret spending $1800 on the CPU.
> It's really a no-brainer for anyone who does deep learning

I hate to ask this, but why would this CPU be any good for deep learning,
especially for training? That doesn't make any sense.

Sure, if I needed a workstation that can build large software like Rust, LLVM,
Chromium or Linux in ~30s, then either the 3970X or 3990X is worth getting. For
deep learning? This will perform very poorly, or you'll even end up permanently
damaging the CPU, which is a very expensive investment to waste. You might as
well get the TITAN RTX for that, which is a no-brainer for deep learning
use-cases.

~~~
m0zg
Because augmentation is done on the CPU, and a slower CPU can't keep up if you
have 4x 2080ti's in your workstation training in fp16, which is how I prefer
to run things. Moreover, a SATA SSD also can't keep up, so you need NVMe. And
for that you need an extension cable, since the NVMe stick will overheat in its
default location (right under the GPU). Found out the hard way. This is also
why AWS sucks so bad for deep learning workloads: on some workloads it's very
easy to bottleneck on CPU, and unlike with Google Cloud, you can't just drag
the slider and give your VM more cores when needed. Jeff Bezos determined that
4xV100 should get 32 hyperthreads, so you get 32 hyperthreads.
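
To make that concrete, a rough PyTorch-style sketch of the pattern (the
dataset is a dummy stand-in; a real one would decode JPEGs and run the
augmentations on the CPU):

    import torch
    from torch.utils.data import DataLoader, Dataset

    class DummyImages(Dataset):
        """Stand-in for a dataset doing CPU-side decode + augmentation."""
        def __len__(self):
            return 10_000

        def __getitem__(self, idx):
            # Real code would decode a JPEG and run e.g. Albumentations here;
            # the random tensor just simulates the CPU work per sample.
            return torch.rand(3, 224, 224), idx % 1000

    # CPU worker processes crank out batches while the GPU trains,
    # which is where the extra cores pay for themselves.
    loader = DataLoader(DummyImages(), batch_size=256,
                        num_workers=16, pin_memory=True)

    for images, labels in loader:
        images = images.cuda(non_blocking=True)  # copy overlaps GPU compute
        # ... fp16 forward/backward pass goes here ...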

~~~
ctchocula
Have you looked into using DALI [1] to do augmentation on GPU? They've gotten
some nice speedups for computer vision that way.

[1] [https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/](https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/)
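
For reference, a minimal pipeline along those lines with DALI's fn API (a
sketch from my reading of the docs; the path, sizes, and batch settings are
placeholders):

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def
    def train_pipe(data_dir):
        jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
        images = fn.decoders.image(jpegs, device="mixed")  # JPEG decode on GPU
        images = fn.random_resized_crop(images, size=(224, 224))
        # Random horizontal flip, also done on the GPU.
        images = fn.crop_mirror_normalize(images, mirror=fn.random.coin_flip())
        return images, labels

    pipe = train_pipe(data_dir="/data/train", batch_size=256,
                      num_threads=4, device_id=0)
    pipe.build()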

~~~
m0zg
Of course, I've looked at everything. Given how expensive modern GPUs are, it
is best to use their resources for deep learning rather than augmentation.
That way you also get to pipeline augmentation: while the GPU is doing the
forward/backward pass, the CPU is also cranking out the next batch, so in a
way, you're utilizing the resources better.

Another issue is that DALI only supports a subset of the augmentations that
e.g. Albumentations supports, and I'd much rather be working on the "neural"
bits than wrestling with augmentation algorithms.

------
terrywang
I've been running Fedora (21 through 32 as of now) on an HP MicroServer N54L
Gen7 since 2014. It has a relatively weak AMD Turion II Neo N54L dual-core
processor, but it's been working reliably over the years and is still going
strong (well, not really when compiling, e.g. ZFS modules via DKMS).

I once DIYed an AMD Athlon XP 2500+ (Barton) + Epox 8RDA3+ (AVC 112C86 fan)
desktop. Overheating was a headache, overclocking made it even worse, and in
the end the motherboard didn't survive the overheating (so AMD CPU cooling has
always been a concern of mine lol).

Good to see consumers are more willing to accept AMD after all the Intel
Meltdown & Spectre, etc. drama (I was working as Tech Lead for the XenServer
support team, so to some extent I knew how badly it hurt from that specific
PoV). Personally I'll prefer AMD when buying new hardware.

~~~
threeseed
It's actually going to be a great era for consumers.

AMD and Intel in strong competition improves the quality and diversity of the
CPUs and, most importantly, reduces their prices. We saw this almost
immediately when the 10980XE, which is largely identical to the 9980XE, was
sold for 50% of the price as a result of the 3950X/3960X.

In the server space, however, AMD will likely remain a minor player as ARM
inevitably starts to find its way in, thanks to its far superior
price/performance ratio.

~~~
som33
>It's actually going to be a great era for consumers.

It's actually NOT. AMD, Intel, MS, and big media companies are planning to put
hardware DRM inside the computer.

Over the last 23 years of PC gaming we've seen the PC become a closed platform
because of Steam and MMOs; any client-server software you buy means you no
longer own your PC or have any personal privacy, because the program is
constantly beaming data back to the mothership.

So no, they are going to turn the PC into a locked-down platform like mobile,
where you never see the exe files. They are trying to kill off local
applications; they want to "end piracy" by literally removing any control you
have over your PC.

That's what Windows 10 DRM is about: UWP, encrypted computing, VMs, etc. It
means it will be increasingly impossible to preserve old software, because
they are not honest binaries.

Don't think so? That is what Irdeto is all about; they've been encrypting PC
game files for a while now, and the future of PC gaming looks grim with
always-online DRM and files encrypted because of micro-transactions and
in-game stores.

[https://irdeto.com/](https://irdeto.com/)

So no... the future looks locked down and dystopian to anyone who's been
paying attention. What we're gaining in performance we're losing in freedom,
with increasing levels of DRM, VMs, and encrypted software.

~~~
imtringued
>It's actually NOT. AMD, Intel, MS, and big media companies are planning to
put hardware DRM inside the computer.

That's pretty old news. Things like the AMD PSP or Encrypted Media Extensions
(DRM implemented by web browsers) exist primarily because media companies
strongarm vendors into implementing DRM against their will. Things like HDCP
simply do not work if they aren't deeply integrated into the hardware.

Steam is another example of a platform where developers are asking for DRM.
The reality is that DRM is optional on Steam [0], but almost no developer
voluntarily disables it. High-profile publishers even add third-party DRM to
their games because they think what Steam does isn't enough!

>Over the last 23 years of PC gaming we've seen the PC become a closed
platform because of Steam and MMOs; any client-server software you buy means
you no longer own your PC or have any personal privacy, because the program is
constantly beaming data back to the mothership.

>So no, they are going to turn the PC into a locked-down platform like mobile,
where you never see the exe files. They are trying to kill off local
applications; they want to "end piracy" by literally removing any control you
have over your PC.

I'm not sure why you are using Steam as an example because it is a piece of
software that wouldn't exist once Microsoft forces every application to be
delivered through the Microsoft store. Not only is Steam third party software,
it is also a tool that installs even more third party software. This bypasses
the entire idea behind only allowing reviewed applications on an app store.

Steam also has another very nice feature that lets you avoid problems
associated with Microsoft. It runs on Linux, and it even lets you play
Windows-only games on Linux. Once you switch to Linux, all of the problems you
are
talking about are irrelevant.

[0] [https://steam.fandom.com/wiki/List_of_DRM-free_games](https://steam.fandom.com/wiki/List_of_DRM-free_games)

~~~
som33
You don't get the end game was to client-server the big budget games which has
happened. AKA diablo 1 + 2 we owned the game outright, not so with diablo 3
and overwatch.

Steam was forced into Half-Life/CS in 2004; no one wanted it, and Steam is
malware. That is why we lost dedicated servers and level editors in the AAA
gaming space.

GtkRadiant, a level editor for Quake-engine games:

[http://icculus.org/gtkradiant/](http://icculus.org/gtkradiant/)

Compare Doom vs. Doom Eternal. The internet makes it easy to stop copying by
holding program files back from the user.

Doom was the grandfather of modding on the PC; in Doom 2016 we got a gimped
SnapMap, and Doom Eternal is totally locked down. A far cry from the id
Software of the '90s.

------
9wzYQbTYsAIc
He didn’t just switch to AMD, he switched to the 32-core Threadripper 3970X.

~~~
dzhiurgis
Wonder why not the 64-core Threadripper 3990X?

~~~
SSLy
My guess: Amdahl's Law vs the noise made by the cooler.
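
For intuition, a quick back-of-the-envelope in Python (the 95% parallel
fraction is an illustrative assumption, not a measurement of kernel builds):

    # Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
    # p = parallel fraction of the workload, n = core count
    def amdahl(p, n):
        return 1 / ((1 - p) + p / n)

    print(amdahl(0.95, 32))  # ~12.5x
    print(amdahl(0.95, 64))  # ~15.4x: only ~23% more for twice the cores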

~~~
wyclif
Yes, he says in the video that one of the goals was to put together a system
that was as quiet as possible.

~~~
dzhiurgis
They have the same TDP.

------
sithadmin
AMD's core density is a huge deal, and it's not just for gamers anymore. Lots
of enterprise software companies are rethinking or have restructured their
licensing agreements to prepare for a future where per-socket licensing (which
pretty much implied 2x sockets per server) will be undercut by 2nd and 3rd gen
Epyc making single-socket servers at scale relevant again.

~~~
TwoNineA
Core count is not that important for games.

~~~
fomine3
Core count is important for gaming until it matches the latest console
hardware (now 8 cores; next gen is also 8 cores).

~~~
wtallis
Current consoles are still using low-power CPU microarchitectures, albeit at
higher clock speeds than the original PS4 and Xbox One. So it's still pretty
easy to match the console CPU power with a modern desktop processor that has
fewer CPU cores each providing much higher per-core performance. When the next
generation of consoles arrives at the end of the year, the Xbox and
PlayStation families will move to a desktop-class microarchitecture with
performance per clock that's competitive with retail desktop processors.

------
threeseed
Not surprised. There is no equivalent from Intel for the 3970X outside the
significantly more expensive Xeon line. And the closest Intel part, the
10980XE, has 18 cores and availability close to zero.

Also, I can't imagine Linus is going to be interested in overclocking, which
is where you get most of the value from the Intel chips.

------
aorth
Cool! Honest question: what's he doing for graphics then? Intel chips have had
integrated graphics with superb Linux driver support for years. Does this AMD
Threadripper 3970X have integrated graphics?

~~~
Jnr
I wonder the same thing.

Intel probably has the best graphics drivers for Linux.

I am currently using Nvidia with proprietary drivers, and it works fine for
now, but I probably won't be able to switch to Wayland any time soon. Other
than that it works great, including gaming.

I also have a media box with an AMD Ryzen 5 2400G APU, and that thing has been
a bit of a problem. The GPU drivers tend to crash, GPU performance on Linux is
poor, and it took about half a year of kernel updates to finally make it not
crash every day. Are there any AMD graphics cards that have good Linux
drivers?

~~~
majewsky
I have an AMD card from 2015 (an R9 Nano, to be exact). I had to use fglrx for
the first few months, but ever since amdgpu entered mainline, it's been rock-
solid with good performance. Vulkan works great as well. (Footnote: I don't
care about OpenCL, so I can't comment on that.)

Based on this positive experience, when it was time to replace my notebook, I
got one with a Ryzen 2 APU. The GPU part of that also works great. There were
some problems with the IOMMU, but those were resolved by a firmware update
from Lenovo a few months in.

------
everybodyknows
>AMD Threadripper 3970x

Curious to know about the rest of the box ... but he doesn't say.

~~~
antisthenes
Well, we already know what he thinks about NVIDIA.

------
SomeoneFromCA
Intel is still more convenient for users like me who dislike discrete video
cards. Most AMD CPUs do not have integrated video, and those that do show
problems working in Linux and the BSDs. AMD chips also have higher idle power
consumption. Other than that, AMDs are clearly better CPUs.

~~~
washadjeffmad
Ryzen G series includes integrated graphics.

There also aren't any major bugs for Zen/Zen 2/TR4 on Linux/BSD.

Ironically, the largest stability issue with 1st/2nd gen Ryzen is caused by
idling _too_ efficiently. Older power supplies sense this sub-5W idle as being
suspended or powered off and throttle their 12V rails, leading to system
hangups. An option in BIOS must be set to raise the idle wattage for these
PSUs.

The demographic most affected by this, first-time and budget builders, was
also the least likely to be able to diagnose it, hence its prevalence in
forums. Search for "power supply idle control" to learn more about it.

~~~
SomeoneFromCA
This is clearly not true. There are serious compatibility problems between
Ryzen integrated graphics and Linux. More or less reasonable support in Linux
(and I am not even talking about FreeBSD, let alone the other BSDs) appeared
only very recently, and it is still buggy: lockups, black-screen boots, etc.
Not only that, the Ryzen G series is not attractive at all, because those
parts are always underpowered and one generation behind; simply put, they
suck.

Speaking of idle power consumption, there is evidence all over the internet
that Ryzens themselves are not necessarily very power hungry, but their
motherboard chipsets are. Here, for example, Ryzen 3xxx parts consume 10W more
at idle:
[https://tpucdn.com/review/amd-ryzen-5-3600/images/power-idle.png](https://tpucdn.com/review/amd-ryzen-5-3600/images/power-idle.png).
The reason is unclear, but it is what it is: they really are hungrier.

------
mattbillenstein
Not blazing single-thread perf, but pretty high up the total perf list...

[https://www.cpubenchmark.net/singleThread.html](https://www.cpubenchmark.net/singleThread.html)

~~~
starky
I mean, how silly would you have to be to buy a 32-core/64-thread processor if
single-threaded performance was a consideration at all? There is obviously
going to be some sort of tradeoff in single-core performance to obtain that
density of cores.

~~~
phonon
The 3970X has a maximum single-core turbo of 4.5GHz, with similar or better
IPC than Skylake. You're not missing much at all. The tradeoff only exists
when the chip is thermally limited with multiple cores running, but given how
many more cores there are in the first place, you are still way ahead.

~~~
mattbillenstein
Passmark numbers seem to say -15.5%, but I guess that's maybe not that big of
a deal in most things.

I use my desktop for playing games, so having 4-8 cores is enough; I'd much
rather have fast cores for things that don't parallelize well.

That being said, I am pushing a pretty old CPU, an i7-4770K, and I haven't
been able to convince myself to spend the money on the upgrade, since I'm only
down ~32% from the best thing you can get in single-thread perf.

Maybe the next round of CPUs, Zen 3 et al. I'll be doing an NVMe PCIe 4.0 SSD
as well in the next build, which should give a big boost over the SATA SSD I'm
using now.

------
andai
Is he still working on the kernel?

~~~
wolf550e
Yes, he is in charge of merging changes from ~200 subsystem git trees into the
new release, every week. Every ~8 weeks they do a major version with big
changes, then they do weekly release candidates with fixes, and then they do a
major version with big changes again.

The major releases are then maintained by Greg Kroah-Hartman (the number 2
person in Linux), who cherry-picks fixes from mainline that should go to
stable. Distros have kernel teams that also maintain their own stable trees,
with or without help from the upstream stable maintainer.

Linus can't code-review all the changes queued for the next major release, but
he does make sure that if a subsystem maintainer says "this is safe to merge,
it has been tested" and it actually doesn't even compile, then Linus will yell
at him and call him bad words in Finnish. Being a subsystem maintainer is an
important job: people rely on them, they have years of experience, and they
know better than to send pull requests full of junk.

He is also involved with resolving disputes and fixing things that affect his
own workflow.

~~~
andai
Thanks, that's great to hear.

------
saagarjha
I'm pointing to this the next time someone thinks Hacker News is above
celebrity tabloids.

~~~
vijucat
Unpopular opinion: This is how git rose to where it is today. If people were
less blinded by celebrity status, they'd see that Linus' lack of exposure to
paradigms beyond C such as OOP or FP (due to deliberate disdain; his loss)
meant that C pointers were the primary abstraction on which he built git, and
that it suffers greatly because of this. Not to mention that good UX is a rare
skill among developers in general. See
[https://stevelosh.com/blog/2013/04/git-koans/](https://stevelosh.com/blog/2013/04/git-koans/)

Mercurial should have won. And Plastic SCM is awesome, designed conceptually
from the ground up to build on lessons learned from git (and by an Associate
Professor in CS). In fact, the whole concept of DVCSs is ridiculous in 99% of
corporate contexts where you can't use your laptop / desktop at home without
logging into the VPN. Why do we even need distributed version control in
such a setting? Subversion is perfectly adequate, and even non-developer users
can actually understand and use it effectively. See
[https://svnvsgit.com/](https://svnvsgit.com/)

Mind you, I'm not a blind hater: I love that Linus did well in life. I'd be
thrilled if his net worth was a billion instead of just $150 million. Surely,
he contributed much more than that.

~~~
theamk
Can we stop with the conspiracy theories? Git rose to where it is today for a
multitude of reasons, most of them technical: it is very fast, rebase is well
supported, if it breaks it's easy to fix, and so on. (Yes, some of these are
fixed now, but it is too late.)

I mean, many people had exposure to both git and hg at some point in their
life, especially the kind of people who make the decision which VCS to use.
Celebrity endorsement may be good to convince me to try something, but it
cannot convince one to use something over the alternatives, if they have had
enough experience with both.

~~~
mikekchar
Speed was definitely a big deal when git first came into being. However, as
far as I remember, the main thing that made git popular was actually Github.
Before Github, various souce repository sites were _awful_ (and in the case of
what SourceForge became, more than awful).

~~~
taeric
I recall that the git-svn bridge had me hooked on git long before GitHub. It
also made it braindead simple to migrate an svn shop.

We looked at hg. I also dabbled in a few others; bzr, I believe, was a popular
one.

As said upstream, there are plenty of reasons that git "won." Most of them did
seem technical. But I could see things having gone many ways. For the most
part, source control is not a problem most people think they have. Just look
at how terrible the data science folks are at it. And management is stuck in
whatever MS is doing in Word nowadays.

~~~
fomine3
"Plaintext and VCS solve all problems" is not very practical for everyone
other than programmers.

~~~
taeric
The other fields also have a terrible time recreating anything. So... I kind
of feel that they are making the poor choice here.

In particular, I have yet to see a successful collaboration in data science or
document creation that wasn't ridiculously ephemeral, unless it was backed by
a much nicer format. (Here, "ephemeral" means that after it is done, it is
referenced as a PDF but not directly anymore. Which, to be fair, is the
majority of documents that exist in the world.)

------
klingonopera
...it's funny, because if anyone can write code that _wouldn't_ require 32
cores to run, Linus would certainly be one of them.

Don't forget to keep in touch with reality: a bunch of people on this globe
still have to get by using dual cores.

~~~
colejohnson66
So because a lot of the world is running dual core, programmers shouldn’t
attempt to improve compilation time with more cores?

~~~
tinus_hn
I think he wants Linux to just remove most drivers from the kernel so this guy
can compile it with everything enabled on his slow computer.

~~~
klingonopera
No, it completely makes sense on the compilation aspect, I don't deny it.

But where do you test the compiled software?

If I use an analogy from the Internet, it's like web developers using huge
uncompressed pictures, and nobody caring, because everyone's got broadband.
But then the guy on a metered mobile plan wants to view the page. Or the poor
sod who for some reason is still stuck on 56k. And instead of bloated
pictures, I mean system services, tasks, and processes: no one realizes what a
drag they may be causing, because everyone's got at least four cores nowadays
anyway. That's a very real danger for any developer, to "lose touch with
reality" when it comes to their users.

