
Threadripper 3960X Compiles Linux Kernel in Under 30 Seconds - zzeder
https://www.phoronix.com/scan.php?page=article&item=amd-linux-3960x-3970x&num=9
======
l33tman
I was compiling the kernel for my boxes for most of the '90s and '00s, and the
funny thing is a sub-60-second compile time was always possible with the
"latest gear", like a 100 MHz Pentium. Of course the number of modules you
want/need to compile has steadily increased, but it's still funny to see
someone impressed with a 30-second compile time 25 years later and with a
100x+ increase in CPU power :)

~~~
xenospn
Everybody knows the REAL test is a FreeBSD 'make world' anyway :)

~~~
basementcat
I used to think 'make world' on BSD was some big deal until I started to
synthesize HDL code to bitfiles or GDS.

~~~
Scipio_Afri
What is GDS in this context?

~~~
the-dude
[https://en.wikipedia.org/wiki/GDSII](https://en.wikipedia.org/wiki/GDSII)

File format commonly used for wafer masks.

------
kstenerud
Wow... I still remember recompiling the NetBSD kernel on my Amiga 3000 to
tweak the timings on my Retina Z3. Took 14 hours to compile the kernel...

So if processors are now >1500x faster than they were in 1991, why is it that
my Amiga 3000 has a faster, more responsive UI?

Why is it that an ssh session takes 8 seconds to establish?

Why is it that browsing a network share is 10x slower than viewing a file list
was on a BBS?

~~~
ralgozino
Edit your sshd config and add "UseDNS no" to it; don't forget to restart the
daemon.
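For reference, the directive goes in `/etc/ssh/sshd_config` and takes no colon
(a minimal fragment; slow session establishment like the 8 seconds mentioned
above is the classic symptom of a failing reverse-DNS lookup):

```shell
# /etc/ssh/sshd_config
# Skip the reverse-DNS lookup sshd performs on every incoming connection.
# A slow or unreachable resolver here is the usual cause of multi-second
# connection delays.
UseDNS no
```

Reload sshd afterwards (e.g. `systemctl reload sshd` on systemd systems) for
the change to take effect.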

Either way, I get your point. I remember programming in Delphi, compiling,
chatting on IRC and browsing the net with a 56K Winmodem (all the modulation
was software driven) on a Pentium II, all at the same time, and now I can't
even have an IDE and Slack open at the same time.

~~~
GordonS
I'm old enough to remember 14k, 28k and 56k modems, and while the internet
held great wonder, waiting for pages to load and files to download was
_really_ frustrating.

Oh, and having to hang up so my parents could make a phone call wasn't fun
either!

This at least is something that has got orders of magnitude better, and is
very much noticeable. At times I still shake my head in wonder at being able
to download a GB in less than 2 mins on a residential FttC connection.

------
linsomniac
It is cool to see this level of parallelism coming to commodity hardware.
5-ish years ago a friend of mine had access to an experimental box one of the
server vendors had made. Never went to production AFAIK. My memory is that it
had 128 cores, 256 threads. One of the things we did on it, of course, was
compile the kernel. I don't remember the exact numbers, but recall it being a
single-digit number of seconds. This was done with just a ton of Xeon
sockets.

------
corysama
We need to move on from the Linux Kernel and start using “compile Unreal
Engine” as a CPU benchmark.

[https://gpuopen.com/threadripper-for-gamedev-ue4/](https://gpuopen.com/threadripper-for-gamedev-ue4/)

~~~
dragontamer
That's probably a good point: the Linux kernel is written in C, but C++ is a
far more complicated language to parse and compile.

Unreal Engine, Firefox, Chrome... large C++ programs easily take an hour to
compile on normal hardware.

With that being said: compiling for an hour is a huge hassle for reviewers. I
think reviewers prefer benchmarks that complete in under a minute, rather than
over an hour.

~~~
michaellarabel
That's why I also use the LLVM compilation test (C++) and other workloads
besides just the Linux kernel... But it seems the Linux kernel results are
always what generates the most interest.

~~~
Matthias247
I would bet it's just familiarity. Most people who are interested in PCs have
heard of Linux, and know that it has a kernel, or may even have compiled
kernels themselves. Whereas LLVM is more of a thing that is known to a subset
of software developers.

I'm personally happy to see the LLVM tests, since they give a good impression
of what benefits software developers who work on big native codebases can
expect.

~~~
cesarb
I think it's path dependence. The kernel has a large number of compile-time
configuration options; back in the 90s, it was common to compile your own
kernel, after tuning these options to your own particular hardware. It was
also common to get new kernel releases (as a source code .tar.gz, or as a
patch to the previous source code .tar.gz) directly from kernel.org, instead
of waiting for the next release of the distribution you were using.

So the time it took to compile a new version of the kernel for your machine
was something many Linux users had experience with, and it was clear when a
machine was faster (perhaps as a consequence of having previously compiled and
installed a newer kernel!) because it took less time to compile the kernel. To
turn that into a benchmark was just a matter of standardizing on a kernel
release and a set of configuration options.

------
beagle3
It's a beast, but ... that's not a very impressive measure. Bellard strikes
again -

From [0], in 2004: "TCCBOOT is a boot loader able to compile and boot a Linux
kernel directly from its _source_ code. It is only 138 KB big (uncompressed
code) and it can compile and run a typical Linux kernel in less than 15
seconds on a 2.4 GHz Pentium 4."

[0]
[https://bellard.org/tcc/tccboot.html](https://bellard.org/tcc/tccboot.html)
[2004]

~~~
anticensor
I would prefer gccboot (with gcc -march=native -mtune=native) though, because
tccboot cannot optimise the output.

------
abridgett
Reminds me when they built a (much older, smaller) kernel on a pSeries (POWER
architecture) in under 5 seconds:
[http://es.tldp.org/Presentaciones/200211hispalinux/blanchard...](http://es.tldp.org/Presentaciones/200211hispalinux/blanchard/talk_2.html)

Gosh, 18 years ago, now I feel really old :D

------
fortran77
If you want to keep these compile times, better not rewrite the kernel in
Rust.

------
hrgiger
I think this is the script they use:
[https://openbenchmarking.org/innhold/9a7355bdb73d85c9e044d02...](https://openbenchmarking.org/innhold/9a7355bdb73d85c9e044d020939c0ae9bcc1774e)

On a vanilla head branch, my "time make -s -j32" took "0m56.226s" with a 1950X.
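For anyone wanting to try the same timing run, it boils down to this (a
sketch, assuming a kernel tree that already has a .config; -s keeps make quiet
so only the time output remains):

```shell
# In a configured kernel source tree: time a silent parallel build,
# with one make job per available hardware thread
time make -s -j"$(nproc)"
```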

------
Havoc
Was recently googling, out of curiosity, what the kernel guys were using. GKH
mentioned that he had access to 32-core AWS instances for it... 7 years ago.

Still... exciting times for consumers.

------
ilaksh
Nice. How long to build Chromium?

------
KirinDave
I don't really understand why this is newsworthy, so maybe someone could help
me understand. It's a big, power hungry desktop CPU with incremental
performance gains over the last generation.

Why is this important? Is there some kind of architectural breakthrough these
CPUs are using? Are these CPUs recovering performance lost to Spectre/Meltdown
mitigation?

~~~
barkingcat
This is a big, power-hungry desktop CPU with incremental performance gains on
the x86/x64 instruction set _from someone who isn't Intel_, forcing Intel to
compete on pricing and performance.

Just that alone is newsworthy.

~~~
KirinDave
I think that's it: it's sort of an Intel vs AMD interest piece. It's
newsworthy in that it's a market signal, not a technology signal. The desktop
CPU market segment is seeing a lot of transformation as secondary
purpose-built processors and SBCs become more and more of the market, so this
is just sort of a fun puff piece.

Thanks for helping me answer the question.

------
ddevault
LLVM in under 2 minutes is way more impressive. That's a crazy CPU.

------
pella
discussion:
[https://news.ycombinator.com/item?id=21628149](https://news.ycombinator.com/item?id=21628149)

Linux # Threadripper :
[https://news.ycombinator.com/item?id=21628482](https://news.ycombinator.com/item?id=21628482)

~~~
philliphaydon
I clicked to see discussion on the topic of the thread, but you're linking to
Threadripper discussion. Disappointed in the lack of Linux kernel compile time
discussion.

~~~
ghostpepper
Not trying to be rude, but genuinely curious: what else is there to discuss?

Is this a new record, or just a record for the price point? What was the
previous record? Will this have a large effect on how the kernel is developed?

For what it's worth, I understand how hard it is to focus on a project when
compile times get longer than about thirty seconds.

~~~
philliphaydon
Was thinking people would talk about compile times with their hardware or
experiences with intel vs amd etc.

It’s just this title is very specific to compile time compared to the other
being very general discussion.

