Linus Torvalds has switched to AMD (lkml.org)
288 points by kdrag0n 43 days ago | 129 comments



I just upgraded from a 2015 i7 (4 cores / 8 threads) to a Ryzen 7 3700X (8/16).

When I first started using the new CPU, the most striking thing was how /few/ of the cores were under load during my compilations, due to several bottlenecks in my build I didn't realize I had. Over half of my cores were near idle.

I was able to reduce compilation times by 75% (~126 sec to under ~31 sec) just by allowing several processes to run concurrently, and changing the order of a few others so they weren't fighting over file system locks.

I went back and tested it on the old i7 machine and still got a ~30% improvement. My point is: upgrade away, but make sure your tooling and scripts are designed for that type of concurrency, otherwise you'll be wasting a lot of the potential. Mine weren't.
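
For anyone curious what that kind of change looks like in practice, here's a minimal sketch (my actual scripts were build-specific; the step names and commands below are invented for illustration): run the steps that don't share files or locks concurrently, and keep the order-sensitive ones sequential.

    # Hypothetical sketch, not my real build scripts: step names/commands are made up.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Steps that don't depend on each other and don't touch the same files/locks.
    independent_steps = [
        ["make", "-j16", "-C", "lib_a"],
        ["make", "-j16", "-C", "lib_b"],
        ["python", "codegen.py", "--out", "gen/"],
    ]

    def run(cmd):
        # check=True makes a failing step raise instead of being silently ignored.
        return subprocess.run(cmd, check=True)

    with ThreadPoolExecutor(max_workers=len(independent_steps)) as pool:
        # Each step is an external process, so threads are fine here: the work
        # happens in the subprocesses, not in the Python interpreter.
        list(pool.map(run, independent_steps))

    # Steps that fight over the same files/locks stay sequential, in a fixed order.
    subprocess.run(["make", "-j16", "link"], check=True)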


Were there any tools you used to help you with this or did you just look at your build directly?


Are you saying that the AMD stuff doesn't do hyperthreading?


... Considering hyperthreading is a proprietary Intel system: no, AMD doesn't do it.

Not sure how that would be in any way relevant to the parent's point though.


It should be noted that AMD has SMT, which is pretty much the same thing.

https://en.m.wikipedia.org/wiki/Zen_(microarchitecture)


While shoring up nomenclature: I believe “simultaneous multi-threading” (SMT) is the generic term and “Hyperthreading” is Intel’s branding of their SMT implementation.

Calling AMD’s SMT “hyperthreading” is like calling all tissues Kleenex. It’s fine imo, but it doesn’t hurt to know these things.


No, they were almost certainly talking about some part of their build process limiting it to a few processes (like maybe a 'make -j 6' limit), or something that could easily have been parallelized but wasn't.


It was even worse than that: several parts of the build process were fully sequential. I only got any benefit from multiple cores because the tooling in those sequential steps was utilizing them, or related system services were.

But, yes, I also did change at least one config to allow it to spawn more than 2 threads.


AMD calls it SMT (Simultaneous MultiThreading).


Simultaneous Multithreading is the actual term. Hyperthreading is just Intel SMT in marketing language.


Yepp, SMT is the generic term also used in academia for ages. Source: Old editions of venerable "Computer Architecture" by Hennessy & Patterson.


I guess Greg Kroah-Hartman is also switching or has switched to AMD.

A video from the person (Level1Techs) who built him a new machine.

Building a Whisper-Quiet Threadripper PC For Greg Kroah-Hartman: https://www.youtube.com/watch?v=37RP9I3_TBo


I switched to the same CPU (+128GB of RAM) a few months back. Amazing value, IMO. Truly a fire-breathing workstation. Costs less than the base config of a Mac Pro, as well. In addition, while the first gen Threadripper boards were picky AF with respect to memory choice, the new TRX40 board took 128GB in 4 DIMMs like a champ, and is 100% stable at the memory's "XMP" settings. I'm pretty impressed with this and don't regret spending $1800 on the CPU. It's really a no-brainer for anyone who does deep learning or works with C/C++, or both, especially if you can write it off as a business expense.


> I'm pretty impressed with this and don't regret spending $1800 on the CPU. It's really a no-brainer for anyone who does deep learning

I hate to ask this, but why would this CPU be any good for deep learning, especially for training purposes? That doesn't make any sense.

Sure, if I needed a workstation that can build large software like Rust, LLVM, Chromium or Linux in ~30s, then either the 3970X or 3990X is worth getting. For deep learning? This will perform very poorly or even end up permanently damaging the CPU, which would be a very expensive investment to waste. You might as well get the TITAN RTX for that, which is a no-brainer for deep learning use cases.


Because augmentation is done on the CPU, and a slower CPU can't keep up if you have 4x 2080 Tis in your workstation training in fp16, which is how I prefer to run things. Moreover, a SATA SSD also can't keep up, so you need NVMe. And for that you need an extension cable, since the NVMe stick will overheat in its default location (right under the GPU). Found out the hard way. This is also why AWS sucks so bad for deep learning workloads: on some workloads it's very easy to bottleneck on CPU, and unlike with Google Cloud, you can't just drag a slider and give your VM more cores when needed. Jeff Bezos determined that 4x V100 should get 32 hyperthreads, so you get 32 hyperthreads.


Have you looked into using DALI [1] to do augmentation on GPU? They've gotten some nice speedups for computer vision that way.

[1] https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-...


Of course, I've looked at everything. Given how expensive modern GPUs are, it is best to use their resources for deep learning rather than augmentation. That way you also get to pipeline augmentation: while the GPU is doing the forward/backward pass, the CPU is also cranking out the next batch, so in a way you're utilizing the resources better.

Another issue is that DALI only supports a subset of the augmentations that e.g. Albumentations supports, and I'd much rather be working on the "neural" bits than wrestling with augmentation algorithms.
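
For reference, this is roughly what that pipelining looks like in a typical PyTorch + Albumentations setup (a generic sketch, not my exact code; the dataset below is a fake stand-in): the DataLoader workers augment on the CPU while the GPU runs the current batch.

    # Generic sketch assuming PyTorch + Albumentations; the dataset is a placeholder.
    import numpy as np
    import torch
    from torch.utils.data import Dataset, DataLoader
    import albumentations as A

    augment = A.Compose([
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
    ])

    class FakeImageSet(Dataset):
        def __len__(self):
            return 10_000

        def __getitem__(self, idx):
            # Stand-in for "read + decode an image from disk".
            img = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
            img = augment(image=img)["image"]           # CPU-side augmentation
            x = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
            return x, idx % 10                          # fake label

    if __name__ == "__main__":
        # num_workers is where core count matters: each worker is a separate
        # process decoding + augmenting batches in parallel with the GPU.
        loader = DataLoader(FakeImageSet(), batch_size=64, num_workers=8,
                            pin_memory=True, shuffle=True)
        for x, y in loader:
            pass  # forward/backward on the GPU goes here, e.g. x.cuda(non_blocking=True)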


Yes. From the extra context you have given, since you already have multiple high-end GPUs, it makes sense here to remove the bottleneck of a weaker CPU, for example. I agree with this to some extent.

But for those who don't have multiple GPUs and want to do DL training, I'm sure they would start off with one NVIDIA GPU and a more recent CPU that is just good enough for DL training, rather than a high-end powerful CPU, which is why I hated to ask this investment-wise.


CPU must not be overlooked, IMO. You don't want to spend thousands of dollars on GPUs and then only be able to keep them busy 50% of the time, if that.


Are you doing deep learning on the CPU?


I've been running Fedora (21 through 32 as of now) on an HP MicroServer N54L Gen7 since 2014. It has a relatively weak AMD Turion II Neo N54L dual-core processor, but it's been working reliably over the years, still going strong (well, not really when compiling, e.g. ZFS modules via DKMS).

I DIYed an AMD Athlon XP 2500+ (Barton) + Epox 8RDA3+ (AVC 112C86 fan) desktop; overheating was a headache, overclocking made it even worse, and in the end that motherboard didn't survive the overheating problem (so AMD CPU cooling has always been a concern of mine lol).

Good to see consumers are more willing to accept AMD after all the Intel Meltdown & Spectre, etc. drama (I was working as tech lead for a XenServer support team, so to some extent I knew how badly it hurt from that specific PoV). Personally I'll prefer AMD when buying new hardware.


It's actually going to be a great era for consumers.

AMD and Intel in strong competition improves the quality and diversity of CPUs and, most importantly, reduces their prices. We saw this almost immediately when the 10980XE, which is largely identical to the 9980XE, was sold for 50% of the price as a result of the 3950X/3960X.

In the server space, however, AMD will likely remain a minor player as ARM inevitably starts to find its way in as a result of its far superior price/performance ratio.


> In the server space, however, AMD will likely remain a minor player as ARM inevitably starts to find its way in as a result of its far superior price/performance ratio.

Interestingly, that's starting to evaporate. The Epyc 7702 is only ~3 watts per core, with Zen 2 cores. That's crazy high performance per watt. If ARM is still more efficient at all, it's not by very much, and it has worse per-thread performance and has to emulate legacy code.

There are probably going to be places where ARM makes sense in the datacenter, but that doesn't look like an easy path to taking over entirely.


In terms of workload per watt on AWS, cost per workload, or the actual cost of the processor, the Graviton still has a lead (although that was when compared to Zen 1).

It won't be taken over any time soon, but I suspect the x86 server market will shrink by quite a bit over the next 10 years.

Anyway where is Zen 3!


>It's actually going to be a great era for consumers.

It's actually NOT: AMD, Intel, MS and big media companies are planning to put hardware DRM inside the computer.

Over the last 23 years of PC gaming we've seen the PC become a closed platform because of Steam and MMOs; any client-server software you buy means you no longer own your PC or have any personal privacy, because the program is constantly beaming data back to the mothership.

So no, they are going to turn the PC into a locked-down platform like mobile, where you never see the exe files. They are trying to kill off local applications; they want to "end piracy" by literally removing any control you have over your PC.

That's what Windows 10 DRM is about: UWP, encrypted computing, VMs, etc. It means it will be increasingly impossible to preserve old software, because they are not honest binaries.

Don't think so? That is what Irdeto is all about; they've been encrypting PC game files for a while now, and the future of PC gaming looks grim with always-online DRM and files encrypted because of micro-transactions and in-game stores.

https://irdeto.com/

So no... the future looks locked down and dystopian to anyone who's been paying attention. What we're gaining in performance we're losing in freedom, with increasing levels of DRM, VMs and encrypted software.


>It's actually NOT: AMD, Intel, MS and big media companies are planning to put hardware DRM inside the computer.

That's pretty old news. Things like the AMD PSP or Encrypted Media Extensions (DRM implemented by web browsers) exist primarily because media companies strong-arm vendors into implementing DRM against their will. Things like HDCP simply do not work if they aren't deeply integrated into the hardware.

Steam is another example of a platform where developers are asking for DRM. The reality is that DRM is optional on Steam [0], but almost no developer is voluntarily disabling it. The high-profile publishers even add third-party DRM to their games because they think what Steam does isn't enough!

>Over the last 23 years of PC gaming we've seen the PC become a closed platform because of Steam and MMOs; any client-server software you buy means you no longer own your PC or have any personal privacy, because the program is constantly beaming data back to the mothership.

>So no, they are going to turn the PC into a locked-down platform like mobile, where you never see the exe files. They are trying to kill off local applications; they want to "end piracy" by literally removing any control you have over your PC.

I'm not sure why you are using Steam as an example because it is a piece of software that wouldn't exist once Microsoft forces every application to be delivered through the Microsoft store. Not only is Steam third party software, it is also a tool that installs even more third party software. This bypasses the entire idea behind only allowing reviewed applications on an app store.

Steam also has another very nice feature that lets you avoid problems associated with Microsoft. It runs on Linux and it even lets you play Windows only games on Linux. Once you switch to Linux all of those problems you are talking about are irrelevant.

[0] https://steam.fandom.com/wiki/List_of_DRM-free_games


You don't get it: the end game was to client-server the big-budget games, which has happened. With Diablo 1 and 2 we owned the game outright; not so with Diablo 3 and Overwatch.

Steam was forced into Half-Life/CS in 2004; no one wanted it, and Steam is malware. That is why we lost dedicated servers and level editors in the AAA gaming space.

GtkRadiant, the level editor for Quake engine games:

http://icculus.org/gtkradiant/

Compare Doom with Doom Eternal. Because the internet makes stealing software easy, publishers respond by holding back program files from the user.

Doom was the grandfather of modding on the PC; in Doom (2016) we got a gimped SnapMap, and Doom Eternal is totally locked down. A far cry from the id Software of the '90s.


Don't know why you're getting downvoted. I've been trying to keep an old Vaio Z series ticking along with Windows 7 using a community hybrid graphics driver. Lo and behold, you can't even do that anymore without basically hacking the Windows kernel from the bootloader, because Micro$oft is too busy sucking up to the movie/streaming industry to allow regular users to keep their things running. The kernel will not let you load an unsigned driver or give you the option to overrule their BS setup, even if you have looked at the driver to determine that what it is doing is correct, and are willing to use it.

B2B is the new policy setter. Businesses are the new first-class User, and everyone else is just a Luser. To hell with 'em all, I say. If I could, I'd find a way to crack their DRM anti-end-user circuitry and share it with the world out of spite. I liked that damn laptop. I still like it. I'm going to figure out how to pull off that bootloader thing, and I'm putting it out there for other Z series owners. You shouldn't have to fight a computer, damnit!

And yes, I could swap to Linux, but that isn't really the point. The DRM crap moving to hardware means everyone has to deal with it. Furthermore, everything else I run is already Linux, and that laptop is my token Windows machine, which has quite a bit of sentimental value, as it was one of the machines that got me through college.

Anyway. Consider my hat firmly in the outraged bucket. This is ridiculous. Worthy of ridicule in every sense of the word. The entire software/hardware industry should look at the industries or actors asking for it, and tell them to work on getting on better terms with their users. The majority won't misbehave if you just provide a reasonable experience.

About the only industries that have a reasonable claim to needing these types of features are national security, medical devices, and, grudgingly, finance. That's it. Even then, I have difficulty swallowing the application, because it just leads to people trying to pry the lid open ever more. If they aren't going to give people the capability to opt out of this draconian nightmare, I want nothing to do with them.


ARM doesn’t do x64. I have no doubts that it will get higher, but if you don’t wanna introduce a new CPU architecture, it’s Intel or AMD.


That's not ... entirely true (edit: I see I misread you, it doesn't do x64, but there is 64-bit ARM in the form of AArch64):

https://www.edn.com/arm64-vs-arm32-whats-different-for-linux...

You can get Arm64 linux workstations now.

https://www.anandtech.com/show/15733/ampere-emag-system-a-32...


At some point you’ll recompile for ARM…


A decade ago, ARM was a decade away from entering the server market. Today it is still a decade away from entering the server market.


Those are all older, mediocre AMD architectures though. The Zen line is where things pop off.


He didn’t just switch to AMD, he switched to the 32 core Threadripper 3970x.


Same as Greg K-H's new computer: https://www.youtube.com/watch?v=37RP9I3_TBo


I mean, he wasn't about to splash for an AMD9080ADC...


Wonder why not 64 core Threadripper 3990x?


One possibility would be the 3990X's lower base clock of 2.9GHz vs. the 3.7GHz of the 3970X.


My guess: Amdahl's Law vs the noise made by the cooler.


Yes, he says in the video that one of the goals was to put together a system that was as quiet as possible.


They have the same TDP.


I'd love to have one of these in my gaming rig, but even with my 1700 I rarely have cpu bottlenecks.


Most games aren't that multi-threaded.

So it would actually end up being a lot slower than a 10900k.


AMD's core density is a huge deal, and it's not just for gamers anymore. Lots of enterprise software companies are rethinking or have restructured their licensing agreements to prepare for a future where per-socket licensing (which pretty much implied 2x sockets per server) will be undercut by 2nd and 3rd gen Epyc making single-socket servers at scale relevant again.


Unfortunately Microsoft saw this coming in 2012 and changed their database licensing off of per-socket and onto per-core.


Core count is not that important for games.


Core count for gaming is important until it matches the latest console hardware (now 8 cores, next gen also 8 cores).


Current consoles are still using low-power CPU microarchitectures, albeit at higher clock speeds than the original PS4 and Xbox One. So it's still pretty easy to match the console CPU power with a modern desktop processor that has fewer CPU cores each providing much higher per-core performance. When the next generation of consoles arrives at the end of the year, the Xbox and PlayStation families will move to a desktop-class microarchitecture with performance per clock that's competitive with retail desktop processors.


It's becoming that way; raw single-core performance is becoming less of a bottleneck over time.


I’d be wondering why, though. From my experience, much of what a game engine usually does looks embarrassingly parallel.


It’s parallelized on the GPU, but I/O isn’t really something you can parallelize that much.


Most of the CPU work in games is in making draw calls, which can be parallelized. Interesting that the meme that games are ST bound persists when that hasn’t been the case for several years (see: DX11+ and Vulkan).

The problem is game devs and engine makers don’t spend the effort to parallelize in the main loop everywhere they can.

You can get extra fps by having a faster single thread. Even still, if you had a 6 GHz single core CPU with a contemporary architecture then you would have an abysmal frame rate in a contemporary game. Those cores are used.


>Interesting that the meme that games are ST bound persists when that hasn’t been the case for several years

Except you don't get that it's not a meme. Ideally, CPUs were expected to scale into the 10-30 GHz range; that never happened because of the end of Dennard scaling.

So yes, ST performance is paramount; the only reason it seems not to be is that CPU scaling hit a brick wall because of power and leakage issues. When new materials become available that enable higher frequencies, you will see everything dramatically improve.

So no, DX11 and Vulkan will not magically make all games faster; they are optimizations for graphics pipelines.

Most of today's games we're interested in run just fine on 10-year-old machines. If you think you can't pair an i5 2500K with a modern GPU and run 99% of all games, you are clueless.

Most games are targeted at console specs, which has held PC gaming back for decades.


There's not much to parallelise about I/O maybe, but many games simulate entire worlds filled with monsters, NPCs and tons of game effects. That sounds like stuff that's easy to parallelise.


Even when it's written down, the name Epyc seems dumb. Why the hell did they pick that name? Why did they let their gaming division name a server product?

Nothing against the product. A product name wouldn't ever stop me considering it as a serious choice; it just seems like a rushed decision that has unfortunately stuck.


Well, Intel called their then cutting-edge server architecture EPIC...


> it's not just for gamers anymore

Gamers never cared about the number of cores. Only recently has this slowly started to change. Intel is still king if you primarily care about gaming.


Really only true if you _only_ care about gaming.

If you're split at all, single core perf on AMD these days is real close, and multicore Intel is just getting trashed (especially in any kind of price-per-perf metrics).


Everybody is always looking at performance per dollar. I'm more interested in performance per Watt. All that power generates heat which needs to be blown out of the system, which creates noise. I like my machine silent.


You'll be happy to learn that AMD is beating Intel in perf-per-watt these days too. Certainly when it comes to the high end, but Zen 2 is doing well in laptop tests as well.


That's good to know. I got my son a Ryzen Thinkpad for school. When I bought my own almost a year ago, I was sad that Thinkpads didn't come with Ryzens yet.


Me too, but the heat (energy) is probably the more important discriminator. Data center operators very much care about how much is created (wasted) as they have to pay for it twice.

Desktop users might in addition care about heat (-> noise) being created in low-load / near-idle situations, where those machines spend most of their time.


I also care about heat and noise under load. I hate it when all the fans kick in.


Intel is still king if you want the last few frames per second squeezed out at the top end.

It's not king in bang-for-buck, even for gamers.


Not surprised. There is no equivalent from Intel for the 3970X except in the significantly more expensive Xeon line. And the closest Intel part, the 10980XE, is 18 cores with availability close to zero.

Also, I can't imagine Linus is going to be interested in overclocking, which is where you get most of the value from the Intel chips.


Cool! Honest question: what's he doing for graphics then? Intel chips have had integrated graphics with superb Linux driver support for years. Does this AMD Threadripper 3970X have integrated graphics?


I'm sure he is not running an Nvidia GPU; that would be a giant pain for kernel development work.

There are many AMD discrete GPUs (APUs are a little problematic), including newer ones, that work well with modern Linux kernels, so that's probably what he is running.


I wonder the same thing.

Intel probably has the best graphics drivers for Linux.

I am currently using Nvidia with proprietary drivers and it works fine for now but I probably won't be able to switch to Wayland any time soon. But other than that it works great. Including gaming.

And I also have a media box with an AMD Ryzen 5 2400G APU, and that thing has been a bit of a problem. GPU drivers tend to crash, GPU performance on Linux is poor, and it took about half a year of kernel updates to finally make it not crash every day. Are there any AMD graphics cards that have good Linux drivers?


I have an AMD card from 2015 (an R9 Nano to be exact). I had to use fglrx for the first few months, but ever since amdgpu entered mainline, it's been rock-solid with good performance. Vulkan works great as well. (Footnote, I don't care about OpenCL, so cannot comment on that.)

Based on this positive experience, when it was time to replace my notebook, I got one with a Ryzen 2 APU. The GPU part of that also works great. There were some problems with the IOMMU, but those were resolved by a firmware update from Lenovo a few months in.


I've been running a Radeon RX 470 for a few years now and the Linux drivers are superb. Better than the intel ones that I have in my laptop.

It took ~6 months after launch for it to get stable, but it has been rock solid ever since. It is a dedicated GPU though, so it also lets me play most games using Proton. That has also been amazingly solid (Metal Gear V, etc.).


AMD Radeon graphics most likely. The 5600 or 5700 is very competent and has open-source drivers.


>AMD Threadripper 3970x

Curious to know about the rest of the box ... but he doesn't say.


Well, we already know what he thinks about NVIDIA.


Intel is still more convenient for a user like me, who dislikes discrete video cards. Most AMD CPUs do not have integrated video, and those that do show problems working in Linux and the BSDs. AMDs also have higher idle power consumption. Other than that, AMDs are clearly better CPUs.


Ryzen G series includes integrated graphics.

There also aren't any major bugs for Zen/Zen 2/TR4 on Linux/BSD.

Ironically, the largest stability issue with 1st/2nd gen Ryzen is caused by idling too efficiently. Older power supplies sense this sub-5W idle as being suspended or powered off and throttle their 12V rails, leading to system hangups. An option in BIOS must be set to raise the idle wattage for these PSUs.

The demographic most affected by this, first time and budget builders, were also the least likely to be able to diagnose it, leading to its prevalence in forums. Search for "power supply idle control" to learn more about it.


This is clearly not true. There are serious problems with compatibility between Ryzen integrated graphics and Linux. More or less reasonable support in Linux (and I am not even talking about FreeBSD, let alone other BSDs) appeared only very recently, and it is still buggy, causing lockups, black-screen boots, etc. Not only that, the Ryzen G series is not attractive at all because it is always underpowered and one generation behind; simply put, they suck.

Speaking of idle power consumption: there is evidence all over the internet that Ryzens themselves are not necessarily very power hungry, but their motherboard chipsets are. Here, for example, the Ryzen 3xxx parts consume 10W more at idle: https://tpucdn.com/review/amd-ryzen-5-3600/images/power-idle.... The reason is unclear, but it is what it is: they really are hungrier.


You raise an interesting point. I've been considering upgrading to Threadripper from an older Xeon system, but am put off by the cooling requirements (CPU, video, system) and how much that will add to background noise.


Threadripper with a be quiet! Dark Rock or Noctua cooler will run almost silently. With two GPUs, 8x DIMMs, and a X520-DA2, my home system pulls 380W under full load and is whisper quiet.

That said, if you don't need very high core counts, the PCIe lanes, or the memory bandwidth, I'd build on Zen2 Ryzen instead.


> That said, if you don't need very high core counts, the PCIe lanes, or the memory bandwidth

I'm on a 1900X because it has the PCIe lanes and memory bandwidth, but I do not need the cores. The bonus is that the fewer cores are clocked higher, because they still have the same thermal budget as the higher models.


The YT link above to the L1 Techs build is like that. They wanted to optimize for quiet, so if you're like that too you should watch that video.


Not blazing single-thread perf, but pretty high up the total perf list...

https://www.cpubenchmark.net/singleThread.html


I mean, how silly would you have to be to buy a 32 core/64 thread processor if single threaded performance was a consideration at all. There is obviously going to be some sort of tradeoff in single core performance to obtain that density of cores.


The 3970X has a maximum single-core turbo of 4.5GHz, with similar or better IPC than Skylake. You're not missing much at all. The tradeoff only exists when you're thermally limited with multiple cores running, but given how many more cores there are in the first place, you are still way ahead.


Passmark numbers seem to say -15.5% - but I guess that's maybe not that big of a deal in most things.

I use my desktop for playing games, so having 4-8 cores is enough, I'd much rather have fast cores for things that don't parallelize well.

That being said, I am pushing a pretty old CPU (an i7-4770K), and I haven't been able to convince myself to spend the money on the upgrade, since I'm down ~32% from the best thing you can get in single-thread perf.

Maybe the next round of CPUs, Zen 3 et al. I'll be going with an NVMe PCIe 4.0 SSD as well in the next build, which should give a big boost over the SATA SSD I'm using now.


Is he still working on the kernel?


Yes, he is in charge of merging changes from ~200 subsystem git trees into the new release, every week. Every ~8 weeks they do a major version with big changes, and then they do weekly releases with bugfixes, and then they do a major version with big changes again.

The major releases are then maintained by Greg Kroah-Hartman (the number 2 person in Linux), who cherry-picks fixes from mainline that should go to stable. Distros have kernel teams that also maintain their own stable trees, with or without help from upstream stable maintainer.

Linus can't code-review all the changes queued for the next major release, but he does make sure that if a subsystem maintainer says "this is safe to merge, it has been tested" and it actually doesn't even compile, Linus will yell at him and call him bad words in Finnish. Because subsystem maintainer is an important job: people rely on them, they have years of experience, and they know better than to send pull requests with junk.

He is also involved with resolving disputes and fixing things that affect his own workflow.


Thanks, that's great to hear.


I'm pointing to this the next time someone thinks Hacker News is above celebrity tabloids.



I’d say it’s more of a celebrity endorsement:

“I'm now rocking an AMD Threadripper 3970x. My 'allmodconfig' test builds are now three times faster than they used to be, which doesn't matter so much right now during the calming down period, but I will most definitely notice the upgrade during the next merge window.”


It's like guitar endorsements

AMD is Fender, Intel is Gibson


I rock an Ibanez. ;)


So I guess my PRS is what, SiFive?


An HiSilicon integrated chip in an old TCL tv ?


You ARM cheapo rebel!


Well we already had the John Carmack fans going wild over him 'contributing to OpenBSD', so this isn't the first time HN has this celebrity programmer affection.

Also, he should have got an AMD Ryzen™ Threadripper™ 3990X instead: 64 cores / 128 threads to compile the kernel in ~30 seconds.


Phoronix has actually benchmarked compiling the Linux 5.4 kernel on all three of the 3990X, 3970X, and 3960X [1]:

- 22.48s, 3990X

- 23.64s, 3970X

- 27.54s, 3960X

But keep in mind that the 3970X is USD $2,000, while the 3990X is USD $3,990... can't speak for Linus, but that extra ~1.2s per compile isn't worth $1,990 to me.

https://www.phoronix.com/scan.php?page=article&item=3990x-th...
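
Back-of-the-envelope, using the Phoronix times above (the builds-per-day figure is just an assumption to make the point):

    # Prices and build times from the comment above; builds_per_day is assumed.
    price_3970x, price_3990x = 2000, 3990        # USD
    time_3970x, time_3990x = 23.64, 22.48        # seconds per kernel build

    saved_per_build = time_3970x - time_3990x    # ~1.16 s
    extra_cost = price_3990x - price_3970x       # $1,990

    builds_per_day, work_days = 50, 250          # assumed heavy usage
    hours_saved_per_year = saved_per_build * builds_per_day * work_days / 3600
    print(f"{saved_per_build:.2f} s per build, "
          f"~{hours_saved_per_year:.1f} h saved per year, for ${extra_cost} extra")
    # -> roughly 4 hours a year for an extra ~$2,000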


Well, double the cores, double the cost...


Somehow I'm thinking of IBM mainframes reading this.


3990x does have more cores and threads, but it also has lower clock speeds than the 3970x.


It's only about 15% faster...not really worth it.

https://www.servethehome.com/amd-ryzen-threadripper-3990x-re...


A celebrity tabloid talks about the personal life of celebrities. This is a technology focused forum and the article is about a big name in technology choosing a new technology.


s/celebrity/technologist/g

s/sports team/company/g


A sports team choosing a new technology sounds quite interesting! Please do post content like that.


You mean like when the NFL started peddling Microsoft Surface? Those were funny times.

[1] https://www.theverge.com/2016/10/18/13320664/bill-belichick-...


So a sports team produces something with analogous utility to a CPU? People buy sports gear or tickets and expect to run computations on them? What kind of workload can the Packers handle?

I think for the project leader of something like a kernel, which needs to care about the details of a CPU, it's entirely appropriate to set an example and remind people that they aren't building for a hardware monoculture.

With Linus I start to think of old writings he used to have about Alpha, or his prior employment at Transmeta (I know no details about it, would not be shocked if it was at least in part a PR move) ... If he's not willing to give some less common hardware a good sanity check, what example does that set for maintainers of various pieces?

Similarly, I remember the old stories about Stallman's free software MIPS rig. He ran that to prove a point that was important to him.


Unpopular opinion: This is how git rose to where it is today. If people were less blinded by celebrity status, they'd see that Linus' lack of exposure to paradigms beyond C such as OOP or FP (due to deliberate disdain; his loss) meant that C pointers were the primary abstraction on which he built git, and that it suffers greatly because of this. Not to mention that good UX is a rare skill among developers in general. See https://stevelosh.com/blog/2013/04/git-koans/

Mercurial should have won. And Plastic SCM is awesome, designed conceptually from the ground up to build on lessons learned from git (and by an Associate Professor in CS). In fact, the whole concept of DVCSs is ridiculous in 99% of corporate contexts where you can't use your laptop / desktop at home without logging in into the VPN. Why do we even need distributed version control in such a setting? Subversion is perfectly adequate, and even non-developer users can actually understand and use Subversion effectively. See https://svnvsgit.com/

Mind you, I'm not a blind hater: I love that Linus did well in life. I'd be thrilled if his net worth was a billion instead of just $150 million. Surely, he contributed much more than that.


I spent easily 10 000 hours using Mercurial over nearly 10 years. I was, and still am, a huge Hg fan. I switched to Git quite late (some time in 2018) and converted all projects, both home+work, cold turkey.

I recently revisited an old Hg project. It was really nostalgic. Hg is easier to use and TortoiseHg is the best GUI for SCM bar none.

But Git is simpler. The simplicity is a mind-opener. Git deserved to win. When you want to do something in Hg that's not built in, even something like rebase, the 30+ included extensions are all marked as "EXPERIMENTAL", and you can only safely interact with the data model by writing a custom Python 2 script.

With Git you can go into the .git directory and delete a refs file. Filter-branch is a first-class feature. A few basic concepts underpin more functionality. There are multiple genuine implementations. This is radical transparency.

Originally I thought I would miss having draft phases, embedded branch names, and ordinal commit numbers. But in practice I don't miss these at all. And in retrospect these all actively work against code sharing in general, and the pull-request model specifically with its working revisions. Git is inherently more social-capable and will always have a stronger network effect. Hg does not have separate author + committer.

With Git you can stage individual lines, not just hunks. This alone should be reason to use Git over Hg (oh, you can enable the optional crecord extension and do everything from the CLI? that's not Hg's famed ease-of-use).

I once started to develop a Gogs/Gitea clone for Hg. For compatibility it had to communicate by IPC with a standalone Python process, then I hit a roadblock because the wireproto was undocumented for bundles. Hg developers were also not enthusiastic about my proposal to switch from full+diff storage to chunk storage.

I do miss some things. A central .hgtags file would definitely resolve some tag conflicts and issues with coworkers re-pushing deleted tags.


Thanks, very good points.


Can we stop with the conspiracy theories? Git rose to where it is today for a multitude of reasons, most of them technical: it is very fast, rebase is well supported, if it breaks it's easy to fix, and so on. (Yes, some of these have since been fixed elsewhere, but it is too late.)

I mean, many people had exposure to both git and hg at some point in their life, especially the kind of people who make the decision about which VCS to use. Celebrity endorsement may be good for convincing me to try something, but it cannot convince someone to use it over the alternatives if they have had enough experience with both.


Speed was definitely a big deal when git first came into being. However, as far as I remember, the main thing that made git popular was actually GitHub. Before GitHub, the various source repository sites were awful (and in the case of what SourceForge became, more than awful).


I recall that the git-svn bridge had me hooked on git long before github. Also made it braindead simple to migrate an svn shop.

We looked at hg. I also dabbled in a few others. bzr, I believe was a popular one.

As said upstream, there are plenty of reasons that git "won," most of them technical. But I could see things having gone many ways. For the most part, source control is not a problem most people think they have. Just look at how terrible the data science folks are at it. And management is stuck in whatever MS is doing in Word nowadays.


"Plaintext and VCS solve all problems" is not very practical for everyone other than programmers.


The other fields also have a terrible time recreating anything. So... I kind of feel that they are making the poor choice here.

In particular, I have yet to see a successful collaboration in data science or document creation that wasn't ridiculously ephemeral, or that wasn't backed by a much nicer format. (Where "ephemeral" means that after it is done, it is referenced as a PDF but not edited directly anymore. Which, to be fair, is the majority of documents that exist in the world.)


It also doesn’t really matter. Git is good enough. Hg is good enough.


Having used Subversion and git in big teams, I can tell you that the single fact that branches are free in git changes the way teams interact, and is worth every learning curve and design flaw in git.

From spending half a day per branch copy, to releasing only a few times per year because branches are expensive, to never changing more than the minimum because you can't branch-test a feature quickly, to days of offline CI because of a bad merge on the only dev branch, to keeping 3 copies of the entire repo to do backports because it's not easy to switch branches... SVN never again.


It's impossible to separate out the celebrity of Linus the person, and the celebrity of the Linux kernel he created. Git rose to where it is because of its success in being used for managing distributed Linux kernel development, rather than adulation of Linus. That it's not a great fit for things that aren't Linux kernel development isn't exactly a surprise.

In a world where "DVCSs is ridiculous in 99% of corporate contexts where you can't use your laptop / desktop at home without logging [sic] into the VPN", why (and what) should Mercurial, a popular DVCS, have "won"? (I don't mean in Mercurial vs Git)

DVCS isn't ridiculous. Especially in corporate contexts over a flakey VPN (like, say, if there were a global remote work experiment going on). Cheap local branching is an indispensable feature. So much so that if you're forced to use SVN, you can still get local branching via SVK or by using `git-svn`. SVN is still the right tool in a lot of places (anywhere there are binaries in play) though the overhead of being actively fluent in multiple VCSs at once isn't free.


I can’t upvote you because I agree on half of what you say and disagree with the other half.

Yes, git "won" largely because it was "what the Linux guys are using" (and because github was better than bitbucket and other alternatives). Its concepts can be painful to work with.

No, DVCS should not be ditched: the simplicity of repointing, rehosting, and working offline that DVCS systems give us should never be rolled back (and screw repo corruption in svn and other crap). Just because a lot of people don't rely on these features every day doesn't mean they are not hugely beneficial to the ecosystem at large.


I use Mercurial for almost everything

The hg command-line options are far saner than git's, and TortoiseHg is the best GUI ever.

But I might switch eventually, now that even Bitbucket is dropping hg support and many Linux distributions have stopped including a working TortoiseHg.


This instead of the site that collects PG's parenting tweets? It was on the front page a few weeks ago.


While you're right up to a certain point, more Linux developers on AMD platforms will eventually lead to fewer AMD-related bugs in the kernel. From what I've seen of AMD, the hardware is often excellent but the software leaves something to be desired. AMD GPU drivers, for example, are stable for some and unstable for others.

The more people involved with building Linux use AMD hardware, the fewer AMD-only bugs find their way into the kernel. It might also help with optimizing the software for higher core counts, now that 32 cores without NUMA is feasible for workstations.

In the end I think parts of the core team switching to AMD will make the hardware platform more attractive for people browsing for workstations. AMD provides great value for money, but if you want stability, you buy Intel. Now that the probability of instability is decreased, with major kernel developers testing out changes and encountering some of the same potential bugs, Threadripper becomes more attractive to certain groups of power users.

This is celebrity news that might affect a lot of people in their day-to-day business down the line, which is different from the typical "Paul Graham ate breakfast" or "Linus Torvalds flips off nvidia" celebrity news cycle.


The whole origin of HN is from PG fandom, isn't it?


I think nowadays it is called news from a KOL (key opinion leader). And since it is a tech person talking about a piece of tech, I thought it was quite relevant.

Hopefully more people will purchase AMD as a result of this. Its financial figures are not good. (As I have been saying in every AMD thread.)


This isn’t about some guy getting a new car, right. At least it’s about tech related to a person important in tech.


Well at least this isn’t teamblind


...it's funny, because if anyone can write code that wouldn't require 32 cores to run, Linus would certainly be one of them.

Don't forget to keep in touch with reality; a bunch of people on this globe still have to get by using dual cores.


So because a lot of the world is running dual core, programmers shouldn’t attempt to improve compilation time with more cores?


I think he wants Linux to just remove most drivers from the kernel so this guy can compile it with everything enabled on his slow computer.


No, it completely makes sense on the compilation aspect, I don't deny it.

But where do you test the compiled software?

To use an analogy from the internet: it's like web developers using huge uncompressed pictures, and nobody cares because everyone's got broadband. But then the guy on a metered mobile connection wants to view the page. Or the poor sod who, for some reason, is still stuck on 56k. And instead of bloated pictures, I mean system services, tasks and processes; no one realizes what a drag they may be causing, because everyone's got at least four cores nowadays anyway. That's a very real danger for any developer: to "lose touch with reality" when it comes to their users.



