When I first started using the new CPU, the most striking thing was how /few/ of the cores were under load during my compilations, due to several bottlenecks in my build I didn't realize I had. Over half of my cores were near idle.
I was able to reduce compilation times by 75% (~126 sec to under ~31 sec) just by allowing several processes to run concurrently, and by changing the order of a few others so they weren't fighting over file system locks.
I went back and tested it on the old i7 machine, and still got a ~30% improvement. My point is: upgrade away, but make sure your tooling and scripts are designed for that type of concurrency, otherwise you'll be wasting a lot of the potential. Mine weren't.
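A minimal sketch of the kind of change involved: running independent build steps concurrently instead of serially. The step commands here are stand-ins (real ones would be compiler or make invocations), chosen so the sketch runs anywhere.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for three independent build steps; each touches disjoint
# outputs, so they are safe to overlap instead of running one after another.
steps = [
    [sys.executable, "-c", "print('built lib')"],
    [sys.executable, "-c", "print('built tools')"],
    [sys.executable, "-c", "print('built docs')"],
]

def run(cmd):
    # check=True surfaces a failing step immediately instead of silently.
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

# Steps that previously serialized (e.g. on file-system locks) now overlap.
with ThreadPoolExecutor(max_workers=len(steps)) as pool:
    results = list(pool.map(run, steps))

for r in results:
    print(r.stdout.strip())
```

The key precondition is the one the comment hints at: the steps must not contend for the same files or locks, otherwise overlapping them buys nothing.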
Not sure how that would be in any way relevant to the parent's point, though.
Calling AMD’s SMT “hyperthreading” is like calling all tissues Kleenex. It’s fine imo, but it doesn’t hurt to know these things.
But, yes, I also did change at least one config to allow it to spawn more than 2 threads.
A video from the person (Level1Techs) who built him a new machine.
Building a Whisper-Quiet Threadripper PC For Greg Kroah-Hartman: https://www.youtube.com/watch?v=37RP9I3_TBo
I hate to ask this, but why would this CPU be any good for deep learning, especially for training purposes? That doesn't make any sense.
Sure, if I needed a workstation that can build large software like Rust, LLVM, Chromium or Linux in ~30s, then either the 3970X or 3990X is worth getting. For deep learning? This will perform very poorly, or even end up permanently damaging the CPU, which is a very expensive investment to waste. You might as well get the TITAN RTX for that, which is a no-brainer for deep learning use cases.
Another issue is, DALI only supports a subset of augmentations that e.g. Albumentations supports, and I'd much rather be working on the "neural" bits than wrestling with augmentation algorithms.
But for those who don't have multiple GPUs and want to do DL training, I'm sure they would start off with one NVIDIA GPU and a more recent CPU that is just good enough for DL training, rather than a high-end powerful CPU, which is why I hated to ask this, investment-wise.
DIYed an AMD Athlon XP 2500+ (Barton) + Epox 8RDA3+ (AVC 112C86 fan) desktop; overheating was a headache, and overclocking made it even worse. That motherboard finally didn't survive the overheating problem (so AMD CPU cooling has always been my concern lol).
Good to see consumers are more willing to accept AMD after all that Intel Meltdown & Spectre, etc. drama (I was working as Tech Lead for the XenServer support team, so to some extent I knew how badly it impacted things from that specific PoV). Personally I'll prefer AMD when buying new hardware.
AMD and Intel in strong competition improves the quality and diversity of CPUs and, most importantly, reduces their prices. We saw this almost immediately when the 10980XE, which is largely identical to the 9980XE, was sold for 50% of the price as a result of the 3950X/3960X.
In the server space however AMD will likely remain as a minor player as ARM inevitably starts to find its way in as a result of its far superior price/performance ratio.
Interestingly that's starting to evaporate. Epyc 7702 is only ~3 watts per core, with Zen2 cores. That's crazy high performance per watt. If ARM is still more efficient at all it's not by very much, and then it has worse per-thread performance and has to emulate legacy code.
There are probably going to be places where ARM makes sense in the datacenter, but that doesn't look like an easy path to taking over entirely.
It won't be taken over any time soon. But I expect the server market for x86 to shrink by quite a bit over the next 10 years.
Anyway where is Zen 3!
It's actually NOT. AMD, Intel, MS and big media companies are planning to put hardware DRM inside the computer.
Over the last 23 years of PC gaming we've seen the PC become a closed platform because of Steam and MMOs; any client-server software you buy means you no longer own your PC or have any personal privacy, because the program is constantly beaming data back to the mothership.
So no, they are going to turn the PC into a locked-down platform like mobile, where you never see the exe files. They are trying to kill off local applications; they want to "end piracy" by literally removing any control you have over your PC.
That's what Windows 10 DRM is about: UWP, encrypted computing, VMs, etc. It means it will be increasingly impossible to preserve old software, because they are not honest binaries.
Don't think so? That is what Irdeto is all about; they've been encrypting PC game files for a while now, and the future of PC gaming looks grim with always-online DRM and files encrypted because of micro-transactions and in-game stores.
So no... the future looks locked down and dystopian to anyone who's been paying attention. What we're gaining in performance we're losing in freedom, with increasing levels of DRM, VMs and encrypted software.
That's pretty old news. Things like the AMD PSP or Encrypted Media Extensions (DRM implemented by web browsers) exist primarily because media companies strongarm vendors into implementing DRM against their will. Things like HDCP simply do not work if they aren't deeply integrated into the hardware.
Steam is another example of a platform where developers are asking for DRM. The reality is that DRM is optional on Steam, but almost no developer is voluntarily disabling DRM. The high-profile publishers even add third-party DRM to their games because they think what Steam does isn't enough!
>Over the last 23 years of PC gaming we've seen the PC become a closed platform because of Steam and MMOs; any client-server software you buy means you no longer own your PC or have any personal privacy, because the program is constantly beaming data back to the mothership.
>So no, they are going to turn the PC into a locked-down platform like mobile, where you never see the exe files. They are trying to kill off local applications; they want to "end piracy" by literally removing any control you have over your PC.
I'm not sure why you are using Steam as an example because it is a piece of software that wouldn't exist once Microsoft forces every application to be delivered through the Microsoft store. Not only is Steam third party software, it is also a tool that installs even more third party software. This bypasses the entire idea behind only allowing reviewed applications on an app store.
Steam also has another very nice feature that lets you avoid problems associated with Microsoft. It runs on Linux and it even lets you play Windows only games on Linux. Once you switch to Linux all of those problems you are talking about are irrelevant.
Steam was forced onto Half-Life/CS in 2004; no one wanted it, and Steam is malware. That is why we lost dedicated servers and level editors in the AAA gaming space.
GTK Radiant - the level editor for Quake engine games
Doom vs Doom Eternal. The internet makes stopping copying easy, by holding back program files from the user.
Doom was the grandfather of modding on the PC; in Doom 2016 we got a gimped SnapMap, and Doom Eternal is totally locked down. A far cry from the id Software of the '90s.
B2B is the new policy setter. Businesses are the new first-class User, and everyone else is just a Luser. To hell with 'em all, I say. If I could, I'd find a way to crack their DRM anti-end-user circuitry and share it with the world out of spite. I liked that damn laptop. I still like it. I'm going to figure out how to pull off that bootloader thing, and I'm putting it out there for other Z series owners. You shouldn't have to fight a computer, damnit!
And yes, I could swap to Linux, but that isn't really the point. The DRM crap moving to hardware means everyone has to deal with it. Furthermore, everything else I run is already Linux, and that laptop is my token Windows machine, which has quite a bit of sentimental value, as it was one of the machines that got me through college.
Anyway. Consider my hat firmly in the outraged bucket. This is ridiculous. Worthy of ridicule in every sense of the word. The entire software/hardware industry should look at the industries or actors asking for it, and tell them to work on getting on better terms with their users. The majority won't misbehave if you just provide a reasonable experience.
About the only industries that have a reasonable claim to needing these types of features are national security, medical devices, and grudgingly finance. That's it. Even then, I have difficulty swallowing the application, because it just leads to people trying to pry the lid open ever more. If they aren't going to give people the capability to opt out of this draconian nightmare, I want nothing to do with them.
You can get Arm64 linux workstations now.
So it would actually end up being a lot slower than a 10900k.
The problem is game devs and engine makers don’t spend the effort to parallelize in the main loop everywhere they can.
You can get extra fps by having a faster single thread. Even still, if you had a 6 GHz single core CPU with a contemporary architecture then you would have an abysmal frame rate in a contemporary game. Those cores are used.
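Amdahl's law makes that trade-off concrete: if only part of a frame's work is parallelized, extra cores stop helping quickly, while a faster single thread speeds up everything. A small worked example; the 60% parallel fraction is an illustrative assumption, not a measurement of any real engine:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only part of the work can use extra cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 60% of a frame's CPU work is parallelizable (illustrative only).
p = 0.60
for cores in (2, 4, 8, 16, 32):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")

# Even with infinite cores, the ceiling is 1 / (1 - 0.60) = 2.5x,
# whereas a core that is simply 2x faster speeds the whole frame up 2x.
```

This is why both sides of the thread are right: cores do get used, but the serial fraction of the main loop caps how much they help.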
Except you don't get that it's not a meme: ideally, CPUs were expected to scale into the 10-30 GHz range. That never happened, because of the end of Dennard scaling.
So yes, ST performance is paramount; the only reason it seems not to be is that CPU scaling hit a brick wall due to power and leakage issues. When new materials become available that enable higher frequencies, you will see everything dramatically improve.
So no, DX11 and Vulkan will not magically make all games faster; they are optimizations for graphics pipelines.
Most of today's games we're interested in run on 10-year-old machines just fine. If you think you can't pair an i5 2500K with a modern GPU and run 99% of all games, you are clueless.
Most games are targeted at console specs and have held PC gaming back for decades.
Nothing against the product. A product name wouldn't ever stop me considering it as a serious choice, it just seems like a rushed decision that's unfortunately stuck with it.
Gamers never cared about the number of cores. Only recently has this slowly started to change. Intel is still king if you primarily care about gaming.
If you're split at all, single core perf on AMD these days is real close, and multicore Intel is just getting trashed (especially in any kind of price-per-perf metrics).
Desktop users might in addition care about heat (-> noise) created in low-load / near-idle situations, where those machines spend most of their time.
It's not king in bang-for-buck, even for gamers.
Also I can't imagine Linus is going to be interested in OC which is where you get most of the value from the Intel chips.
There are many AMD discrete GPUs (APUs are a little problematic), including newer ones, that work well with modern Linux kernels, so that's probably what he is running.
Intel probably has the best graphics drivers for Linux.
I am currently using Nvidia with proprietary drivers and it works fine for now but I probably won't be able to switch to Wayland any time soon. But other than that it works great. Including gaming.
And I also have a media box with an AMD Ryzen 5 2400G APU, and that thing has been a bit of a problem. GPU drivers tend to crash, GPU performance on Linux is poor, and it took about half a year of kernel updates to finally make it not crash every day. Are there any AMD graphics cards that have good Linux drivers?
Based on this positive experience, when it was time to replace my notebook, I got one with a Ryzen 2 APU. The GPU part of that also works great. There were some problems with the IOMMU, but those were resolved by a firmware update from Lenovo a few months in.
It took ~6 months after launch for it to get stable but it has been rock solid ever since. It is a dedicated GPU though so it also lets me play most games using proton. That has also been amazingly solid (Metal Gear V, etc.)
Curious to know about the rest of the box ... but he doesn't say.
There also aren't any major bugs for Zen/Zen 2/TR4 on Linux/BSD.
Ironically, the largest stability issue with 1st/2nd gen Ryzen is caused by idling too efficiently. Older power supplies sense this sub-5W idle as being suspended or powered off and throttle their 12V rails, leading to system hangups. An option in BIOS must be set to raise the idle wattage for these PSUs.
The demographic most affected by this, first time and budget builders, were also the least likely to be able to diagnose it, leading to its prevalence in forums. Search for "power supply idle control" to learn more about it.
Speaking of idle power consumption: there is evidence all over the internet that Ryzens themselves are not necessarily very power hungry, but their motherboard chipsets are. Here, for example, Ryzen 3xxx parts consume 10W more at idle: https://tpucdn.com/review/amd-ryzen-5-3600/images/power-idle.... The reason is unclear, but it is what it is - they really are hungrier.
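To put a 10 W idle gap into perspective, the extra energy for an always-on machine works out as follows (the 0.30/kWh electricity price is an assumption; plug in your local rate):

```python
# Extra idle draw of the chipset, per the linked measurement.
extra_watts = 10
hours_per_year = 24 * 365            # always-on machine
kwh_per_year = extra_watts * hours_per_year / 1000
price_per_kwh = 0.30                 # assumed electricity price, adjust locally
print(f"{kwh_per_year:.1f} kWh/year, about {kwh_per_year * price_per_kwh:.2f} per year")
```

Noticeable on the electricity bill, but small next to the purchase price of the board itself.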
That said, if you don't need very high core counts, the PCIe lanes, or the memory bandwidth, I'd build on Zen2 Ryzen instead.
I'm on a 1900X because it has the PCIe lanes and memory bandwidth, but I do not need the cores. The bonus is that the fewer cores are clocked higher, because they still have the same thermal budget as the higher models.
I use my desktop for playing games, so having 4-8 cores is enough, I'd much rather have fast cores for things that don't parallelize well.
That being said, I am pushing a pretty old CPU - an i7-4770K - and I haven't been able to convince myself to spend the money on the upgrade, since I'm only down ~32% from the best thing you can get in single-thread perf.
Maybe the next round of cpus - Zen 3 et al. I'll be doing nvme pcie-4.0 ssd as well in the next build which should give a big boost over the sata ssd I'm using now.
The major releases are then maintained by Greg Kroah-Hartman (the number 2 person in Linux), who cherry-picks fixes from mainline that should go to stable. Distros have kernel teams that also maintain their own stable trees, with or without help from upstream stable maintainer.
Linus can't code review all the changes queued for the next major release, but he does make sure that in case the subsystem maintainer says "this is safe to merge, it has been tested" but actually it doesn't compile, then Linus will yell at him and call him bad words in Finnish. Because the subsystem maintainer is an important job, people rely on them, they have years of experience, they know better than to do pull requests with junk.
He is also involved with resolving disputes and fixing things that affect his own workflow.
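The cherry-pick selection mentioned above is largely driven by a tag in the commit message itself: fixes destined for stable carry a `Cc: stable@vger.kernel.org` line (per the kernel's stable-kernel-rules documentation), which the stable maintainers' tooling scans for. A toy sketch of that filtering step; the real tooling is more involved, and the commit messages here are made up:

```python
import re

# A commit destined for stable carries this tag in its message, per
# Documentation/process/stable-kernel-rules.rst in the kernel tree.
STABLE_TAG = re.compile(r"^Cc:\s*stable@vger\.kernel\.org",
                        re.MULTILINE | re.IGNORECASE)

def wants_stable(commit_message: str) -> bool:
    """True if the commit author flagged this fix for the stable trees."""
    return bool(STABLE_TAG.search(commit_message))

# Toy commit messages standing in for `git log` output.
commits = {
    "a1b2c3": "fix: NULL deref in foo driver\n\nCc: stable@vger.kernel.org\n",
    "d4e5f6": "cleanup: rename variable\n",
}
picks = [sha for sha, msg in commits.items() if wants_stable(msg)]
print(picks)  # the maintainer would then `git cherry-pick -x` each of these
```

Not everything tagged gets picked, and some untagged fixes get picked anyway; the tag is a request, with the stable maintainer having the final say.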
“I'm now rocking an AMD Threadripper 3970x. My 'allmodconfig' test builds are
now three times faster than they used to be, which doesn't matter so
much right now during the calming down period, but I will most
definitely notice the upgrade during the next merge window.”
AMD is Fender, Intel is Gibson
Also, he should have gotten an AMD Ryzen™ Threadripper™ 3990X instead: 64 cores / 128 threads to compile the kernel in ~30 seconds.
- 22.48s, 3990X
- 23.64s, 3970X
- 27.54s, 3960X
But keep in mind that the 3970X is USD $2,000, while the 3990X is USD $3,990... I can't speak for Linus, but that extra ~1.1s per compile isn't worth $1,990 to me.
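Rough break-even math on that price gap, using the build times listed above and assuming (purely hypothetically) that the waiting time saved is worth $100/hour:

```python
price_gap = 3990 - 2000          # USD difference between 3990X and 3970X
seconds_saved = 23.64 - 22.48    # per allmodconfig build, from the list above
hourly_rate = 100                # assumed value of an hour of waiting, USD

value_per_build = seconds_saved / 3600 * hourly_rate
builds_to_break_even = price_gap / value_per_build
print(f"~{builds_to_break_even:,.0f} builds to recoup ${price_gap}")
```

Tens of thousands of builds before the bigger chip pays for itself, which supports the comment's conclusion, at least for this workload.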
I think for the project leader of something like a kernel, which needs to care about the details of a CPU, it's entirely appropriate to set an example and remind people that they aren't building for a hardware monoculture.
With Linus I start to think of old writings he used to have about Alpha, or his prior employment at Transmeta (I know no details about it, would not be shocked if it was at least in part a PR move) ... If he's not willing to give some less common hardware a good sanity check, what example does that set for maintainers of various pieces?
Similarly, I remember the old stories about Stallman's free software MIPS rig. He ran that to prove a point that was important to him.
Mercurial should have won. And Plastic SCM is awesome, designed conceptually from the ground up to build on lessons learned from git (and by an Associate Professor in CS). In fact, the whole concept of DVCSs is ridiculous in 99% of corporate contexts, where you can't use your laptop / desktop at home without logging into the VPN. Why do we even need distributed version control in such a setting? Subversion is perfectly enough, and even non-developer users can actually understand and use Subversion effectively. See https://svnvsgit.com/
Mind you, I'm not a blind hater: I love that Linus did well in life. I'd be thrilled if his net worth was a billion instead of just $150 million. Surely, he contributed much more than that.
I recently revisited an old Hg project. It was really nostalgic. Hg is easier to use and TortoiseHg is the best GUI for SCM bar none.
But Git is simpler. The simplicity is a mind-opener. Git deserved to win. When you want to do something in Hg that's not built in - even something like rebase - the 30+ included extensions are all marked as "EXPERIMENTAL", and you can only safely interact with the data model by writing a custom Python 2 script.
With Git you can go into the .git directory and delete a refs file. Filter-branch is a first-class feature. A few basic concepts underpin more functionality. There are genuine multiple implementations. This is radical transparency.
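That transparency is easy to demonstrate: a branch ref is just a text file containing a commit hash, and HEAD is a text file pointing at a ref. A minimal resolver, run here against a hand-built fake .git layout so it's self-contained:

```python
from pathlib import Path
import tempfile

def resolve_head(git_dir: Path) -> str:
    """Follow .git/HEAD to the commit hash it ultimately names."""
    head = (git_dir / "HEAD").read_text().strip()
    if head.startswith("ref: "):               # symbolic ref, e.g. a branch
        return (git_dir / head[5:]).read_text().strip()
    return head                                # detached HEAD: a bare hash

# Build a fake .git directory just to show the on-disk format.
with tempfile.TemporaryDirectory() as tmp:
    git = Path(tmp)
    (git / "refs" / "heads").mkdir(parents=True)
    (git / "HEAD").write_text("ref: refs/heads/master\n")
    (git / "refs" / "heads" / "master").write_text(
        "0123456789abcdef0123456789abcdef01234567\n")
    print(resolve_head(git))  # prints the hash that master points at
```

Real repositories may also pack refs into `.git/packed-refs`, which this sketch ignores; the point is only that the data model is plain files you can inspect by hand.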
Originally I thought I would miss having draft phases, embedded branch names, and ordinal commit numbers. But in practice I don't miss these at all. And in retrospect, these all actively work against code-sharing in general, and the pull-request model specifically with its working revisions. Git is inherently more social-capable and will always have a stronger network effect. Hg does not have separate author + committer.
With Git you can stage individual lines, not just hunks. This alone should be reason to use Git over Hg (oh, you can enable the optional crecord extension and do everything from the CLI? that's not Hg's famed ease-of-use).
I once started to develop a Gogs/Gitea clone for Hg. For compatibility it had to communicate by IPC with a standalone Python process, then I hit a roadblock because the wireproto was undocumented for bundles. Hg developers were also not enthusiastic about my proposal to switch from full+diff storage to chunk storage.
I do miss some things. A central .hgtags file would definitely resolve some tag conflicts and issues with coworkers re-pushing deleted tags.
I mean, many people had exposure to both git and hg at some point in their life - especially the kind of people who make decisions about which VCS to use. Celebrity endorsement may be good to convince me to try something, but it cannot convince one to use something over alternatives if they have had enough experience with both.
We looked at hg. I also dabbled in a few others. bzr, I believe was a popular one.
As said upstream, there are plenty of reasons that git "won." Most of them did seem technical. But I could see things having gone many ways. For the most part, source control is not a problem most people think they have. Just look at how terrible the data science folks are at it. And management is stuck in whatever MS is doing in Word nowadays.
In particular, I have yet to see a successful collaboration in data science or document creation that wasn't ridiculously ephemeral, that wasn't backed by a much nicer format. (Where ephemeral means that after it is done, it is referenced as a PDF, but not directly anymore. Which, to be fair, is the majority of documents that exist in the world.)
From spending half a day per branch copy, to releasing only a few times per year because branches are expensive, to never changing more than the minimum because you can't branch-test a feature quickly, to days of offline CI because of a bad merge on the only dev branch, to keeping 3 copies of the entire repo to do backports because it's not easy to switch branches... SVN never again.
In a world where "DVCSs is ridiculous in 99% of corporate contexts where you can't use your laptop / desktop at home without logging [sic] into the VPN", why (and what) should Mercurial, a popular DVCS, have "won"? (I don't mean in Mercurial vs Git)
DVCS isn't ridiculous. Especially in corporate contexts over a flaky VPN (like, say, if there were a global remote work experiment going on). Cheap local branching is an indispensable feature. So much so that if you're forced to use SVN, you can still get local branching via SVK or by using `git-svn`. SVN is still the right tool in a lot of places (anywhere there are binaries in play), though the overhead of being actively fluent in multiple VCSs at once isn't free.
Yes, git “won” largely because it was “what the Linux guys are using” (and because GitHub was better than Bitbucket and the other alternatives). Its concepts can be painful to work with.
No, DVCS should not be ditched - the simplicity of repointing, rehosting, and working offline, that dvcs systems give us, should never be rolled back (and screw repo corruptions in svn and other crap). Just because a lot of people don’t rely on these features every day, it doesn’t mean they are not hugely beneficial to the ecosystem at large.
The hg command-line options are far more sane than git's. And TortoiseHg is the best GUI ever.
But I might switch eventually, now that even Bitbucket is dropping hg support, and many Linux distributions have stopped including a working TortoiseHg.
The more people involved with building Linux use AMD hardware, the fewer AMD-only bugs find their way into the kernel. It might also help with optimizing the software for higher core counts, now that 32 cores without NUMA is feasible for workstations.
In the end, I think parts of the core team switching to AMD will make the hardware platform more attractive for people shopping for workstations. AMD provides great value for money, but if you want stability, you buy Intel. Now that the probability of instability is decreased, with major kernel developers testing out changes and encountering some of the same potential bugs, Threadripper becomes more attractive to certain groups of power users.
This is celebrity news that might affect a lot of people in their day-to-day business down the line, which is different from the typical "Paul Graham ate breakfast" or "Linus Torvalds flips off nvidia" celebrity news cycle.
Hopefully more people will purchase AMD as a result of this. Its financial figures are not good. (As I have been saying in every AMD thread.)
Don't forget to keep in touch with reality: a bunch of people on this globe still have to get by using dual-cores.
But where do you test the compiled software?
To use an analogy from the Internet: it's like web developers using hugely uncompressed pictures, and nobody cares, because everyone's got broadband. But then the guy on a mobile data plan wants to view the page. Or the poor sod who, for some reason, is still stuck on 56k. And instead of bloated pictures, I mean system services, tasks and processes; no one realizes what a drag they may be causing, because everyone's got at least four cores nowadays anyway. That's a very real danger for any developer: to "lose touch with reality" when it comes to their users.