OpenMandriva, the first Clang-built Linux distribution (openmandriva.org)
148 points by Conan_Kudo on June 16, 2019 | 131 comments


What difference does that make for an end user? Say you have two copies of the same OS on two PCs with identical hardware, but one compiled with gcc and the other with LLVM/clang. How will this affect me? What differences will I notice? Better performance? As in real-world performance?


You could theoretically see better real-world performance. As someone who is currently working on very high-performance C++, I have noticed Clang being much more 'confident' in optimising code than its counterparts (especially compared to MSVC++).

That being said, in the grand scheme of things any performance improvement would probably only show up on benchmarks.


Might inspire new implementations if people realize there are some spots that can run faster elsewhere.


> You could theoretically see better real-world performance.

That assertion isn't very realistic. Performance benchmarks comparing gcc and clang are mixed, with performance differences being marginal at best.

https://www.phoronix.com/scan.php?page=article&item=gcc9-cla...


The only benefit I can think of is increased security now or over time. Most people developing compiler-based mitigations work on LLVM. Especially the practical ones. HardenedBSD is an example of a project making use of them. SVA is an example of one that might be applied to Linux:

https://github.com/jtcriswell/SVA


More research-style projects are indeed based on LLVM, but GCC is extremely serious about security too (and contains more Linux-specific optimizations and code).

For the end user, it comes down to this: do you care about copyleft and what the FSF/Stallman are advocating for (GCC), or do you just want free (as in beer) code (LLVM)?

I'd argue that, as a user, the GPL cares more about your freedom, so unless you have a specific reason not to use GCC, go with it over LLVM (yes, I am aware of the exception that binaries compiled with GCC don't themselves have to be GPL).


I'm a freedom-focused user that prefers BSD licenses.


This is possibly a silly question but... when it comes to the threat of backdoor vulnerabilities inserted in the compiler itself, which would you say is the safer option? From what I can tell LLVM has a simpler codebase so that's a point in its favor, but I think GCC being a GNU project is less likely to have developers who could be pressured to insert malicious code.

Am I crazy in worrying about this? I know initiatives like reproducible builds are supposed to help solve that kind of threat, but it's still not clear to me how it all fits together.


"when it comes to the threat of backdoor vulnerabilities inserted in the compiler itself, which would you say is the safer option?"

Neither. They're about equally unsafe, with a relatively low risk of this specific attack outside of, maybe, distribution. Although it's a clever idea, AsyncAwait's ideology argument doesn't work, since spies will simply pose as ideologically-motivated contributors. The good news is that the Karger-style compiler-subversion attack has only happened two or three times that I know of.

What's burned projects many more times, and is most worth worrying about, are security-related compiler errors. They transform your code in a way that removes safety/security checks or adds a security problem (e.g. a timing channel). So the real problem is compiler correctness more than anything. That requires so-called certifying compilers that put lots of effort into ensuring each step or pass is correct. CompCert and CakeML are probably the champions there, with formal verification. You could also do rigorous, SQLite-style testing of each aspect of the compiler on top of using a memory-safe language. If you restrict features, then the bootstrapping can be done in an interpreter written and tested the same way before being ported by hand to designed-for-readability assembly.

It doesn't stop there, though. Recent work in the verification camp is doing compilation designed to be secure even when multiple abstraction levels are in play, such as source, assembly, or mixed languages. Here's a nice survey on that:

http://theory.stanford.edu/~mp/mp/Publications_files/a125-pa...

The group putting it all together the most is DeepSpec. They have both papers and nice diagrams here:

https://deepspec.org/main


"What's burned projects many more times and most worth worrying about are security-related compiler errors. They transform your code in a way that removes safety/security checks or adds a security problem (eg timing channel)."

I'm not sure whether the removal of safety or security checks was caused by compiler correctness issues rather than a misunderstanding of a language's semantics or memory model. If we take the removal of a memset (meant to erase sensitive data), or of erroneous integer overflow checks (because they relied on undefined behavior), as examples, those stem from language or programmer error rather than compiler error. These issues should be fixed at the language level so that, first, a programmer can express his intentions more easily and, second, it is hard to write code which doesn't align with the programmer's intention.
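To make the memset example concrete, here is a minimal sketch (explicit_bzero is a glibc/BSD extension; memset_s from C11 Annex K is the alternative where available):

    #include <string.h>

    void handle_secret(void)
    {
        char key[32];
        /* ... fill and use key ... */

        /* Dead store: key is never read again, so at -O2 the compiler may
           legally delete this call and the secret stays in memory. */
        memset(key, 0, sizeof key);

        /* The usual fix is to call explicit_bzero(key, sizeof key) (or
           memset_s) instead, which the compiler is not allowed to drop. */
    }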


Yeah, most of them were tied to undefined behavior. Example with detection method:

https://pdos.csail.mit.edu/papers/stack:sosp13.pdf

One of the main ones that can do it without undefined behavior, or at least what I was told was undefined behavior, is optimizations getting rid of "dead" code. It doesn't have to be a memset: just an assignment. That assignment would sometimes get removed because the compiler thought nothing would be done with the assigned data. I never checked whether that was covered by the C specification, since it seemed to be a common problem in optimizations. Here's a recent solution just in case you find it interesting:

https://www.usenix.org/system/files/conference/usenixsecurit...

"These issues should be fixed at the language level so that first, a programmer can express his intentions more easily and second, that it is hard to write code which doesn't align with the programmers intention."

Being a fan of Ada, SPARK, and Rust, I can't agree with you more. The problem is legacy code, especially useful FOSS, that isn't getting ported any time soon. The OSes, web browsers, and media players come to mind. We need ways to analyze, test, and compile them that mitigate the risks. Hence all these projects targeting things like C.


> GCC being a GNU project is less likely to have developers who could be pressured to insert malicious code.

I'd agree with this, since GNU projects tend to have at least some devs who are in it for the ideology.


> From what I can tell LLVM has a simpler codebase so that's a point in its favor

That may have been true at one point, but I don't think it is any more.


Are you referring to the first or the second part of that sentence?


The simplicity of the LLVM source. Edited for clarity.


LLVM's sources are at least as complicated as GCC's, if not more so. To make matters worse, LLVM's functionality is broken up into dozens of libraries, so there's a fair bit more tracing to be done to understand what's going on in Clang vs. GCC.


There are security implications sometimes.

https://twitter.com/dakami/status/668888298677407744?lang=en


Several *BSD projects have been using the Clang toolchain by default for many years, and have been a massive driving force in getting these fixes upstreamed so that other systems, incl. Linux distributions can benefit from greater choice of compilers.

FreeBSD since 10.x on i386/amd64, not sure about the status on other platforms.

OpenBSD since 6.1 for arm64, 6.2 for i386/amd64. This is the default both for the base system (meaning kernel and userland) and for the ports tree, used for compiling 3rd-party packages; very few ports still depend on gcc.

And also, while not the default system compiler yet, LLVM/clang is compiled and installed on macppc/sparc64 and mips64 systems.


That’s neat, but the bigger differentiator seems to be PGO+LTO. As pointed out below, both of Google’s Linux distributions are optimized that way (actually with AutoFDO/SamplePGO + ThinLTO), but I don’t think there is a community distribution that is properly optimized. It could be significantly better.


Clear Linux?


They do for at least some libraries. For example, here is a blog post about how they build Python and a few math libraries with PGO: https://clearlinux.org/blogs-news/boosting-python-profile-gu...


That’s interesting but kinda highlights the difficulty of shipping an entire OS with profile-guided optimizations. What they need is very broad sample coverage and SamplePGO instead of instrumented FDO. This is what ChromeOS does with Quipper: they collect perf data samples from the entire fleet of customer devices and they build the distribution with AutoFDO/SamplePGO.
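For reference, the sample-based flow looks roughly like this (a sketch; perf and AutoFDO's create_llvm_prof are real tools, but the exact flag spellings here may differ by version):

    /* Sample-based PGO sketch:
     *
     *   perf record -b -- ./app representative_workload
     *   create_llvm_prof --binary=./app --profile=perf.data --out=app.prof
     *   clang -O2 -fprofile-sample-use=app.prof -flto=thin app.c -o app
     *
     * The profile comes from branch samples of unmodified binaries, which
     * is what makes fleet-wide collection practical. */

    /* Toy function whose hot/cold branch layout benefits from the profile. */
    int classify(int x)
    {
        return (x % 1000 == 0) ? -1 : 1;   /* cold vs. hot branch */
    }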


Really great work by the OpenMandriva team. I've spoken to some of their contributors before and even looked into one of the bugs they reported. Now that I think about it, I need to send that patch for fixing asm goto detection in glib.

Android and ChromeOS are also built with Clang. I'm curious about the distinction of "first." Does anyone know the timelines here?


Not sure how widely Apple has adopted it, or how much of a 'distro' you could call their operating systems, but much of what's publicly visible is built using LLVM with clang.


Also, to be pedantic, apple is a BSD, not a linux.


Mac OS is its own operating system with its own kernel. It's neither Linux nor BSD.


oh good point, TIL only the posix and syscall layer is BSD. nonetheless it's not at all a linux.


If you want to see how much different they are, have a look at

"Mac OS X Internals: A Systems Approach"

https://www.amazon.com/Mac-OS-Internals-Systems-Approach-ebo...

Since then, macOS has moved even further away from BSD, with its own network stack, XPC, SIP, and many other architecture changes.


It’s a certified UNIX, whereas Linux and BSD are clones of UNIX.


This is taking the post-USL-v.-BSD legal definition of UNIX, which, with respect to history, is revisionist.

following the lineage and where active development happened at the time (80s/early 90s), one can easily make the case that BSD is unix.


To be pedantic, Apple is a company, not an OS :)


To be pedantic, the comment you replied to referred to apple which is neither a company nor an OS but a fruit.


> Python has been updated to 3.7.3, and we have successfully removed dependencies on Python 2.x from the main install image (for now, Python 2 continues to be available in the repositories for people who need legacy applications);

This is cool. I wish ubuntu/debian would move towards this.


Debian is moving towards this [1], just at its own speed (i.e.: slowly to try and avoid breakages as much as possible).

[1] https://www.debian.org/doc/packaging-manuals/python-policy/p...


That's basically never, then. Debian "maintains" tons of old, broken software in their repositories.


Clang has added a command line option for automatic initialization of auto variables to zero, hasn't it?

That by itself would be enough to make me choose clang over a compiler that didn't have that option. Gcc deliberately doesn't support that, right?
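For reference, the flag is clang's -ftrivial-auto-var-init; a minimal sketch (as of clang 8/9, the =zero mode additionally required an opt-in flag with a deliberately scary name):

    /* clang -O2 -ftrivial-auto-var-init=pattern leak.c   # fill with a pattern
     * clang -O2 -ftrivial-auto-var-init=zero    leak.c \
     *   -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
     */
    #include <stdio.h>

    int main(void)
    {
        int x;              /* deliberately uninitialized */
        printf("%d\n", x);  /* normally garbage/UB; 0 with =zero, a
                               recognizable filler value with =pattern */
        return 0;
    }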


That seems like a mistake to turn on, given that clang also has options to error on uses of potentially uninitialized variables. Zero is often just as wrong as any other value, so this flag will hide real bugs.

In fact, I'd rather have a flag that clobbers values with random data, to make sure that uses of uninitialized values are caught as soon as possible.


> error on uses of potentially uninitialized variables.

This is absolutely the best option. That said,

> I'd rather have a flag that clobbers values with random data.

That's roughly what happens in practice as is, isn't it? Barring the first option, I'd rather have one that fails predictably and reproducibly. An arbitrary but deterministic garbage value, maybe? Like --set-uninitialized=0xdeadbeef. That might be getting too elaborate, haha.


Why not use static code analyzers? You run them at compile time and they warn you about uninitialized variables.

Initializing values just in case seems wasteful; that's presumably why only global and static variables are already initialized that way.

That said, it would still make sense to me for GCC to offer it; even if it's not very good practice, it may have its uses.


Because static analyzers today are imprecise. We're considering where this makes sense to turn on in Android, and even within the Linux kernel.


If you can, I would at least advise running tests with MemorySanitizer, which is also built into newer clang versions. Sanitizers are much more precise, but only catch problems that actually occur at runtime. Also AddressSanitizer, for out-of-bounds accesses, use-after-free bugs, memory leaks, etc.
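For anyone who hasn't used them, they are just compiler flags (standard clang flags; MSan in particular needs all linked code built instrumented, or it will report false positives):

    /* clang -O1 -g -fsanitize=address oob.c -o oob   # OOB, use-after-free, leaks
     * clang -O1 -g -fsanitize=memory  oob.c -o oob   # reads of uninitialized data
     */
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(4 * sizeof *p);
        int v = p[4];     /* out-of-bounds read: ASan aborts with a report */
        free(p);
        return v;
    }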


We do. Also, the people who built msan are literally the ones implementing the initialization patterns.


Nice, sorry, I didn't notice you actually meant working on Android.

Great work btw. I've found a few actual bugs with ASan and MSan already. Not once a false positive.


what’s the reason gcc wouldn’t support it?

[edit] ipad auto correct


Because zero is not a correct value for all variables, and picking a random behavior over no behavior breaks ubsan.

It also adds a data-dependency (zeroing out a stack buffer depends on the length of the buffer) which is insecure.


For those interested, there's a presentation about OpenMandriva's usage of LLVM from EuroLLVM 2019:

* Video: https://www.youtube.com/watch?v=QinoajSKQ1k

* Slide deck (PDF): https://llvm.org/devmtg/2019-04/slides/TechTalk-Rosenkranzer...


Am I correct in assuming you are connected to the project? If so, I was wondering which packages you apply PGO to. Firefox, Chromium, and x64 are examples of applications with built-in PGO support in their build systems; are you using PGO on other packages as well?
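For context, the instrumented flow a package build would have to wire up looks roughly like this (standard clang/llvm-profdata flags; the file names and workload here are made up):

    /* Instrumented PGO sketch:
     *
     *   clang -O2 -fprofile-instr-generate hot.c -o hot
     *   ./hot representative_input           # writes default.profraw
     *   llvm-profdata merge default.profraw -o hot.profdata
     *   clang -O2 -fprofile-instr-use=hot.profdata hot.c -o hot
     */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        long sum = 0;
        for (long i = 0; i < 100000000; i++)
            sum += (i % (argc + 1)) ? 1 : -1;   /* branch the profile informs */
        printf("%ld\n", sum);
        return 0;
    }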


Isn't Android built with Clang?


Userland has been built with clang since 7.0. The kernels of some Android devices (Pixel 2/3, for example) are built with clang, but it seems like most still use GCC.


pjmlp said: [clang is] the only choice on Android as of NDK 18

https://news.ycombinator.com/item?id=17617499


Yes.


Now that I think about it, I think Chrome OS is as well?


ChromeOS is based on Gentoo, is it not?


Yes, as far as I'm aware they've been using Gentoo for quite a while.


Also this:

> Python has been updated to 3.7.3, and we have successfully removed dependencies on Python 2.x from the main install image (for now, Python 2 continues to be available in the repositories for people who need legacy applications);


Arch Linux was able to achieve this years ago.


Arch Linux has a package manager written in C, instead of Python, for example.


APT is written in C++ and YUM is written in Python.

Besides the Python2->3 thing, are there actually any practical advantages to using a lower-level language for managing packages?


Pacman is, subjectively, the fastest package manager I’ve ever used. Not sure if it’s C vs python though, apt is also much slower on many tasks.


This could easily be simply because Pacman doesn’t do as much as apt. For instance apt tracks symbols exported by shared libraries. Pacman doesn't.


I've never heard of that. In what situations is that tracking used?


As far as I know, only RPM-based package managers do symbol tracking at dependency resolution time. Neither Debian nor Arch package managers do this.


I think Void's is faster. Less ergonomic though.

Would be fun to see some benchmarks.


Agreed, so is "apk" from Alpine Linux.


Yes, there is:

A smaller binary with no external dependencies is more auditable and has fewer dependencies for bootstrapping itself on a server or in a container image.

When you multiply this by the total number of images, this makes a big difference.


I wasn't saying there was a disadvantage; I'm just offering a possible reason why it took longer than, say, Arch. YUM is a huge Python project, and it took a while to comb through it all.


It wasn't YUM that held back OpenMandriva's switch (OpenMandriva never used YUM), but some of the build infrastructure tools that wound up being replaced as part of the migration from urpmi to DNF.

Those legacy tools were never updated for Python 3 because they had no maintainers or developers. When the distribution switched to DNF, it was able to adopt actively maintained replacements that had already been ported to Python 3.


ELI5... why is this great news (i.e. why is LLVM/CLANG better than GCC -- speed?)


See, when a daddy loves a mommy very much, they usually get married. However, sometimes the daddy meets another woman, who is faster, uses less memory, with a significantly less complicated code base, and then the daddy decides to compile his Linux with her instead.

Does that help?


Daddy is a dick; he and mommy grew up together. They share their complex internals with each other and were made for each other. They even share their philosophical stance on code freedom. How can you turn your back on that? Why jump to another woman just because she is thinner and more in demand with researchers?


> They share even their philosophical stance on code freedom

arguable.

BSD and commercial Unices used PCC and PCC derivatives for much of their history; by this token, GCC is the 'other woman'. And this is itself ignoring the clear differences in philosophy between MIT/BSD and GPL licensing.


But this is about Linux.


doh. good point :)


Well, I did say ELI5. So, thanks, um, I guess.


I do strive to be age-appropriate!


> with a significantly less complicated code base

This is probably not true (because clang is in C++ it can't be less complicated than anything), and gcc is complicated in some parts because it uses much better algorithms (the LLVM register allocator is not as good as LRA). LLVM also has some very ugly DSLs like the .td files.

But GCC's codebase does have lots of added complexity from the extremely weird GNU coding style where they want you to pretend you're writing Lisp and all commits have to update a changelog file. Plus terrible GNU software like autotools and recursive make.


I like autotools. So much better than cmake.


Clang has some code base and speed advantages, but the big reason the large players like Apple are grabbing onto it is licensing. A lot of companies really want to move away from any GPL stuff. It's sad since so much of what we have in the Linux ecosystem came from GNU.


They want to get rid of Stallman's GNU concepts


Which is always ironic, given that without Stallman's GNU concepts, Linux would never have happened.

And most likely, given the state of BSD back then, we would have just kept using commercial UNIXes, or Windows would have won the UNIX wars.

But let's get rid of Stallman's GNU concepts.


> given the BSD state back then

You mean being persecuted by overbearing commercial Unices (e.g. SVR4 and AT&T)?

Let's not confuse the issues to our own personal ends; the argument is just as valid that without BSD UNIX, Stallman would not have had a system to base a clone on.

GNU attempts to redefine the existing cultural status quo of open-source software dating from the dawn of computing to its own personal ends


Sure they had a system; commercial UNIXes would have kept being used.

GCC only got manpower when Sun changed the way the UNIX SDK was given to customers.

And who knows, maybe companies would have been more willing to dedicate manpower to HURD.


Now they just need to replace the kernel and almost all of the remaining userland.


Ah... thx. Found this article by Stallman that gives some context to that:

https://www.gnu.org/gnu/thegnuproject.en.html


I thought it was the *BSD folks who were known for insisting on that. Does, e.g. the NetBSD kernel build cleanly with clang?


Of course https://wiki.netbsd.org/tutorials/clang/

Though NetBSD is a bit behind on switching to the LLVM toolchain, here's a recent update https://blog.netbsd.org/tnf/entry/final_report_on_clang_lld

Meanwhile FreeBSD (on amd64, i386, armv6/7, aarch64) has been buildable with clang since some point in 9.x (2012-13), comes with clang only since 10.0 (01.2014), and since 12.0 the bootstrap linker on amd64/i386/armv7 is LLD (which was the case on aarch64 from the beginning iirc)


That's quite a strong and completely unfounded accusation.


Good riddance


Really? I would prefer more things were GPLv3/AGPLv3. Open source today is just a bunch of middleware, but few end products. People today use open source software to build closed source solutions. It's a far cry from what a lot of us envisioned back in the 90s. I wrote about this before:

https://penguindreams.org/blog/the-philosophy-of-open-source...


Compiling things with clang is faster and uses less memory.

Linux distros, package repositories, etc. are basically giant compilation farms, compiling packages and making sure they work well together so that you don't have to.

So switching to clang might impact their resource usage.

---

For you, the user, the performance of binaries compiled with GCC or clang is pretty much on par. Some binaries are a bit faster with clang, others are a bit faster with GCC, often in negligible ways.

If you are doing something that's very resource intensive, recompiling that software yourself, tuned to your use case, is probably going to have a much larger impact on resource usage than whether the shipped package was compiled with gcc or clang.


Apart from compiler redundancy, it eliminates the usage of vendor (GCC or Clang) extensions and UB that the two compilers don't agree on.

Edit: last I checked (more than two years ago), GCC was faster on more micro-benchmarks.


How does choosing one compiler over another "eliminate use of vendor extensions and UB"? Whatever compiler is chosen you'll have as much UB and as many extensions to deal with, as if the other one had been chosen.


Provided you have good code coverage, you'll pick up any UB that falls outside the intersection by running tests on both compilers.


Dependency on a specific toolchain is bad for the ecosystem.

The existence of a distribution using Clang for building itself makes the ecosystem way stronger.


It's not great news. GCC is basically better in every way.

People promote Clang because it has a permissive license and opens the door for Google and Apple to inject proprietary crap into Linux.


Does GCC have a decent WebAssembly backend?

Looking around just now, there's what seems like an initial experiment from someone, but it doesn't seem to have gone anywhere. :(

https://sourceware.org/ml/binutils/2017-03/msg00044.html

Asking because with the LLVM 8.0.0 release (a few months ago) it's one of the standard supported architectures.
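For comparison, a freestanding wasm build with stock clang 8+ looks roughly like this (a sketch; the target and linker flags are the usual wasm32/wasm-ld ones):

    /* clang --target=wasm32 -nostdlib -O2 \
     *       -Wl,--no-entry -Wl,--export-all add.c -o add.wasm
     */
    int add(int a, int b)
    {
        return a + b;
    }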


No, GCC has serious shortcomings in certain areas. -flto=thin is better, AutoFDO/BOLT is better, the JIT is better, but the most important point is constant expressions in C, which clang implements as in C++; with GCC you cannot decide at compile time whether an expression is constant, so it misses out on many optimizations. It only has _Static_assert, but no usable __builtin_constant_p: with GCC it errors at compile time, with clang it returns 0. Clang also has diagnose_if, e.g. to match user-defined compile-time warnings with user-defined run-time warnings.

E.g. a memcpy compiled with clang can be 100x faster than with gcc when the size and alignment are known.

And gcc-9 introduced serious regressions on some platforms, to the point that you need to blacklist it; gcc-10 will probably not be better.
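To illustrate the memcpy point above, a sketch of the general __builtin_constant_p fast-path pattern (not the actual libc or compiler implementation):

    #include <stdint.h>
    #include <string.h>

    /* When n is a compile-time constant, the branch below folds away and
       the copy becomes a couple of register moves; otherwise it stays a
       call into libc.  How __builtin_constant_p behaves in constant
       contexts is where gcc and clang diverge, as described above. */
    static inline void copy_word(void *dst, const void *src, size_t n)
    {
        if (__builtin_constant_p(n) && n == sizeof(uint64_t)) {
            uint64_t tmp;
            memcpy(&tmp, src, sizeof tmp);   /* single load  */
            memcpy(dst, &tmp, sizeof tmp);   /* single store */
        } else {
            memcpy(dst, src, n);             /* generic path */
        }
    }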


I have to disagree: -flto=thin is faster in compile time, but for runtime performance I get better results with -flto=n in GCC.

Also with FDO I get better results with GCC over Clang/LLVM, my main test subjects are rendering (Blender), archivers, encoders and emulation.

However with straight up -O2/-O3 I very often get better performance with Clang/LLVM. I haven't benchmarked on ARM though, my results may be very different there.


Where do clang and gcc implement memcpy? That's part of the C library.


> opens the door for Google and Apple to inject proprietary crap into Linux

This doesn't make sense. You can compile proprietary code with gcc without problems already. Clang doesn't enable anything new here.


Hmmmm...

> LLVM/clang 8.0.1

Guess they have a time machine, as LLVM 8.0.1 isn't released.

There's an rc2 available (3 days ago), but that's not really a release:

https://github.com/llvm/llvm-project/releases

Jumping the gun a bit maybe? ;)


FreeBSD also imports RCs, for example

https://github.com/freebsd/freebsd/commit/48cf3d0825d200d26e...

Who cares about "really™ final releases"? :D


Well, there was the infamous case of gcc 4.0.0 (No, I don't have a white beard)


shush, you'll scare the children. The GCC 4.0.0 release thread on slashdot is all of slashdot in one thread https://news.slashdot.org/story/05/04/21/2125235/gcc-400-rel...


In addition to the "why does LLVM matter" asked elsewhere, why would I want to use OpenMandriva?


https://wiki.openmandriva.org/en/4.0/RC/Release_Notes

but I'm not really sure. I guess if you want a recent kernel and a KDE Plasma-based distro, and you work with LLVM/clang.

I really don't know where it fits with Mageia/PCLinuxOS and other Mandrake descendants.

History IIRC -- Mandrake was RH Linux with KDE; Mandriva was a continuation of that which split; OpenMandriva were devs from that split that took ROSA Linux (still doing KDE4 I think) and then continued their project from that base.


Working with LLVM/clang is pretty much the same on a distro compiled with clang as on one compiled with gcc. Even the C++ ABIs are largely compatible if you use libstdc++ instead of libc++.


Background HN comments on the trade-offs between choosing clang or GCC:

https://news.ycombinator.com/item?id=17617043


Doesn’t mention whether the kernel built fine with clang; that would be interesting, since that used to be tricky. (I’m probably out of date; maybe it’s fine nowadays?)


Good question! This is my day job! Builds are looking pretty green!

https://clangbuiltlinux.github.io/

X86_64 required the implementation of asm goto, which we just shipped. You'll need to build clang from source for now, but the feature will be in the clang-9.0 release. Other arches should build with clang-8 (technically x86_64 will build pre-4.19 kernels), but we shipped Pixel 2 kernels with clang-4.0, so older Clangs may work depending on your target arch/tree/configs.
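For context, `asm goto` is the GCC inline-assembly extension that can branch to C labels (the kernel's static-key/jump-label machinery depends on it), and building typically just means passing CC=clang (plus HOSTCC=clang) to make. A minimal sketch of the construct:

    /* Minimal asm-goto sketch (x86-64); the kernel's real uses are the
       static-key / jump-label macros. */
    static inline int branch_hint(void)
    {
        asm goto("jmp %l[taken]" : /* no outputs */ : /* no inputs */
                 : /* no clobbers */ : taken);
        return 0;
    taken:
        return 1;
    }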


Ah, that's really interesting, many thanks. I'd be building from source anyway, so I'll have to try clang HEAD with Linus' tree and see how it goes.


Great! Please report any bugs you find via our issue tracker on the site linked above (if we haven't already spotted them).


The kernel _almost_ builds fine with Clang; the major blockers were the use of VLAIS (which I think have all been removed, given that they amount to pure insanity) and `asm goto`, which has now been implemented in LLVM (I think it will be available in a stable release when 9.0 ships this autumn).


The kernel is not built with clang as that only works with ARM architectures right now (the Android kernel supports it, but the mainline kernel does not).


False! Mainline builds cleanly for most arches with clang.


But is the result stable? I remember building it with some patches a year or two ago, and while it compiled fine, it crashed horribly when booting.


Not with any released version of clang, but yes, it works with snapshots of git master...


Again, false! This is highly dependent on:

1. What arch are you targeting.

2. What version of Linux are you trying to build? Mainline, stable, next?

3. What configs are you trying.

4. What version of clang are you using.

For example, pixel 2 kernel is arm64, 4.4 stable kernel, limited configs, and clang-4.

Things for the most part are pretty green with released versions of clang. There are some long tail configs or combos of the above, but it's pretty minimal and we have a good handle on them.


I'm mainly working off our experience in OpenMandriva, so at least for your standard general-purpose Linux distribution, it doesn't work yet.


The kernel still doesn't build with clang cleanly. There's a very active ongoing effort to deal with problems. https://www.phoronix.com/scan.php?page=news_item&px=Clang-Ke... from the Linux Plumber's conference last year, and there's work that will be landing as part of 5.2 to help: https://lwn.net/Articles/788532/


They still use libgcc, ld.bfd, and libstdc++ though.
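Each of those has an LLVM-side counterpart selectable through clang driver flags; a rough sketch (availability varies by platform and version, and -stdlib=libc++ is the C++-side equivalent for replacing libstdc++):

    /* clang --rtlib=compiler-rt --unwindlib=libunwind \
     *       -fuse-ld=lld main.c -o main
     */
    int main(void) { return 0; }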


I was wondering whether they were using `libc++` and `libc++abi` or not. Thank you for clarifying this.


If you're looking for a system that does, OpenBSD adopted libc++ and libc++abi by default.


I have OpenBSD in a VM, sadly, I have many monitors and TVs I need to interface with wirelessly to do presentations, and OpenBSD does not really support that.


A bit off-topic, sorry, but I'm curious about what you mean by interfacing with Monitors/TVs wirelessly, how does that work?


Surely that distinction must to go to Android and/or ChromeOS?


I assume glibc is still built with gcc? Is there a listing anywhere of the packages that aren't built with Clang?


Not sure if this has been updated recently, but here is a list: https://wiki.openmandriva.org/en/Packages_forcing_gcc_use


That list is definitely out of date, because it still references the old ABF instance...


https://www.phoronix.com/scan.php?page=article&item=gcc-clan... :

"On the AMD side, the Clang vs. GCC performance has reached the stage that in many instances they now deliver similar performance... But in select instances, GCC still was faster: GCC was about 2% faster on the FX-8370E system and just a hair faster on the Threadripper 2990WX but with Clang 8.0 and GCC 9.0 coming just shy of their stable predecessors. These new compiler releases didn't offer any breakthrough performance changes overall for the AMD Bulldozer to Zen processors benchmarked.

On the Intel side, the Core i5 2500K interestingly had slightly better performance on Clang over GCC. With Haswell and Ivy Bridge era systems the GCC vs. Clang performance was the same. With the newer Intel CPUs like the Xeon Silver 4108, Core i7 8700K, and Core i9 7980XE, these newer Intel CPUs were siding with the GCC 8/9 compilers over Clang for a few percent better performance."


Amazing news :)



