In the early days, clang compiled significantly faster than GCC, but it also barely implemented any code optimization. Now that clang generates code which is generally about 90% as fast as GCC's, its compilation speed and memory usage have understandably bloated considerably.
Note that I say 90% as fast generally. It still hasn't caught up completely.
Clang pioneered LTO, but GCC does it better now.
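For anyone who hasn't played with it, both compilers spell it the same way these days (a minimal sketch; -flto=thin is clang-only):

    # classic LTO, identical invocation on both compilers
    gcc   -O2 -flto a.c b.c -o prog
    clang -O2 -flto a.c b.c -o prog
    # clang additionally offers the cheaper, parallel ThinLTO variant
    clang -O2 -flto=thin a.c b.c -o prog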
Other people have mentioned gcc's previously terrible error messages and inability to dump ASTs.
I don't think this is up to HN standards.
Sorry, what? VC++ already had link-time code generation by 2005. (No clue exactly which year it was introduced though.)
I don’t recall if HP actually shipped it, but I know they had a tech report or research paper around 1994 that did LTO.
I have a vague recollection that DEC may have shipped it in the 90s as well.
It's possible some spaces can't have more than one player due to network effects (network protocols, for instance, such as the Web); the history of the Internet looks like a Pod People or Borg plot, where a more diverse ecosystem is consumed and replaced by a single all-consuming entity that gradually assimilates all distinct individuals. What we lost in diversity we gained in losing bizarre email gateways, I suppose. But languages are meant to be written to actual, real-world, written-down standards, right? No possibility of friction when moving from one compiler to another, right?
I don't believe clang is any better than GCC anymore. But other commenters have talked about it, and I don't want to be redundant.
Looks like this hasn't been updated in a while. As of Clang 9.0 they migrated everything to the Apache 2.0 license, which is not nearly as permissive as BSD. Apache 2.0 mixes US contract law with copyright law, and that is considered wholly "not permissive enough" by many, most notably OpenBSD, which is stuck on Clang 8.0.1. They also migrated the libc++/libc++abi C++ standard libraries from MIT to Apache 2.0 (which was a real dick move), but they don't care.
Addressing the patents question seems to be the main reason.
"1) Some contributors are actively blocked from contributing code to LLVM."
> These contributors have been holding back patches for quite some time that they’d like to upstream. Corporate contributors (in particular) often have patents on many different things, and while it is reasonable for them to grant access to patents related to LLVM, the wording in the Developer Policy can be interpreted to imply that unrelated parts of their IP could accidentally be granted to LLVM (through “scope creep”).
> This is a complicated topic that deals with legal issues and our primary goal is to unblock contributions from specific corporate contributors.
The legally dubious relicensing was not only unnecessary, it is now preventing use of 9.0 and later releases, and future contributions, by OpenBSD, which has a long history of opposing Apache 2.0, and which uses LLVM/Clang as the default compiler for its kernel/userland and a ports tree with 10,000 software packages.
At this point, OpenBSD has decided that two very popular open source licenses (GPLv3 and Apache 2) are unacceptable to them. That has walled them off from a lot of open source software. They seem to think that it's incumbent on everyone else to adopt licenses they like. They are going to continue to be disappointed.
And also too permissive for many others. It does not protect against tivoization or freeloading, and yet it's not compatible with GPLv2.
Haven't checked recently, because gcc sped up and still produces better binaries.
It can dump the AST and much more:
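(A quick sketch; the pass numbers in the dump file names vary by GCC version.)

    # hello.c: int add(int a, int b) { return a + b; }
    gcc -c -fdump-tree-original-raw -fdump-tree-gimple hello.c
    # writes hello.c.*.original and hello.c.*.gimple next to the
    # object file; -fdump-tree-all dumps every pass in the pipeline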
Most developers who compile from source (e.g. working on LLVM itself) don't need that.
Building in release mode is much smaller, probably ~5GB or so (from rough memory, it's been a while).
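If you want to try it yourself, the usual release configuration is roughly this (a sketch from memory, using the monorepo layout):

    # configure and build a Release clang from an llvm-project checkout
    cmake -G Ninja ../llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS=clang
    ninja clang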
LLVM 9.0 with Clang, a huge array of tools, etc., is 1.9GB on macOS.
This shouldn't be on the frontpage of HN. If anything, Clang should take this down or revise it.
> The PCC source base is very small and builds quickly with just a C compiler.
> PCC doesn't support Objective-C or C++ and doesn't aim to support C++.
It also shipped with many historic operating systems, Plan 9 from Bell Laboratories being one of them, if I remember correctly.
> To make it easier to share code with other systems, Plan 9 has a version of the compiler, pcc, that provides the standard ANSI C preprocessor, headers, and libraries with POSIX extensions. Pcc is recommended only when broad external portability is mandated. It compiles slower, produces slower code (it takes extra work to simulate POSIX on Plan 9), eliminates those parts of the Plan 9 interface not related to POSIX, and illustrates the clumsiness of an environment designed by committee. Pcc is described in more detail in APE—The ANSI/POSIX Environment, by Howard Trickey.
I'm obviously aware that it wasn't the main compiler/compiler suite for Plan 9 (I've submitted links to Plan 9 papers quite a few times), but it was there.
> The pcc command acts as a front end to the Plan 9 C compilers and loaders.
"Clang can serialize its AST out to disk and read it back into another program, which is useful for whole program analysis.
GCC does not have this. GCC's PCH mechanism (which is just a dump of the compiler memory image) is related, but is architecturally only able to read the dump back into the exact same executable as the one that produced it (it is not a structured format)."
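The serialization half is exposed right on the command line (a minimal sketch; reading the .ast back is done programmatically, e.g. libclang's clang_createTranslationUnit can load it):

    clang -emit-ast hello.c    # writes hello.ast, the serialized AST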
Clang, you had me at 'hello'.
The point I outlined above is just icing on the cake!
However, it's intended for diagnosing issues in the compilers or plugins; it's definitely not meant to be used as an interoperable format to be loaded back into a program.
gcc doesn't even provide a way to specify the output path for the dump file. Too bad, as reliable AST dumping could enable an AST-based ccache, instead of the preprocessed-code-based one we have today, which would allow compilation caching for preprocessor-less languages.
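Conceptually it would look something like this (a purely hypothetical toy sketch using clang; a real implementation would need to canonicalize the serialized AST first, since it embeds absolute paths and so isn't a stable hash key as-is):

    # hypothetical cache lookup keyed on the serialized AST
    clang -emit-ast -o /tmp/tu.ast foo.c
    key=$(sha256sum /tmp/tu.ast | cut -d' ' -f1)
    test -f "$CACHE_DIR/$key.o" && cp "$CACHE_DIR/$key.o" foo.o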