Instances like this are why the GPL has the teeth people complain about. Without them, GNU would've been cannibalized decades ago.
To put some context on this, I've worked for companies that don't release source. One of those 'companies' was the U.S. Government, and sorry, but you can't release classified work (plenty of other Government code is open, and I mailed plenty of CDs back in the day). I'd rather they use my work and remain closed source than not use my work, or use it without my knowledge. As it is, they can legally and ethically use my work, they can file problem reports, push fixes if they want, and otherwise participate.
It's not a bad thing; it's a choice.
By your reasoning, when somebody makes money on anything, everybody in the whole production chain should get a cut. Obviously, that's not how the economy works in general.
I'm tempted to conclude (as they provide no evidence otherwise) that this is snake oil -- that they are just calling into functionality which already exists in clang, maybe with a bit of ccache on top. I'll be happy to be proved wrong.
For example, some might say that you can't automatically cache headers because of code like this:
#define float double
But if you're a bit clever about it, you should be able to cache opportunistically and then detect when your cache has been invalidated.
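As a rough illustration of that idea (a made-up sketch, not how zapcc or clang actually does it; HeaderCacheEntry and the other names are invented): cache the header's processed form together with the macro definitions it observed, and on the next include re-check those definitions, throwing the entry away if any of them changed.

    // Toy model of opportunistic header caching with invalidation detection.
    // HeaderCacheEntry and MacroTable are invented names for illustration only.
    #include <map>
    #include <optional>
    #include <string>

    using MacroTable = std::map<std::string, std::string>;

    struct HeaderCacheEntry {
        std::string processedResult;   // whatever expensive-to-rebuild form we keep
        MacroTable observedMacros;     // macros the header read, with their values at cache time
    };

    // The cached result is reusable only if every macro the header depended on
    // still has the same definition (or is still undefined).
    bool stillValid(const HeaderCacheEntry& e, const MacroTable& currentMacros) {
        for (const auto& [name, cachedValue] : e.observedMacros) {
            auto it = currentMacros.find(name);
            std::string currentValue = (it == currentMacros.end()) ? "<undef>" : it->second;
            if (currentValue != cachedValue)
                return false;          // e.g. someone wrote #define float double before the #include
        }
        return true;
    }

    std::optional<std::string> lookup(const HeaderCacheEntry& e, const MacroTable& currentMacros) {
        if (stillValid(e, currentMacros))
            return e.processedResult;  // hit: skip re-scanning the header
        return std::nullopt;           // invalidated: fall back to a full reparse
    }

    int main() {
        HeaderCacheEntry cached{"<tokens for foo.h>", {{"float", "<undef>"}}};
        MacroTable now{{"float", "double"}};              // the troublesome #define is now in effect
        return lookup(cached, now).has_value() ? 1 : 0;   // returns 0: the stale cache is correctly rejected
    }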
There are more technical details on the cfe-dev mailing list.
To the point: zapcc will not reprocess foo.h. This is a limitation. However, zapcc does deal with template instantiations and the like correctly and quickly, for example in the various Boost libraries, Eigen, and the LLVM source code.
Header file preprocessing is about having a faster representation of included material which can be loaded instead of scanning the header files again. This can be used even when dependent files change.
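Precompiled headers in existing compilers are one form of this. As a very rough sketch of the general idea (invented cache layout and a placeholder tokenize(); nothing to do with zapcc's internals), the stored form is keyed on the header's own content, so it survives changes to other files:

    // Toy "preprocessed header" store: keep an already-scanned form of a header
    // on disk, keyed by the header's content, and load it instead of re-lexing.
    // The cache layout and tokenize() are placeholders, not a real PCH format.
    #include <filesystem>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>

    namespace fs = std::filesystem;

    std::string readFile(const fs::path& p) {
        std::ifstream in(p, std::ios::binary);
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    // Stand-in for the expensive work we want to avoid repeating.
    std::string tokenize(const std::string& source) { return "TOKENS:" + source; }

    std::string loadOrBuild(const fs::path& header, const fs::path& cacheDir) {
        std::string source = readFile(header);
        // Key on the header's own content, not on the build as a whole, so
        // changes to other files don't invalidate this entry.
        fs::path cacheFile = cacheDir / std::to_string(std::hash<std::string>{}(source));
        if (fs::exists(cacheFile))
            return readFile(cacheFile);         // fast path: load the stored form
        std::string tokens = tokenize(source);  // slow path: scan the header once
        fs::create_directories(cacheDir);
        std::ofstream(cacheFile, std::ios::binary) << tokens;
        return tokens;
    }

    int main() {
        std::ofstream("foo.h") << "struct Foo { int x; };\n";  // stand-in header
        loadOrBuild("foo.h", ".hdrcache");  // first call scans; later runs load the cached form
    }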
In my case, the build result, in _release_ mode, was a DLL file of about 40MB, and corresponding debug info of about 100MB. In _debug_ mode, multiply the debug info by a factor of roughly 5-10. (Individual PDBs on disk collected together were ~GB, but the final PDB was smaller.) Release parallel builds on a 4-core machine (with HT), with optimizations and all, were noticeably faster because they generated less I/O.
Linking _another_ DLL consisting of many (thousands of) separate object files took more time than the compilation itself.
There's also another product (I won't name it because it created more problems for me than it solved) which integrates into VS and seamlessly distributes the compilation over machines on the local network. I stopped using it because it often generated corrupt PDB files and messed up IntelliSense in VS.
To summarize, I have two issues with their system (apart from the fact that "good enough" solutions for distributed compilation already exist): 1) they say nothing about speeding up the linking stage, and, related to that, 2) if they mess with linking, I'm not confident that they are reliably handling debug info. Sure, everything works in "toy projects", but.
zapcc works well for C++ templates found in all modern C++ libraries.
How about ld.gold?
Did you benchmark gold to find out how much MT speeds up linking?
One trick to know: never make ld.gold the default system linker, because there will always be some awkward kernel module that ends up corrupted at some point because it relies on ld.bfd.
Instead, it's recommended to specify -fuse-ld=gold on the command line for your own code; see https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html
TCC is an extremely fast C compiler because it doesn't do any optimising at all.
I guess you'll get things like this when you use a BSD license.
When it comes to large, fast-moving projects, the BSD license allows you to close the source, but you're only going to obsolete yourself.
This is what is happening with several vendors who took FreeBSD closed source: Isilon, Citrix, etc. Their FreeBSD forks are so far behind that they are struggling and can't backport changes anymore. So they're forward-porting their proprietary enhancements, or rewriting them again (e.g. NFS performance fixes, Xen Dom0), and pushing them back upstream to FreeBSD so they can move their products to a modern OS release.
The fantasy you people have with "BSD lets them steal code and never give back again!" doesn't really work in practice. It saves them tons of time and money to cooperate with the community.
That is by design. As the other commenter said, they will either have to release fixes upstream, or be forced to re-apply them over and over just to stay up to date.
>Initial compilation took 28.53 seconds in Zapcc, and 41 seconds in Qt Creator.
>Thereafter, re-compilations took 1.15 seconds in Zapcc, and 12 seconds in Qt Creator.
This doesn't fill me with much faith in their other statements. Qt Creator is not a compiler; it is an IDE and can be set up to use a variety of compilers.
Even their first compilation (the one you still need to perform when using ccache) is twice as fast as GCC's.
In any case, I think the primary audience for this is people using clang, not GCC.
Even in the first compilation, zapcc will cache between the first and second files compiled. The only case where there is no caching is when zapcc starts or restarts, for example when the compilation flags are changed.
Also, ccache still generates the preprocessor output, even in those cases where it deduces that compiling is unnecessary (and just pulls out a previously built object file). It makes that deduction based on a hash of the preprocessor output.
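For reference, the decision roughly works like this (a simplified sketch of the idea, not ccache's actual code, cache layout, or hashing; the cc invocation and paths are placeholders):

    // Sketch of a ccache-style decision: always run the preprocessor, hash its
    // output plus the compile command, and reuse a cached object file on a hit.
    #include <cstdlib>
    #include <filesystem>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>

    namespace fs = std::filesystem;

    std::string slurp(const fs::path& p) {
        std::ifstream in(p, std::ios::binary);
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    void compileWithCache(const std::string& compiler, const std::string& flags,
                          const std::string& source, const fs::path& output,
                          const fs::path& cacheDir) {
        // The preprocessor always runs: its output is what gets hashed.
        std::system((compiler + " " + flags + " -E " + source + " -o pre.i").c_str());
        size_t key = std::hash<std::string>{}(compiler + flags + slurp("pre.i"));
        fs::path cached = cacheDir / (std::to_string(key) + ".o");

        if (fs::exists(cached)) {                 // hit: skip compilation entirely
            fs::copy_file(cached, output, fs::copy_options::overwrite_existing);
            return;
        }
        // Miss: do the real compile, then remember the object file for next time.
        std::system((compiler + " " + flags + " -c " + source + " -o " + output.string()).c_str());
        fs::create_directories(cacheDir);
        fs::copy_file(output, cached, fs::copy_options::overwrite_existing);
    }

    int main() {
        compileWithCache("cc", "-O2", "main.c", "main.o", ".objcache");
    }

The real tool also folds things like the compiler's identity into the hash, which this sketch only gestures at.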
So a cache of what was generated last time, which will have to be replaced if the input files or compile command changes. So, a lot like ccache. Which is why I asked.