Zapcc: A faster C++ compiler (zapcc.com)
61 points by turrini on May 23, 2015 | 38 comments



How history repeats. A company takes a BSD-licensed work, closes the source, and makes money selling "improvements" while the original authors get nothing.

Instances like this are why the GPL has the teeth people complain about. Without them, GNU would've been cannibalized decades ago.


And instances like this are why some of us use licenses like BSD, CC-BY, and so on. It's not an abuse to do this; it is allowed by the license, and we choose licenses based on how we want our work to be used.

To put some context on this, I work for companies that don't release source. One of those 'companies' was the U.S. Government, and sorry, but you can't release classified work (plenty of other Government code is open, and I mailed plenty of CDs back in the day). I'd rather they use my work and remain closed source than not use my work, or use it without my knowledge. As it is, they can legally and ethically use my work; they can file problem reports, push fixes if they want, and otherwise participate.

It's not a bad thing; it's a choice.


The original authors spent exactly zero hours on designing and implementing the improvements. Why should they get anything? They work for the satisfaction and money provided by Apple and other companies sponsoring the project.

By your reasoning, when somebody makes money on anything, everybody in the whole production chain should get a cut. Obviously, that's not how the economy works in general.


Is it really that unusual in our industry to work for money? :)


How is this different from someone selling barely modified WordPress websites? I'm not in that line of work, but obviously many small businesses in America rely on selling custom open source software.


The really big (obvious) missing comparison here is with clang's pre-compiled header support, and also with using ccache.

I'm tempted to think (as they provide no evidence otherwise) that this is snake oil: they are just calling stuff which already exists in clang, maybe with a bit of ccache on top. I'll be happy to be proved wrong.


I don't think this is snake oil. The reasons that are typically given for not automatically pre-compiling headers (we're not talking about manually specified pre-compiled headers here) can all be worked around.

For example, some might say that you can't automatically cache headers because of code like this:

    // file1.cpp
    #include "foo.h"

    // file2.cpp
    #define float double
    #include "foo.h"

But if you're a bit clever about it you should be able to cache opportunistically and then detect if your cache is invalidated.
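
A minimal sketch of that idea (purely illustrative, not zapcc's or clang's actual mechanism; parse_header and the full-macro-state key are stand-ins): cache each header's parse result keyed on the macro state it was included under, so file2.cpp's #define simply produces a different cache entry instead of poisoning file1.cpp's.

    #include <map>
    #include <string>
    #include <utility>

    struct ParsedHeader { /* tokens, AST, ... elided */ };

    // Keying on the full macro state is overly conservative (a real
    // implementation would track only the macros the header actually uses),
    // but it illustrates how invalidation can be detected.
    using MacroState = std::map<std::string, std::string>;  // name -> definition

    ParsedHeader parse_header(const std::string&, const MacroState&) { return {}; }  // toy stand-in

    std::map<std::pair<std::string, MacroState>, ParsedHeader> header_cache;

    const ParsedHeader& include_header(const std::string& path, const MacroState& macros) {
        auto key = std::make_pair(path, macros);
        auto it = header_cache.find(key);
        if (it == header_cache.end())
            it = header_cache.emplace(key, parse_header(path, macros)).first;
        return it->second;  // reused only when the surrounding macro state matches
    }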


I am the principal developer of zapcc. We have added a FAQ

http://www.zapcc.com/faq/

and more tech details in the cfe-dev mailing list

http://lists.cs.uiuc.edu/pipermail/cfe-dev/2015-May/043174.h...

To the point: zapcc will not reprocess foo.h. This is a limitation. However, zapcc does handle template instantiations and the like correctly and fast, for example in the various Boost libraries, Eigen, and the LLVM source code.


ccache caches the object file produced from the entire translation unit so it can avoid recompiling it. It doesn't avoid including all the header files; it still has to process all that material to determine the hash. Then, if the translation unit has changed, it is compiled conventionally.

Header pre-compilation is about having some faster representation of the included material which can be loaded instead of scanning the header files. This can be used even when dependent files change.


It seems they're benchmarking on toy programs to create media hype. In my previous job I found two non-removable bottlenecks in C++ builds: 1) linking (I don't know of any parallel linker), and 2) IO (writing debug information). Neither of these is relevant in their benchmarks.

In my case, the build result in _release_ mode was a DLL of about 40MB, with corresponding debug info of about 100MB. In _debug_ mode, multiply the debug info by a factor of roughly 5-10. (The individual PDBs on disk added up to about a GB, but the final PDB was smaller.) Release parallel builds on a 4-core machine (with HT), with optimizations and all, were noticeably faster because they generated less IO.

Linking _another_ DLL, consisting of many (thousands of) separate object files, took more time than the compilation itself.

There's also another product (I won't name it because it created more problems for me than it solved) which integrates into VS and seamlessly distributes compilation over machines on the local network. I stopped using it because it often generated corrupt PDB files and messed up IntelliSense in VS.

To summarize, I have two issues with their system (apart from "good enough" solutions for distributed compilation already existing): 1) they say nothing about speeding up the linking stage, and related to that 2) if they mess with linking, I'm not confident that they are reliably handling debug info. Sure, everything works in "toy projects", but.


What do you mean by "toy"? In this example GCC takes 3+ minutes:

https://drive.google.com/file/d/0B1WlUivCnXhyRVk0LUl6akxVU2s...


We've used a distributed compiler product (incredibuild) with great success.


We (zapcc) do nothing for link acceleration or debug info. The standard system linker is used.

zapcc works well for C++ templates found in all modern C++ libraries.


> 1) linking (I don't know of any parallel linker)

How about ld.gold?


Oh, I wasn't aware that it was multithreaded. I just found out about it now.

Did you benchmark gold to find out how much MT speeds up linking?


I didn't do precise benchmarking, but when used on real-world codebases, it speeds up linking by about 2x.

One trick to know: never make ld.gold the default system linker, because there will always be some awkward kernel module that ends up corrupted at some point because it relies on ld.bfd.

Instead, it's recommended to specify -fuse-ld=gold on the command line for your own code; see https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html
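
For example (the object and output names here are just placeholders), when g++ drives the link:

    g++ -fuse-ld=gold main.o util.o -o app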


I didn't find any mention of the speed of the output... so I wonder if they just disabled the optimiser completely.

TCC is an extremely fast C compiler because it doesn't do any optimising at all.


The optimizer is not disabled; a few global optimizations are. The speedup mostly comes from not having to reparse the headers, re-instantiate templates, and codegen the generated templated code again, rather than from skipping minor optimizations. This is a really slow process that is frequently and needlessly repeated across multiple compile units. We are looking into this tradeoff, maybe making the decision user-configurable or based on the optimization level.


The reason C++ builds so slowly is mainly headers and templates, IIRC; they seem to just be dealing with caching header processing.
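
A purely illustrative example of that cost (a made-up header, not one of their benchmarks): every .cpp that includes something like this re-parses <vector> and <algorithm> and re-instantiates sorted_copy for the types it uses, so the same front-end and codegen work is repeated in each translation unit.

    // widget_util.h (hypothetical)
    #pragma once
    #include <algorithm>
    #include <vector>

    template <typename T>
    std::vector<T> sorted_copy(std::vector<T> v) {
        std::sort(v.begin(), v.end());
        return v;
    }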


So they forked Clang into their own closed source version?

I guess you'll get things like this when you use a BSD license.


If they have the manpower to stay in sync with clang's upstream, that is great for them. If they don't, they're going to have to open source it to stay in sync with upstream.

When it comes to large, fast-moving projects, the BSD license allows you to close the source, but you're only going to make yourself obsolete.

This is what is happening with several vendors who took FreeBSD closed source: Isilon, Citrix, etc. Their FreeBSD forks are so far behind that they are struggling and can't backport changes anymore. So they're forward-porting their proprietary enhancements or rewriting them again (e.g. NFS performance fixes, Xen Dom0) by pushing them back upstream to FreeBSD, so they can move their product to a modern OS release.

The fantasy you people have with "BSD lets them steal code and never give back again!" doesn't really work in practice. It saves them tons of time and money to cooperate with the community.


> I guess you'll get things like this when you use a BSD license.

That is by design. As the other commenter said, they will either have to release fixes upstream, or be forced to re-apply them over and over just to stay up to date.


We frequently merge with llvm svn, last time three days ago. It is a pain, but that's the price of keeping up with such a dynamic project. We do upstream enhancements and bug fixes for clang and LLVM code; see the {cfe,llvm}-commits mailing lists.


From their test cases, Qt 5 to be specific:

>Initial compilation took 28.53 seconds in Zapcc, and 41 seconds in Qt Creator.

>Thereafter, re-compilations took 1.15 seconds in Zapcc, and 12 seconds in Qt Creator.

This doesn't fill me with much faith in their other statements. Qt Creator is not a compiler; it is an IDE and can be set up to use a variety of compilers.


Here's some info on clang mailing-list from zapcc author: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2015-May/043174.h...


See the zapcc developer's post on clang.devel:

http://article.gmane.org/gmane.comp.compilers.clang.devel/42...


This is mostly a waste of time. C++ modules will do the same thing when they are completely finished (folks I work with are the ones doing it, and it is very far along), and will be standardized, etc.
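
For reference, a minimal module roughly as it was later standardized in C++20 (the exact syntax was still in flux at the time): the importer loads the compiled interface of math instead of textually re-parsing a header.

    // math.cppm -- module interface, compiled once
    export module math;
    export int square(int x) { return x * x; }

    // main.cpp -- reuses the compiled form of the module
    import math;
    int main() { return square(7); }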


Looking forward to this. By "very far along" do you mean in terms of specification, implementation, or both?


Both :)


I typically use ccache on Linux, which does the headline feature of this. If there's a Zapcc rep here, how does Zapcc compare? Is the precompiled header feature a big improvement?


I don't know anything specific about zapcc but based on the numbers here:

http://www.zapcc.com/case-studies/compiling-boost/

even their first compilation (the one you still need to perform when using ccache) is twice as fast as GCC.

In any case, I think the primary audience for this is people using clang, not GCC.


You are correct: zapcc should be compared only to clang built from the same svn revision. clang vs. gcc is another matter altogether.

Even on a first compilation, zapcc will cache between the first and second files compiled. The only case where there is no caching is when zapcc starts or restarts, for example when compilation flags are changed.


ccache doesn't speed up your compile time; it avoids compiling when that is possible. Avoiding compiling is not possible when you've changed files and need them recompiled. In that case, a ccache build proceeds using the usual compiler command line.

Also, ccache generates the preprocessor output even in those cases where it deduces that compiling is unnecessary (and just pulls out a previously made object file), because it makes that deduction based on a hash over the preprocessor output.
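
A toy sketch of that decision (not ccache's real code; preprocess and compile_source stand in for shelling out to the compiler, and the real key also covers flags, compiler version, and so on):

    #include <functional>
    #include <string>
    #include <unordered_map>

    // Toy stand-ins; ccache actually invokes the real compiler for both steps.
    std::string preprocess(const std::string& src) { return "<preprocessed " + src + ">"; }
    std::string compile_source(const std::string& src) { return "<object code for " + src + ">"; }

    std::unordered_map<std::size_t, std::string> object_cache;  // hash -> cached .o

    std::string build(const std::string& src, const std::string& flags) {
        // The key is a hash over the *preprocessed* output, so every header
        // still has to be read and expanded even on a cache hit.
        std::size_t key = std::hash<std::string>{}(preprocess(src) + flags);
        auto it = object_cache.find(key);
        if (it != object_cache.end())
            return it->second;                           // hit: no compilation at all
        return object_cache[key] = compile_source(src);  // miss: compile conventionally
    }

    int main() {
        build("file1.cpp", "-O2");  // miss: compiles
        build("file1.cpp", "-O2");  // hit: object pulled from cache
    }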


The headline feature to which I referred, from http://www.zapcc.com/product/features/, is "Zapcc caches compiled headers and generated code between compiles".

So a cache of what was generated last time, which will have to be replaced if the input files or compile command changes. So, a lot like ccache. Which is why I asked.



When warp was released, Clang's built-in preprocessor was already faster. I haven't seen any updated benchmarks.

https://news.ycombinator.com/item?id=7489532


how interesting, thank you very much.


Note the warp commits; the project is not very active. Maybe it's mature enough and requires few updates, I don't know, as zapcc uses the clang Preprocessor and not warp.

https://github.com/facebook/warp/commits/master



