A Chrome build is truly a computational load to be reckoned with. Without the distributed build, a from-scratch build of Chrome will take at least 30 minutes on a MacBook Pro--maybe an hour(!). TBH I don't remember toughing out a full build without resorting to goma. Even on a hefty workstation, a full build is a go-for-lunch kind of interruption. It will absolutely own a machine.
How did we get here? Well, C++ and its stupid O(n^2) compilation complexity. As an application grows, the number of header files grows because, as any sane and far-thinking programmer would do, we split the complexity up over multiple header files, factor it into modules, and try to create encapsulation with getters/setters. However, to actually have the C++ compiler do inlining at compile time (LTO be damned), we have to put the definitions of inline functions into header files, which greatly increases their size and processing time. Moreover, because the C++ compiler needs to see full class definitions to, e.g., know the size of an object and its inheritance relationships, we have to put the main meat of every class definition into a header file! Don't even get me started on templates. Oh, and at the end of the day, the linker has to clean up the whole mess, discarding the vast majority of the compiler's output due to so many duplicated functions. And this blowup can be huge. A debug build of V8, which is just a small subsystem of Chrome, will generate about 1.4GB of .o files which link to a 75MB .so file and 1.2MB startup executable--that's an 18x blowup.
Ugh. I've worked with a lot of build systems over the years, including Google's internal build system, open sourced as Bazel. While these systems have scaled C++ development far further than anyone ever thought possible, and are remarkable engineering achievements, we just need to step back once in a while and ask ourselves:
Damn, are we doing this wrong?
Starting with a full build that takes hours and shrinking it to under 15-20 minutes seems pretty par for the course for truly large C++ projects. You don't get a fast build process for free, but if the team makes it a priority, a lot can be done.
EDIT: The times mentioned were for a full build; you rarely do a full build, and incremental builds should be the majority. Places that don't make incremental builds 100% reliable drive me crazy and waste so much developer time. This is common, but it's a lame excuse. Just do the work and fix it.
Here's a post about what I did:
Personally, this meant the error messages I got were helpful enough that my first Meson-built project was working within half an hour of deciding to port it over, despite using several system libraries and doing compile-time code generation.
Meson's language is not Turing-complete, so it's easy to analyze for errors. Unlike CMake and autotools, Meson's language looks like a real (pythonish) programming language, and it isn't string-oriented; dances of escaping, substitution, and unescaping are uncommon.
Compared to autotools or hand-rolled Makefiles, CMake is a step in the right direction; meson is a leap.
CMake tries hard to do better, but then introduces its own layers of craziness. So it's fine as long as I am not doing anything unusual, but as soon as I need to understand what is going on, I find a dizzying array of barely working moving parts beneath me.
I think a lot of people don't estimate correctly just how huge Chrome is.
OSes are much larger than their kernels; I'd guess all the driver code exceeds the actual kernel.
People always think their code base is large, but having built most of the Call of Duties and many Unreal games, all the OS code I've worked on is trivial in size by comparison. There is probably something bigger, but games seem bigger than many major apps in my experience.
So, lots of work has been done to deal with the build times. And probably not a lot of low hanging fruit to be found. But, still more work is being done. Support is being added for 'jumbo' builds (aka unity builds, where multiple translation units are #included into one) which is helping a bit with compile and link times.
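For anyone who hasn't seen one, a jumbo/unity translation unit is just a generated source file that #includes several ordinary .cc files, so their shared headers get parsed once instead of once per file. A minimal sketch of the idea (file names are made up, not Chrome's actual setup):

// jumbo_unit_0.cc -- generated by the build system (hypothetical file names)
// Each of these .cc files would normally be its own translation unit;
// including them here means their common headers are parsed only once.
#include "painter.cc"
#include "layout_tree.cc"
#include "text_shaper.cc"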
EDIT: I'm sure lots of work has been done, not trying to disparage that. Just sharing my experience on my projects; I've never worked with Chrome.
Yes, and I mean "we" as in "this industry".
I just recently talked to someone whose Swift framework(s) were compiling at roughly 16 lines per second. Spurred on by Jonathan Blow's musings on compile times for Jai, I started tinkering with tcc a little. It compiled a generated 200KLOC file (lots of small functions) in around 200 ms.
Then there are Smalltalk and Lisp systems that are always up and pretty much only ever compile the current method.
We also used to have true separate compilation, but that appears to be falling out of favor.
Of course none of these are C++ and they also don't optimize as well etc. Yet, how much of the code really needs to be optimized that well? And how much of that code needs to be recompiled that much?
So we know how to solve this in principle, we just aren't putting the pieces together.
https://www.youtube.com/watch?v=14zlJ98gJKA
He mentioned that a decent-sized game should compile either instantly or in a couple of seconds.
Also consider the leverage factor. Improvements to the compiler benefit all users of the programming language, so it's worthwhile to invest in high quality compilers.
Yes, we can use caching compilers (https://wiki.archlinux.org/index.php/ccache) to speed up builds with few changes. We can lower optimization levels (although that buys you little compile speed relative to the runtime speed you lose, and it makes your program do slightly different things).
There's no slider from "pessimum" to "optimum". You need to do wildly different things to optimize past this point for compile speed. Erlang hot-reload and at-runtime-code-gen from other langs come to mind. But that will almost definitely slow down your program because of the new infrastructure your code has to deal with.
I have observed that there can be a nice balance with Java and the auto-reloading tools that are available for it. But I am unaware of their limitations and how a web browser might trigger those limitations.
D has very fast compilation times compared with C++.
Rust is another option.
* Everything up to and including typechecking (and borrowchecking) takes a third to a half of the time, with lowering from there to a binary taking the rest of the time; that means (a) you can get a 2-3x speedup if you only need to check the code is compilable, and (b) overall speed isn't likely to improve a lot unless LLVM gets a lot faster.
* Rust doesn't currently do good incremental compilation, so there are potential big wins for day-to-day use there.
* There is a mad plan to do debug builds (unoptimised, fast, for minute-to-minute development) using a different compiler backend, Cretonne. If that ever happens, it could be much, much faster.
Have you actively tried to find one?
> You can have both.
Pretty bold claim, without proof.
A million lines of code represents an AST with a few million nodes in it, which compiles to a binary of a few megabytes. To do this we have computers with a dozen cores running at 4GHz each, 100GB of memory and blazingly fast SSD drives.
It's easy to forget, but computers themselves aren't slow. The software we write is just inefficient.
Most C++ devs have to work with tools that currently exist and so we are stuck with what the compiler devs give us. Believe it or not C++ compiler devs are pretty smart people and have largely optimized it as much as possible without a language redesign.
That language redesign is in the works with modules, but the dust hasn't settled yet, so that is also a discussion for the future. In the meantime no other language delivers the performance C++ does right now. So if I want to ship product right now the very real dilemma is fast product with slow builds (and a bunch of tools for dealing with that) with C++ or use some other language for a faster compiler and a slower product.
Then there is Rust, but that is another whole can of worms and not in use in most shops yet (Just switching to something has a huge cost).
C compiles a _lot_ faster than C++, so that's always an option. And as other people have pointed out, you can get C++ code to compile much more quickly by being very disciplined about what features you use and how your code is laid out.
So I agree that if you want to ship something right now all your options have significant downsides. I think software engineers as an industry don't take tooling nearly as seriously as they should. Tools are performance amplifiers and we currently waste a staggering number of manhours working with poorly designed, unreliable, poorly documented and agonizingly slow tools.
Where did I write "who cares about performance"? And why do you think any of what I said is going to cost 2x-3x performance? Performance has been either a major part of or simply my entire job for most of my career, and I usually make projects I run into at least an order of magnitude faster. For example by switching a project from pure C to Objective-C. Or ditching SQLite despite the fact that it's super-optimized. Or by turning a 12+ machine distributed system into a single JAR running on a single box.
The Web browser and WWW were invented on a NeXT with Objective-C. It wasn't just a browser, but also an editor. In ~5KLOC written in a couple of months by a single person. NCSA Mosaic took a team of 5 a year and was 100KLOC in C++. No editing. So pure code-size is also a problem. And of course these days code size has a significant performance impact all by itself, but also 20x the code in C++ is going to take a significantly longer time to compile.
In terms of performance, the myth that you need to use something like C++ for the entire project is just that: a myth. First, the entire codebase doesn't need to have the same performance levels, a lot of code is pretty cold and won't have measurable impact on performance, especially if you have good/hi-perf components to interact with. See 97:3 and "The End of Optimizing Compilers". Or my "BookLightning" imposition program, which has its core imposition routine written in Objective-Smalltalk, probably one of the slowest languages currently in existence. Yet it beats Apple's CoreGraphics (written in C and heavily optimized) at the similar task of n-up printing by orders of magnitude.
Second, time lost waiting for the compiler is not "convenience", it is productivity. If you get done more quickly, you have more time to spend on optimizing the parts of the program that really matter, and thoughtful optimization tends to have a much larger impact on performance than thoughtless performance. The idea that this is purely a language thing is naive. See, for example, https://www.youtube.com/watch?v=kHG_zw75SjE
Third, you don't need to have C++ style compilers and features to have a language that has fast code, see for example Turbo Pascal mentioned in other comments. When TP came out, we had a Pascal compiler running on our PDP-11 that used something like 4-5 passes and took ages to compile code. TP was essentially instantaneous, so fast that our CS teacher just kept hitting that compile key just for the joy of watching it do its thing. It also produced really fast code.
Use high-level interpreted languages to make your life easier, but also use them to make the page cache's life easier.
Although the lower levels are written in a mix of C and C++, the OS Frameworks are explicitly designed for Java, C# and VB.NET.
Trying to use C or C++ for anything more than moving pixels or audio around is a world of pain.
The Android team even dropped the idea of using C++ APIs on Brillo and instead brought the Android stack, with ability to write user space drivers in Java (!).
No, I will switch to Firefox or something. I'm a user, I don't care how hard it is for developers; I care about my workflow, which is using a browser on various machines, some of which are very slow.
A faster compiler isn't just some frivolity. It's a power tool. A force multiplier.
Now we can see it as a big mistake, but in those days it was probably one of the reasons why C++'s adoption took off.
Also, while C lacked modules, most Algol- and PL/I-derived languages have supported them since the late 60's.
Swift's case has the issue of mixing type inference with subtyping, so lots of time is spent there.
All in all, I really miss TP compile times, but at least on Java/.NET, even with AOT, the compilers are close enough.
EDIT: some typos
I know "Design and Evolution of C++" quite well, and have been a C++ user since Turbo C++ 1.0 for MS-DOS.
Stroustrup made the decision on purpose and consciously, but it turned out to have disastrous effects.
1. OCaml generates additional information that it stores in .cmi/.cmx files.
2. OCaml does not allow for mutual dependencies between modules, even in the linking stage. Object files must be provided in topologically sorted order to the linker.
3. OCaml supports shared generics, which cuts down on the amount of code replication (at the expense of requiring additional boxing and tagged integers in order to have a uniform data representation).
> 1. OCaml generates additional information that it stores in .cmi/.cmx files.
On this point I'd say that it could probably embed the cmx file as "NOTE" sections in the ELF object files, but likely they didn't do it that way because a separate file is easier to make work cross-platform. Every "pre-compiled header" system I've seen generates some kind of extra file of compiled data which you have to manage, so I don't think this is a roadblock.
> 2. OCaml does not allow for mutual dependencies between modules, even in the linking stage. Object files must be provided in topologically sorted order to the linker.
I believe this is to do with the language rather than to do with modules? For safety reasons, OCaml doesn't allow uninitialized data to exist.
Although (and I say this as someone who likes OCaml) it does sometimes produce contortions where you have to split a natural module in order to satisfy the dependency requirement. I've long said that OCaml needs a better system for hierarchical modules and hiding submodules (better than functors, which are obscure for most programmers).
> 3. [...] at the expense of requiring additional boxing and tagged integers [...]
I think this is fixed by OCaml GADTs: https://blogs.janestreet.com/why-gadts-matter-for-performanc... However this is a new feature and maybe not everyone is using it so #3 is still a fair point.
Both, sort of. The problem is that mutually recursive modules are tricky. So, it's a limitation of the language, but one that is there for a reason.
> I think this is fixed by OCaml GADTs
No, GADTs solve a different problem. Essentially, normal ADTs lose type information (due to runtime polymorphism). GADTs give you compile time polymorphism, so the compiler can track which variant a given expression uses. Consider this:
# type t = Int of int | String of string;;
type t = Int of int | String of string
# [ Int 1; String "x" ];;
- : t list = [Int 1; String "x"]
# type _ t = Int: int -> int t | String: string -> string t;;
type _ t = Int : int -> int t | String : string -> string t
# [ Int 1; String "x" ];;
Error: This expression has type string t
but an expression was expected of type int t
Type string is not compatible with type int
module F(S: sig type t val f: t -> t end) = struct ... end
I am curious how it is done in a portable way across all OSes, especially crude system linkers and OSes without POSIX semantics.
For example, I imagine this can be made via ELF sections, but not all OSes use ELF.
The cmx data could be converted to ELF note sections, but the whole thing has to work on Windows as well, so I guess they didn't want to depend on ELF.
In most projects, you can add this to your Makefile and forget about it:
%.cmi: %.mli
        ocamlfind ocamlc $(OCAMLFLAGS) $(OCAMLPACKAGES) \
            -c $< -o $@
%.cmo: %.ml
        ocamlfind ocamlc $(OCAMLFLAGS) $(OCAMLPACKAGES) \
            -c $< -o $@
%.cmx: %.ml
        ocamlfind ocamlopt $(OCAMLFLAGS) $(OCAMLPACKAGES) \
            -c $< -o $@
clean:
        rm -f *.cmi *.cmo *.cmx *.cma *.cmxa
More generally speaking: the trick must be to not generate identical instantiations multiple times, so you must have a way to check whether you have already generated one. Of course, the devil is in the details (e.g. is equivalence at the syntactic level enough?).
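C++ does at least have a manual version of this idea in explicit instantiation: an "extern template" declaration in the header tells every includer to skip instantiating, and a single .cpp emits the one copy. A rough sketch, with invented names:

// matrix.h (names are invented for illustration)
#ifndef MATRIX_H
#define MATRIX_H

template <typename T>
class Matrix {
public:
    explicit Matrix(int n) : n_(n) {}
    int cells() const { return n_ * n_; }
private:
    int n_;
};

// Every translation unit that includes this header is told NOT to
// instantiate Matrix<double> itself...
extern template class Matrix<double>;

#endif

// matrix.cpp -- ...because the one and only instantiation is emitted here.
#include "matrix.h"
template class Matrix<double>;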
In module-based languages, the symbol table is expected to be stored in the binary, either to be used directly by tools or to generate human-readable formats (.mli).
So if one uses the system C linker, it means being constrained to the file format used by such linker.
> Now we can see it as a big mistake, but in those days it was probably one of the reasons why C++'s adoption took off.
No mistake, just no choice - the original (1986 or so) C++ cfront compiled C++ to C which it fed to the C compiler and linker chain.
It is a mistake with 2017 eyes, because build times are now insupportable.
Of course it was the right decision in 1986 when trying to get adoption inside AT&T.
Also I have read "Design and Evolution of C++" back when it was published, and know C++ since Turbo C++ 1.0 for MS-DOS, so I grew with the language.
Which is also a reason why I still select it as a member of my Java, .NET and C++ toolbox trio.
However the first ANSI C++ was approved in 1998, and many of us were expecting to get some kind of module support in C++0x.
C++ has been my next most-loved language after Turbo Pascal. Since then I have learned and used countless languages, but C++ was always on the "if you can only pick 5" kind of list.
Since 2006 I am mostly a Java/.NET languages guy, but still keep C++ on that list.
Mostly because I won't use C unless obliged to do so, and all languages intended to be "a better C++" still haven't proved themselves on the type of work we do, thus decreasing our productivity.
I dream of the day I could have an OpenJDK with AOT compilation to native code with support for value types, or a .NET Native that can target any OS instead of UWP apps.
Until then C++ it is, but only for those little things requiring low level systems code.
Check out http://www.scala-native.org/en/latest/
Maybe it will have value types after Java gets them.
The presentation at Scala days was interesting.
Fintech, HPC, aeronautics, robotics, infotainment,....
The only industry where devs are badly paid is games industry, but that is common to all languages.
They're slowly improving it to return to pre-1.5 performance, but last I checked, it wasn't there yet. The impact is insignificant on small projects, of course, but easily felt on larger (100Kloc+) ones.
While the recent optimizer improvements are great, my wish is for Go to switch to an architecture that uses LLVM as the backend, in order to leverage that project's considerable optimizer and code generator work. I don't know if this would be possible while at the same time retaining the current compilation speed, however.
Sometimes people don't realize this because they always use `go build` which, as the result of a design flaw, discards the incremental objects. When you use `go install` (or `go build -i`) each subsequent build is super fast.
The annoying thing is that "go install" also installs binaries if you run it against a "main" package. I believe the only way to build incrementally for all cases without installing binaries is to use "-o /dev/null" in the main case.
Go being bootstrapped is a good argument against people who don't believe it is suitable for systems programming.
Depending on C or C++ for the implementation always gives people the argument that it could not have been done differently.
Also we should not turn our FOSS compilers into a LLVM monoculture.
How does that law improve the language without falling into the trap mentioned elsewhere in this thread? (Optimizing for a pleasant "compile experience" at the cost of everything else)
The justification, then, seems to be that if you legitimately need new features, then the language has failed and you should start over anyway. I think Python 3 is sort of an offshoot of this idea, except that many of the new features keep getting backported to 2.7 anyway.
There is also Active Oberon and Component Pascal, but he wasn't directly involved.
This specific issue probably impacts other batch workloads with lots of small tasks (processes). There's no reason this should be happening on a 24-core machine.
With mainstream languages, code generation is done by the build system which can avoid repetition. Caching generated code feels like a good idea to me. Doing it with compile time execution is (unnecessarily?) hard.
What slows some Common Lisp native code compilers down is more advanced optimization: type inference, type propagation etc, lots of optimization rules, style checking, code inlining, etc.
I think large parts of chrome actually belong into the OS. The network parts, the drawing library (skia), the crypto implementation, the window and tab management, and so on.
Video and audio would be deferred to DirectShow, Quartz, VLC, Mplayer, ...
Ideally, what remains is just a layout engine and some glue code for the UI. It's the monolithic kernel vs microkernel debate all over again.
Plugins have a bad rep in the context of browsers, but I think this "microkernel browser" where everything is a plugin or OS library can be potentially more secure than the current state, since we can wall off the components between interfaces much better.
I also think it would be much "freer". Browsers like Firefox and Chrome are open-source, but they are free in license only. I can't realistically go ahead and make my own browser. The whole thing is so complex that you have to be Google or Apple or Microsoft to do that. The best I could achieve is a reskin of WebKit. I think that would be different with a more modular browser.
The problem is cross-platform support. Depending on the OS would be the obvious choice if every OS supported the required features.
When I say e.g. skia should be part of the OS, I don't necessarily mean MS should ship it and update it yearly. I mean Google should still ship and auto-update it, but also go through the effort of documenting it, maintaining strict backwards-compatible APIs, and letting other programs consume it. I don't care who the actual vendor is. I know that's a lot to ask for, but OTOH it is insane to statically link that kind of code. Especially if you have multiple Electron apps that would work fine with a shared runtime.
Ideally, I'd want critical code (encryption, code signing, bootloaders, kernels, runtimes) to be from a trusted vendor, and preferably simple and open source. I trust the MS, Apple, Google of 2017 not to completely fuck it up. (We already trust them as browser vendors.)
I don't care if they keep calc.exe stable for 10 years, but I expect them to patch crypto.dll immediately. You could do that stealthily, outside of major updates, as it has no user-facing changes.
The benefit of this model is that it allows third-party apps from small vendors to profit from the up-to-date security that only the tech giants can provide.
The downside is of course that it is quite hard to maintain perfect backwards compatibility while pushing updates, but if the components and APIs are small enough I think it is possible.
I really like elinks, it's a shame I can't use it to view blackboard and other sites I have to use...
There is no point in speeding up the raw compile time by a couple of minutes if you are increasing the development and testing time by a couple of weeks.
Yes: Not isolating different modules sufficiently to allow you to avoid including most headers when compiling most modules.
Patterns to do this in C++ have been well understood for two decades:
Strict separation of concerns coupled with facades at the boundaries that let all the implementation details of the modules remain hidden.
Yes, it has a cost: you're incurring extra call overhead across module boundaries, and you lose inlining across module boundaries, so you need to choose how you separate your code carefully. But the end result is so much more pleasant to work with.
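As a rough illustration of that trade-off (names invented, not from any particular codebase): the header exposes only an abstract facade and a factory, so callers never see the implementation's includes, at the price of a virtual call at the boundary.

// renderer.h -- the only header other modules ever include
#ifndef RENDERER_H
#define RENDERER_H
#include <memory>
#include <string>

class Renderer {                           // facade: no implementation details leak
public:
    virtual ~Renderer() = default;
    virtual void Draw(const std::string& scene) = 0;
};

std::unique_ptr<Renderer> MakeRenderer();  // factory, defined in renderer.cpp
#endif

// renderer.cpp -- the heavy dependencies are confined to this file
#include "renderer.h"
#include <iostream>                        // stand-in for expensive internal headers

namespace {
class RendererImpl : public Renderer {
public:
    void Draw(const std::string& scene) override {
        std::cout << "drawing " << scene << "\n";
    }
};
}  // namespace

std::unique_ptr<Renderer> MakeRenderer() {
    return std::make_unique<RendererImpl>();
}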
Bullshit. Code needs to be compiled, but it isn't required to build everything from scratch whenever someone touches a source file.
Additionally, not all code is located in any hot path.
Except for C++, where a tiny change in a single object will require recompiling every file that transitively includes that object's header.
PIMPL, forward declarations, pre-compiled headers, binary libraries are all tools to reduce such dependencies.
I suppose you could have some extra aggressive optimizations that force inlining, but I haven't seen a need for this, even in game dev.
Generally, I find when people crow about performance, the product they're talking about usually has some questionable architectural/design/implementation decisions that dominate the performance issues, so I have to do my best not to roll my eyes.
Yes, you can write performant C++ using well-understood compiler firewalls, interfaces, etc that reduce your compile time.
People very rarely have any clue about this at all.
If a few hundred or few thousand people each have to build Chrome from scratch a couple times, and making their compilation process much slower makes each of a trillion pageviews a millisecond faster...the break-even point seems to be about a 27 hour build time sacrifice.
They're relying on the compiler working its magic to make non-hand-optimized code run pretty fast. That's fine, but it requires you to expose a lot of stuff in headers and that slows down compilation.
I'm fairly sure, like other commenters, that they could speed up compilation a lot and impact performance very little by carefully modularizing their header files. But that's a really big job.
Compilation is impressively quick, even though it goes through C.
Not saying Chrome could and should just switch to Go; it definitely would not be the right fit! But it's interesting that these sorts of builds still occur and consume a lot of developers' time.
He covers all the same points about header files and how Go addresses those issues.
Give me a tool that's: (i) as fast, (ii) as mature and well supported, (iii) as powerful as C++ and I will switch in a heartbeat. But until there is such an alternative it's futile to complain about the shortcomings of C++, because if you want the powerful, zero-cost abstractions, the mountains of support, and access to billions of existing lines of code, you pretty much have nowhere else to go.
Reasoning starting from conclusions to lead to initial constraints is backwards reasoning. For example you don't talk about maintenance or productivity, and yet you end up making a choice without factoring this. Chances are, the choice in most codebases is made because of existing code and culture, not because of rational reasons.
But from the engineering side it's the only "everything" language. (There aren't any good GUI kits for C, and NVCC is C++)
Then how does C++ improve?
We are likely getting modules (and reflection) with the next iteration (C++20), which -- if it moves like the last two versions -- will be almost completed and already supported by GCC, VS and Clang in two years. Clang and VS2015 even support modules experimentally already.
It keeps adopting D features.
So, I have a thought: if we're spending all this time to compile functions (particularly template functions) that are just thrown away later, why are we performing all our optimization passes up-front? Surely, optimization passes in a project like Chrome must eat up a lot of compilation cycles, and if that's literally wasted, why do it in the first place? Can we have a prelink step where we figure out which symbols will eventually make it, and feed that backwards into the compiler?
Maybe a more efficient general approach might be to simply have the optimizer be a thing that runs after the linker, so that the front-end compiler just tries to translate C++ into some intermediate representation as fast as possible. The linker can do the LTO thing, then split the output into multiple chunks for parallelization, and finally merge the optimized chunks back together. With LLVM, it feels like the bitcode makes this a possible compilation approach...
Hmm...not "abnormal" in the sense that we've gotten used to it: yes. Heck, last I heard building OneNote for Mac takes about half a day on a powerful MacPro.
But I'd say definitely abnormal in terms of how things should be.
/LTCG also doesn't seem to parallelize well - last I checked it still ran all the codegen on one core. Maybe that's different now?
The trick is you've got to reduce your "saturation level" of #includes in header files, by preferring forward declarations over #includes, and using the PIMPL pattern to move your classes' implementations into isolated files, so that transitive dependencies of dependencies don't all get recursively #included in.
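A minimal PIMPL sketch of what that looks like (class and members invented for illustration); the header needs nothing but <memory>, and everything expensive stays behind the opaque pointer in the .cpp:

// widget.h -- no heavy includes, just a forward-declared Impl
#ifndef WIDGET_H
#define WIDGET_H
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                 // defined in widget.cpp, where Impl is complete
    void Update();
private:
    struct Impl;               // the real members live in widget.cpp
    std::unique_ptr<Impl> impl_;
};
#endif

// widget.cpp -- expensive dependencies are only seen here
#include "widget.h"
#include <vector>

struct Widget::Impl {
    std::vector<int> state;
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::Update() { impl_->state.push_back(1); }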
When it comes to templates, one has to be very aggressive in asking "Does this (sub-part) really have to be in template code, or can we factor this code out?" Any time I write my own template classes, I separate things between a base class that is not a template, and make the template class derived from it. Any computation which does not explicitly depend on the type parameter, or which can be implemented by the non-template code if the template just overrides a few protected virtual functions to carry out the details, gets moved to the non-template base class.
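A sketch of that split (types invented for illustration): the byte-level bookkeeping doesn't depend on T, so it lives in a plain class compiled once in a .cpp, and the template is reduced to a thin, trivially-instantiated wrapper.

// buffer.h
#ifndef BUFFER_H
#define BUFFER_H
#include <cstddef>
#include <vector>

class BufferBase {                       // non-template: compiled once
protected:
    void GrowBytes(std::size_t element_size, std::size_t count);  // in buffer.cpp
    std::vector<unsigned char> bytes_;
};

template <typename T>
class Buffer : public BufferBase {       // thin template wrapper
public:
    void Grow(std::size_t count) { GrowBytes(sizeof(T), count); }
    std::size_t size() const { return bytes_.size() / sizeof(T); }
};
#endif

// buffer.cpp -- the only place the real work is compiled
#include "buffer.h"

void BufferBase::GrowBytes(std::size_t element_size, std::size_t count) {
    bytes_.resize(bytes_.size() + element_size * count);
}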
If your problem is not with template classes which you have written, but with templates from a library, consider that in most (all?) cases there is still some "root" location (in your code) which is binding these templates to these user-types. This root location will either itself be a (user-written) template class, or it is an ordinary class which "knows" both the template and the bound-type(s). Both of these cases can be dealt with either by separating it into non-template base and derived template, or using the PIMPL idiom, or both.
The general principle is that what you allow in your headers should be the lower bound of the information needed to specify the system. Unfortunately this takes active work and vigilance to maintain, and a C++ programmer is not going to understand the need for it until they reach the point of 30 minute builds and 1.4GB's of .o files.
The SICP quote comes to mind here: "Programs must be written for people to read, and only incidentally for machines to execute." I greatly prefer to have my code organized in a sensible way. I want to know that "here is where the FooWidget code is".
It's not the end of the world, and people can adjust, but part of what I hate about working on just about anyone's Java code is this constant mental assault of "no, you need to be in the FooWidgetFactoryImpl file to find that code". Just let me have "customer.cpp" or whatever, and I'll live with grabbing coffee during the build.
Admittedly, I don't work on truly large applications. I can imagine priorities change when builds take two hours instead of the 15 minutes I might have to live with.
Your comment is great but I have spent enough time working on Chromium to know that they have people working on the build who know all of this stuff and much more. They understand the build from the top to the bottom of the toolchain stack. (@evmar used to be one of these people and he actually commented in this thread at https://news.ycombinator.com/item?id=14736611.) I am sure your parent commenter is a great developer but I get the impression he/she is not one of the Chromium build people.
I once worked for a rather big accounting software company, and the full build of their accounting product took about 4 to 5 hours to complete on the build server.
We ran the build at the close of business every day, and the build engineer had to log in remotely just to make sure the build worked; otherwise the QA team would have nothing to test in the morning.
It too was written using the C++ language.
Then we tried to attach the OO paradigm to it and we got the monstrosity that is C++ (and as a consequence of that - Java - which has fixed some issues but still suffers)
And don't get me started on templates
I'm so glad that paradigm is starting to die out and hopefully Rust, Go and others will take over (I still haven't gotten my head around their object models, but I will eventually)
Pascal could have been a better choice (sigh)
C without typedefs also compiles faster
C++ is like plugging an engine to a skateboard to make it run faster
Your comment, and essentially every other single one on this entire thread thereby makes me wonder if anyone on this subthread read the article :/.
OK, I decided to search for the word "process", and found one person responding to you who did read the article, and depressingly only a handful of people even responding to the top-level article who apparently read the article. This entire post is such a great example of "the problem with this kind of discussion forum" :/.
What was described in this article wasn't "Chrome's build is too slow", it was "there is a weird issue in Windows 10 (which apparently wasn't even a problem with Windows 7, and so we could easily argue is a regression) where process destruction takes a global lock on something that is seemingly shared with basic things like UI message passing". The fact that he was running a Chrome build to demonstrate how this manages to occasionally more than decimate the processing power of his computer was just a random example that this user ran into: it could have been any task doing anything that involved spawning a lot of processes, and the story would have been exactly the same.
Now, that said, if you want to redirect this to "what can the Chrome team do to mitigate this issue", and you want the answer to not be "please please lean on Microsoft to do something about this weird lock regression in Windows 10 so as to improve the process spawn parallelism for every project, not just compiling Chrome"... well, "sure", we can say you are "doing this wrong", and it is arguably even a "trivial fix"!
Right now, the C++ compiler pipeline tends to spawn at least one (if not more than one) process per translation unit. If gcc or clang (I'm not sure which one would be easier for this; I'm going to be honest and say "probably clang" even though it feels like a punch in the gut) were to be refactored into a build server and then the build manager (make or cmake or ninja or whatever it is Google decided to write this week) connected to a pool of those to queue builds, you would work around this issue and apparently get a noticeable speed up in Chrome compiles on Windows 10, due to the existence of this process destruction lock.
One could even imagine ninja just building clang into itself and then running clang directly in-process on a large number of threads (rather than a large number of processes), and so there would only be a single giant process that did the entire build, end-to-end. That would probably bring a bunch of other performance advantages to bear as well, and is probably a weekend-long project to get to a proof-of-concept stage for an intern, come to think of it... you should get on it! ;P
However I suspect that as soon as that lock regression in Windows is fixed, that monster CPU load is coming back home to roost, and the workstation is going to be just as dead as my 64-core Linux workstation has been when I've actually run '-j 500' without gomacc up and running correctly.
So, by all means, Microsoft should fix this lock regression.
But there's this...elephant...in...this room here.
Just let people talk about what they want to talk about, maybe? The main problem in the article is interesting but far less actionable than the overall situation of slow compilation.
Do you want things like rampant speculation and insulting windows 10? Do you expect everyone to pull out kernel debuggers to be able to make directly relevant comments? It's okay to talk about a related issue. Concluding that they didn't read the article is kind of insulting.
I have! Every time you guys bump a snapshot, my Gentoo boxes whirl away and heat my house, compiling a new version from scratch. On an octocore Skylake Xeon laptop, this takes 2 hours 48 minutes.
As the largest issue is the throwing away of duplicate work, I'd see it as a kind of reverse binary tree: machines working on files that depend on others talk together, then when finished send the condensed work up the chain (and signal their availability for the next workload chunk or phase) until everything collapses down back to the original machine.
I used that 10+ years ago on Gentoo, and never saw anyone using it since. Don't know how often it is used nowadays.
It is faster than IncrediBuild, even faster than SN-DBS and has multi-platform support, the only problem is that it requires its own build script.
So what kind of cryptographic guarantees would you need for that? And if you can only verify the build results by trusting signatures from upon high, then what is the point? Perhaps those builds could be turned into work in a proof-of-work blockchain. Do compilers contain any hard-to-do, but easy to verify steps?
Whole shelves full of useless PhD theses are just waiting to be written on this topic.
Of course, that's another separate problem to begin with. I still remember dabbling with D and vibe.d and replacing the default GNU linker with ld.gold because over 90% of the build time was due to the linker...
Wasn't large compilation time a driving force behind coming up with Go? Is a garbage-collected language not suitable for a web browser? I am just curious because I absolutely love writing Go
Go has sub-millisecond GC pauses, and even at that minimizes the need to do stop-the-world pauses (previous HN discussion: https://news.ycombinator.com/item?id=12821586). I think it would be a very interesting exercise to give a crack at it. If anyone is interested, let me know.
At the cost of throughput:
> Go optimises for pause times as the expense of throughput to such an extent that it seems willing to slow down your program by almost any amount in order to get even just slightly faster pauses. - https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...
Yes, that was one of the tenets.
> Is a garbage-collected language not suitable for a web browser?
In theory it's fine, but there's a lot of historical baggage that comes along with garbage collection and the majority of languages that support it (e.g. almost no value types).
Golang fares pretty well latency-wise for a GC'd language. I'd be curious for someone with more experience than me to talk through a deep dive of instances where Go's latency/throughput characteristics are and are not good enough for specific applications.
With the exception of Smalltalk and its derivatives, all the GC enabled languages that came before Java had value types, even Lisp.
Not using modules? Yeah I know C++'s made the mistake of not using them since the beginning and it is a long road until they are here (202x ?).
However Google was showing their modules work at CppCon 2016, so I guess Chrome does not make use of clang modules.
I can't think of a component in a browser that would require inlined function calls in order to be performant. To really matter it would have to be many millions of calls per second.
> Moreover, because the C++ compiler needs to see full class definitions to, e.g., know the size of an object and its inheritance relationships, we have to put the main meat of every class definition into a header file!
So let me suggest that you revisit those inlined function calls again. Once you start putting them into proper .cpp files and make use of forward declarations where possible the whole header dependency graph will probably simplify quite a bit.
> Don't even get me started on templates.
Right. Don't use them unless there's no other way. Especially avoid for large-ish classes that do complicated stuff. If you can spare a couple CPU cycles (and most of the time you can) determine things at run-time instead of compile time.
Of course all of this is theory, not taking into account practical matters like deadlines, code readability or developer turnover.
Full disclosure: I worked at Google, but not on the Chrome team :)
As the first post here in this thread mentions, going down the C++ road of header files might have gotten us some short term wins, but ultimately it hits a brick wall. Incremental compilation is inescapable.
It does incremental compilation and incremental linking.
I would be quite happy if cargo was half as fast as my UWP projects.
I'm personally writing a game in Rust but the main logic is written in a compile-to-JS language and uses V8, so the issue doesn't affect me.
But as of now: C++ incremental build times (with the right build system) are a lot better than Rust's.
Are we doing it wrong? In the case of C++ - yes, absolutely, 100% certainly, wrong. I'm not suggesting that Chrome would be better off under Golang - I'm just saddened that 30 years after the C++ abomination was born in Stroustrup's head, nothing else has come out to challenge it.
Any C++ replacement needs the love of Apple, Microsoft, IBM, HP, Google and everyone else that sells operating systems.
The implication you're making is that because computers have weird little glitches that pop up to cause havoc every once in a while, then it must be laughable to imagine they could rival the marvels of human intelligence. What that tells me for certain is that you haven't paid much attention to human intelligence.
There are flat-earthers, anti-vaccine nuts, and people convinced we faked the moon landings. We'll happily argue for thousands of years over whether the kid a virgin had is or is not the same person as his dad, but not that dad, the other dad. Show me someone who intuitively understands probabilities, and I'll show you someone who incorrectly assesses how people understand probabilities. I'm pretty sure the odd bug here and there doesn't disprove the ability of machines to outthink us.
With all the time-loss, money-loss, and opportunity-loss, C/C++/JS should have been ruled out as the worst tools for our job decades ago, by ANY RATIONAL ENGINEER.
And instead of band-aiding them, the sane path is to freeze them, only apply critical patches, and use something else. Other options are already known, have been proven, and could have been a better foundation, except that developers are irrational as hell and don't want true progress at all.
For example, Pascal is way faster to compile than C. Most Pascal variants are like this, yet provide the ability to do low-level stuff.
The main thing is that for some reason we pretend it is not possible to improve a language, or that it is sacrilege to remove problematic features or leave some things behind.
Programming languages are like any other software. They have UX problems, and those can be solved. So why not?
The mind-limiting answer is that breaking the language is costly, and programmers are so dedicated that expecting them to never again check for null across millions of lines of code is too traumatic. Or maybe adding namespaces to C++ is too hard... but templates are OK.
Not bad. FreeBSD takes about the same, and it's plain C… oh wait, no, a large part of the compile time is LLVM/clang which is… not C.
Though there is now a clever incremental sort of thing called meta mode https://www.bsdcan.org/2014/schedule/attachments/267_freebsd...
From someone regularly working on multi hour builds (on build servers), this imo sounds like the light kind of build I'd wish I worked with ;)
That said, this article was about a weird behaviour in Windows when destroying processes, not about heavy builds.
Luckily ninja and ccache do a good job of meaning you only ever do that once (and rsync solved that problem for us). Not that a 20-second compile for a one line change is something I should be content with, but it's certainly workable.
That violates encapsulation. Getters should be rare and setters almost non-existent.
(I predict that in 10 years all the various soft cores will be variants of RISC-V each with its own old, unmaintained and proprietary fork of LLVM).
And eventually some guys would be playing with a BSD variant.
Ideally with Java 10 or C# 8 planned features, it won't be any longer the case, but until then we have quite a few years.
also, would it kill you to like, coordinate something with the Gmail team so their page doesn't kill my machine after being open for a couple of hours?
come on guys.