LLD developers made significant progress over the last quarter. With
changes committed to both LLD and FreeBSD we reached a major milestone:
it is now possible to link the entire FreeBSD/amd64 base system (kernel
and userland) with LLD.
Now that the base system links with LLD, we have started investigating
linking applications in the ports tree with LLD. Through this process
we are identifying limitations or bugs in both LLD and a number of
FreeBSD ports. With a few work-in-progress patches we can link
approximately 95% of the ports collection with LLD on amd64.
Further evidence that it is time for LLD to be distributed along with LLVM: http://lists.llvm.org/pipermail/llvm-dev/2017-February/11030...
More seriously, being a package maintainer myself I know that the goal is to serve the users and create packages that users expect. So, the best way to get package maintainers to include LLD is to get users to expect it.
I started working on a proof of concept operating system, and according to the OS Dev Wiki, the first thing you have to do to get started is to build a cross compiler. This is because by default the C compiler and linker are compiled only for the current system. When I started making the OS in Zig, however, I did not have to build a special cross compilation version of Zig, because it supports cross compilation out of the box for every target and OS. For example, unlike in C, compiler_rt/libgcc are shipped with the compiler in source form and built lazily for the desired target.
So not having to build or obtain a special gcc was nice; however, I did have to build a cross-compiled GNU binutils so that I could use an ARM linker on an x86_64 host. If I can depend on LLD, then I can support cross compiling for every target with no other dependencies needed.
Using the system linker means throwing away those benefits, in the name of the lowest common denominator.
On the majority of other OSes, each programming language used to have its own linker.
This is why although I dislike some of Go's decisions, I am quite supportive of its uptake.
- Bootstrapped language, with very few dependencies on C
- Runtime library makes direct use of syscalls, instead of taking the easy way and depending on libc
- It has its own linker
- Despite the GC hate and the whole discussion about what "systems programming" is, people are indeed slowly using it for such purposes
I can tell you that even when I am using clang and LLVM to target another system, I still end up needing different full cross-compiler toolchains, often based on entirely different forks of the LLVM code base. The only exception is minor variations where I am targeting the same operating system, the same vendor, and the same executable format across merely different CPU architectures (e.g. Apple).
Even if it were possible, and it is not clear to me that it is or that it would even be desirable (as you still need a set of highly platform-specific libraries and header files, even for weirdly quirked-up core libraries like the moral equivalents of libgcc that are needed for every subplatform, due to symbol visibility), in practice we just aren't there yet, as there are too many extremely subtle platform differences... the only way we could even get close to this dream would really be to have a mindset that a platform is a massive file of configuration warts that you pass to the compiler, as opposed to some trivial flags like -mllvm -arm-reserve-r9.
Not all system linkers have this problem. For example, Solaris ld(1) is perfectly capable of cross-linking any valid ELF file.
>> (usually more than 2x faster than GNU gold).
I switched from bfd to gold to realize a ~6x performance gain on a project (~1 minute linking -> ~10 seconds linking, times 10-20 subprojects, times a few build configurations - really adds up!). I'll keep in mind that I now also have the option to switch to LLD for another 2x (12x total?).
As an example, many of C++'s drawbacks regarding build systems exist because Bjarne wanted the generated object files and libraries to be transparently linkable by a linker that only understood C.
And shouldn't the name be llld?
Seriously though. I guess the majority of major FOSS software out there have a goal of retaining GNU compatibility, so I'd be surprised if this turned out to be an issue.
I am really surprised that there was this much room for optimization. Ian Lance Taylor, who wrote gold, is a really smart guy, and speed was one of its primary goals.
lld is still slow, it is just less slow than the other linkers.
This is not to disparage anyone working on linkers or say they are not smart. I think they just don't tend to be performance-oriented programmers, and culturally there has become some kind of ingrained acceptance of how much time it is okay for a linker to take.
$ time ld.lld <omit> -O0 -o bin/clang-4.0
$ time cp bin/clang-4.0 foo
While I have not done your specific test, I know that for the executable sizes I deal with in everyday work (around 10 MB), the amount of time I wait for linking is woefully disproportionate.
A debug build of clang is normally hundreds of megabytes (~600 MB IIRC, and normal linkers go bonkers dealing with it), so if LLD is actually only 2.5x slower than 'cp' at -O0, that's quite good I think.
The next question is how much memory LLD uses vs the competition in this test...
So you end up with big projects where most of the time taken by incremental debug builds is spent linking - relinking the same object files to each other over and over and over. Awful. I don't use Windows, but I hear Visual Studio does the right thing and links debug builds incrementally by default. Wish the rest of the world would catch on.
Incremental linking is not so easy under that constraint, since the output depends on the previous output file (which may not even be there).
(and considering the previous output file to be an "input file" follows the letter of the requirement but not the spirit; the idea is that the program invocation is a "pure function" of the inputs, which enables caching and eliminates a source of unpredictable behavior)
We have had to reject certain parallelization strategies in LLD as well because even though the result would always be a semantically identical executable, it would not be bit-identical.
See e.g. the discussions surrounding parallel string merging:
https://reviews.llvm.org/D27146 <-- fastest technique, but non-deterministic output
https://reviews.llvm.org/D27152 <-- slower but deterministic technique
https://reviews.llvm.org/D27155 <-- really cool technique that relies on a linearly probed hash table (and sorting just runs of full buckets instead of the entire array) to guarantee deterministic output despite concurrent hash table insertion.
> That said, it's definitely possible to get some speedup from incrementality while keeping the output deterministic; you'd have to move symbols around, which of course requires relocating everything that points to them, but (with the help of a cache file that stores where the relocations ended up in the output binary) this could probably be performed significantly more quickly than re-reading all the .o files and doing name lookups. But admittedly this would significantly reduce the benefit.
Yeah. It's not clear if that would be better in practice than a conservative padding scheme + a patching-based approach.
"move symbols around, which of course requires relocating everything that points to them" sounds a lot like what the linker already spends most of its time doing (in its fastest mode).
In its fastest mode, LLD actually spends most of its time memcpy'ing into the output file and applying relocations. This happens after symbol resolution and does not touch the input .o files except to read the data being copied into the output file. The information needed for applying the relocations is read with a bare minimum of pointer chasing (only 2 serially dependent cache misses last I looked) and does not do any hash table lookup into the symbol table nor does it look at any symbol name string.
Not sure exactly what you mean by this. If you give up determinism, it can be O(changes) - except for time spent statting the input files which, at least in theory, should be possible to avoid by getting the info from the build system somehow. I can understand if LLD doesn't want to trade off determinism, but I personally think it should :)
One practical problem I can think of is ensuring that the binary isn't still running when the linker tries to overwrite bits of it. Windows denies file writes in that case anyway… On Unix that's traditionally the job of ETXTBSY, which I think Linux supports, but xnu doesn't. I guess it should be possible to fake it with APFS snapshots.
> In its fastest mode, LLD actually spends most of its time memcpy'ing into the output file and applying relocations. This happens after symbol resolution and does not touch the input .o files except to read the data being copied into the output file.
Interesting. What is this mode? How does it work if it's not incremental and it doesn't read the symbols at all?
Not quite. For example, a change in the symbols in a single object file can cause different archive members to be fetched for archives later on the command line. A link can be constructed where that would be O(all inputs) changes due to a change in a single file.
Even though a practical link won't hit that pathological case, you still have to do the appropriate checking to ensure that it doesn't happen, which is an annoying transitive-closure/reachability type problem.
If you need a refresher on archive semantics see the description here: http://llvm.org/devmtg/2016-03/Presentations/EuroLLVM%202016...
Even with the ELF LLD using the Windows link.exe archive semantics (which are in practice compatible with traditional Unix archive semantics), the problem still remains.
In practice, with the current archive semantics, any change to symbol resolution would likely be best served by bailing out from an incremental link in order to ensure correct output.
Note: some common things that one does during development actually do change the symbol table. E.g. printf debugging is going to add calls to printf where there were none. (and I think "better printf debugging" is one of the main use cases for faster link times). Or if you use C++ streams, then while printf-debugging you may have had `output_stream << "foo: " << foo << "\n"` where `foo` is a string, but then if you change to also output `bar` which is an int, you're still changing the symbol table of the object file (due to different overloads).
> Interesting. What is this mode? How does it work if it's not incremental and it doesn't read the symbols at all?
Compared to the default, mostly it just skips string merging, which is what the linker spends most of its time on otherwise for typical debug links (debug info contains tons of identical strings; e.g. file names of common headers). 
To clarify, there are two separate things:
- the fastest mode, which is mostly about skipping string merging. It's just like the default linking mode, it just skips some optional stuff that is expensive.
- the part of the linker profile that the linker spends most of its time doing in its fastest mode (memcpy + relocate); for example, I've measured this as 60% of the profile. This happens after symbol resolution and some preprocessing of the relocations.
Sorry for any confusion.
The linker has "-O<n>" flags (totally different from the "-O<n>" family of flags passed to the compiler). Basically, higher -O numbers (from -O0 to -O3, just like the compiler, confusingly) cause the linker to do more "fancy stuff" like string deduplication, string tail merging, and identical code folding. Mostly these things just reduce binary size somewhat, at a fairly significant link-time cost versus "just spit out a working binary".
MSVC incremental link is really not a model I would take as an example: the final binary is not the same as the one you get from a clean build, which is not a world I would want to live in.
Anyway, an incremental link should take a small fraction of a second and be O(1) all the way up to something like chromium. Well, there's the need to stat the input files to check for changes, but that's something the build system also has to do, so ideally there should be some sort of coordination.
It's true that the output of an incremental link will generally be nondeterministic, unless you add a slow postprocessing step. After all, the whole point is to take advantage of the fact that most of the desired content is already in the output binary. Ideally you never have to touch that content, even just to do a disk copy; you should be able to just patch in the new bytes in some free space in the binary, and mark the old region as free. But of course the ordering of symbols in the binary then depends on what was there before.
I don't know why that's particularly problematic; incremental builds are mainly useful during development, not for making releases, which is where reproducible builds are desirable.
I don't know about you, but not having to chase bugs that only show up in an incremental build (or, symmetrically, having bugs hidden by the incremental build that show up in a clean build) is what makes "reproducible builds desirable" in my day-to-day development...
I would design any incremental system to provide this guarantee from the beginning.
Anyway, many people already develop without optimizations and release with them, which is far more likely to result in heisenbugs even if it's technically a deterministic process. For that matter, I'm only proposing to use incremental linking in debug builds, so most of the time you'd only end up with nondeterminism if you were already going to get an output binary substantially different from a release-mode one. The only exception is if you have optimizations enabled in debug builds.
Actually, the page you linked seems to be about an older ELF and COFF specific implementation, despite its URL.
Here's an overview of what makes the new one special: https://lld.llvm.org/design.html
The site says that "atoms" are an improvement over simple section interlacing. But I don't get how you are going to make this leap without changing the object file format. Linkers work on the section level because that is how object files work. Object files have sections, not atoms. Compilers emit sections as their basic, atomic unit of output. Within a section, the code will assume that all offsets referring to other parts of the section will be stable, so you can't chop a section apart without breaking the code.
How does the new linker work in terms of this new "atom" abstraction without changing the underlying object file format?
"The atom model is not the best model for some architectures The atom model makes sense only for Mach-O, but it’s used everywhere. I guess that we originally expected that we would be able to model the linker’s behavior beautifully using the atom model because the atom model seemed like a superset of the section model. Although it can, it turned out that it’s not necessarily natural and efficient model for ELF or PE/COFF on which section-based linking is expected."
But maybe, they require you to create special versions of object files where even references internal to each library are referenced there as if they live in a different object file? Is that even possible?
Edit: corrections welcome, but https://github.com/llvm-mirror/lld/tree/master/lib/ReaderWri... seems to indicate that this only supports Mach-O (and a YAML-based format used for debugging and testing)
The extra information that is needed for an ELF linker (any ELF linker; nothing LLD specific) to operate on functions and global data objects in a fine-grained manner is enabled by -ffunction-sections/-fdata-sections.
For more information, see https://news.ycombinator.com/item?id=13673589
If you want to map ELF to the atom model, you need to either build with -ffunction-sections so that the compiler emits a single function per section (and similarly with -fdata-sections), or model it by mapping one section of the object to an atom.
What am I missing?
I do note that on OS X, -ffunction-sections appears to do nothing.
Patches are definitely welcome for MachO improvements in LLD (as always in LLVM!). You should be aware though that the Apple folks feel strongly that the original "atom" linker design is the one that they want to use. If you want to start a MachO linker based on the COFF/ELF design (which has been very successful) you will want to ping the llvm-dev mailing list first to talk with the Apple folks (CC Lang Hames and Jim Grosbach).
While conventional linkers work at the compilation-unit level (one source file, usually), placing that whole source file's functions adjacently in memory, an atom-based linker is able to take the smallest linkable units (individual functions, each static/global variable...), and arrange those optimally.
As I recall, the OS X ld is based on this model. However, it remains more limited: it doesn't support GNU ld's linker scripts and accepts only limited command-line parameters, so it doesn't expose all the power of the flexibility this model would provide.
As far as I know, AtomLLD remains an experimental project with only one or two people working on it part-time.
although modern linkers also add LTO (Link-Time Optimization) to rearrange things after everything has been integrated.
This isn't quite right. It's just that traditionally (i.e. when not using -ffunction-sections/-fdata-sections) when compilers output an ELF relocatable object, they group all the functions into a single "smallest linkable unit" (a single .text section) and so the linker can't actually do any reordering because the information that the functions are distinct has been lost.
I describe this in more detail here: http://lists.llvm.org/pipermail/llvm-dev/2016-December/10814...
This was actually one of the key technical issues that prevented the Atom design from being used for ELF and COFF.
The ELF and COFF notion of "section" is a strict superset of its [the Atom LLD's] core "atom" abstraction (an indivisible chunk of data with a 1:1 correspondence with a symbol name). Therefore the entire design was impossible to use for those formats. In ELF and COFF, a section is decoupled from symbols, and there can be arbitrarily many symbols pointing to arbitrary parts of a section (whereas in Mach-O, "section" means something different; the analogous concept to an ELF/COFF section is basically "the thing that you get when you split a Mach-O section on symbol boundaries, assuming that the compiler/assembler hasn't relaxed relocations across those boundaries", which is not as powerful as ELF/COFF's notion). This resulted in severe contortions that ultimately made it untenable to keep working in that codebase.
It's horribly slow.
But it produces smaller binaries; I've gotten binaries 2x smaller on my projects with that option.
By the way: using LLD helped massively with linking all these small Haskell function objects. It's really much faster than even gold!
Hopefully all the LLVM projects (lldb, libc++, ...) will mature on Windows so we have a replacement for MSVC's tools. Until then (and beyond), the GNU tools gcc + ld will be my go-to tools when MSVC compatibility is not a constraint. They work perfectly well with Golang. But it happens that people release MSVC-format link libraries (even dmd for Windows x64 needs the Microsoft linker to work). Unilink is an option, but it's closed source. And Open Watcom v2 will take time to get there.
:( The one feature I need most for embedded development
What I'm curious about, though, is memory usage. On relatively constrained systems, linking large projects can take ages due to swapping - anecdotally, linking is responsible for upwards of 80% of the compilation times I see for a certain very large software project on my developer machine (which, at 16GB RAM, isn't huge, but fairly typical). Worse, memory pressure from linking makes the process rather non-parallelizable, which also hurts throughput. Having a memory-efficient linker could significantly speed up compilation beyond just a 2x improvement in such environments.