A tool to make Windows builds reproducible (github.com)
80 points by BuuQu9hu on Dec 6, 2016 | 24 comments



Are modern compilers truly that deterministic when processing arbitrarily complex code, shoot-yourself-in-the-foot example aside?

In the reconfigurable world, place and route engines run metaheuristics[1][2] to find good-enough solutions in an intractable search space. I'm imagining that if there's a compiler stage which implements even a single optimization heuristic, any prospect of hash-wise reproducibility goes out the window?

[1] https://en.wikipedia.org/wiki/Simulated_annealing

[2] https://en.wikipedia.org/wiki/Genetic_algorithm


If your random numbers are generated with a fixed seed and the algorithm runs for a deterministic number of steps (e.g. not using a wall-clock timeout), then the output should be deterministic.
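
A minimal sketch of that point (my own C++ illustration, not taken from the article or any linked tool): a heuristic driven by a fixed-seed PRNG and a fixed step count produces the same answer on every run.

    #include <cstdint>
    #include <iostream>
    #include <random>

    // Toy "optimization heuristic": randomized hill climbing with a fixed
    // seed and a fixed number of steps. Nothing depends on wall-clock time
    // or scheduling, so the result is identical on every run.
    int main() {
        std::mt19937 rng(12345);                    // fixed seed
        std::uniform_int_distribution<int> step(-3, 3);

        int x = 100;
        for (int i = 0; i < 1000; ++i) {            // deterministic step count
            int candidate = x + step(rng);
            if (candidate * candidate < x * x)      // keep candidate if "better"
                x = candidate;
        }
        std::cout << x << '\n';                     // same value every time
    }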

One potential source of nondeterminism is a multithreaded compiler or linker, where multiple threads could update a shared data structure in no fixed order.
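
And a sketch of that failure mode, assuming a toolchain that parallelizes work without fixing the merge order: the final contents of a shared container depend on which thread wins a lock first.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Scheduling-dependent output: each worker appends its id to a shared
    // vector, so the final order can differ from run to run even though
    // every worker does exactly the same deterministic work.
    int main() {
        std::vector<int> order;
        std::mutex m;
        std::vector<std::thread> workers;

        for (int id = 0; id < 4; ++id) {
            workers.emplace_back([&order, &m, id] {
                std::lock_guard<std::mutex> lock(m);
                order.push_back(id);        // insertion order is not fixed
            });
        }
        for (auto& t : workers) t.join();

        for (int id : order) std::cout << id << ' ';
        std::cout << '\n';                  // e.g. "2 0 3 1" on one run
    }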


OP actually pointed to an interesting reference[1] which briefly touches on the impact of pseudo-random number generation, amongst other things.

Diving a little bit deeper down the same rabbit hole, David A. Wheeler[2] (as he emphatically insists on being cited) also made note of threads with non-deterministic scheduling. Lots of notes on hardware-centric implications as well.

Definitely turns my perspective around on what it means to trust software. I suppose the effort is a proper graduation from masturbating[3] to safe sex, so to speak.

[1] https://reproducible-builds.org/docs/

[2] http://www.dwheeler.com/trusting-trust/

[3] https://www.youtube.com/watch?v=4XpnKHJAok8&feature=youtu.be...


If you call GCC modern, then yes:

https://tests.reproducible-builds.org/debian/


In theory, no. For practical purposes, mostly yes.

See: Buridan's Principle. Also: http://research.microsoft.com/en-us/um/people/lamport/pubs/b...

This was a real issue in hardware design in the 60's.


Thanks for the paper link... I feel like I've encountered it before somewhere, maybe here or back in uni as an undergrad. Wiki[1] noted its relation to metastability in digital architectures. That's a phenomenon I'm very much familiar with.

[1] https://en.wikipedia.org/wiki/Buridan's_ass#Application_to_d...


As long as the heuristic is run the same way, you will get the same output.

Most compilers aim for reproducibility (except for things like timestamps) anyway, as your customers would rightly complain if compiling their program produced executables of differing speeds each time, and debugging compiler problems would become even harder!
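
The timestamp caveat is easy to see for yourself; as a toy illustration (mine, not tied to any particular toolchain), a translation unit that uses the standard __DATE__/__TIME__ macros bakes the build time into the binary, so two otherwise identical builds hash differently.

    #include <iostream>

    // __DATE__ and __TIME__ expand to string literals captured at compile
    // time, so each rebuild embeds different bytes into the executable even
    // though the source is unchanged.
    int main() {
        std::cout << "built " << __DATE__ << " " << __TIME__ << '\n';
    }

Reproducible-builds tooling typically works around this by avoiding such macros or pinning them to a fixed date supplied by the build environment.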


> Most compilers aim for reproducibility...

I suppose that's what I was seeking to gain clarity on.

The remark on execution speed is a bit confusing, though, considering that most software applications do not operate in a strict cycle-accurate, real-time environment, a situation further exacerbated by the caching schemes of modern processors.

From your perspective, given the same repeatable test input, what threshold qualifies an executable's speed as being different?


It's common for small changes in code, or even the alignment of code (moving it all up/down memory), to have surprising effects on the performance of the resulting code, due to things like page alignment.

Here is a random example (you can find lots of others):

    http://stackoverflow.com/questions/17896714/why-would-introducing-useless-mov-instructions-speed-up-a-tight-loop-in-x86-64-a
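
As a rough sketch of why that happens (my illustration, not a reproduction of the linked question): the placement of a hot loop depends on everything compiled before it, so adding or removing unrelated code can push the loop across an alignment boundary and shift its timing, even though the loop itself is unchanged.

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // A tight loop whose address depends on the code emitted before it.
    // Padding elsewhere in the binary (even a useless mov) can move this
    // loop relative to cache-line or page boundaries and change its timing,
    // although the loop itself stays byte-for-byte identical.
    std::uint64_t hot_loop(std::uint64_t n) {
        std::uint64_t acc = 1;
        for (std::uint64_t i = 1; i <= n; ++i)
            acc = acc * 6364136223846793005ULL + i;   // cheap dependent work
        return acc;
    }

    int main() {
        auto start = std::chrono::steady_clock::now();
        volatile std::uint64_t sink = hot_loop(200000000ULL);
        auto stop = std::chrono::steady_clock::now();
        (void)sink;
        std::cout << std::chrono::duration<double>(stop - start).count() << " s\n";
    }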


Hmmm... perhaps a rephrasing of the question is in order, given this particular example.

Which would you consider the surprising part: the ~0.75% average slowdown in execution speed, or a slowdown on the order of tens of milliseconds?

W.r.t. page alignment, I thought you might find this to be an interesting counterexample[1].

[1] http://danluu.com/3c-conflict/


Last year I reverse engineered a (36KiB) Windows PE binary[1] to obtain C++ source code that compiled to almost the exact same bytes. What stumped me was an undocumented section that contained metadata, which I gave up on figuring out.

Assuming you compile with the same options and in the same environment, the machine instructions should be the same.

EDIT: The compiler in question though is whatever Visual Basic 6.0 uses, so I wouldn't call it modern today.

[1] https://github.com/qwename/quirky-protector


One of the longstanding issues the Tor project had with their reproducible builds was that the Windows builds, built with Visual Studio, would always have a small piece of data in the same place that differed for each build. It took months to figure out, but it turned out to be a bug in the MS compiler, which was including some uninitialized memory from the compiler in the resulting binary. IIRC it was fixed after being reported to Microsoft.

Could be the same or similar issue you are facing, given the age of the compiler.

I don't have a reference to link to, though; Roger Dingledine told me this in person at a conference when discussing issues related to reproducible builds. But there should be an issue in the Tor bug tracker.


The fact that best practices for building an application don't suggest running the build multiple times, benchmarking the results, and choosing the fastest suggests that compilers are generally assumed to be deterministic.


Best practices are not universal.

The build-benchmark-repeat process is effectively what place and route metaheuristics in the FPGA world automatically do.

I'm imagining not all compilers are created equal, so I threw all handwaving general assumptions about determinism out the window.


Windows compilers aren't deterministic in some areas.

Roslyn already has a switch to do this, /deterministic.

There is a spec for this, though: https://www.microsoft.com/en-us/download/confirmation.aspx?i...

> A stamp that is used for different purposes in several places in a PE or COFF file. In most cases, the format of each stamp is the same as that used by the time functions in the C run-time library. For exceptions, see the description of IMAGE_DEBUG_TYPE_REPRO (section 6.1.2). If the stamp value is 0 or 0xFFFFFFFF, it does not represent a real or meaningful date/time stamp.

and

> The presence of an entry of type IMAGE_DEBUG_TYPE_REPRO indicates the PE file is built in a way to achieve determinism or reproducibility. If the input does not change, the output PE file is guaranteed to be bit-for-bit identical no matter when or where the PE is produced. Various date/time stamp fields in the PE file are filled with part or all the bits from a calculated hash value that uses PE file content as input, and therefore no longer represent the actual date and time when a PE file or related specific data within the PE is produced. The raw data of this debug entry may be empty, or may contain a calculated hash value preceded by a four-byte value that represents the hash value length.
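
For the curious, that stamp is easy to inspect: the DOS header stores the offset of the "PE\0\0" signature at 0x3C, and the 4-byte TimeDateStamp sits 8 bytes past the start of that signature. A small standalone sketch of mine (assuming a little-endian host, no Windows SDK headers):

    #include <cstdint>
    #include <fstream>
    #include <iostream>

    // Dumps the COFF TimeDateStamp of a PE file. Layout after the offset
    // stored at 0x3C: PE signature (4 bytes), Machine (2),
    // NumberOfSections (2), then TimeDateStamp (4).
    int main(int argc, char** argv) {
        if (argc != 2) { std::cerr << "usage: stamp <file.exe>\n"; return 1; }

        std::ifstream f(argv[1], std::ios::binary);
        std::uint32_t pe_offset = 0, stamp = 0;

        f.seekg(0x3C);
        f.read(reinterpret_cast<char*>(&pe_offset), 4);
        f.seekg(pe_offset + 4 + 2 + 2);
        f.read(reinterpret_cast<char*>(&stamp), 4);

        std::cout << "TimeDateStamp = 0x" << std::hex << stamp << '\n';
        // Under IMAGE_DEBUG_TYPE_REPRO this value is derived from a hash of
        // the file contents rather than the wall clock, per the quote above.
    }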


Unless the key to reproducible builds is hidden somewhere inside "How to Implement Compatible IDs in Printing Devices", that link may be wrong.


Hit download and it pops up a window where you can select the correct doc.

It's PECoff.docx


It may be a helpful tool from an engineering viewpoint, but I think it misses the big picture of _why_ reproducible builds matter:

https://gnu.org/software/guix/news/reproducible-builds-a-mea...


I remember being new to Information Technology in 1998, when Windows 95 & 98 were the norm. I had more than one client who would take delivery of 10-12 new, identical workstations, and it was very common to have half with one problem and half without.

I'm not familiar with the concepts in this article. Would this explain the sort of issues I had?


Probably not, no.


On the one hand, my knee-jerk reaction to this involves a meaningful amount of Windows-hate. In short: there is a tight limit to the amount of trust I can give to any build on a platform that I can't build.

On the other hand, reproducible builds are /important/. The better we can trust our tools and verify that they're exactly what we think they are, the less we need to worry about the security of our tools, and the more we can get on with /using them/.

In the end, despite enabling the continued use of a deeply proprietary and anti-freedom platform, this is a good thing.


Agreed. The GNU Guix project (of which I am one of the developers) sees reproducibility as a means to an end: empowering users with verifiable, trustable software that they can control. Thus, reproducible builds on a proprietary system, while better than non-reproducible builds on a proprietary system, don't accomplish very much for user freedom.

https://www.gnu.org/software/guix/news/reproducible-builds-a...


Very nice. But the title implies that Windows builds are not reproducible without a tool.


> Git is used to get the current commit hash.

That seems a bit of a problem for every product not using Git (which is still a lot, and whoever said Git was the final solution anyway?).



