Is the difference then that reproducible Arch or, say, Debian would ensure that the source, when compiled, produces the same binary output on _any_ machine of the same type? As opposed to, say, NixOS just making the output different if it runs on a machine with a hard drive mounted on a different directory, or a CPU with more cores?
Distributions like Debian and Arch need to put in a fair bit of effort to reach the same level as Nix. But, in a sense, reaching that level is the "easy" part - it's an interesting technical challenge.
The hard part of the reproducible builds effort, the part which Nix doesn't give you, and the part which developers from Debian, Nix, and other distros are still actively working on, is ensuring that bit-for-bit identical on-disk inputs, along with sandboxing, will give you bit-for-bit identical outputs. That is a task which requires cleaning up the build process of the entire open source world.
I still have hope for flatpak. I want a system where the folder is the application and deleting the folder is the way you remove applications. The devil though is in the details of how or what is shared.
Sadly flatpak and snap don't seem to have much community support. macOS (formerly OS X) seems to be the closest to this idea of the folder being the application. I tried Nix, but it was just impractical, though I liked the idea.
I am looking at those as well. One appeal is that they allow precise tracking of dependencies and seem to support macOS as well, so they could make setup easier on developers' machines.
It isn't fine for developers or for the Linux ecosystem.
Sure, it works for users, for installing software in what has been a complex development and packaging ecosystem.
The problem is exactly that it is hard to distribute applications in Linux.
> Flatpak is the next-generation technology for building and installing desktop applications. It has the power to revolutionize the Linux desktop ecosystem. https://flatpak.org/
> The problem is exactly that it is hard to distribute applications in Linux.
For who? Distributions generally seem to like their own packaging processes. You can distribute a package on the AUR in a few lines of bash and it's filled to the brim with software because of it.
For me, and for the lack of software on Linux. I am talking about making it so that Linux can finally get some of the killer apps we can't currently get our hands on.
Flatpak is a replacement for application packages, not OS-specific ones. You would still use a package manager for OS-related packages and for flatpak/snap itself, and then flatpak/snap installs and uninstalls applications. A much better system for everyone.
The complexity and diversity of deployment is the number one issue with desktop Linux applications. This is why we don't have Adobe applications or many others. I use several commercial programs, and to their vendors deb = Linux; since I use openSUSE, I constantly have to jump through hoops until they start supporting rpm, which they normally do once they get enough traction on Linux.
Now this is the key difference I feel has been left out of the messaging surrounding flatpak. It's not meant for libraries and OS packages, just end-user applications? That makes more sense now.
Well, the Achilles heel for Python is also deployment. The reason we have so few applications is that each system has differences: RPM vs. DEB vs. other distros, or (forget you just read this) systemd vs. non-systemd. With a flatpak you have a runtime, and the runtime provides the hooks for development.
Linux has a great tool for building packages for different distros, and it is hardly used. You build the package once, and it rebuilds it for other distros and systems. It could even target Windows and macOS if they extended it. It is complex to do these things. https://build.opensuse.org/
With a flatpak or snap you just build the one package and it runs everywhere.
If I had a Euro for every time I heard that, or some variation on it, I would have a lot of Euros.
Package managers have nuances, and so do flatpak and snap installations.
Also, add Windows and macOS into the mix, and down the drain it goes. It's basically what Docker was trying to get out of the way. The funny thing is that Go and Rust might actually be more of a solution to the whole packaging dilemma than any packaging solution.
Even with containers, being able to reproducibly rebuild the containerized application from its sources if desired is still a challenge.
1) Flatpak runs on a runtime, so the application would then be exactly the same on all systems, just like a Java program or any other language that runs on a runtime.
2) Flatpak and snap are not for servers, where containers make more sense, because they are solving a different problem than desktops. They are more like the sandboxing we find in Chrome tabs.
> Is Flatpak a container technology?
"It can be, but it doesn't have to be. Since a desktop application would require quite extensive changes in order to be usable when run inside a container you will likely see Flatpak mostly deployed as a convenient library bundling technology early on, with the sandboxing or containerization being phased in over time for most applications. In general though we try to avoid using the term container when speaking about Flatpak as it tends to cause comparisons with Docker and rkt, comparisons which quickly stop making technical sense due to the very different problem spaces these technologies try to address. And thus we prefer using the term sandboxing." https://flatpak.org/faq.html
Reproducible builds are not about software distribution; they are about compilation. The Arch Linux maintainers would like to be able to verify that the Java bytecode they are sending to their users was indeed built from the Java source files it was expected to be built from. With reproducible builds, anyone can double-check that the distributed Java bytecode is the correct one by recompiling it on their own machine. Without reproducible builds, you need to trust that the person who compiled the file didn't do anything malicious (either intentionally or unintentionally).
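The verification step is just a digest comparison. A minimal sketch (the artifact bytes here are made up; in practice you would hash the distro's binary and your local rebuild):

```python
import hashlib

def sha256_of(data):
    """Hash a build artifact so two independently built copies can be compared."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifacts: the bytecode the distro ships vs. a local rebuild.
distributed = b"\xca\xfe\xba\xbe example bytecode"
rebuilt = b"\xca\xfe\xba\xbe example bytecode"

# With reproducible builds, anyone can check the distributed binary
# by rebuilding from the same source and comparing digests.
assert sha256_of(distributed) == sha256_of(rebuilt)
print("artifact verified")
```

If the digests ever differ, either the build is not reproducible or someone tampered with the binary; the whole point of the effort is to make the first explanation impossible.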
Why though? There is no difference in complexity between `package-manager uninstall package-name` and `rm -rf /apps/package-name`. If you implement the latter, it will probably be aliased to the former.
The biggest problem in any case is not the application's assets (which all package managers manage competently), but its state and configuration files. It took Windows about a decade of carrot and stick to get (most) applications to put their files in a few defined locations instead of all over the place.
The issue is reproducibility and developer complexity. In terms of complexity, you are confusing user complexity with developer complexity. Developer complexity is still high and is the big barrier to Linux development.
Also we have the crazy apt-get update vs. upgrade fiasco that Chocolatey has gotten caught up in. They even replaced `choco update` with `choco upgrade`.
There's a causal connection between those two things....
ROX has been doing that for like 15 years. http://rox.sourceforge.net/desktop/AppDirs.html
Thus some of the issues that apply to Arch or Debian do not apply to Guix and Nix, but many apply to both (timestamps are an obvious example).
For Guix we are currently at ~80% reproducible packages: https://gnu.org/software/guix/news/reproducible-builds-a-sta... . That's not an entirely fair comparison because, for example, Python packages in Guix include .pyc files whereas on Debian they don't (not sure about Arch).
- Deterministic: the same input always gives the same output, no exceptions.
- Reproducible: The steps to *create* some build result are always the same, but the results might not be bit-for-bit identical.
NixOS is always reproducible. Any build can be exactly reproduced by another person, and the build will run the exact same steps. But it is not always deterministic.
This seems strange, but it really isn't. For example, although I may always take the same steps to compile some software (let's say "./configure; make; make install"), that does not guarantee determinism -- perhaps the 'make' stage will run `date` and embed the date in the build output (e.g. `gcc -DBUILD_DATE="$(date)"` or something). That breaks determinism.
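The distinction can be shown with a toy "compiler" (a made-up function, not a real build tool): the steps are identical every run, yet the output changes whenever a date sneaks in.

```python
def build(source, embed_date=None):
    """Toy 'compiler': the build steps are always the same, but it can
    optionally embed a date, the way `gcc -DBUILD_DATE="$(date)"` would."""
    output = source.encode()
    if embed_date is not None:
        output += ("\nBUILD_DATE=" + embed_date).encode()
    return output

src = "int main(void) { return 0; }"

# Same steps, nothing time-dependent: bit-for-bit identical output.
assert build(src) == build(src)

# Same steps, but the current date leaks into the output:
# reproducible (the steps never varied), yet not deterministic.
assert build(src, "Mon Jan 1") != build(src, "Tue Jan 2")
```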
However, Nix is always reproducible. If you run 'nix-build' today, tomorrow, or 5 years from now -- if you have the same description of the package, the build always proceeds the same way. In practice, this means "You need to have the same git revision of the nixpkgs source code". But if you have that -- nothing should ever change.
Of course you can monkey-wrench this and make up various weird counter-examples ("My build system only compiles stuff when rand() % 4 == 10, hah!"), but that's the general idea. NixOS will always take the exact same steps -- down to the letter -- in order to build your software. But that doesn't guarantee it's deterministic.
Most other systems are, in a sense, also reproducible, because you could just "run all the commands the same way, every time" -- but not in the highly automated way Nix is. In Nix, a single hash identifies everything, to the point that I can literally recreate exact copies of a machine with a single command. I can reformat my laptop and have it back to the exact same software in 10 minutes, etc.

It is also more nuanced than that: on something like Arch Linux, you and I may be running two Arch machines. They may feel identical. We might even both run './configure; make; make install' and the software will work similarly. But if I haven't run 'pacman -Syu' in a week, and you ran it yesterday -- that isn't the same thing! Our environments are not reproductions of each other; they are only vaguely similar. Maybe you got a bugfixed glibc, for example, that I did not. In NixOS, that would cause a rebuild (because one of the inputs -- some of the steps you must take to create something, including installing its prerequisites -- has changed).
In this case, "reproducible" does not mean "this piece of software is obtained by running this exact set of steps", that's too narrow. It means "this result is obtained by running the exact set of steps, for every single thing this result has ever depended on, all the way up the chain".
> As opposed to say maybe NixOS would just make the output different if it runs on a machine with a hard drive mounted on a different directory or the cpu has more cores?
No, this won't happen. CPU cores and the "location" of the source code are not included in the hash, they are not part of the input "source" to a build result.
What actually happens is that the source code is copied into a location inside the "Nix store", and it is built from there. Same source code? Same destination. In other words, the first thing Nix does is create a build result from the input source code -- a directory containing a copy of all the code -- and that is given as the input to another step that actually builds the results. You could think of "building software" as two packages: one built by unzipping the source code and keeping the results, and another created by building the previously unzipped code.
So it uses the same mechanism as it always does: source code is just an input to something we want to "build", in this case, the executables. It's no different than "zlib" the library being an input to "libpng" the library (because libpng needs zlib). It is just an "input" in an abstract sense.
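The "same source code, same destination" behavior is plain content addressing. A simplified sketch (real Nix hashes the whole derivation and encodes the path differently; only the idea is illustrated here):

```python
import hashlib

STORE = "/nix/store"  # the real store location; the path scheme below is simplified

def store_path(name, contents):
    """Content-addressed path: the destination is derived purely from
    the bytes being stored, so identical sources land in one place."""
    digest = hashlib.sha256(contents).hexdigest()[:32]
    return "%s/%s-%s" % (STORE, digest, name)

a = store_path("hello-src", b"int main(){}")
b = store_path("hello-src", b"int main(){}")
c = store_path("hello-src", b"int main(){return 1;}")

assert a == b  # same source code -> same destination
assert a != c  # changed source -> different destination
```

Mount points, CPU count, and where you checked out the code all vanish from the picture, because none of them are part of the bytes being hashed.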
Which I would argue makes far more sense.
Actually, I think a better definition would be that determinism is bit-for-bit identical output under controlled inputs. A reproducible build is the act of controlling those inputs to feed into a deterministic build.
I'm curious why you chose the definition you did, as proving a build is reproducible sounds impossibly hard.
Of course I've always looked at it from the standpoint of "reproduce this binary from source, thus proving your source code (modulo compiler trust of course)".
The hash doesn’t contain the output at all, it only contains the inputs. As such you know the hash of all packages before you start building anything. It would be difficult to do anything different as the package may contain an absolute path to itself.
Given the goal of Tails to provide a private computing environment that leaves no traces on the host system after shutdown, it's especially valuable to be able to confirm that the source truly produces the same binary image that the site offers for download.
(If anyone from Tails is reading, please update libccid to 1.4.27!)
Currently at 76% reproducible, out of the 70% of packages that built.
From the limited experience I've had with the AUR: for not-so-rigorously maintained packages, it's not much better than building from source yourself.
The ability to try out a new package and safely downgrade has been exactly as it says on the tin.
So far I have primarily used it as an add on to other OSes and put it in the PATH before everything else. That way if something is missing in the nix packages I'll fall back to the other OS.
And yes the AUR has no standard of quality, but the coverage is greater than anything I have ever seen. Figuring out exactly what build system and assumptions a package might have made is something I only do once the AUR package has failed.
I've been living in Gentoo world for over a decade, so of course, every Gentoo system is going to be a little (or a lot) different. :-P