The idea that you can easily type a command and be quickly dumped into a development environment for some software your system runs is amazing. I think it’s something that actually brings people closer towards some of the ideals of GNU or Open Source and better lets you take advantage of everything being open source. (Aside: I think this is also great about Emacs; you can easily jump to the source of most things and there’s usually a way to hack or modify or inspect or debug them immediately).
One thing I couldn’t work out how to do in guix was to go the whole way through. Something like:
1. Install / use some program
2. Find some bug or potential improvement
3. Run the command to be dumped into a dev environment for that
4. Fix or investigate the bug; or implement the improvement
5. (The step I don’t know how to do) Use your fixed/improved version in your operating system
6. (Also don’t know how to leverage guix for this) Share your improvements with others
It feels like 5 is somewhat at odds with how guix is meant to work, as step 4 is so mutation-heavy, but it also feels like something that an OS which wants to be as GNU as possible should really want to support. And guix should be able to make it safer too by giving you rollbacks. Maybe there’s an easy way to do it and I just don’t know it?
(note: usually packages are installed as regular user into your user profile)
See also section "Package Transformation Options" in "info guix".
For step 6, send a mail with the patch to the package definition (in an existing scm file in https://git.savannah.gnu.org/git/guix.git ) to the guix-devel mailing list. But whatever you did manually to get yourfixedversion_src.tar.gz should now be automated by programming the patching process in Scheme instead--usually just editing a guix checkout to add: (add-after 'unpack 'patch-problematic-stuff-4711 (lambda _ (substitute* "somefile" (("regex1") "replacement")))). For more complicated fixes, add a patch file to the guix checkout and make Guix apply it in Scheme (and contribute it to the actual foo upstream project--which we usually do anyway).
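Concretely, a local fixed variant built on that phase snippet might look like the following sketch (everything here is a placeholder--"foo", the phase name, the file and regex--and it assumes "foo" is an existing package bound in scope; substitute-keyword-arguments and modify-phases are the usual Guix helpers for this kind of tweak):

```scheme
;; Sketch of a fixed variant of a hypothetical package "foo".
(define-public foo-fixed
  (package
    (inherit foo)
    (arguments
     (substitute-keyword-arguments (package-arguments foo)
       ;; Extend whatever phases foo already has with our patch step.
       ((#:phases phases '%standard-phases)
        `(modify-phases ,phases
           (add-after 'unpack 'patch-problematic-stuff-4711
             (lambda _
               (substitute* "somefile"
                 (("regex1") "replacement"))))))))))
```

Put that in a channel or a file on your load path and `guix install foo-fixed` (or reference it from your system configuration) to use it, which covers step 5.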
See section "Contributing" in "info guix".
Further automation welcome--and probably not that difficult to add. Just keep in mind that each package build runs in its own container (a bit like Docker)--so no mixing of text editor and building, and no weird user-defined pauses (for example, there is no text editor in the build container).
I guess I wish that the custom install were as easy as guix shell (and that it would also do sensible things like recompiling dependencies if you modified a library). But maybe it’s actually hard to do for reasons I haven’t thought about.
>I guess I wish that the custom install were as easy as guix shell (and that it would also do sensible things like recompiling dependencies if you modified a library).
The reason for the seemingly duplicate "foo" in the "--with-" options is that those "--with-" options can apply to dependencies if you specify the right package there (so in the example, something other than "foo"--like something foo depends on). In that case it will recompile everything that needs recompiling.
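The same transformation mechanism is also available from Scheme via the (guix transformations) module; a sketch (the package and tarball names are hypothetical, and "foo" is assumed to be bound to a package object):

```scheme
;; Equivalent to: guix build foo --with-source=libbar=/path/to/libbar-fixed.tar.gz
;; libbar is rebuilt from the given tarball, and everything between
;; libbar and foo is rebuilt against it as needed.
(use-modules (guix transformations))

(define transform
  (options->transformation
   '((with-source . "libbar=/path/to/libbar-fixed.tar.gz"))))

(define foo-with-fixed-libbar
  (transform foo))
```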
> But maybe it’s actually hard to do for reasons I haven’t thought about.
It can easily be that we all made our own workarounds and so while it would be easy to add a nice frontend, nobody has a need.
So if you do think of a nice way, please try to do it.
For example, we have some bias toward Emacs: inside Emacs, Guix development is very comfortable (we have and ship our own Emacs extensions). But that doesn't help you if you are not an Emacs user.
The Guix package repo is a monorepo (see below)--all those packages have to work (and be tested) together.
From time to time I do think it would be nice to reduce the barrier to entry by automating existing-package editing in a shell script we could ship:
(guix edit does NOT run inside the guix shell environment, so an editor is available to it)
... but so far it never makes it out of my head to an actual script shipped with guix :(
If we did make it--what would the package it lives in be called? Or should it be a subcommand of the "guix" command?
That said, if you want to make your own NEW packages, that's easy and well-supported. You can put a guix.scm wherever you like and lots of guix tooling will pick it up automatically.
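A minimal guix.scm could look something like this sketch (every name below is a placeholder; the local-file idiom builds straight from the current checkout):

```scheme
;; guix.scm -- put this at the root of your project checkout.
(use-modules (guix packages)
             (guix gexp)
             (guix build-system gnu)
             ((guix licenses) #:prefix license:))

(package
  (name "my-tool")
  (version "0.1")
  ;; Use the current directory as the source.
  (source (local-file "." "my-tool-checkout" #:recursive? #t))
  (build-system gnu-build-system)
  (synopsis "Example local package")
  (description "Minimal package definition for a local checkout.")
  (home-page "https://example.org")
  (license license:gpl3+))
```

Then `guix build -f guix.scm` builds it, and `guix shell -D -f guix.scm` drops you into its development environment.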
> The idea that you can easily type a command and be quickly dumped into a development environment for some software your system runs is amazing. I think it’s something that actually brings people closer towards some of the ideals of GNU
This is exactly right, and I think it's one of the coolest things about Guix. Guix System (the full OS based on Guix) seems almost like the GNU that was dreamt of before Linux. It even has an option built on the Hurd kernel, and it uses an init system and initrd images based on Guile Scheme, which is both the language used for Guix itself and as the extension language for GNU projects.
Guix definitely advances the whole
> Learn one simple, high-level language and you can hack on every aspect of your system
idea further than we've seen on any modern/contemporary system.
> Aside: I think this is also great about Emacs; you can easily jump to the source of most things and there’s usually a way to hack or modify or inspect or debug them immediately
I think Emacs is definitely an inspiration, or a model of the virtues intended.
The thing that really bums me out about Guix is that Emacs debugging tools for Guile aren't anywhere near as good as you get with Edebug for Emacs Lisp or similar functionality for Common Lisp, Racket, or Clojure. With any of those languages you set breakpoints, display results, evaluate expressions, etc all with a very nice interface.
With Guile, you get none of that; there's seemingly no debugging tooling at all. The best Emacs integration you get is Geiser, which is not a GNU project and doesn't offer the debugging functionality mentioned above.
This is very surprising considering that Emacs and Guix are under one roof, and Guix is arguably GNU's flagship new software. Why do the development tools suck?
Guile is severely lacking in manpower; people on the mailing list have talked about possibly improving the situation, but it's difficult to get any major movement going.
I'm definitely not qualified to know how hard it would be to improve the tooling; I just know Greg Hendershott single-handedly did Racket's tooling for Emacs. I'm just surprised there's not more interest, considering how cool it would be to spelunk through Guix internals with good debugging tools.
I use nix but, as a user of Common Lisp, I’d much rather use guix: the main blocker for me is that nix supports macOS and guix refuses to for ideological reasons.
No, Guix does not refuse to support macOS, and not for ideological reasons either. Where do you get your info?
There is no way to build the package graph on macOS. There is no glibc port for macOS, and there's no free toolchain we could legally use.
There is little value in building Guix when we are forced to use a different proprietary toolchain, a different C library, and throw away the from-source bootstrap in exchange for gigabytes of proprietary blobs. You might as well use Guix System in a virtual machine on macOS.
The value of Guix on macOS is that your users can use the same project definitions on macOS and Linux. I don't personally care about the "from-source bootstrap" all that much, I want tooling that I can use to express my system's configuration as code and nix does this just fine on macOS.
As far as the ideological reasons go, last time I looked into this, someone on a mailing list asked how you'd go about porting guix to macOS, and some core team member answered they had no interest in supporting a non-FOSS system.
Guix has recently landed support for building Guix System containers on top of WSL2. Emacs (another GNU package) also runs just fine on Windows.
This is not about supporting a non-free system; I bet some nuance was lost on the way from that mailing list post to the comment I'm responding to. Building packages for macOS would require an entirely different underpinning--or cheating by cutting the package graph and grafting it on top of the huge proprietary blob that is the macOS development environment. It would also require that we buy macOS licenses and operate macOS build farm nodes to build binaries.
And for what? The resulting packages would in no way be equivalent to their GNU+Linux counterparts. The "same project definitions" would not describe the same thing at all.
> This makes me wonder if Guix is ready for the real world.
ML and GPUs are a smallish subset of the real world, and a subset that's particularly plagued by proprietary software (especially the GPU side); I don't think it's a good litmus test for "is guix useful for real work".
The stance on non-free software stops me from using it. AMDGPU is open source down to and including the Linux kernel. But each card needs a couple of kilobytes of binary firmware, so no deal.
That firmware could literally ship on the card totally invisibly and that would be fine for guix, but because it can be patched in the field, no.
That's just... so counterproductive. They are OK with proprietary software, but only if it comes built in.
But there are some Spectre and Meltdown mitigations that come with CPU microcode updates, right? It's probably not a great idea to skip microcode updates in that case.
Ok, s/k/m. I remembered small and didn't check. It doesn't change the underlying point though--so what? The same guys who shipped the silicon also shipped some magic numbers with it. If you trust one, you may as well trust the other, since they're equivalent.
There are a bunch of things that come up regularly on HN that aren’t particularly relevant to the real world, so I don’t think this logic is good--though I do think you are correct that ML is relevant to the real world. However, when you wrote ‘the real world’ you didn’t mean the thing for which ML is relevant, but rather ‘being a general purpose operating system that is useful [to software developers]’. For that, the lack of hardware-accelerated PyTorch isn’t great, but I don’t think training/running ML models is particularly relevant to being a general purpose OS (even for software developers). I’m not sure it’s even relevant to ML practitioners, as I would expect they’d be farming out training to a dedicated compute cluster.
One needs to jump through a bunch of hoops to get hardware-accelerated PyTorch on Apple’s latest computers. Does that mean they aren’t ready for the real world?
I mean, to run the software developed by ML practitioners, you basically end up in dependency hell every time you try out something new. This is exactly where an autobuild/package system such as Guix is supposed to shine.
ML is also increasingly relevant for OSes, think OCR and speech recognition, but also painting programs. Image classification and segmentation are used in industrial systems; lots of ML happens on servers. And this is where Linux shines, and thus support from Guix would be great. Categorically excluding this group of GPU users is a mistake.
> One needs to jump through a bunch of hoops to get hardware-accelerated PyTorch on Apple’s latest computers. Does that mean they aren’t ready for the real world?
I think Guix is more than ready for providing development environments, even for proprietary apps on open source technology stacks (but Nix will probably be easier to pitch). As an operating system much less so.
ld: /tmp/guix-build-python-pytorch-1.12.0.drv-0/source/build/lib/libtorch_cpu.so: undefined reference to `glslang::TShader::setNanMinMaxClamp(bool)'
ld: /tmp/guix-build-python-pytorch-1.12.0.drv-0/source/build/lib/libtorch_cpu.so: undefined reference to `spvtools::CreateCompactIdsPass()'
[...]
The reason is that pytorch 1.12.0 depends on shaderc-2020.4/lib/libshaderc_combined.a--and static libraries do not have automatic dependency resolution, so SPIRV-Tools, glslang, etc. are never linked.
There's a shaderc_combined.pc in shaderc, but it doesn't list glslang or spirv as dependencies, even though shaderc uses glslang and spirv data structures and functions. Errrrr. How's that supposed to work?
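To illustrate the point about static archives: a .a file is just a bag of object files with no recorded dependencies, so whoever links against libshaderc_combined.a has to name its transitive libraries explicitly. In a package definition, a workaround could conceivably look like this sketch (the library names and the CMake flag here are assumptions for illustration, not the actual fix that was applied):

```scheme
;; Hypothetical workaround: tell the linker about the libraries that
;; libshaderc_combined.a uses but cannot declare itself.
(arguments
 `(#:configure-flags
   (list "-DCMAKE_SHARED_LINKER_FLAGS=-lglslang -lSPIRV-Tools -lSPIRV-Tools-opt")))
```

The real fix, of course, would be for shaderc_combined.pc to declare those libraries in its Requires/Libs fields so pkg-config consumers get them for free.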
If Guix uses Nix under the hood, is it "just" Nix with an S-expression syntax (admittedly much better than Nix syntax) and fewer packages? If so, is there a really compelling reason to prefer Guix over NixOS? The only substantive difference I'm aware of is the use of GNU Shepherd instead of systemd.
Same backend, but different frontend, so to speak. Hence, it used to be possible to have nix and guix share a store and local state directory. Not sure if that's still possible today, though...
Probably because I’m not a lisp person but the syntax is just ugly and not intuitive. It would look much better as a yaml file and I don’t even really like yaml either.
I felt the same, but after reading this[1] and following along in the REPL it finally started to click.
There are actually far fewer rules than in "normal" programming languages. I had started to write my own basic functions that would be in a "standard library" (trim(), trunc(), remove(), etc.) and got the hang of it after a couple of hours.
The only things I don't like about the language for "serious" work on massive projects are the missing static types (though those may make it less readable) and the tooling (most people lean on Emacs), which could be better.
So I'm guessing there are some additional conventions for function naming, besides appending ? to the name for boolean returns and ! for functions that mutate.
If you know Lisp or Haskell then Emacs and XMonad are great. If you don't, then you just get lost trying to configure them. Nix lang is a much simpler DSL with affordances like nice list syntax. It may be less powerful than lisp but it's easy to edit and (relatively) quick to learn. Similar to vimscript, which sucks, but it's amenable to easy edits for the unwashed masses.
My issue with Nix-lang is that it's somewhere between declarative and uhm…not declarative?
Certain sections of nix files have special meaning (config), and certain functions only work in those sections. Nice features like `//` for merging dicts go out of the window as soon as you have nested dicts, which is nearly every time.
It's poorly documented, and the “standard library” documentation is even worse. It's impossible to navigate because everything is mashed together in `nixpkgs`. Docs often show no examples of how to use a function and just say `it takes blahblah`--but what is blahblah?
Well, yes, it fixed it when I discovered this function a while ago. That's just an example.
That's what I'm bitching about--discovering documentation sucks, because nix-lang is a very niche language and `nixpkgs.lib` is mostly undocumented. That makes figuring out how to do something in it harder than in almost any other language.
With homebrew, it's just ruby scripts. You write ruby, you can ask some ruby question from someone who never wrote a formula and get an answer because that's a general purpose language.
With guix, it's not the best, but it's still just GNU Guile, it's been around for almost 30 years. Yeah, it's probably not very popular, but it's still a general purpose language that has uses outside Guix.
How familiar are you with Lisps? I heard somewhere that it's easier for a complete beginner to start learning programming with Lisps than with traditional languages. I started with more mainstream languages, but my experience tends to agree with that observation. The small set of Lisp forms and their semantics is easy to learn and reason about, compared to the syntax and semantics of more traditional languages. Same when s-expressions are used for representing data. The syntax closely matches the logical structure of a program or data.
Keeping this in mind, I find the Guile scheme syntax used by Guix quite intuitive and pleasant. It reduces the cognitive overhead to configure the system, at least for me.
I am not a lisp person, but it is just a configuration file with key-value pairs and lists, mostly copy-pasted from the documentation or other people's configurations. A file with all configs in the same place, in a unified way.
Also, each configuration file in Linux already has its own differences and quirks.
- the module paths a bit weird (I’d find slash or dot separated more obvious)
- some values needed to be constructed with a call to list
- weird dsl for building was mixed in there
- it’s probably not obvious what should be string/symbol/whatever
I’m not sure anything should change though. It still feels a bit weird to me that it’s a program generating the package rather than a more direct ‘just data’ format.
>- the module paths a bit weird (I’d find slash or dot separated more obvious)
There's no special syntax for any of that dot/slash/whatever stuff. (Program item) paths are always lists (like in CSS selectors). Likewise, we don't use shell strings and then have to weirdly escape them; we just use an argv list to begin with. And so on. What's the use in pretending it's something it's not?
>- some values needed to be constructed with a call to list
It depends on how good you are with quoting; if you want to avoid quasiquote, "list" is easier to understand. Otherwise, (almost) no values need to be constructed with a call to "list".
(list 1 2 3) is equal to `(1 2 3)
(list 1 2 b) is equal to `(1 2 ,b)
(list 1 'b b) is equal to `(1 b ,b)
>- weird dsl for building was mixed in there
If you mean what I think you mean then it's a program running in the build container (in an extra process) (so it's quoted in the scm file of guix).
>- it’s probably not obvious what should be string/symbol/whatever
>I’m not sure anything should change though. It still feels a bit weird to me that it’s a program generating the package rather than a more direct ‘just data’ format.
It will always end up like that, even in packaging systems that start with a "just data" config format. They will always grow their own weird language for #if, #cfg, function calls and so on in the config format. Then they have weird generator scripts generating the config files, and there they are right where we are, just shittier.
To the last point: since it is a lisp, data = code and all that. Fundamentally I think that is at the heart of a lot of the power here with package definitions and the system itself being the same.
Ok, so mostly just variations on the “unfamiliar with lisp” theme. Though I have to say that it would be a bit silly to invent a new way to write lists just to name packages, when Lisp already has an excellent way of writing lists. :)
Oh, and how can a package manager avoid dealing with build systems? There are almost as many different build systems as there are packages!
I was trying to answer the question of why the lisp config might be difficult to read so I deliberately tried to not rely on my own intuition about how lisps work. In particular, I think it is possible to design a format for s-expression package definitions that is more approachable to people who don’t know lisp.
I continue to find it slightly odd that the package definition is not some serialisation of a package object but rather a full program that outputs a package object, and I think this is the root of a bunch of the confusing things about the definition. Consider that the commenter at the top of this thread thought yaml might be preferable. I think that means they didn’t realise they were looking at a program outputting the package definition but rather that they thought they were just looking at the definition itself.
If the whole thing were just data instead of data that’s actually code whose evaluation produces data, I think things would be more obvious. For example, some symbols are just syntax (e.g. in the alist syntax for constructing the package definition) whereas others are references to values of which some are other packages, some are just scheme functions, and some are guix-specific functions. And symbols have other meanings in the build-steps reader macro.
Building package definitions in this way can also make errors harder to pinpoint as the author can write any program they like rather than being restricted to a more limited configuration language. Needing to evaluate programs to get packages can also, I think, make the collection of all packages as a whole pretty opaque and hard to reason about compared to e.g. being able to have a database that contains all the package definitions.
I don’t think the ‘unfamiliar with lisp’ argument is good. I’m pretty familiar with lisp (though not so much guile/scheme) and I’ve been writing it for over ten years and I think this way of specifying packages is not great in the average case, though perhaps there are some complexities that cannot be handled without this general mechanism.
YAML’s excesses are exactly why a real programming language is preferable to a mere serialization format. Aliases and overrides are only useful in YAML because it is not a real programming language.
As for error messages, I don’t think that using a real programming language forces the error messages to be bad, or to be too generic. If the error messages are not good it is only because nobody has put in the time and elbow grease to make them good, not because of any fundamental limitation of turing–complete languages.
Macros and function calls look the same but have completely different semantics. And everything else looks the same too. Lisp is a language that is nice to work with when you want to write an interpreter for it, but horrible to read as a normal user. Half your time is spent counting parentheses and trying to figure out how and what to quote.
I do actually prefer s-expressions to JSON, but only when used as pure data. Once you turn it into scripting language it gets ugly fast.
> but horrible to read as a normal user. Half your time is spent counting parentheses and trying to figure out how and what to quote.
Which is why a good text editor will balance parentheses automatically and highlight macros. That being said, it is well known that macros can be abused and overused. It is considered bad style to use macros if a function would do the job too.
> That being said, it is well known that macros can be abused and overused.
Which is what Guix is doing. Just look at (package)--why isn't that a function taking an association list? It's this kind of ad hoc DSL magic that makes Lisp-like languages incredibly hard to read, since you never know what you are looking at.
Notice how the alist has to be backquoted so that you can put a comma in front of everything that needs to be evaluated, and how every entry in the alist has to be a pair rather than a list which requires a dot on every line. Which would you rather type out a hundred thousand times? Which would you prefer to teach to new contributors? Which would you prefer to answer questions about on a mailing list?
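To make the contrast concrete, here is roughly what the two styles look like side by side (make-package is hypothetical; the package form is the real Guix macro):

```scheme
;; The "plain function + alist" style being argued against:
;; backquote the whole alist, dot every pair, comma every value
;; that must be evaluated.
(make-package
 `((name . "hello")
   (version . "2.12")
   (inputs . ,(list gmp))))

;; The actual Guix style, using the package macro:
(package
  (name "hello")
  (version "2.12")
  (inputs (list gmp)))
```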
> Which would you rather type out a hundred thousand times?
Neither. But that proves my point: Lisp is unreadable for the average user. If your first instinct is to not use the language itself, but to write your own DSL on top of it to make it readable, maybe that language ain't such a good choice.
But the whole point of Scheme is that the syntax is simple enough that you _can_ do metalinguistic abstraction. That was a deliberate design choice, not an accident. In most other languages you have to stoop to using some other parser and interpreter for data files, rather than reusing the excellent parser and interpreter that you already have for Scheme itself.
Plus, you can’t assume that using JSON or YAML will give your users some kind of magical intuition bonus; you’ll still have to train them to use JSON/YAML correctly. I have a friend (who isn’t a software engineer) who needed to do some work on JSON files, and it was pretty clear that he was just cargo–culting it. He was only doing it to accomplish some larger goal, not for the joy of knowing the JSON syntax. That’s a hurdle you will face no matter what you do.
I am not complaining about those abstractions being possible; I am complaining about them being necessary for doing basic stuff that every other modern language solves with a builtin dictionary datatype. The more you use those abstractions, the worse your error messages get and the harder it becomes to understand what is going on.
JSON would at least work consistently and doesn't randomly redefine the meaning of the language midway through a config file. YAML has numerous issues of its own, so I'd avoid that if possible.
This isn't a Guix specific problem, the GNU project has tried to make Guile a thing for the last 20 years or so, and it just never looked especially elegant to me in any context and never really gained any real adoption in the wild either.
And when it comes to package configuration, it's just an unnecessary issue; we already had Nix, which comes with its own JSON-like language specifically built for this task, and that is much nicer to work with than Scheme, because it's a configuration language first, not a general purpose programming language turned into one via macros. Using Scheme instead just doesn't improve the situation in any way in my eyes; it just creates a lot of additional problems.
I don't understand how the Guix documentation can refer to #:uninterned #:symbols as "keywords"; was that written with a straight face? Yikes. Common Lisp's single : sigil is perfect for keywords; two characters is excessive.
Why does gnu-build-system use %standard-phases (a symbol with %)? You'd think it could just use standard-phases. What is the big namespace concern. The % prefix is a Lisp convention for implementation internal symbols, not to be used by user code. Kind of like double underscore in C.
It could simply be the implicit default. If most rules use %standard-phases, it's silly to be repeating it all over the place.
modify-phases could have a syntax like:
(modify-phases {phases-object | clause }*)
If an argument is a phases-object, then it's taken as the current object on which the remaining clauses operate. Otherwise it's a clause like (add-after ...). The initial current object is understood to be %standard-phases, so you could write the clauses directly without naming it at all.
Either add-after performs the transformation to lambda + invoke, or else the object stays as a list and is later treated as a command to execute.
A macro like (cmdf "sh bootstrap") could do the parsing and generate the lambda + invoke. The "f" reminds you that it doesn't run the command but returns a function.
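For reference, the idiom this proposal would abbreviate currently looks like this in Guix (a sketch; the phase name and command are illustrative):

```scheme
(arguments
 `(#:phases
   (modify-phases %standard-phases
     ;; Run the bootstrap script right after unpacking the source.
     (add-after 'unpack 'bootstrap
       (lambda _
         (invoke "sh" "bootstrap"))))))
```

The proposal above would shave off both the explicit %standard-phases argument and the lambda + invoke boilerplate.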
It's written in Guile, which is a variant of Scheme. #: is used as the keyword indicator, not an uninterned symbol like in CL.
> Why does gnu-build-system use %standard-phases (a symbol with %)
AFAIK %symbol indicates a constant in Scheme.
It's unfortunate that it isn't written in Common Lisp. CLOS would've been immensely useful there instead of the ad-hoc object system via Scheme records.
AFAIU, asdf is for providing particular versions of tools (e.g. python, etc.). In the original post, guix is also used to provide library dependencies.
Guix is the free-software-oriented, lisp-flavored cousin of Nix. -- I've used Nix. It's got enough rough edges that writing nix code is pretty much only accessible to enthusiasts... though, those who can get past the rough edges do recommend it.
The added complexity of guix/nix allows for having an OS distribution where the whole OS config is declared/reproducible.