
> The host system and isolated environments will all be managed declaratively and reproducibly using Nix, a purely functional package manager

Nix seems cool, but all of my forays into it have been so unpleasant as to be unworkable. It seems to work well enough if all of your dependencies are already in the Nix store and have been thoroughly tested, but as soon as you have to start writing Nix packages yourself it's a train wreck: the peculiarities of the Nix expression language, the chaos and lack of documentation in nixpkgs, the immense understanding one must have of the common low-level libraries used to create Nix packages. Even then, you'll still spend tons of time trying to package obscure C dependencies with their own bespoke build systems.

My constructive feedback is:

1. Add types to the Nix expression language so someone digging through the code can have some idea about what needs to be passed into various functions (see the sketch after this list). This would probably help people traverse nixpkgs as well, since the client code will need to "import" the types of its arguments.

2. Make Nix more syntactically familiar. Familiarity here seems like it should be more important than innovating on programming language syntax. Thanks for not going full-Haskell on us, but it would be nice if it looked more like JavaScript or Python or something that virtually any programmer could look at and recognize (I'm no great fan of either of those languages).

3. Very controversial, but the whole industry needs to minimize the number of C and C++ dependencies. Not only are these languages fundamentally insecure, but projects in these languages have their own bespoke build systems which assume dependencies are already installed at the correct versions and in the correct paths. Packaging these projects is painful, and it's largely the reason we have package maintainers who specialize not in building programs of a certain language, but in building certain dependencies.
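
To make point 1 concrete, here is a sketch of a hypothetical nixpkgs-style helper. Nothing in it tells a reader what shape its arguments take, which is exactly the gap types would fill:

    # Hypothetical helper: is `flags` a list of strings or a single string?
    # Is `src` a path, a derivation, or an attribute set?
    # Nothing in the code says, so you end up reading call sites.
    { stdenv }:

    { name, src, flags ? [ ] }:

    stdenv.mkDerivation {
      inherit name src;
      configureFlags = flags;
    }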




I switched to NixOS about a year ago.

It's been amazing for me. Especially when combined with home-manager [1], which provides declarative user environments, and flakes [2], which provide an even more declarative and more easily shareable package format.

Getting my exact setup on a new device - including lots of GUI and terminal app customization - takes a single `nixos-rebuild switch`, and works every time. Everything is configured in a single config file, from hardware and system service setup, through window manager setup, all the way to installed Vim/VS Code/Chrome plugins and lots of application configs.
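
To give a flavour, a heavily trimmed sketch (not my actual config; the option names are real NixOS options, the values are just examples):

    { config, pkgs, ... }:
    {
      boot.loader.systemd-boot.enable = true;
      networking.hostName = "my-laptop";
      services.openssh.enable = true;
      environment.systemPackages = with pkgs; [ git firefox vim ];
      users.users.alice = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
      };
    }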

The ability to easily roll back to previous configurations and boot into old configs from the boot menu in case something breaks is also brilliant.

You can also just install and run pretty much all software on demand, similar to `npx`, without polluting the system (`nix run some-app`). Declarative, reproducible development environments for each project are the cherry on top.
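
For example, a per-project environment can be as small as this sketch (the package names are picked arbitrarily); running `nix-shell` in the project directory then drops you into a shell with exactly those tools:

    # shell.nix
    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = [ pkgs.nodejs pkgs.python3 ];
    }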

That said, the onboarding experience is horrible. There are A LOT of things to learn, and the documentation is bad. I also agree that the language, while somewhat fine, is not great and very undiscoverable.

The Nix ecosystem is a diamond in the rough. Sadly it would take a lot of effort to simplify everything and make it more polished.

I'm afraid Nix will continue to remain very niche.

[1] https://github.com/nix-community/home-manager [2] https://nixos.wiki/wiki/Flakes


I have low skill myself at installing Linux, but I found installing Guix to disk relatively straightforward. I managed to easily add available builds. Apparently good support for virtual machines as well.

http://guix.gnu.org/


Even trying to get VS Code working with a couple of plugins (and with the help of the good, experienced people on the Nix Discord) was impossible (I was just using the package manager, not NixOS). Apparently VS Code requires a different incantation depending on which platform you use, which is another problem.

> Sadly it would take a lot of effort to simplify everything and make it more polished.

Yes, and while it would take a lot of effort to make most things nicer (improving the language, etc), other things require effort that scales linearly with the number of packages in the universe. For example, making the package definitions in the first place, adding documentation for the packages, testing packages, etc. Other things are even worse: testing various combinations of dependencies at different versions.

> I'm afraid Nix will continue to remain very niche.

Unless it gets some serious corporate support, I'm afraid I'll agree. The Nix folks have done some impressive work, but I get the impression that they're more interested in making something cool for themselves and other like-minded people at the expense of others (and there's nothing wrong with that! it's just incompatible with growing out of a niche).


> Thanks for not going full-Haskell on us

It would have been so much better if they had gone full Haskell. Haskell is better designed as a language; it has types and could support all Nix features without modification, as far as I can tell. On top of that, it's a common language that most have used at least in university, and it has loads of friendly tutorials online.

Totally agree that the packaging of C/C++ projects is a drag on our entire industry and basically a complete embarrassment.


Haskell is a fantastic language, and it's becoming one of my favorites, but it is by no measure a friendly one. Nobody wants to have to learn a complex language for a task as essential as using their own computer.

Also, I think your claim that "most have used at least in university" is skewed to your experiences. I doubt the majority of programmers have any experience in Haskell whatsoever with even fewer having enough experience to be comfortable in it.


I don't really buy the "Haskell isn't friendly" thing - it's way more work and way less pleasant to become an expert in, say, C++, than to become an expert in Haskell.

It depends on what, precisely, you are measuring the difficulty of. Are you writing a quick script? A complex software project? Trying to master the language? Is it a beginner programmer? A narrowly focused coder who only knows one or two similar languages? An experienced programmer with a wide breadth of knowledge?

In pedagogical terms, I found learning Haskell much friendlier when I was contemporaneously learning Java, C++, Prolog, etc. in university.


But the choice isn’t between Haskell and C++; the choice is between Haskell and any other language. I agree that C++ is a bloated, error-prone nightmare, and I avoid it for that reason. There are plenty of languages that are widely considered easy to learn/use (Python, Java, JavaScript).

One of the big reasons that Haskell is hard to learn is because it is purely functional. Writing idiomatic Haskell is dramatically different from traditional imperative C-like languages.

Just to give an example - a few years ago I was taking an AI class which used Common Lisp (a functional language, but not purely functional). The first few tests/projects which required functional lisp programs led many of my peers to drop the class.

I think part of this is because my school, like most, started students off with an imperative/procedural language (C++). If you have a clean slate (e.g. someone with zero programming knowledge) it’s probably just as easy to teach them Haskell as it is C++ or Java.

Functional programming is great, and I love it, but many programmers don’t even have a firm grasp on recursion, which makes pure languages like Haskell a mountain to climb.


> I think part of this is because my school, like most, started students off with an imperative/procedural language (C++). If you have a clean slate (e.g. someone with zero programming knowledge) it’s probably just as easy

It's definitely not.

Source: I know a lot of people who had Caml classes before C classes; C was much easier. Also, in my school we learned C and LISP at the same time, and most people also found C easier.


That's a really interesting data point! It's entirely possible that functional programming languages are just objectively harder to learn.

I suspect that if someone has a strong background in mathematics they might take to functional programming easier than they might other languages. I could definitely be wrong though!


C++ is an extreme example, but almost all students learn it or similarly unpleasantly complicated stuff like Java.

> many programmers don’t even have a firm grasp on recursion

I can’t say I ran into more than a few people like this and I don’t think any of them stayed in CS.


I think anecdotes about the other programmers that we (people who use Hacker News) know draw on a rather biased subset of all programmers. Most of the programmers on this site are likely above average in terms of skill.

There are plenty of programmers out there that learned at boot camps, universities with poor CS programs, or who are self-taught, who likely have a much less solid understanding of CS fundamentals. Many of those programmers have never needed to understand recursion. I went to a school with a very average CS program, and many of my peers struggled with recursion. There were more than a few who passed the class (Data Structures) and graduated alongside me despite not fully grasping the concept of recursion.


> There are plenty of programmers out there that learned at boot camps

Some of the best (functional) programmers I know learned (functional programming and programming in general) at boot camps. I think the predictor for FP proficiency is not education (and I hesitate to share my predicted predictors here). In any case, they certainly understand recursion.


Haskell can be unfriendly and still more friendly than some of the least friendly languages. I still posit that Python or JavaScript would be far more familiar (especially syntactically) to far, far more developers than Haskell or Nix.


JavaScript and Python are both horribly unfriendly if your end goal is to write complex, reliable software. I say that as a big fan of python.


I largely agree, but the problem isn't the syntax, which is the bit I'm proposing borrowing into Nix. I'm not saying "Nix should have Python's performance or package management" or "Nix should be dynamically typed like JavaScript" (it already is) or any of the other things that make these languages a bummer for significant software projects. I'm saying Nix should be familiar and intuitive to as many programmers as possible, and most programmers can intuit their way around Python or JS syntax.


I'm relatively new to NixOS but this is my sense of things too - it might have been better to use a subset of Haskell for the Nix package manager, instead of a new untyped language, for the reasons you give.

One simple thing I like about Haskell is its readability. Instead of cramming both type signature and function definition into the same line, Haskell splits those two into separate lines. Having the type signature in its own line, as the first line of every function, is so much more readable.

It's also interesting that you can outline/pseudo-code an entire program structure using only type signatures and no function definitions, check if the type signatures are all correct, and then fill in the function definitions later.

That said, I'm still new to Nix, and there are probably good reasons it was done this way, buried in past discussions, the creator's PhD thesis, etc. that I haven't read yet. And I hear that types are being looked into for future upgrades to Nix.


I'm not a Haskell user, but my assumption has been that the Nix language being something of a DSL has meant that it's able to include certain kinds of convenient optimizations such as directly referring to files by their relative path.

Is this bogus, or would you imagine that a substantial number of additional wrapper/helper/noise functions would be needed to make a general-purpose language do what the Nix language does?

I guess Guix is probably an opportunity to look at a practical instance of this first-hand.


Not the parent, but I think if you tried to do Nix in Haskell, the syntax would be seriously gross. For starters, you couldn't use Haskell's record syntax for Nix's sets, because they're not Haskell records. Types wouldn't help that much when you come to do lookups in these sets: if you get the name of a key wrong, it's a runtime error.

The ability to use file paths in Nix gives you a very cheap mechanism for splitting code across files. In Nix, the expression `import <filepath>` will evaluate to the result of evaluating the file `<filepath>`. But in Haskell, every such import would need to be declared at the top of the file, and the module name would need to be duplicated at the use-site, almost certainly fully qualified to avoid nameclashes.
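
A small sketch (the file names are made up):

    # lib/greeting.nix
    {
      greet = name: "Hello, ${name}!";
    }

    # default.nix
    let
      greeting = import ./lib/greeting.nix;  # no import declarations, no qualified module names
    in
      greeting.greet "world"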

I'm not overly bothered by type systems for things like Nix. In Nix, you're mostly just coding the construction of a single, reproducible value, which is trivial to exhaustively test: just construct it and see if it's right. If there's something wrong, you'll either

1) get the error immediately when you try to evaluate, and you might even be told helpful things like a list of strings was expected, not a list of numbers.

2) get an error when the derivation is built, due to an environment error that is unlikely to be detected by a type system.

Splitting this into two phases, compile then evaluate, wouldn't help me much.


Okay, yeah, that makes sense and aligns with my instinct about it.

I think the main costs of Nix being its own thing are probably in areas like documentation and performance. But a lot of the documentation issues with Nix aren't actually issues with the documentation of the language (which is fairly minimal) but rather with the documentation of nixpkgs, which contains mountains of magical helpers and other tricks with horrendously bad documentation and discoverability.

That infrastructure (and the corresponding documentation gap) would likely exist regardless of the base language.


> have used at least in university

That’s showing a bit of bias.

Personally I’ve been in the industry for roughly 20 years and I’ve yet to meet anyone IRL who has even dabbled in Haskell.

Basing your packaging format on Haskell is a great way to alienate most Linux users without a specialised university education, and I’m wagering that’s a fairly big portion of desktop Linux end users.


>Personally I’ve been in the industry for roughly 20 years and I’ve yet to meet anyone IRL who has even dabbled in Haskell.

See, that's the bubble ;) If you work at a company that makes archival, language-detection and OCR software, you will see C/C++, Haskell and OCaml... well, and Perl, in no time.

http://wiki.haskell.org/Haskell_in_industry

https://serokell.io/blog/top-software-written-in-haskell


> See, that's the bubble ;) If you work at a company that makes archival, language-detection and OCR software, you will see C/C++, Haskell and OCaml... well, and Perl, in no time.

I think your scenario sounds a lot more like a bubble. It's certainly my experience that very few people have encountered Haskell whether in a university setting or a professional setting. Yes, there are some niches where Haskell is "not rare" (especially the programming language theory corner of computer science departments), but in general I think it's quite rare.


>I think your scenario sounds a lot more like a bubble.

Depends on the job you have, and how far you go out of your bubble.

>but in general I think it's quite rare.

Like Fortran... but not if you work at CERN.

And COBOL... but not if you work at a finance institution.

And PL/1... but not if you work for IBM.


Whether or not something is rare globally doesn't depend on whether or not it's rare locally. I think most folks at CERN would agree that Fortran is not widely used even if they use it regularly and many others in their field use it regularly. Indeed, Fortran can be rare even though it's likely used more frequently than many programmers are aware!


I started on my Nix journey a few months ago, and in some ways, I've had the worst-possible experience: I had basically no ramp-in period of being just a "Nix user" before attempting to write my first package, but was instead thrust immediately into a massive project templating out generated definitions for 1000+ interlinked packages with a bunch of oddball requirements, funky build tools, etc. I've needed to manage version and patch overrides, obscure linker conflicts, and stuff that broke at runtime due to unpropagated buildInputs. I'm also a functional programming novice, so the recursive combinator magic that makes the overlay system work was also something completely new that I've had to wrap my head around.

So yes, all this has been... somewhat harrowing. But overall I would say I'm quite pleased with the end result and impressed with what Nix has enabled. Despite its long evolutionary growth, many of the things that I've encountered feel thoughtfully considered and well made. The various ways that builds and binaries can be pushed around and shared is awesome, and I'm excited for the optimizations coming with the content-addressed store.

I don't think that making the Nix language or package definitions feel more conventional would necessarily be a good thing— I think there's a risk that those kinds of changes obscure the underlying realities and lead to expectation mismatches.

That said, one thing that I do feel is pretty unfortunate is the interplay between the shell and Nix in package definitions. Basically you pass a dict to mkDerivation, and all the stringy items in that dict become available to your build's environment as envvars. This means when writing out strings for your configurePhase, buildPhase, etc, you have to be hyper-aware that ${thing} will be templated by Nix in the Nix context, whereas $thing will be templated much later, by Bash. It's something that is learned quickly but my goodness is it ever a hell of a tripwire.
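
A contrived sketch of that tripwire (the package and the flags are made up):

    { stdenv, zlib }:

    stdenv.mkDerivation {
      pname = "example";
      version = "1.0";
      src = ./.;
      buildInputs = [ zlib ];
      # ${zlib} below is substituted by Nix while evaluating the expression (it
      # becomes a store path); $out and $NIX_BUILD_CORES pass through as-is and
      # are expanded by Bash when the phase actually runs.
      configurePhase = ''
        ./configure --with-zlib=${zlib} --prefix="$out"
      '';
      buildPhase = ''
        make -j "$NIX_BUILD_CORES"
      '';
    }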


Nix needs a shell language integrated into Nix. Nix should be usable as an OS shell. That would make complex scripting so much easier than the weird passAsFile bizarreness.


What's the "weird passAsFile bizarreness"?


Per https://nixos.org/manual/nix/stable/#sec-advanced-attributes:

> passAsFile: A list of names of attributes that should be passed via files rather than environment variables.

Looks like it's basically just a hack for values that do or might exceed whatever the limits on environment variable size are.
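
For illustration, a sketch of how it's used (the attribute name `text` and the config file are made up; Nix exposes the file's path as `$textPath`). As far as I can tell this is also how nixpkgs' `writeText` works under the hood:

    stdenv.mkDerivation {
      name = "bundled-config";
      text = builtins.readFile ./big-config.json;  # potentially too large for an environment variable
      passAsFile = [ "text" ];                     # so ask Nix to pass it via a file instead
      buildCommand = ''
        # Nix writes the value to a temporary file and exports its path as $textPath
        install -D -m 644 "$textPath" "$out/etc/big-config.json"
      '';
    }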


This doesn't address the dynamic vs static typing issue you brought up, but for the rest, you may be interested in exploring guix.

Guix is essentially a reimplementation of the Nix packaging system using a pre-existing language: Guile Scheme.

Some may react adversely to them using a Lisp (<s>well, enlightenment is not for all ...</s>), but leaving that aside: Guix has continued to evolve in slightly different ways from Nix, but the core (and the guarantees it provides) remains essentially the same.


The Lisp part is great, but sticking to only libre software makes it a hard pill to swallow.


Right, and I think (I hope I'm wrong) that guix makes it difficult to boot from ZFS, even though ZFS is libre. NixOS makes ZFS very easy.


I think you're wrong. Guix has a zfs package. The build farm just won't give you a pre-built binary due to license conflicts.


> 1. Add types to the Nix expression language

Interestingly, this was one of the first issues that Edolstra (creator of Nix) raised himself, but unfortunately he closed it again as unrealistic a number of years later [0].

[0] https://github.com/NixOS/nix/issues/14#issuecomment-37820190...


Thanks for linking this - reading through the thread made me discover a cool related project! -> https://github.com/tweag/nickel/

See esp.: https://github.com/tweag/nickel/#related-projects-and-inspir...


I'm not sure what kind of C/C++ projects you install, but most of them are still ./configure && make && make install.

CMake can be worse, and Meson versionitis can be annoying. It is also annoying to have to install Meson and Python in stage 1 of Linux From Scratch now. Yuck!

The languages aren't fundamentally insecure; there are tons of reliable projects out there.


For the kind of reproducibility nix needs, ./configure && make && make install is not enough. It needs to track and manage the artefacts created by the build system, not just run a script.


Except, most of a NixOS system is built by simple Autoconf scripts and some env vars being set correctly.

I mean, I built a whole Nix distribution for embedded devices, with runit as init, cross compiling, and everything, and Nix made it easy and fun. And I learned that it's just Autoconf and shell all the way down. There's no magic to Nix. The most voodoo, non-standard thing they do is the patchelf stuff, but most of the NixOS-specific stuff is just setting paths for configure scripts or in the environment.
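
For example, a typical Autoconf-based package needs little more than this (a sketch; the name, URL and hash are placeholders). stdenv's generic builder runs ./configure, make and make install itself:

    { lib, stdenv, fetchurl }:

    stdenv.mkDerivation rec {
      pname = "some-autoconf-tool";  # hypothetical package
      version = "1.2.3";
      src = fetchurl {
        url = "https://example.org/${pname}-${version}.tar.gz";
        sha256 = lib.fakeSha256;     # placeholder; substitute the real hash
      };
      # No custom phases needed: the defaults run ./configure --prefix=$out, make, make install.
    }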


I'm interested in your work (nix for embedded). Do you happen to have some blog posts or repos somewhere about this?


I'm building a small personal server device that lets you self-host services in containers and then access them via proxy over WebRTC (so you don't even need a dedicated IP).

I'm in the process of moving my website over from AWS to my own infrastructure (I was using S3 and their CDN for my blog, but unfortunately the Parler debacle made me rethink my association with them), but the repositories (which are old and need to be updated... this is my hobby project) are on GitHub.

This is my nixos extension to use musl and runit instead of glibc and systemd: https://github.com/intrustd/appliance

built off of my own fork of nixpkgs: https://github.com/intrustd/nixpkgs (forked from the 19.* releases, I think... some changes need to be merged upstream).

Currently, I have it running on an Odroid HC2 and have used it to share my photos with family. Unfortunately, most home internet connections just don't have enough bandwidth even for modest photo and video sharing, so I'm exploring other means of data transfer rather than a strict client/server thing (maybe using BitTorrent over WebRTC to share larger files). In either case, all the data remains owned by me and physically with me. The system can generate URLs that can be used to access the device for limited amounts of time. So, for example, I can ask my photos app to generate a URL I can send to my aunts to show them pictures of my daughter, and this URL will last for X days until the token expires.


Very cool and ambitious project! Props to you for standing your ground for Freedom of Expression.

I know this is a non-issue for your case, but I wonder how much more expensive self-hosting a blog (plus a few web services) on an SBC would be compared to cloud solutions. Have you calculated the numbers (electricity usage, traffic, etc.)?


This isn't really 'designed' for public blogs, more for sharing content with close friends. It's still in an exploratory phase. The apps (which are just containerized nix expressions) can share content with one another, and with apps on devices that you've paired with your server. Then your friends can see the content you've posted and you can see theirs. That's the intended use case. It's not really intended to host a super traffic heavy blog.

My Odroid HC2 takes very little electricity. Haven't run the numbers on it.


The builds are not guaranteed to be reproducible.


Nix doesn't guarantee reproducible builds. That's why there's a whole signing framework.


There is no disagreement there. Reproducibility is definitely a goal though.

I don't really understand your line of argument.


It's a goal but not a fundamental design requirement for Nix packages. It seemed like you were implying that you had to make Nix builds reproducible in order to get them working with nixpkgs, which would greatly complicate things.


Yeah, if every language had its build system in order, we wouldn't have needed Nix as much, so I'm definitely grateful for what Nix does. But some problems can only be elegantly solved a level below Nix.


I think I might be misunderstanding you, but that's completely false. Nix sandboxes things and every single C project I've used with "./configure && make && make install" works with zero boilerplate.


Yeah, it definitely works, and nix works around the problem, and lets you, the user, install the packages, but the builds are not guaranteed to be reproducible.


They are as reproducible as you can get with C builds and far more reproducible than most other build systems across many different languages.

https://r13y.com/


Yeah, that's the point: as far as you can get with C builds isn't far enough. We're criticizing C build systems here, not Nix.


I mean is there a build system not named "bazel" that does a better job of making reproducible builds than a C autoconf project built with Nix?


There's a lot of wiggle room in "better", but there are a decent number of distros for which ~95% of the packages are built in a reproducible way, including Debian, Arch, and Yocto: https://reproducible-builds.org/citests/


And a lot of those packages use autotools; I don't see the C build system as being particularly hostile to reproducible builds.


Yep, and those distributions function just fine without nix. The point is that that's despite the build system rather than assisted by it.


scons has entered the chat


About point 3:

I don't think we should just ignore the immense technological knowledge encoded in C/C++ projects throughout the decades, only because they came from an era in which having a package manager for every language was not the fad of the day.

Let's see how those "memory safe" languages would fare 20 years down the line...but I predict there would be a cool_lang and people would advocate for rewriting every algorithm known to man, yet again


I sympathize with this point, and I tried to be quite clear that I'm not suggesting we drop everything cavalierly today, but rather that we decide our aspiration is to deprecate C/C++ and move our dependencies gradually from C-based to something else. Some dependencies will be harder to move because they require a lot of specialty knowledge (e.g., HarfBuzz) or simply due to their size (e.g., Chromium), but many others could be moved more readily. It will be a long, hard road, which is why we should start now.

> Let's see how those "memory safe" languages would fare 20 years down the line...but I predict there would be a cool_lang and people would advocate for rewriting every algorithm known to man, yet again

This just reads like you've taken personal offense. My argument is quite clearly not "C isn't cool enough" or "C isn't memory safe", it's "C lacks a standard, sane build system such that packaging C projects is a nightmare".


I'm not experienced enough to take a side in language wars... It's just my personal opinion, as someone without a CS background, that deprecating working solutions because of non-theoretical problems (like difficulty of packaging) is not a sane engineering approach.

I mean, yeah, right now Rust/Go have much better tooling than C/C++ for packaging and dependency resolution (it's not even a debate), but still, one may argue that C/C++ library management is decentralized :) and maybe in the future people will converge on some unified method of dependency resolution that would deprecate the Rust/Go way of doing things (like Nix/Guix).

I came to this view because I see lots of cool projects in very specialized fields that are practically abandoned, with no one continuing to work on them, for the sole reason that the community thinks it's not worth maintaining them in an old language/tech.

Again, consider that I'm mainly an embedded guy... so idk, maybe a fresh rewrite of every classical CS problem every 10 years is better in practice...


Fear not, I too come from an embedded background. I appreciate the "you can pry C from my cold dead hands" culture in embedded. I understand that many of the arguments against C feel like "C isn't sexy enough". I understand that many have promised that Java, C#, C++98, etc are a better fit for embedded systems than C and failed to deliver.

I'm not saying that we should deprecate C because it's old or unsexy. I'm saying there are practical problems with the C ecosystem that are unlikely to work themselves out which makes packaging downstream software really, really hard. This is a bad fit for domains (like SaaS) where the pace of software development is quite rapid.

> so idk, maybe a fresh rewrite of every classical CS problem every 10 years is better in practice

In this case we wouldn't even need to rewrite things if the maintainers of these software projects would opt into a sane package manager (there exist sane package managers for C, for example, Conan) but most maintainers of C projects are militantly opposed.


Not ignoring, but minimizing the surface, for example:

Should the default "webserver" really be written in c? Or maybe just the one for embedded or high performance?


Yeah, best language for the task...but the parent was talking about not using a dependency just because it is written in C or C++


No, he was talking about "minimizing" c/c++ dependencies.


To your point 3: I don't think it's too controversial. Some tools are absolutely OK being written in C, like all the coreutils, but a monolithic kernel written in C is probably the biggest problem:

https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...


I haven't tried it myself yet, but Dhall is able to output Nix files, which makes it possible to write with types in Dhall and then output the Nix files (without type annotations) where they would be used.


> you'll still spend tons of time trying to package obscure C dependencies

but my question would be: is this exactly the same issue that all other package maintainers have, e.g. for rpm, deb, etc.?



