A great dev can even be great at fixing problems, performing ops, or doing maintenance, by my same qualification. _They ship fixes._ Perfection has very little to do with greatness. Aspiring to it might, but that's a second discussion, about unrealistic goals and setting oneself up for disappointment.
You're the one who said you could describe "great" in two words that didn't include maintenance or quality. If it turns out those two words aren't enough, or need lots of asterisks and clarifying words, that's on you.
I don't get the feeling that you and the parent are taking my comment in good faith. Clarification is of course necessary when the parent's response is effectively a non sequitur/strawman compared to what the other respondents took my meaning to be; but I'd rather not point fingers here, as that's not useful to a productive discussion.
I chose not to add clarification because, to me, that unnecessarily constrains "shipping". One can deliver value, and have a track record of delivering value, across a very wide set of variables. I've found my heuristic to be far more elegant (if perhaps not precise enough to bear the rough seas of internet discourse) at mapping, after the fact, onto "Was this a successful business relationship?" than a much more hair-splitting definition would be, and it also helps me keep personal biases out of my judgement of someone else's success.
> I don't get the feeling that you and the parent are taking my comment in good faith.
I'm only playing by the rules you yourself set out. People in this thread were discussing how measuring "greatness" is subtle and very difficult, and then you came in asserting you could solve it with a snappy 2.5-word mantra. If you're now claiming that additional clarification is needed, well, yeah, that's what everyone was saying to begin with.
The problem honestly seems like less of a debate about defining greatness and more about defining shipping, at this point.
Maybe this is nitpicking, but I've had this conversation in person more times than I can count (during interview loops, review cycles, and over beers), and I'm hard-pressed to think of the last time I got such pushback against something that seemed pretty cut-and-dried; namely, "did you get done what you needed to get done, without undue pain and suffering."
I'll openly concede that I could very well be in a "communication bubble" where words like "shipping" carry loaded context. I'd still defend the point that if one substituted the isomorphic terms for one's own space, the intent of my message holds water as a useful heuristic, if a rather reductionist one. Put simply: someone whom I can trust to fulfill their role without "fires everywhere" gets two thumbs up in my book.
That being said, I'm honestly blown away by the number of downvotes I've been getting for what I'd always seen as the pillar of "meritocracy": that you get your job done without burning down the house. I wish more of the opposition would at least take the time to express _why_, as opposed to just burying this. At this point I feel like I'd do better to "save my account" and stop commenting, but alas, this is a topic close to my heart.
1. Lack of clarity as to how language/application package managers (pip, stack, Vundle) interact with Nix. Pretty much every time I've asked about this, I've been told to go use `nix-shell` or to install things through Nix. Increasingly, when I get odd behavior from applications I install through Nix, my first resort is to uninstall the Nix version and install it from apt; it might be a bit older, but I'm sure it'll work as expected. I've gone through the apt package -> Nix package -> apt package cycle three times that I can remember off the top of my head: with python/ipython, with Haskell tooling, and with Vim.
2. Nix on Ubuntu feels like a second-class citizen. Things that interact with graphics drivers, e.g. video players, often don't work properly. (I understand that there are technical reasons why this is the case, but there's no warning that it is.) Online package search only appears to exist for NixOS (https://nixos.org/nixos/packages.html#), not for Nix on other platforms. Nox helps, but it isn't equivalent feature-wise (there's no way to see the info you get by clicking a package name), and it's also slow.
3. In general package quality is not great for less frequently used packages. Inkscape was missing potrace for a while. Rarely used packages go unmaintained.
4. Poor CLI. Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design (rough illustration below). There's no feedback or suggestions if you type the wrong thing to `nix-env -e`. It looks like there are major changes to this in 2.0, so this might have been improved.
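Roughly, the two lookup mechanisms look like this (the package name is just an example, and the attribute prefix assumes the default `nixpkgs` channel):

# without -A, nix-env matches on the package *name*, which means
# evaluating every derivation in the channel; slow, and ambiguous when
# several packages share a name:
nix-env -i hello

# with -A you give the exact attribute path; fast and unambiguous, but a
# second lookup syntax you just have to know about:
nix-env -iA nixpkgs.hello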
Despite being someone who's gotten their toes wet contributing to nixpkgs, I'm likely not going to be installing Nix when I upgrade from Ubuntu 16.04 to 18.04.
> Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design.
That stems from the namespace problem that Nix devs seem to think doesn't exist.
Packages that have the same name, or that belong to a different namespace (Hackage, pip, etc.), are confusing to find from a user's perspective. The real problem is that every package in a channel has to go into the same global namespace. Practically every package manager has this problem, but, in my opinion, Nix deals with it the most poorly.
To make matters worse, it isn't obvious which namespace .nix files are supposed to reference, or what functions exist globally. Nix really does need a lot of UX work, and I'm glad to hear it is getting some.
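A concrete example of that flattening (attribute names vary by channel; output elided):

# the same library shows up under several per-language package sets, all
# mashed into one global channel namespace:
nix-env -qaP '.*lxml.*'
# -> nixpkgs.python27Packages.lxml, nixpkgs.python36Packages.lxml, ...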
I have a related pain-point. I want to maintain nix packages on their own long-lived branches. I also want to base these branches on the latest stable release rather than unstable.
This would require upstream nixpkgs to treat my package branches as equal peers and to pull their changes with 'git merge' like in the Linux kernel workflow.
Status quo seems to be that upstream expects people to work directly on the unstable branch and to make contributions using short-lived topic branches that can be rebased at any time.
I spent a lot of time packaging Pharo Smalltalk for nixpkgs but I found it too complicated to stay in sync with upstream and so I ended up orphaning that package and redoing it on a downstream overlay repo that insulates me from the upstream nixpkgs workflow.
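For the curious, the overlay mechanism boils down to something like this (a minimal sketch; `./pharo` stands in for wherever the actual package expression lives):

# ~/.config/nixpkgs/overlays/pharo.nix -- picked up automatically by nix-env
self: super: {
  # adds/overrides the package downstream, on top of whatever nixpkgs
  # revision is currently active, without touching upstream
  pharo = super.callPackage ./pharo { };
}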
This was discussed a bit over at https://github.com/NixOS/nixpkgs/issues/27312 but perhaps didn't get enough visibility because I also stubbornly prefer to work on Github and don't follow the mailing list.
If I may rush to the commenter's rescue: Darch addresses your issues with NixOS by combining a familiar set of tools (Arch Linux) with a stateless architecture. It's a fantastic welding of extreme package availability and best-in-class documentation with declarative dependability.
It's not clear to me that this really solves the same problems that Nix does.
I'd imagine this solution inherits any problems that pacman has. The Arch wiki states that "if two packages depend on the same library, upgrading only one package might also upgrade the library (as a dependency), which might then break the other package which depends on an older version of the library." (https://wiki.archlinux.org/index.php/System_maintenance) This is one of the problems Nix does not have by design; in fact Nix lets you mix and match packages painlessly.
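For instance (a sketch; the checkout path is hypothetical, and the attribute prefix assumes the default channel):

# most packages come from the regular channel...
nix-env -iA nixpkgs.inkscape
# ...while one comes from an older, pinned nixpkgs checkout; the two can
# coexist because every version lives in its own store path:
nix-env -f ~/src/nixpkgs-17.09 -iA vlc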
This also doesn't seem to allow unprivileged users to install packages, which is kind of a side benefit of using Nix.
Docker-esque solutions' issues with reproducibility are well known; I look at https://godarch.com/concepts/images/, particularly how packages are installed:
#!/usr/bin/env bash
pacman -S steam --noconfirm
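# (this resolves "steam" and its dependencies to whatever is current in
# the Arch repos at build time; nothing is pinned)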
and I see the same issues that https://blog.wearewizards.io/why-docker-is-not-the-answer-to... warns of. Heck, the author even admits this is the case in an adjacent comment: "I want a machine that can be declared and rebuilt deterministically (at least _semi-deterministically_, rolling distro and all)" (emphasis mine).
Also, frankly, I don't want to run my personal computer like a server, with complete immutability and the need to build fat image files every time I want to try out an additional program. That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.
> if two packages depend on the same library, upgrading only one package might also upgrade the library
Yes, you are inheriting the nuances of whatever package manager you choose. Maybe another distro can give you truly fixed package versions? You can also use apt to pin versions. This isn't something that Darch introduced, but it also isn't something it solves. So yes, if you need 100% deterministic, Nix is your guy. I don't think this is a big issue on Ubuntu systems, though, or on most non-rolling distros; their apt updates are typically well tested and don't bump major versions.
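For example, on Ubuntu (the package name is just an example):

# keep apt from upgrading a package past its currently installed version:
sudo apt-mark hold firefox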
> need to build fat image files every time I want to try out an additional program
You can install your packages and use them when you like, without requiring a rebuild. Hell, that's even part of my workflow for some applications like Docker. I have dotfiles with "install-docker" and "install-vmware" aliases that install them whenever I need them, instead of baking them into images.
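The aliases are nothing fancy; roughly something like this (a simplified sketch, not the exact dotfiles):

# installs docker into the running (ephemeral) system on demand, without
# baking it into the image:
alias install-docker='sudo pacman -S --noconfirm docker && sudo systemctl start docker.service'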
> That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.
I disagree. If that were the case, then why Nix at all? Obviously, stateless is valuable. I have multiple machines that I perform common tasks on, and I hate having to manage the updates when I get back to each one of them. With Darch, I can deploy one single image to all devices and be confident they will never drift in installed packages/configuration. I never again have to ask myself "what machine am I on again?". Stateless may be the definitive way to run servers, but that by no means restricts it to servers. I have been running Darch for a few months now, and I find it incredibly useful and calming.
> So yes, if you need 100% deterministic, Nix is your guy.
There are two orthogonal problems here: a lack of package isolation and nondeterminism.
Nix isolates packages, such that updating one package has no impact on any other packages (with unavoidable exceptions like the graphics driver, presumably).
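Concretely, every package version gets its own hashed path under `/nix/store`, so an upgrade creates a new tree rather than overwriting a shared one (hashes shortened and illustrative):

/nix/store/2a42r9li...-openssl-1.0.2n/lib/libssl.so
/nix/store/9fk3mpqx...-openssl-1.1.0g/lib/libssl.so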
You're right that package isolation isn't much of a problem on non-rolling distros. One of the benefits of Nix is that you get some of the stability and predictability of an LTS distro with the freshness of a rolling distro when desired, without having to deal with package conflicts.
Incidentally, Nix doesn't need to be used in a deterministic manner. In fact, I don't think most desktop users of Nix care too much about determinism for most packages they run. I certainly don't; I'm happy to follow along with whatever arrives in my channel. Nix has features that support determinism, and I'm certainly glad they exist for when I end up needing them, but they're not necessarily why people use Nix.
> Obviously, stateless is valuable.
When I said "stateless", I was referring to the whole "cattle, not pets" view of servers, where the running state of any particular server is unimportant and nothing in the filesystem is of value. I was arguing that needing to build a new image and reboot in order to change which packages are installed is a poor fit for the desktop use case, where frequent reboots are much more inconvenient than in the server use case.
I'm not sure what taohansen meant when they used the word "stateless"; they seem to mean something different when they say that.
Anyways, this point is not really applicable anymore, since you've stated:
> You can install your packages and use them when you like, without requiring a rebuild.
Presumably if you install additional packages uncontrolled by your tooling, then your systems can start to drift away from each other.
Nix does not have this compromise; there's no build step. At any given point in time you can reproduce whatever configuration you have on one machine on another machine, regardless of how piecemeal you arrived at that configuration.
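You can even do it in one step with `nix-copy-closure` (assuming SSH access; the user/host are placeholders):

# copies the current user environment, plus everything it depends on,
# to another machine:
nix-copy-closure --to alice@machine2 $(readlink -f ~/.nix-profile)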
Do you have more of the 'why' than the 'what' described there?
The samples seem to be targeted at end users/personal computers (with images like 'gaming' and Steam mentioned), while most immutable systems I see are deployment/server environments.
I've dabbled in Nix before and Darch _could_ be interesting. Can I read more somewhere?
The "why" is exactly the same reason NixOS exists. I want a machine that can be declared and rebuilt deterministically (at least semi-deterministically, rolling distro and all). I looked into NixOS, but the DSL was too much, and the npm/pip/etc stuff was a mess. I am a fan of Arch because of it's package availability and documentation, so I figured out a way to combine the two, using a "Docker-ish" approach.
My machines are built entirely on Travis-CI and pushed to Docker Hub. Once I make a change to my recipes, ~20 minutes later I can pull a fresh image and boot again.
Another thing I didn't like about non-declarative OSs (non-NixOS) was that if I just wanted to test a package out, removing it afterwards would leave shards of config/package dependencies behind on my system. With Darch, each boot has a tmpfs overlay, which means I can install/uninstall to my heart's desire, knowing that only the things I commit to my recipes will be persisted. For example, when I was setting up Ruby, I had to try many Ruby environment managers before I found one I liked. After a reboot, I was certain that the other Ruby packages I'd tried were 100% scrubbed from my machine.
I also like the Docker approach because, using layers, I can quickly switch out the "desktop environment" layer for i3/Plasma/GNOME/etc., or my base image for Ubuntu/Arch/Void Linux. This makes distro and DE hopping a breeze.
As for using Darch on a server, I would wait until I get the Ubuntu image done; that way, the builds will be more deterministic (instead of using rolling distros). I can see using it for servers, or for IoT devices. I also intend to add PXE support for booting these images from the network, making it easy to manage the operating system on a fleet of devices. In summary, it really comes down to your recipes and what operating system you choose.
The Chrome backspace changes were justified, I think, by the large numbers of accidental actuations due to text input boxes not being in focus. Plus they added some text indicating which replacement shortcut to use.
This article prompted the following questions for me:
1. How well did the historical systems do, and why? Presumably non-negligible key travel is not a modern invention.
2. How much of the latency is key travel? (i.e. what korethr said)
3. How exactly were the keys pressed/how fast were they pressed? All the article says about the experimental setup is "The start-of-input was measured by pressing two keys at once – one key on the keyboard and a button that was also connected to the logic analyzer."
1. I suspect it comes down to a single program having free rein over the system at a time, and everything living in either RAM or ROM (no page faults, etc.).
The patent in question is a patent on the obvious use, not the underlying method. "Deep convolutional neural nets trained using backprop or adversarial networks" is not the thing being patented here. "Using deep convolutional neural nets trained using backprop or adversarial networks to solve a specific problem" is what's being patented. The article argues that the known techniques aren't being applied in a particularly novel or interesting way.
I agree, and yes, that happens. But the author concluded, based on one example, that the whole system is bringing down an entire industry. I didn't title the piece; Joe did.
And in the article he argued "the [patent] office seems prepared to give out patents on using machine learning in obvious and expected ways."
That is not, and cannot be, the criterion for or against issuing patents.
A design that technically violates a law but is not actually dangerous (suppose the law constrains designs in a way that requires them to be less effective or less safe) is not necessarily a poor design, unless "complies with the law" is a higher-priority design objective than "functions safely as intended".
Not to be too flippant, but "complies with law" sort of seems like a requirement for non-government services too. But I may be reading this too broadly?