Hacker News | midchildan's comments

I don't want to login to github.com when I'm on my work computer. That's going to take me one step closer to uploading company internal stuff by accident.


It's not only code search that is affected. Issue and pull request search are affected as well.

Here's a screen recording of that happening:

https://imgur.com/a/BT6uRIe

It's been quite a while since GitHub started gating code search behind a login. However, they recently started gating other types of search as well. The worst part is, you don't notice it immediately, for two reasons. First, it doesn't happen on all repositories. I haven't experienced this with NixOS/nixpkgs yet, for example, possibly because the high volume of activity going on there prevented the switch. Second, the search results do show up right after you hit the search button. It's only once you start to navigate around the results that a login screen appears. I can't help but feel like they're testing the waters by not making it immediately obvious that this is happening.

The inconvenience doesn't stop there. As you can see in the screen recording, once you've been shown the login page, GitHub will continue to show it even if you hit the back button. On top of that, the search query is sometimes not included in the URL, making search results difficult to share.

I get GitHub wanting to require logins for code search, since it takes up computing resources. However, there's something to be said about gating issue and pull request search for open source projects without notifying the project maintainers.


My thoughts exactly. Container-based development environments have a Cygwin-like clunkiness to them, but even clunkier, because of the default lack of access to the host system and the additional burden of having to manage multiple containers.

Just to be clear, Cygwin is a great project. But it's not something I'd actively choose over a native Linux system if I had the choice of operating systems.


It's really not. Docker is a tool for binary distribution, and Docker images are those binaries. The Dockerfiles used to create Docker images have no mechanism to ensure reproducibility whatsoever. Unlike with a Nix expression, you can't take an old Dockerfile and expect the build to still succeed.

Furthermore, Docker images are monoliths that are isolated from the system. Instead of system administrators choosing the isolation boundaries that best fit their requirements, the image packagers put walls around each and every image. As a result, Docker containers are inflexible compared to traditional Unix applications, and piecing together multiple Docker containers is a clunky experience.

In contrast, Nix is an integrated build/configuration/deployment system. Entire systems down to the package level can be reproduced from Nix expressions.

It also gives system administrators power over the whole process. Administrators have control over things like how the components are pieced together and where isolation boundaries are placed. They can even tweak the package build steps.

Docker can't substitute Nix. It's not a matter of elegance. Their features and capabilities are wildly different.


I agree for the most part except I don't find that this matters in practice:

> Dockerfiles used to create Docker images don't have any mechanism to ensure reproducibility whatsoever

Simply generate it once and keep it as an immutable artefact.


I think Nix is relevant here, because being able to run software reproducibly across different machines is one of its major selling points. I particularly like that it doesn't rely on virtualization or containerization to do that. It's up to the user to decide how to isolate the runtime environment from the host, or whether they even should. Alternatively, tools building upon Nix can make that decision for them. Either way, it allows for a more flexible approach when you have to weigh the pros and cons of different isolation strategies. As a result of this design, development environments defined with Nix tend to compose well, too.


I think there's a bit of confusion caused by equating Nix "derivations" with "packages" of traditional package managers.

Nix mainly concerns itself with derivations [1]. They're build recipes for creating binary artifacts, meant to be consumed by the Nix daemon. The Nix daemon instantiates a derivation by building the artifact and storing it at a store path under /nix/store. Store paths are unique to each derivation.
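Here's a toy sketch of that idea in Python. It is not Nix's actual hashing scheme (the real one hashes a serialized derivation and its input store paths), but it illustrates why a store path is unique to the recipe's content:

```python
import hashlib

def store_path(recipe: str, name: str) -> str:
    # Hash the full build recipe (inputs included), not just a name/version
    # pair. Nix's real scheme differs; this only illustrates content
    # addressing: any change to an input yields a different path.
    digest = hashlib.sha256(recipe.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}"

old = store_path("src=hello-2.12; cc=gcc-12", "hello")
new = store_path("src=hello-2.12; cc=gcc-13", "hello")
assert old != new  # bumping the compiler changes the store path
```

Because the path encodes the whole recipe, two builds with the same name and version but different inputs can never be confused with each other.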

When people say Nix is reproducible, they mean that derivations are reproducible [2]. This is because anything that might cause the build to change is captured as an input to the derivation, and every input is explicitly specified by the author of the derivation. This means that when a dependency gets updated, the resulting derivation and store path change. The new derivation might fail to build, but the old one would still continue to build regardless of how much time has passed since it was first built. So if the latest package in Nixpkgs is broken, you can always go back to a known good commit to get a working derivation while waiting for the package maintainer to fix it [3].

Traditional package managers don't have a concept of a derivation. Instead, they have packages, and those packages have no reproducibility whatsoever. Even if they built successfully in the past, they might not build today. That's because a traditional package is identified only by its name and version, as opposed to a Nix derivation, which is identified by its content (= the build recipe) [4]. Traditional package managers see two incompatible builds with the same name and version as the same package, replaceable with each other. Worse, most package managers don't even require versions to be specified as part of dependencies. Whether a package builds or not then depends on the current state of the central package repository. Again, this isn't the case with Nix derivations.

[1]: Internally, Nix doesn't even have the concept of a package. A package is a concept that we humans use to group related derivations together.

[2]: To be clear, derivations aren't bit-by-bit reproducible. For example, CPU caches would be observable during builds because in general, process sandboxes don't prevent hardware information leakage. However, it's reproducible in a practical sense because people would have to go out of their way to make software builds dependent on things like CPU state. People might do that as a joke, but not for any serious reason.

[3]: Ideally, tests and reviews should catch any breakage but sometimes it happens. Hence the rolling release branch is marked "unstable." Fortunately, it's also easy to apply fixes locally before they're available in Nixpkgs because Nix makes it straightforward to create a custom derivation by extending existing ones.

[4]: Not to be confused with content-addressed derivations, which are identified by the resulting binary artifact.


For [2], there's a sandbox from Facebook that isolates tests (and builds) from CPU non-determinism. I've raised a ticket about it on Nix. So really, it's just another derivation sandbox.


> has been well worked out for Chinese and Japanese.

Not really. Typing CJK text is a tedious and frustrating experience. People cope with it because no one has come up with a better alternative.

Both Chinese and Japanese have way too many characters to fit on a keyboard. To deal with this, CJK users type a phonetic spelling in Latin letters and convert it to CJK characters using software called input method editors (IMEs).

The problem is, this conversion process is often not straightforward. Unlike English, CJK sentences don't have spaces in them, so IMEs have to guess how to split sentences into words. Then, they have to identify the right words, which is complicated by the existence of homophones.

To deal with all the ambiguity involved in this conversion process, IMEs rely on predictions to provide users with a list of possible conversion candidates.
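As a toy illustration, here's a dictionary-lookup sketch in Python. The candidate table is a tiny hypothetical sample (these are real Japanese homophones, but no actual IME works from a flat table like this):

```python
# A phonetic reading maps to several homophones, so the user must pick one.
CANDIDATES = {
    "kouen": ["公園", "講演", "公演"],  # park, lecture, performance
    "kikan": ["期間", "機関", "帰還"],  # period, organization, return
}

def convert(reading: str) -> list[str]:
    # A real IME also segments whole sentences and ranks candidates with
    # statistical language models; here we only do a dictionary lookup.
    return CANDIDATES.get(reading, [reading])

print(convert("kouen"))  # three equally valid words; the user has to choose
```

Every ambiguous reading turns one keystroke sequence into a menu, which is exactly the constant-interruption problem described above.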

This leads to an awful typing experience in which users constantly have to choose from the IME-provided list for every little snippet of text. This is distracting, slows down typing, and makes key presses non-deterministic. To make matters worse, IMEs sometimes get things wrong, and users have to get "creative" to work around them.

In comparison, typing English text is a breeze. You can just type what you want directly.


Treating packaging boundaries and runtime isolation as the same thing is exactly the problem with Docker and similar solutions. Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.

This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess. The runtime isolation boundary set by Docker doesn't represent any sort of logical component or security boundary in your system. It merely reflects how the underlying image was built.

This is a classic anti-pattern of mixing up policy with implementation. Runtime isolation policy should be independent of build time implementation. Nix gets this right with better design and composable packages. It's trivial to create a container that includes only the packages you want, with dependencies handled automatically by Nix. Docker, on the other hand, leaves you with a binary blob (i.e., Docker image) that's neither composable nor customizable.


> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.

But Docker in fact doesn't have compile-time dependencies; it needs you to specify runtime deps only. If you want to build something in Docker, you use two-stage builds, and the runtime deps of the first stage become your compile-time deps.

> This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess

I don't get this, why is it considered an overcomplicated mess? If you want to run several processes in one container, you just launch it with a lightweight process manager, if you want to run it in separate containers -- well, that's even easier, just launch separate containers and configure the communication between them with a network.

> Nix gets this right with better design and composable packages

Nix actually implements it worse than Docker in some sense. Particularly, the exact problem that you described:

> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime

is not solved in Nix: runtime deps must be a subset of compile-time deps.

> that's neither composable nor customizable

Compositionality is a completely different issue, on which I do have problems with Docker. DAG-oriented environment building is strictly better than inheritance-oriented, but that's all orthogonal to the compile-time/runtime separation.


I believe Stow can fix everything except #2 [1], although I haven't actually used it before. But it's also easy to create your own "garbage collector" that cleans up dangling symlinks in your home directory.

All you need to do is to keep track of symlinks you've installed with your setup script. This can be done by creating a symlink to the symlink you've installed, which acts as a "GC root." The next time you run the setup script, it would check those "GC roots" to see if they point to a valid file and remove any dangling symlinks.
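The approach above can be sketched in Python. The state-directory layout and function names are my own invention for illustration; a real dotfiles script would add conflict handling and a better root-naming scheme than the link's basename:

```python
import os
from pathlib import Path

def install(link: Path, target: Path, state: Path) -> None:
    """Create the symlink and record a 'GC root' pointing at it."""
    link.symlink_to(target)
    state.mkdir(parents=True, exist_ok=True)
    root = state / link.name  # simplification: assumes unique basenames
    if not root.is_symlink():
        root.symlink_to(link)  # GC root -> installed symlink

def collect_garbage(state: Path) -> None:
    """Remove installed symlinks whose targets vanished, plus stale roots."""
    for root in list(state.iterdir()):
        link = Path(os.readlink(root))
        if link.is_symlink() and not link.exists():
            link.unlink()   # dangling dotfile symlink: clean it up
        if not link.is_symlink():
            root.unlink()   # nothing left to track for this root
```

Note that `Path.is_symlink()` is true even for a dangling link while `Path.exists()` follows the link, which is what lets `collect_garbage` tell "still valid" apart from "dangling" and "already gone".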

This is the approach I take for my own dotfiles. I seriously considered using Stow or the bare git approach before, but I decided against it because setting up my dotfiles involved more than just installing symlinks. I had to be able to download files (e.g., vim-plug), clone git repositories, change file permissions, and maintain files that aren't meant to be linked into the home directory. I found the flexibility of a custom shell script the best fit for my needs.

[1]: The "Deleting Packages" section in https://linux.die.net/man/8/stow

