Hacker News
A Crude Personal Package Manager (nullprogram.com)
64 points by signa11 on Mar 28, 2018 | 27 comments



Might I suggest NixOS[0] or Nix[1] itself? It is still a little rough around the edges but serves the same purpose.

[0] https://nixos.org [1] https://nixos.org/nix/


I'm starting to look into that as it seems less awful[0] than package managers in general.

My present setup is using doit[1] to automate setup. I know people use Ansible[2] for similar purposes.

The compelling case to my mind is that builds in Nix are immutable, and that's an amazing story for deploying software to prod, since it makes much stronger guarantees about what you can roll back.

I've actually set up docker images to do things like build old TeX documents simply because it's so bloody difficult to reproduce an earlier state. Nix seems like a much lighter weight answer to that.

[0] https://nixos.org/nixos/nix-pills/why-you-should-give-it-a-t... [1] http://pydoit.org/ [2] http://docs.ansible.com/ansible/latest/index.html


Along with Guix which is often mentioned together with Nix, there's also GoboLinux which has an interesting approach.


While we're on the topic of homebrew package managers, I think some people here might be interested in checking out Exodus [1] [2]. A lot of the project philosophy actually overlaps quite a bit with that of qpkg. Both projects seem to be focused on simplicity, produce tarballs which can be extracted anywhere and run on any Linux machine, and avoid collisions without a centralized database. Unlike qpkg, however, Exodus isn't concerned with compilation and does automatically handle dependencies. It does so by using the system linker to discover required libraries, and then creating small wrapper executables which invoke the linker with the necessary library paths.

I could actually see it being used in conjunction with a script like qpkg, where the script handles compilation and then invokes Exodus to produce self-contained bundles with all of the necessary dependencies. Exodus is generally agnostic to how the software it bundles was initially installed or compiled, which has led to some interesting use cases. For example, we've had a number of people use Docker images to install software from Debian or Alpine package repositories, and then use Exodus to repackage single applications to run in microcontainers.

- [1] https://github.com/intoli/exodus

- [2] https://intoli.com/blog/exodus-2/
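The wrapper-executable trick described above can be sketched roughly as follows. This is an illustration only, not Exodus's actual output: the bundle layout (`$bundle/lib`, `$bundle/bin`) and the linker filename are assumptions.

```shell
# Generate a tiny wrapper script that launches a bundled binary through
# the bundled dynamic linker, so the bundle's own libraries are found
# first. The directory layout here is hypothetical.
make_wrapper() {
  bundle="$1"; prog="$2"
  printf '#!/bin/sh\nexec "%s/lib/ld-linux-x86-64.so.2" --library-path "%s/lib" "%s/bin/%s" "$@"\n' \
    "$bundle" "$bundle" "$bundle" "$prog"
}

# Example: make_wrapper /opt/bundles/htop htop > ~/.local/bin/htop
```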


I've been using GNU Stow as a pseudo "package manager." Lets me install multiple versions of a piece of software. And I simply delete the directory when I don't need it anymore.


I'll second GNU Stow — I've been using it off-and-on for almost twenty years now (hard to believe that it's been that long!). It does have issues when pre/post-install scripts are required, but that could be easily wrapped.

I think Stow could be easily combined with the ideas in this article to yield something pretty awesome. It already supports clean uninstalls and checking for conflicts, and individual Stow dirs are easily tar-able too. Really, a wrapper around Stow that knows how to handle common configure/make/make install patterns sounds like it'd do the trick.
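Such a wrapper might look roughly like this: each package gets its own tree under a Stow directory, and Stow then symlinks it into place. The directory locations are assumptions, not Stow defaults.

```shell
# Hypothetical wrapper around the common configure/make/make install
# pattern: install into a per-package Stow tree, then symlink it in.
STOW_DIR="$HOME/stow"   # assumed location for per-package trees

stow_install() {
  pkg="$1"    # e.g. "htop-3.2"; run from the unpacked source directory
  ./configure --prefix="$STOW_DIR/$pkg" &&
    make &&
    make install &&
    stow --dir="$STOW_DIR" --target="$HOME/.local" "$pkg"
}
```

Uninstalling is then `stow --delete` followed by removing the package's tree.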


Guix builds on many of Stow's ideas and adds even more power. I started using Guix on Ubuntu some time ago and only found out about Stow later, so I haven't used it.


Guix is very cool, but I wish it were written in Common Lisp rather than Scheme. Absent re-implementing it myself, though, maybe I'll give it a shot.


I wrote something like that more than once. My use case: I'm given an account on a machine where I don't have admin access, but I still want my tools that aren't installed: tmux, git, htop, sometimes the latest version of vim, dos2unix, ack, etc. So I create ~/usr and start the process of downloading and compiling tar.gz archives with all the proper flags, most importantly --prefix. I automated this with bash scripts to varying degrees, but have always wondered whether there is a better, established tool for this.
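A minimal sketch of that --prefix workflow, assuming an autotools-style build; the URL and package names below are placeholders:

```shell
# Install user-local software under ~/usr without admin access.
PREFIX="$HOME/usr"

build_into_prefix() {
  url="$1"                    # e.g. a release tarball URL
  tarball="${url##*/}"        # strip everything up to the last '/'
  dir="${tarball%.tar.gz}"    # assume the tarball unpacks to this name
  curl -LO "$url" &&
    tar xzf "$tarball" &&
    cd "$dir" &&
    ./configure --prefix="$PREFIX" &&
    make &&
    make install &&
    cd ..
}

# In ~/.profile, put the user prefix on PATH and MANPATH:
export PATH="$PREFIX/bin:$PATH"
export MANPATH="$PREFIX/share/man:$MANPATH"
```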


It's been mentioned elsewhere in this thread, but Exodus is really great for this use case. You can just run

  exodus htop | ssh my-server-name
and your local machine's copy of htop will be installed in your home directory on the remote machine (along with all of the dependencies).


For the benefit of others: I spent quite a bit of time googling that Exodus thing (the name is way too ubiquitous), and I think this is it: https://github.com/intoli/exodus


I also keep user-specific binaries in ~/.local/bin. IMO, keeping things clearly separated between the user and system is just the most sensible approach. It also keeps the home directory fairly free of clutter.

Something that bugs me about the XDG Base Directory Specification [0] is that the default is to have a `.config` and `.local` folder in your home directory. I'd rather it all be placed under the single hidden local folder.

On macOS you have ~/Library for user-specific files, /Library for global non-system files, and /System/Library for system files. I think if you use the Network Location feature some apps will also store different preferences in ~/Library/Preferences/ByHost, although I haven't really used the feature so I might be mistaken. Honestly, changing your system preferences based on your location makes tons of sense to me! For a portable system, it doesn't make sense to use the same settings in your home network as when you're in an untrusted network such as a coffee shop or airport.

[0] https://standards.freedesktop.org/basedir-spec/basedir-spec-...
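For reference, the spec's fallback rules boil down to a few defaults under $HOME. A sketch of how a script might apply them (using the POSIX "assign default" expansion):

```shell
# Each XDG variable falls back to a fixed location when unset or empty.
: "${XDG_CONFIG_HOME:=$HOME/.config}"
: "${XDG_DATA_HOME:=$HOME/.local/share}"
: "${XDG_CACHE_HOME:=$HOME/.cache}"
export XDG_CONFIG_HOME XDG_DATA_HOME XDG_CACHE_HOME
```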


Out of interest I followed the link to "installed in my home directory under ~/.local/bin", because I use the same method for "global" user installation of some npm packages and arbitrary little tools I put together... anyway I discovered this:

> export PATH=$HOME/.local/bin:$PATH

> Notice I put it in the front of the list. This is because I want my home directory programs to override system programs with the same name. For what other reason would I install a program with the same name if not to override the system program?

I've always made a point of doing the exact opposite of this when adding to /etc/profile, with the reasoning of: why the hell would I want to allow any user script to be able to override system commands without root permission?

Have I gotten this all wrong? I admit I am no expert on bins and paths and security, but isn't the above a potential security issue?
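The two orderings in question, side by side (illustrative):

```shell
# Prepending: binaries in ~/.local/bin shadow system binaries of the
# same name.
export PATH="$HOME/.local/bin:$PATH"

# Appending: system binaries keep priority; user binaries only fill in
# for commands the system doesn't provide.
# export PATH="$PATH:$HOME/.local/bin"
```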


This is often called being "on the wrong side of the airtight hatch". If a program can write to ~/.local/bin, it can write to ~/.profile and change your path anyway, or set aliases.


> If a program can write to ~/.local/bin, it can write to ~/.profile and change your path anyway

You're absolutely right, but in the case that a program can write to ~/.local/bin but nowhere else, it is a concern. What I'm really talking about is when a package manager is using that directory and potentially untrustworthy packages can put whatever they want in ~/.local/bin but not in ~ itself.


> allow any user script to be able to override system commands

Other users could only override system commands if they can write files to your ~/.local/bin.

As /u/CJefferson pointed out, if a user has permission to write files to your ~/.local/bin, they can probably also write to your ~/.profile and ~/.bash_profile.

As long as you control the contents of your home directory, modifying your $PATH to include ~/.local/bin is safe.


Well, if you're running as `root`, then `$HOME` will point to `/root`. Assuming `/root` is set to only allow `root:root` to write to it, then you don't need to worry about other users placing binaries at `/root/.local/bin`.


Sorry, I meant root as in /etc/profile, not the root user; /etc/profile is inherited by all user sessions. I will update my comment to clarify this. But this also applies to your own user's .profile or .bashrc or whatever your shell uses. More specifically, I am talking about using a package manager which essentially gives write permission to ~/.local to individual package maintainers whom you might not be able to trust (this applies to npm in particular). As an example:

For the user's own session, I don't want apt resolving to some nefarious binary if, for instance, I have an npm package whose author's credentials are hacked at some point, inserting an extra ~/.local/bin/apt binary when it's updated. In that scenario the prepended-$PATH method will cause apt to run the nefarious binary rather than the system one.

[edit]

I think I might have answered my own question through that example: it's a security problem when pointing a package manager at a bin directory where individual package maintainers can publish at any time and be hacked at any time. But in the author's example, he is using it for his own scripts only, often specifically for the purpose of overriding system package versions, in which case prepending is as safe as the user is.


I don't think the global profile should be setting user paths. Just add the usual /bin /sbin and /usr variants of these. That way an unknowing user would be less likely to run into the issue you're describing.
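A global profile along those lines might look like this (a sketch; the exact directory list varies by distro):

```shell
# /etc/profile (sketch): set only system directories globally, and leave
# user-level additions like ~/.local/bin to each user's own ~/.profile.
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
export PATH
```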


You should check out http://spack.io/

Active community, with thousands of packages


Oh, you reinvented installpkg! The next step is to create all the various pkgtools and the rest of Slackware :D


And then enjoy the fun of keeping track which packages to install first. :)


Sounds a lot like Slackware packages. Based on tar, doesn't handle dependencies, etc.


I've been happily using ruario's createpkg[1] for years now.

[1] https://gist.github.com/ruario/11246070


Check out habitat folks!

https://bldr.habitat.sh/


so, dpkg ?


Doesn't dpkg require root permissions to run?



