
Both of these issues seem like a timely reminder that everyday Linux desperately needs a proper application management and security model.

Installing software where your options are

1. running as a regular user, and the install script can put whatever it wants within your user's directories

or

2. running as root, and the install script can do literally anything to anywhere on your system

is not fit for purpose when the risks from both malice and incompetence are reaching new heights almost daily.

These are systems we use for real work, but even smartphones and their toy app stores do better now. How do we still not have controls so applications can always be installed/uninstalled in a controlled way, can only access files and other system resources that are relevant to their own operation, and so on?




> everyday Linux desperately needs a proper application management

You mean something that won't allow two packages to own the same file? Something like rpm or apt?


You probably meant rpm and dpkg... Or you'd have to compare yum, zypper, apt, pacman and whatever else is out there.

But I'm certain the parent didn't mean that. Dpkg and rpm both allow packages to overwrite files from each other and, more dangerously, allow fully authorized post-install scripts. Those scripts are often necessary for sane package management (creating a user, initialising a database), but they can also be exploited to wreak havoc on the system.
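For illustration, a maintainer script of that kind might look something like this (a sketch only; the "mydaemon" account is made up):

    #!/bin/sh
    # sketch of a Debian postinst script, executed as root at install time
    set -e
    # legitimate use: create a dedicated service account for a daemon
    adduser --system --group --no-create-home mydaemon
    # ...but nothing stops such a script from touching anything else on the system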


Yes, I meant dpkg.

Not sure about dpkg, but rpm does not allow two packages to own the same file. If you try to install a package that contains a file owned by another, already installed package, the installation will fail (you can try that by installing an amd64 package that owns something in /usr/share, and then trying to install the i386 version). Yes, post-install scripts are dangerous, and the rpm folks are taking small steps to phase them out: https://www.youtube.com/watch?v=kE-8ZRISFqA
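For example, with a hypothetical package that ships its own /bin/ls (the package name is made up):

    # illustrative: rpm aborts with a file-conflict error, because /bin/ls
    # is already owned by coreutils
    $ sudo rpm -ivh bad-ls-1.0-1.x86_64.rpm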


dpkg will throw a fit in the same way.


It will throw an error message, which the user is probably going to ignore and install anyway.

This will ultimately cause errors down the line. Maybe not right now, but eventually problems will occur.

Showing a warning is great, but not needing that warning would be preferable.

But most distributions are already working on solutions to that. Ubuntu is working on Snaps [0], for example, and I remember hearing about something similar from Red Hat as well.

[0] https://snapcraft.io/


No, it will not install the package at all:

    $ mkdir root/bin/
    $ (echo '#!/bin/sh'; echo 'echo "hi"') > root/bin/ls
    $ chmod +x root/bin/ls
    $ fpm -s dir -t deb -n bad-ls -v 1.0 -C `pwd`/root .
    Created package {:path=>"bad-ls_1.0_amd64.deb"}
    $ dpkg-deb -c bad-ls_1.0_amd64.deb
    drwxrwxr-x 0/0               0 2018-02-22 10:44 ./
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/doc/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/doc/bad-ls/
    -rw-r--r-- 0/0             142 2018-02-22 10:44 ./usr/share/doc/bad-ls/changelog.gz
    drwxrwxr-x 0/0               0 2018-02-22 10:44 ./bin/
    -rwxrwxr-x 0/0              20 2018-02-22 10:42 ./bin/ls
    $ sudo dpkg -i bad-ls_1.0_amd64.deb
    Selecting previously unselected package bad-ls.
    (Reading database ... 837129 files and directories currently installed.)
    Preparing to unpack bad-ls_1.0_amd64.deb ...
    Unpacking bad-ls (1.0) ...
    dpkg: error processing archive bad-ls_1.0_amd64.deb (--install):
     trying to overwrite '/bin/ls', which is also in package coreutils 8.28-1
    Errors were encountered while processing:
     bad-ls_1.0_amd64.deb
    $ ls -l /bin/ls
    -rwxr-xr-x 1 root root 134792 Oct  2 10:51 /bin/ls*
It doesn't just warn and leave things in a bad state.


This is strange. I recently ran into an /etc/ file conflict at work.

The dpkg error message on stdout gave me the exact flag I could use to do it anyway. It was basically just

  !! double click -> middle click


Actually, it will throw an error - on which the higher-level (libapt) tools above dpkg will abort, and going directly to dpkg with a --force-whatever is not quite as easy as clicking "yeah, just do it already". Not to mention that I have needed that twice in a decade, in rather obscure cases.

But yeah, containerizing the apps is probably a way forward, which sidesteps whole classes of issues.


dpkg won't allow one package to overwrite a file from another unless you pass it --force-overwrite, which is not the default.
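Using the bad-ls example from the sibling comment, the override has to be spelled out explicitly:

    # without --force-overwrite this fails; with it, dpkg replaces the coreutils file
    $ sudo dpkg -i --force-overwrite bad-ls_1.0_amd64.deb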


> You mean something that won't allow two packages to own the same file? Something like rpm or apt?

No, not really.

For one thing, package managers are only useful on packages supplied by the distro (or otherwise bundled using that convention), and we need something that allows for installing (and uninstalling, and backing up configurations for, and...) software safely and systematically in the general case.

For another thing, even packages installed with a distro's own package manager can typically dump whatever files they want wherever they want, rather than having the OS restrict them to a controlled environment.


> For one thing, package managers are only useful on packages supplied by the distro (or otherwise bundled using that convention), and we need something that allows for installing (and uninstalling, and backing up configurations for, and...) software safely and systematically in the general case.

There's nothing that limits rpm/deb to the distribution. Anyone who publishes a tarball of software can publish an rpm/deb as well. Many do.

> For another thing, even packages installed with a distro's own package manager can typically dump whatever files they want wherever they want, rather than having the OS restrict them to a controlled environment.

The list of files in the manifest is checked beforehand, and if there's a conflict with an existing package, the installation is aborted.


> There's nothing that limits rpm/deb to the distribution. Anyone who publishes a tarball of software can publish an rpm/deb as well. Many do.

Hence my "or otherwise bundled..." note.

But you're still only thinking in terms of packages that are bundled and installed via the system tool. Anything not installed via that tool can typically do whatever it wants if its scripts run as root, and anything that is installed via that tool typically won't be aware of anything that wasn't and will happily write all over it with no mechanism for backing up what was there before or reverting a breaking change.

The point is that relying on some voluntary convention like this isn't good enough. A modern OS should enforce mandatory restrictions on all installed software. We should be able to do things like checking exactly what is installed, or uninstalling something unwanted with or without also uninstalling any now-unused dependencies or any configuration data, and we should be able to do these things reliably, safely, and without any requirement for the software itself to be "well behaved" in any particular way.


No, they do not have to be bundled. The vendor of the given software has to support it.

Vendor A, supporting system B with its packaging system .xyz, makes its deliverables available as a .xyz package. Everything is fine, stuff works as it should.

Vendor C makes its deliverable available as a self-extracting installer that happens to run on system B, and it needs your permission/credentials to be installed on your system. If you do that without any auditing, it's your problem if it overwrites something. You did give the permission (you had to type in that password) and didn't insist on proper packaging.

Because the system provides the facility to achieve what you want; you just chose to override it. You own all the consequences of that.

If you want a modern OS to enforce mandatory restrictions on all installed software, modify your sudoers file to only allow running rpm/yum or dpkg/apt. Packages installed via these means fulfil the conditions that you describe.
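Roughly along these lines (a sketch; "deploy" is a made-up user name, and the file should be edited via visudo):

    # /etc/sudoers.d/pkg-only
    # allow the deploy user to run only the package managers as root
    deploy ALL=(root) /usr/bin/apt, /usr/bin/apt-get, /usr/bin/dpkg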


> If you do that without any auditing, it's your problem if it overwrites something.

I don't know whether you're genuinely missing my point or just trolling, but this doesn't seem to be a very productive discussion so this will be my last comment here.

Your argument seems akin to saying that you could choose to install only open source software, and to personally audit every line of code in that software including all its dependencies, so if you don't do that then it's your own fault if something bad happens. If you're both a world class programmer and a security expert, and yet bizarrely you have ample free time available and nothing better to do with it, that might work. In the real world, it's totally impractical, and a much better solution is to operate according to the principle of least privilege, enforced at the level of the OS, without having to rely on conventions and/or good will.

> If you want a modern OS to enforce mandatory restrictions on all installed software, modify your sudoers file to only allow running rpm/yum or dpkg/apt. Packages installed via these means fulfil the conditions that you describe.

No, they don't, as I've repeatedly tried to explain. At best, even if packages are available and properly constructed, your method keeps track of where files go and can remove them again afterwards. It doesn't enforce any systematic use of the filesystem to contain packages within specific areas; it doesn't manage related issues like configuration files that you might want to back up or preserve across software changes; it doesn't restrict access to files, networking or other system resources that the software has no business touching; it doesn't scale to the many-small-dependencies model prevalent with tools like NPM; and at this point there are already so many fundamental problems with basic robustness and security that anything else is probably moot anyway.

I leave you with a question, which brings us back to where we came in. Given that this broken version of npm exists and that it was made available via at least one production channel that should not have included it as a result of presumed human error by the maintainers, how would anything material have changed today if people had been installing it via an official package and their package manager as you suggest, rather than via npm update?


> I don't know whether you're genuinely missing my point or just trolling, but this doesn't seem to be a very productive discussion so this will be my last comment here.

I'm afraid it is you who is still missing the point.

No matter what the system does, if you use your root privileges, all bets are off. You are the god of the system, you can do whatever you want, the system has no way to stop you. That includes destroying the system, whether directly, or by scripts run on your behalf.

The only way for the system to enforce anything is to take root away from you. There is, and will be, no system in existence that can both give you unlimited power AND hold your hand. That's the law of the objective reality we live in. To quote: "Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir." (They must consider that great responsibility follows inseparably from great power.)

> It doesn't enforce any systematic use of the filesystem to contain packages within specific areas

That's right, because it has no knowledge of what your specific areas are, or of what they are allowed to contain.

> it doesn't manage related issues like configuration files that you might want to back up or preserve across software changes;

Configuration files are app-specific; "the system" cannot have knowledge of their internal structure or of your intent. What it can do (and does) is show you the old and new versions, optionally the diff between them, and leave the final decision to you. It will never overwrite your configuration without your consent (see the first part of the answer).

If you want the full SCM power over your config, put your config into SCM. Not everyone wants that, but those who do have the option available. Others may prefer other ways of managing it, anywhere in the gamut from "none" to "full-blown provisioning system".
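Even something this simple gives you history and rollback for /etc (a sketch; tools like etckeeper automate the same idea):

    $ cd /etc
    $ sudo git init
    $ sudo git add .
    $ sudo git commit -m "baseline before upgrade"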

> it doesn't restrict access to files, networking or other system resources that the software has no business touching;

To the software, or to its installer? It pretty much does restrict the software while it is being run. To the installer? See the first part of the answer.

> Given that this broken version of npm exists and that it was made available via at least one production channel that should not have included it as a result of presumed human error by the maintainers, how would anything material have changed today if people had been installing it via an official package and their package manager as you suggest, rather than via npm update?

It boggles my mind why anybody would run npm as root. The only thing they achieve is writing files where they otherwise couldn't, and risking exactly what happened now.

They _could_ run npm as a normal user that happens to own the target directory, without any risk of damaging the system.
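For instance, a common (non-default) setup keeps npm entirely inside the user's home directory:

    # give npm a user-owned prefix so global installs never need root
    $ npm config set prefix "$HOME/.npm-global"
    $ export PATH="$HOME/.npm-global/bin:$PATH"
    $ npm install -g npm    # even updating npm itself now stays inside $HOME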

So the problem is not npm bugs; the problem is people not realizing what they are doing and refusing to take responsibility when it goes wrong.


If you assume that conventions don't work because people will just run whatever crap as root, I don't think you can solve the problem without taking away that right from the user (as is customary on mobile devices).

At that point, solving the problem comes at too high a cost. A few messed up npm installs seem to be the lesser evil here.


> If you assume that conventions don't work because people will just run whatever crap as root...

That's not really the issue, I think. Literally everything you install, however legitimate the source and however well-intentioned the people providing it, is "whatever crap" for the purposes of this exercise. What happened here could also have happened using just about anything else you installed on a typical Linux system today, whether from an official distro package repository, or some other source of packaged files, or side-loaded with one of those horrendous "Sure, I'll download your arbitrary script from the Internet and pipe it through sh as root to install your software without even checking it, as you recommend on your web site" things.

There is no reason that our systems should trust arbitrary installation scripts to do arbitrary things, whether they're running as root or not, but especially if they are. I'm stunned at the opposition I'm seeing from so many people on HN to the idea of making a system more secure, even while we're discussing a demonstrated, system-destroying bug in widely used software that was apparently unintentionally rolled out through at least one official channel when it wasn't ready.


This is a governance issue.

Build all your software into packages appropriate for the OS you use and then put them in a company repo. Install from there.
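E.g. a single apt source pointing at the internal repo (the host name here is made up):

    # /etc/apt/sources.list.d/internal.list
    deb https://repo.example.internal/apt stable main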

If you're just dumping whatever "stuff" you want on a machine in whatever location with no control, you're gonna have a bad time.


Unless you are going to systematically and reliably audit literally everything that any installer in any of those packages does as root, this is not a solution to the real problem; it's just trying to reduce the risk a bit.


Yeah people like to hate on Microsoft/Mac App Stores, but at least they don't let programs vomit files across the disk.

The Linux solution, I suppose, is Nix/Guix or Flatpak/Snap or Docker à la RancherOS. Perhaps more restrictive SELinux profiles could work as well.


> everyday Linux desperately needs a proper application management and security model.

We already have the necessary tools to do it, e.g. firejail. We only have to make every binary run in firejail by default (and write firejail profiles for more binaries).
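As a rough sketch of what that looks like for a single invocation (the directory name is illustrative):

    # confine one npm run: no root escalation, throwaway home directory
    $ firejail --noroot --private=~/npm-sandbox npm install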


Both of these issues seem like a timely reminder that everyday Linux desperately needs a proper application management and security model.

I agree. For instance, on recent macOS versions you cannot modify most system directories even as root, unless System Integrity Protection (SIP) is disabled [1]. SIP can only be disabled by the user by booting into the recovery OS. Just making these directories read-only prevents both accidents and malice.

AFAIK in Fedora Atomic Host/Server some system directories are also read-only [2]. Moreover, Fedora Atomic uses OSTree as a content-addressed object store, similarly to git, where the current filesystem is just a 'checkout'. So, you can do transactional rollbacks, upgrades, etc.
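For context, the usual rpm-ostree workflow on such a system looks roughly like this:

    $ rpm-ostree status     # show the booted and any pending deployments
    $ rpm-ostree upgrade    # stage a new deployment; the running system is untouched
    $ rpm-ostree rollback   # make the previous deployment the default for the next boot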

[1] https://support.apple.com/en-us/HT204899

[2] https://rpm-ostree.readthedocs.io/en/latest/manual/administr...


FWIW, "new npm broke Æeeeverything? Meh. Destroy the docker container, force version <= 5.6.0, rebuild" has now saved me from a bigger disaster. This is the 1.5th option, IMNSHO: npm gets its root(-ish) access, host computer is somewhat protected.



