
Disclaimer: I work for SUSE, a Linux company which provides support for enterprises running SLES and contributes our packages and knowledge to the openSUSE community.

I don't see how you could consider package management a bad thing. Why do you consider "downloading binaries from a random website" to be a "good thing"? Not to mention that those binaries almost never update themselves, and how well they deal with dependencies depends on which $500 installer builder was used.

Package managers allow you to keep your whole system up to date, and you have a single database of all the software that has been installed, what its dependencies are, and what files it installed (so you can uninstall it cleanly). They are definitely one of the awesome things about Linux. OS X has Homebrew, but it's not well integrated into the system because it's a third-party collection of software. The BSDs' package managers are at least 10 years behind Linux's (they're still working on packaging the base system). Windows has nothing AFAIK. Tools like OBS let you automate the release of new versions, and OpenQA lets you run automated QA tests to make sure there are no regressions in graphical or terminal services.
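To make the "single database" point concrete, here's a minimal sketch of querying it. This assumes an RPM-based distro (openSUSE/SLES in my case), and "vim" is purely a placeholder package name:

    #!/usr/bin/env python3
    """Minimal sketch: ask the RPM database about an installed package.

    Assumes an RPM-based system (e.g. openSUSE/SLES); "vim" is just a
    placeholder package name.
    """
    import subprocess

    def rpm_query(flag, package):
        # `rpm -q` reads the package database; it exits non-zero if the
        # package isn't installed, which check=True turns into an exception.
        out = subprocess.run(["rpm", "-q", flag, package],
                             capture_output=True, text=True, check=True)
        return out.stdout.splitlines()

    package = "vim"
    print("Depends on:", rpm_query("--requires", package))
    print("Installed files:", rpm_query("--list", package))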

I especially don't understand why you think that not having a way to update the libraries on your system is a good idea. Packaging the same DLL in 30 different places is not a good thing.




1. Nothing wrong with packages, but central repos rarely contain up-to-date packages. I don't mind going to skype.com and downloading a .deb package for Skype (I wish it were the same format for desktop software across all flavors of Linux, but I digress).

2. Shared libraries are only good for saving space (largely irrelevant on desktop) and for security. In my experience, different side-by-side versions of libs work poorly once you reach the "system" level; having apps that require different, incompatible glibc versions is painful. For example, see the accepted answer to this question http://stackoverflow.com/questions/847179/multiple-glibc-lib... "The absolute path to ld-linux.so.2 is hard-coded into the executable at link time" WAT? (A sketch of reading that hard-coded path out of a binary follows below.)
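That hard-coded loader path is easy to see for yourself. Here's a rough sketch that reads the PT_INTERP segment (the baked-in path to ld-linux) out of an executable; it assumes a 64-bit little-endian ELF and skips error handling:

    #!/usr/bin/env python3
    """Rough sketch: print the dynamic-loader path baked into an ELF binary.

    Assumes a 64-bit little-endian ELF file; no real error handling.
    """
    import struct
    import sys

    PT_INTERP = 3  # program-header type for the interpreter (dynamic loader) path

    def interpreter_path(path):
        with open(path, "rb") as f:
            ident = f.read(16)
            if ident[:4] != b"\x7fELF":
                raise ValueError("not an ELF file")
            # ELF64 header fields after e_ident (little-endian, 48 bytes).
            (_type, _machine, _version, _entry, e_phoff, _shoff, _flags,
             _ehsize, e_phentsize, e_phnum, *_rest) = struct.unpack(
                "<HHIQQQIHHHHHH", f.read(48))
            for i in range(e_phnum):
                f.seek(e_phoff + i * e_phentsize)
                # ELF64 program header: p_type, p_flags, p_offset, p_vaddr,
                # p_paddr, p_filesz, p_memsz, p_align (56 bytes).
                p_type, _pf, p_offset, _va, _pa, p_filesz, _memsz, _align = \
                    struct.unpack("<IIQQQQQQ", f.read(56))
                if p_type == PT_INTERP:
                    f.seek(p_offset)
                    return f.read(p_filesz).rstrip(b"\0").decode()
        return None

    if __name__ == "__main__":
        print(interpreter_path(sys.argv[1]))  # e.g. /lib64/ld-linux-x86-64.so.2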

I think the mindset about what desktop computing is, and what a "system" is, is very different between a Linux user and a Windows user (i.e. one who just wants an OS that is a dumb layer for running binary compiled shitware that must be built and distributed by its creator because it will never be in a repo).


> 1. Downloading binaries from websites is what Windows users know. Package management is vastly superior as long as the packages exist and are up to date with the "official" source such as Spotify. If the package is a week late, then I'm going to prefer the direct binary. Once half my apps are direct and half are packages, the benefits of a package system diminish.

"What Windows users know" doesn't mean that it's a good thing. Windows users also know to run everything with administrative privileges. For sufficiently sophisticated build systems (read: OBS) you can automatically rebuild packages. The reason why packaging takes time is because there is a testing process (which can also be automated with things like OpenQA), but there's lots of other maintainence that goes on when curating packages. Believe it or not, but sometimes upstream is downright irresponsible when doing version bumps and it's the maintainer's job to deal with it. It's fairly thankless work, to be honest, because you're not working on the new hot stuff. Sure, "just download a binary" works until you have multiple components that depend on each other.

> 2. Shared libraries are only good for saving space (irrelevant on desktop) and for security. Different side-by-side versions of libs work poorly when you reach "system" level in my experience. Several libc versions etc. is painful.

"Only good for [...] security" is enough reason for me. Tell me how Windows programs deal with updates to critical libraries that everyone uses separately. I'm guessing the answer is "not well at all". And if you're going though your package manager, then no package should require a specific version of libc (besides, this problem can be mitigated somewhat with symbol versioning). The gains far outweigh the perceived costs IMO.


Argh your ninja response time meant my complete rewrite of my above post now looks silly, sorry :)

> "What Windows users know" doesn't mean that it's a good thing.

I know (I also removed it). It's patently stupid. Let me rephrase it: if you want one way of distributing apps that anyone can use, it's basically the only working way: download a binary from the creator's site. Otherwise you end up with the utterly broken method of "check if it's in a tree in some package repo; if not, add more package sources to your repo; if not, check if you can find a downloadable package for it; if not, build from source".

> Tell me how Windows programs deal with updates to critical libraries that everyone uses separately

They don't. It's both a bug and a feature. OS libraries are updated of course (by Windows Update), but I don't necessarily consider e.g. a C++ runtime to be an OS library, even if it's Microsoft's own redist. I prefer my applications to ship their own copy of their C++ runtime and keep it local, because it limits problems. Even at the cost of having an unpatched one somewhere.

> Windows users also know to run everything with administrative privileges.

Well, accidentally answering "yes" to the UAC prompt is about as likely as accidentally sudoing something, IMO.



