Hacker News

I think the problem is that Linux developers started abusing the Unix philosophy to the point that you had to know a lot of different programs in order to be productive. Sometimes it's much more convenient if one program can do everything you want out of the box (it requires less understanding of the system).

The Unix philosophy is essentially the opposite of the Apple philosophy. It gives you flexibility and composability at the cost of simplicity and the overall experience.

The optimal solution tends to be somewhere in between. If you look at Linux, it's actually a monolithic system (which goes against the Unix philosophy); the popularity of Linux is itself proof that people do want a single cohesive product. If the Unix philosophy were the best approach, we'd all be using Minix by now.




I didn't find MacOS to be any simpler than newbie-oriented Linux distros like Mint and Ubuntu. It was just filled with a ton of proprietary, bastardized, closed-source garbage and limits that made it more difficult for power users to understand and effectively manage the system.


To be fair, Linux distros have improved a LOT in the past 5 years. When I first used Ubuntu many years ago, you couldn't do anything without the command line. Installing software was a pain (I had tons of problems with Ubuntu Software Center; it never seemed to work).

I use Ubuntu (Gnome) these days. The only thing I miss from Windows is Windows Explorer. Nautilus just doesn't cut it in my opinion; I always end up browsing the file system with the command line. That said, I still prefer Nautilus over OSX's Finder.


"Installing software was a mess."

Of all the things to complain about in Linux, you choose the one thing that macOS and Windows still don't have right, and that Linux had pretty good even back then?


Installation on Windows was always much easier; all programs had a relatively consistent UI wizard that stepped you through the installation process. Installing software from disks on Windows was really convenient (and disks were the real deal back then).

The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands.


"Installation on Windows was always much easier"

I've rarely disagreed with something said on HN so strongly (at least among things that, in the grand scheme of things, really don't matter that much, but they matter a lot to my personal experience).

"The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands."

This hasn't been true at any point in the past 12 years. You have to go back even further to find a time when there weren't multiple GUIs for the leading package managers. And, for at least the past decade, the core GUI experience on every major Linux distro has had some sort of "Install Software" user interface that was super easy and provided search and the like.

There's lots of things Linux got wrong (and some that it still gets wrong) that Windows or macOS got right. Software installation really just isn't one of them, IMHO.

It's the thing I miss most when I have to work on Windows or macOS, and I miss it constantly...like multiple times a day. A good package manager is among the greatest time savers and greatest sources of comfort (am I up to date? do I have this installed already? which version? where are the config files? where are the docs? etc.) when I use any system, particularly one I haven't seen in a while.
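Those questions all have one-line answers on a system with a real package manager. A sketch on a dpkg/apt system (rpm/dnf equivalents in the comments; "curl" is just an example package):

```shell
# Sketch, assuming a Debian/Ubuntu (dpkg/apt) system; rpm/dnf shown in comments.
dpkg -s curl              # is it installed, and which version?  (rpm -q curl)
apt list --upgradable     # am I up to date?                     (dnf check-update)
dpkg -L curl              # where are its files, docs, configs?  (rpm -ql curl)
dpkg -S /usr/bin/curl     # which package owns this file?        (rpm -qf /usr/bin/curl)
```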

I just really love a good package manager, and Linux has several. Windows and macOS have none (because if the entire OS didn't come from a package manager, it's useless...you can't know what's going on by querying the package manager, if the package manager only installed a tiny percentage of the code on the system). So, even though there's choco on Windows and Homebrew (shudder...) on macOS, they are broken from the get-go because they are, by necessity, their own tiny little part of the system with little awareness or control over the OS itself.


Why don't you like homebrew?

Also, if your problem with non-Linux package managers is that they only know about and control their own packages, then you must have the same objection to Nix and Guix, right?

What happened to wanting simple tools that do one thing and one thing right? Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools?


"Why don't you like homebrew?"

I've blogged about some of my problems with Homebrew. Generally speaking, Homebrew is a triumph of marketing and beautiful web design over technical merits (there are better options for macOS, but none nearly as popular as brew).

The blog post: http://inthebox.webmin.com/homebrew-package-installation-for...

I get that it's easy and lots of people like it, so I mostly try to hold my tongue, but every once in a while I'll see someone suggest something crazy like using Homebrew on Linux (where there is an embarrassment of good and even great package management options) and it makes me shudder. I'm not saying don't use Homebrew on your macOS system if it makes your life easier. I just would never consider it for a production system of any sort. I'm even kinda mistrustful of it on developer workstations (though there are plenty of similarly scary practices in the node/npm, rubygems, etc. worlds, so that ship has kinda sailed and I am resolved to just watch it all unfold).

"What happened to wanting simple tools that do one thing and one thing right?"

I still want that. Doing one thing right in this case means doing more than what packages on macOS or Windows do. One can argue about the complexity of rpm+yum or dpkg+apt, and it's likely that one could come up with simpler and more reliable implementations today, but if you want them to be more focused, I have to ask which feature(s) you'd remove? Dependency resolution? That one's a really complicated feature; a lot of code, and it's been reimplemented multiple times for rpm (up2date, yum, and now dnf). Surely, we can just leave that out. Or, perhaps the notion of a software repository? Is it really necessary for the package manager to download the software for us? I mean, I have a web browser and wget or curl. Verification of packages and the files they install, do we really need it? Can't we just assume that our request to the website won't be tampered with, and that what we're downloading has been vouched for by a party we trust? I dunno...I'm not really seeing a thing we can get rid of without making Linux as dumb as macOS or Windows.
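To make those features concrete, here's roughly what each looks like in practice on an apt-based system (a sketch; package names are examples, and debsums is a separate package you'd install first):

```shell
# Sketch on Debian/Ubuntu: the features argued for above, one command each.
apt-get -s install inkscape   # -s = simulate: prints the resolved dependency set
apt-get download curl         # fetches the .deb from a configured, signed repository
debsums curl                  # verifies checksums of installed files (apt install debsums)
# Repository metadata is GPG-signed; apt refuses unsigned repos by default.
```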

"Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools?"

This is the strangest question, to me. Why on earth would we want the OS outside of the package manager? Why would we want to only verify packages that aren't part of the core OS? This is why Linux is so vastly superior to Windows and macOS on this one front. I'm having a hard time thinking of why having the package manager completely ignorant of the core OS would be a good thing. What benefit do you believe that would provide?

And, NixOS does not meet the description you've given. The OS is built with nix the package manager. Running nix as a standalone package manager on macOS does have the failing you've mentioned, but that's not the fault of nix. And, yes, nix is a better option for macOS than brew, but the package selection is much smaller and not as up to date in the general case...so maybe worse is better, in that case.

I get a bit ranty about package management. I spend a lot of time working with them (as a packager, software builder, distributor, etc.) and have strong opinions. But, I believe those strong opinions are backed by at least better than average experience.


> all programs had a relatively consistent UI wizard that stepped you through the installation process

Nowhere near as consistent as installing from a package manager.

> The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands.

Every user-oriented distro has come with a GUI package manager for at least a decade, probably two.


Windows has always been fine for _installing_ software.

It's when you go to uninstall or upgrade it that you realize what a mess it is.


It's not using the command line that sucks about installing software.

It's that there are so many different standards for installing software, overlapping in sometimes conflicting ways.


Perhaps the issue when comparing Windows and Linux is more one of "dependency management" than just merely installing applications or libraries? Although I've had issues with package managers screwing up dependencies in the past, it hasn't happened in a while and when it would have happened I was warned beforehand.


Installing software from the distro is easy on Linux, but otherwise it's potentially a problem. Whereas Windows has no "distro" but the .msi system works quite well.


Pity Microsoft stopped bothering with .msi installers 10 years ago with Office 2007, and didn't bother to add any free ways to deploy all the new stuff they invented.


Re: macOS: I'm having a hard time seeing how copying a directory with everything necessary contained in it is not a good install procedure. Install a program: copy to the Applications folder. Uninstall a program: Cmd-Delete or drag to trash.

Compare to Linux, where a piece of software is scattered across /usr/bin, /usr/lib, /usr/share, /usr/doc. (Or /usr/local/*, you never know which.)

Oh, and those fun times where something depends on a libxxx.so.N but all that's on the system is libxxx.so.N.M.O and libxxx.so.N.M for some reason, so you have to make the symlink yourself. Or the distribution's minimum is version N+1, so your options are to find the source for the library, figure out all the -devel packages it needs, and compile it (hopefully), or just symlink libxxx.so.N to libxxx.so.N+1 and hope it works.

And then there's the fun of figuring out what the package is named. pdftotext lives in poppler-utils, who would have thought. Need gcc? That was build-essential on Ubuntu last time I needed it. (Not build-essentials, either.)


So, a tarball full of statically linked binaries. You can do that on Linux, too.

And, there are some new package managers that isolate in this way (and go well beyond it by containerizing). Flatpak is probably the most promising, IMHO. And, it still provides all the benefits of a good package manager, like verification, authenticity, downloading automatically from a repo, dependency resolution for core libraries. And, the way Flatpak handles the latter feature is really quite cool (and avoids having to distribute dozens of copies of the same libs).
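A sketch of that workflow (the flathub remote and org.gimp.GIMP are real Flathub identifiers, used here just as examples):

```shell
# Sketch of the Flatpak workflow; remote and app ID are illustrative.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install -y flathub org.gimp.GIMP   # pulls the app plus any shared runtimes it needs
flatpak update -y                          # updates apps and runtimes together
flatpak uninstall -y org.gimp.GIMP         # removes it cleanly, sandbox and all
```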

Your description of installing packages on Linux does not match my experience in the past decade. Dependency resolution is a solved problem on Linux, at least on the major distros with good package managers.
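Even the package-name discovery gripe has a stock answer. A sketch, assuming Debian/Ubuntu (Fedora equivalent in a comment):

```shell
# Sketch: finding which package provides a command you don't have yet.
sudo apt install apt-file && sudo apt-file update   # one-time setup (Debian/Ubuntu)
apt-file search bin/pdftotext                       # lists the owning package (poppler-utils)
# Fedora/RHEL equivalent, no setup needed:
#   dnf provides '*/pdftotext'
```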


For the average user, the worst-case install on macOS is dragging an icon from a dmg into the Applications folder. The other type is double-clicking a package.


Large monolithic apps can be an ecosystem in their own right that is broadly comparable to an OS. And it is common to find apps that adopt Unix philosophy out of necessity within the ecosystem. This is not obvious to an outsider, and requires an understanding of the domain.

A good example is ArcGIS, which on the surface is ridiculously monolithic. But within its toolbox are several hundred programs that do only one thing and are composable. This approach is also seen in video and image editing workflows, where a user works with a particular set of tools. The main difference is that the programs use a type system appropriate to the domain rather than just text.

The OS only really exposes an interface for working with OS level objects. That sometimes aligns to a workflow but not always. And we should not expect disciplines to align their techniques to OS level objects if that is not a good fit for the actual domain.


Linux is not a monolithic system. It has a monolithic (sorta) kernel. There's a big difference. You're arguing monolithic kernels vs. microkernels. Microkernels didn't even exist when UNIX was invented, so no, they are not representative of "Unix philosophy". "Unix philosophy" is merely about the user-space tools you use to do stuff, since back in 1970 all they had was the shell (sh), and various tools like grep, ed, awk, etc., to do things.

It's entirely possible to have a microkernel with a Unix-like system; HURD attempts this. Microkernel vs. monolithic is an entirely separate issue.


I think the parent poster is talking about the userspace. A Linux distribution like Ubuntu that uses systemd, GNOME, and NetworkManager is certainly monolithic compared to Slackware circa 1999. NetworkManager alone is a pile of garbage that goes completely against Unix networking conventions, for IMO no good reasons. OpenBSD's approach to integrating WiFi and other network interfaces into the existing BSD network management commands is so much better.



