
The complexity of the `find` command is the least of Unix's problems. How about defending these?

1. Unnecessary and confusing directory structure. `/etc`? Why not `/config`? `/usr` instead of `/system`, `/var` instead of ... well who knows. The maximum directory name length is no longer 3 characters.

2. Programs are mushed together and scattered through the filesystem rather than stored in separate locations. This basically means applications are install-only. Yeah, package managers try to keep track of everything, but that is just hacking around the problem, and most developers don't want to spend hours creating 5 different distro packages.

3. Not strictly Unix, but the mess of glibc with respect to ABI compatibility, static linking, etc. is ridiculous. Musl fixes most of this fortunately.

4. Emphasis on text-based configuration files. This is often ok, but it does make it hard to integrate with GUI tools, hence the lack of them.

5. Emphasis on shell scripts. Fortunately this is starting to change, but doing everything with shell scripts is terribly bug-prone and fragile.

6. X11. 'nuff said about that. When is Wayland going to be ready again?

7. General bugginess. I know stuff works 90% of the time, but that 10% is infuriating. Windows is a lot more reliable than Linux at having things "just work" these days.




Regarding #5, I quite enjoy text-based configuration files, and can't stand systems that force me to use a GUI to change settings. If I have a text-based config file, I know that it will play nicely with git. If there are many related settings, users can change them all quickly with their preferred text editor.


Agreed, text based config files are good and not the problem. (Though binary formats != GUI tools only.)

I think the real problem is config files either in hard-to-parse-correctly custom ad-hoc formats or even "config files" written in a scripting language (-> impossible to parse).

All config files should use the same standard format. I'd say "like YAML", but I'm not aware of a widely-used format with standard features like file includes or data types beyond "int" and "string" (e.g. for time intervals; these really shouldn't be "just a string... with a custom format").


That works fine when config files are simple, straightforward text data. But config files can grow increasingly complex over time, and eventually become complex Turing-complete languages of their own.

I think it would be better to just start with a Turing-complete language. I think they should use Lua. It has very simple general data structures that are self-explanatory, and it's very lightweight and sandboxable.

The only issue is combining config files with other programs. You don't want to strip the comments or formatting when you modify it with another program. I also wish there were a way to specify metadata, like what the possible values for a variable are allowed to be, or descriptions and help info. With that you could easily convert config files into GUIs.


> I think it would be better to just start with a Turing complete language. I think they should use Lua. .......... when you modify it with another program ...

No! Turing complete config files are even worse than ad-hoc config files.

If your config format is Turing complete, you can't reliably modify config files automatically. (You can't even know how long evaluating one will take.)

If you need more logic, put it in your program or have a plugin system or write a program that generates the config file, but don't put it in the config file.


I don't see anything wrong with adding the option to do scripting. No one is making you use it. But when you need it, there's no alternative.

Many projects start out with just simple config files. But then they realize they need to do logic, and hack that into it. Then they realize they need more complex logic, and hack even more stuff in. And it's just a mess. It would have been much cleaner if they just started out with a scripting language.

Whether you should be putting logic in the config file is a different issue, but as long as people do it or have a need to do it, it's much better than crazy ad hoc solutions.

See these discussions on the issue: https://stackoverflow.com/questions/648246/at-what-point-doe... https://medium.com/@MrJamesFisher/configuration-files-suck-6...


I do understand your point.

But as soon as scripting is supported, it's impossible to write tools that process the config files and always work, especially with untrusted config files. You can't have both.

So the question is: can code be separated from config (like I've proposed above) (the code can still be inlined in the config file, as long as the "root" of the config file is declarative and the boundaries are well-defined)?

If no, which is more important: parseability or flexibility? It's a tradeoff.


What you could do is go the other way around. The program's canonical configuration format is pure data in a well-defined format (XML, JSON, Protocol Buffers, etc.). However, the top-level user-facing configuration is a script, written in a well-known (and ideally easily sandboxed) scripting language, whose output is the configuration data. The script can still load pure data files, which can be automatically analyzed and transformed, and with enough discipline most of the rapidly changing parts of your configuration will live in these pure data files. Even without this discipline, the final output of the configuration script is pure data that can easily be stored separately for tests, diffs, analyses, etc.
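
A minimal sketch of that split, as a shell generator (all names and the layout are hypothetical): the logic lives in the script, and the program only ever parses the pure-data config.json it emits.

    #!/bin/sh
    # generate-config.sh: all conditional logic lives here, not in the app.
    # The app itself only ever parses the pure-data config.json.
    cat > config.json <<EOF
    {
      "workers": $(nproc),
      "log_dir": "/var/log/myapp"
    }
    EOF

The emitted config.json can then be validated, diffed, and stored for tests without evaluating any code.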


The problem with this approach is that there's no way for my program to parse my config file and tell me if I've screwed something up without actually executing the config file, which may be expensive or infeasible to do at program start.

My personal opinion is almost exactly the opposite. If the program's config file requires anything more complicated than a regular language to specify, it's doing too much.


>All config files should use the same standard format. I'd say "like YAML", but I'm not aware of a widely-used format with standard features like file includes or data types beyond "int" and "string" (e.g. for time intervals; these really shouldn't be "just a string... with a custom format").

Not sure how this compares http://raml.org/



I thought of HOCON; it's quite usable for users, but it's not a solution:

1. Lack of adoption: it's not widely used yet, and implementations exist for only a few languages.

2. No formal spec; the "spec" is very imprecise and seems to try very hard to leave as much as possible to the implementation.

3. The spec is very, very Java specific. No clear separation between "core HOCON" and "Java extensions".

4. It probably doesn't lend itself well to automatic changes in a way that preserves structure of an original file (-> without just rendering out JSON).

It's clear that HOCON's only real focus is being easy to edit manually.

It's a nice idea, but it's definitely not the long term solution we need.


> All config files should use the same standard format.

I can't think of a single format that would lend itself to all cases, but I agree that config files should be in a standard format (i.e. a format that is supported by parsing tools).

YAML++ for being very readable.


XML has includes and you can have data types like string, int, time intervals, strings with regex, etc. with an XSD, if you want.
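
For instance, an XSD fragment like this (element names are illustrative) gives you typed values, including time intervals, out of the box:

    <!-- xs:duration uses the ISO 8601 format, e.g. PT30S for 30 seconds -->
    <xs:element name="timeout" type="xs:duration"/>
    <xs:element name="retries" type="xs:int"/>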

The complexity this brings can be overwhelming compared to an ad-hoc config file format.


XML is too verbose for manual editing and I've yet to see an XML library that isn't cumbersome to use (compared to JSON). It's probably too complex, alright. But I'm convinced a simpler format could be specified.

But your complexity comparison is unfair. Ad-hoc file formats are overwhelming. Users don't see the complexity because there is no spec and they just write config files by example. Developers generally either give up or write something that doesn't behave quite the same. A fair comparison would be:

  """"
  XML looks like this:

  <section>
    <subsection>
      <key>value</key>
    </subsection>
  </section>
  """
That's about the level of detail in the documentation of most ad-hoc file formats I've seen. Followed by hundreds of examples that happen to show (but not specify) special cases.


The emphasis on doing everything with text is not a problem, it is a major feature of Unix. Configuration management, portability, and interoperability are all a lot easier to do thanks to Unix's dedication to text as the lingua franca of the operating system.


+1 for GUI-based configuration being a pain, having a dotfiles repo that I can edit, clone, fork, etc. is magical (and far more flexible).


Never mind being able to put comments in line with the settings to potentially document why the seemingly crazy changes have been applied.


Text config files plus the command line makes explanations and documentation a lot simpler, more concise, and more precise. You can exactly duplicate a series of commands and config changes. "Type this command to edit this config file, change the value of this setting to this other setting, then run this command to reload." Making documentation is often simply a matter of copying from your .bash_history.
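
For example (the file and service names are hypothetical):

    # documentation that is also an executable recipe
    sudo sed -i 's/^MaxClients .*/MaxClients 256/' /etc/myapp/myapp.conf
    sudo systemctl reload myapp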

Making documentation for graphical programs often requires screenshots and sentences like "click the third radio button on the right-hand section" that add nothing to the documentation and are easily misunderstood by hapless users. Then the developer changes the layout of the dialog box and you need a new set of screenshots. I'll grant that GUIs are more discoverable to a casual user.

I seem to recall that it's possible to automate GUI software but it seems fraught with peril in a way that automation of a commandline and text config system simply is not.


Check out AutoIt for an example of GUI automation. I've used it before as a hacky way to extend GUI programs.


Totally with you. I HATE dealing with Windows server for this very reason. Whereas, even before Puppet and Chef and the like you could mostly automate the deployment and configuration of a new *nix server with shell scripts and config file templates.

Try automating the deployment of a new IIS server 8 years ago. Hell, try it today.


    > Try automating the deployment of a new IIS server 8
    > years ago. Hell, try it today.
I've tried manually setting up a new local development server a couple of times; I've given up every time.

"Download this, and this. Then run the Installation Wizard and select this and this and this, and this if you want it but you might not need it. Then install this and reboot. Then run the New Server Configuration Wizard Utility Package. Then ..." (cont. 94 pages) -- every tutorial.


@7 - is that so? I just gave up installing F# developer tools on Windows last night after two hours, 3 general install methods (check the F# Foundation site) and some 6+ installer packages (some of which wanted to eat 8 GB of disk space). - And yes, my copy of Windows is reasonably modern (8.1) and legal.

Contrast that with Ubuntu, where installing F# took all of 5 minutes, with one command, 200MB and I had a full IDE and F# support.

The one concession I'll make is the one you yourself seem ignorant of - ease of use pertains to your expertise with the system. If you grew up on Windows, you may get its idiosyncrasies.


> Contrast that with Ubuntu, where installing F# took all of 5 minutes, with one command, 200MB and I had a full IDE and F# support.

Unless you want the latest version, in which case you are cloning a few repos, compiling, and hunting dependencies, because building Mono + IDE isn't straightforward.


There's always the rolling release route, which I have been happily running for 4+ years without any issues.


Same here (Arch), but I just wanted to point out that Ubuntu has PPAs for almost every piece of software out there. Including new versions of Mono, Monodevelop and F#.


There isn't any middle ground though. I want most of my software to be stable, with a few packages at the latest version. Windows handles this scenario. Also, I have trust issues (after Mint, who could blame me): is there any rolling release that is backed by a company?


Disclaimer: I work at SUSE.

OpenSUSE Tumbleweed is rolling-release and we use all of the same (free software) QA and build systems we use for OpenSUSE Leap and SLE to build and test it. Not to mention that SUSE essentially mirrors packages between OpenSUSE Leap and SLE (our enterprise distribution).

I've been told by some of the people working on Tumbleweed that there have been a lot of people switching from Arch to OpenSUSE Tumbleweed because the packages are released much faster (which appears to be the case from my usage of Arch and TW). But if you're looking at having minimalist installs, there's still some work left to do (minimal "server" installs are still a bit too bloated, and --no-recommends isn't the default).

But yes, there is a rolling-release distribution backed by a company.


There's SuSE rolling, but I can also attest to Arch being super stable.


The latest Mono and MonoDevelop are available from an Ubuntu/Debian repo on the official web site.


These days. I remember not that long ago when I was fixing files by hand to make them compile on Ubuntu. And it was just an example, topical to F#/.NET. Ubuntu is great when you are fine with the version in the repos, but can be more painful when there aren't any third-party repos, because Linux distros are fractured.


Is this an issue with Windows itself, or with developers poorly supporting Windows? I've had similar problems with installing stuff, but it's always programming-related stuff. Most software just works, but try installing pip and you need to set aside the whole day for reading bad documentation that just assumes you use Linux.


Since F# is by Microsoft themselves, I would hope that is not the fault of the developers in this instance.


8. Lack of a proper, well integrated, easy to use, expressive permissions system, ideally with a notion of complete isolation by default. Right now most users rely on the benevolence of software writers not to mess with their personal files, but sometimes things go awry (that Steam home-folder deletion disaster comes to mind).

Imagine mobile OSs with just the Unix permissions system, the malware spread on those would be so humongous, it'd almost be funny again (arguably this was a long-time problem anyway privacy-wise, with software requiring privileges that couldn't be faked (e.g. giving the application a fake address book instead of your own), but at least apps couldn't easily nuke/hijack all your personal files.)


This is coming with wayland and xdg-app. I say this not to try to refute your point but to give you something to Google for if you're curious about how things will probably work in the future.


Android is just that, and what they do there is run each app as its own user.


Though, Android also uses SELinux. I am not sure I would consider SELinux part of standard Unix permissions.


That is something introduced in recent versions.

And I suspect they did it more to get onto government approval lists than anything else (though it may also placate the *AAs).


> Windows is a lot more reliable than Linux at having things "just work" these days.

As long as you only do the things you're allowed to do. I just replaced my 6-year old Windows gaming machine, the only win machine I have. I can't even change the windowing theme - it has to be MS's preselected graphics. I wanted to turn off all the 'phone home' stuff except updates and windows defender; these are spread through half-a-dozen locations. I hadn't even got to install my first bit of software yet (firefox) and already I was limited over what can normally be done with a desktop.

"Just works" isn't really an argument when it's paired with "but you can only do these things".

Not to mention that back in the server world, lightweight virtual servers are an impossibility in the Windows arena. *nix servers are small in volume size, can run on fumes, and are largely disposable. Windows servers are (relatively) huge, slow to launch, require much more system resources, and require licensing. That isn't "just works" for me.

> [text config] This is often ok, but it does make it hard to integrate with GUI tools, hence the lack of them.

Only if your GUI tools are written from the viewpoint that nothing else should touch the config file. After all, if I can write a bash script that upserts a config value in a text ini file, why can't a GUI programmer?
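
Something like this minimal sketch would do it, assuming simple key=value lines with no sections:

    # upsert KEY VALUE FILE: replace the line if the key exists, else append
    upsert() {
        if grep -q "^$1=" "$3"; then
            sed -i "s|^$1=.*|$1=$2|" "$3"
        else
            printf '%s=%s\n' "$1" "$2" >> "$3"
        fi
    }

    upsert max_clients 256 app.conf

Note that it touches only the one matching line, so comments and ordering elsewhere in the file survive; a GUI tool could do exactly the same.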


>Windows is a lot more reliable than Linux at having things "just work" these days.

Ha, I'm server tech at a hosting company and this is spit-take worthy.

I really can't see how anyone could possibly think this.

Unless you're talking about end users' local PCs this is wrong, and even then it generally isn't 'Linux bugginess', it's 'user ineptness'.



The two are completely different, and "just works" is definitely Windows' strong side on the desktop.

On my desktop system I want apps installed in isolated directories, with few or no central dependencies. Even if it means I have some unpatched vulnerability in 10 places.

For a server system I don't mind having to tinker, and central libraries can even be a security win.


Windows "just works" when the box is freshly unwrapped.

Use it for a month and all manner of oddities bubble to the surface.


I think that can be attributed to user skill for both, I render my Linux machines weird more often than my Windows machines, but admit it's because I'm better at driving Windows machines. I have had zero problems with Windows machines since win7 that can't be attributed to hardware failures.

Also by "just works" I meant mostly being able to get binaries from a random website and installing without having to use a package manager or failing to have the right deps. If you go to the 100 biggest app sites (Skype, Spotify, ...) and try to set up from a download, the "just works" is probably a lot better on win and Mac. This is of course a lot due to the size of the market, but it's no secret that standard cross-distro/desktop-env prebuilt binary installers for GUI apps are still not exactly a strong point on Linux.


Disclaimer: I work for SUSE, a Linux company which provides support for enterprises running SLES, as well as contributing our packages and knowledge to the openSUSE community.

I don't see how you could consider package management a bad thing. Why do you consider "downloading binaries from a random website" to be a "good thing"? Not to mention that those binaries almost never update themselves, and how well they deal with dependencies depends on what $500 installer builder they used.

Package managers allow you to always keep your system up to date, and you have a single database of all software that has been installed, what its dependencies are and what files it installed (so you can uninstall it). They are definitely one of the awesome things about Linux. OS X has Homebrew, but it's not well integrated into the system because it's a third-party library of software. The BSDs' package managers are at least 10 years behind Linux's (they're still working on packaging the base system). Windows has nothing AFAIK. Things like OBS allow you to automate the release of new versions; OpenQA allows you to do automated QA testing to make sure there are no regressions in graphical or terminal services.

I especially don't understand why you think that not having a way to update the libraries on your system is a good idea. Packaging the same DLL in 30 different places is not a good thing.


1. Nothing wrong with packages, but central repos rarely contain up-to-date packages. I don't mind going to skype.com and downloading a .deb package for Skype (I wish it was the same format for desktop software for all flavors of Linux, but I digress).

2. Shared libraries are only good for saving space (largely irrelevant on desktop) and for security. Different side-by-side versions of libs work poorly when you reach "system" level, in my experience. Example: having apps that require different incompatible glibc versions is painful. For example: see the accepted answer to this question http://stackoverflow.com/questions/847179/multiple-glibc-lib... "The absolute path to ld-linux.so.2 is hard-coded into the executable at link time" WAT?
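
You can see that hard-coded path for yourself (the output is from a typical x86-64 system):

    readelf -l /bin/ls | grep 'program interpreter'
    #   [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]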

I think the difference in mindset between what desktop computing is, and what a "system" is, is very different between a Linux user and a Windows user (i.e. one that just wants an OS that is a dumb layer for running binary compiled shitware that must be built and distributed by its creator because it will never be in a repo).


> 1. Downloading binaries from websites is what Windows users know. Package management is vastly superior as long as the packages exist and are up to date with the "official" source such as Spotify. If the package is a week late, then I'm going to prefer the direct binary. Once half my apps are direct and half are packages, the benefits of a package system diminishes.

"What Windows users know" doesn't mean that it's a good thing. Windows users also know to run everything with administrative privileges. For sufficiently sophisticated build systems (read: OBS) you can automatically rebuild packages. The reason why packaging takes time is because there is a testing process (which can also be automated with things like OpenQA), but there's lots of other maintainence that goes on when curating packages. Believe it or not, but sometimes upstream is downright irresponsible when doing version bumps and it's the maintainer's job to deal with it. It's fairly thankless work, to be honest, because you're not working on the new hot stuff. Sure, "just download a binary" works until you have multiple components that depend on each other.

> 2. Shared libraries are only good for saving space (irrelevant on desktop) and for security. Different side by side versions of libs work poorly when you reach "system" level is my experience. Several libc versions etc is painful.

"Only good for [...] security" is enough reason for me. Tell me how Windows programs deal with updates to critical libraries that everyone uses separately. I'm guessing the answer is "not well at all". And if you're going though your package manager, then no package should require a specific version of libc (besides, this problem can be mitigated somewhat with symbol versioning). The gains far outweigh the perceived costs IMO.


Argh your ninja response time meant my complete rewrite of my above post now looks silly, sorry :)

> "What Windows users know" doesn't mean that it's a good thing.

I know (I also removed it). It's patently stupid. Let me rephrase: if you want one way of distributing apps that anyone can use, it's basically the only working way: download a binary from the creator's site. Otherwise you end up with the utterly broken method of "check if it's in a tree in some package repo; if not, you can add more package sources to your repo; if not, check if you can find a downloadable package for it; if not, you build from source".

> Tell me how Windows programs deal with updates to critical libraries that everyone uses separately

They don't. It's both a bug and a feature. OS libraries are updated of course (by Windows Update), but I don't necessarily consider e.g. a C++ runtime to be an OS library, even if it's Microsoft's own redist. I prefer my applications to ship their own copy of their C++ runtime and keep it local because it limits problems. Even at the cost of having an unpatched one somewhere.

> Windows users also know to run everything with administrative privileges.

Well, accidentally answering "yes" to the UAC prompt is about as likely as accidentally sudoing something imo.


Nope. Linux turns weird usually when someone is fiddling with it outside of daily usage. Windows seems to turn weird no matter what.


I reinstall Windows every few years when something really bad happens and have never seen Windows turn weird. After a few years it is as good as a fresh install.

On the other hand, sometimes on the internet I see "advice" like "Windows should be reinstalled every six months" and wonder what the hell these people are doing with their computers?


> On my desktop system I want apps installed in isolated directories, with few or no central dependencies. Even if it means I have some unpatched vulnerability in 10 places.

Why?

We aren't there yet, but Linux is trending towards xdg-app and appstream-esque projects producing a "common" nomenclature for software. Then you can write once install anywhere sandboxed app packages. All you really need are abstractions for both the package manager specific naming conventions and the system specific MAC filter.


If you want isolated directories, install to /opt then. That's what it's for.


Some comments on some of your questions:

1. Legacy and convention. Why do my 64-bit system files live in C:\Windows\system32? Why is the first volume on my system C and not A? Why are there multiple 'global' window stations on my system? Why is the real path to my disk drive \GLOBAL??\PhysicalDrive0, but for some reason I have to use a different path (\\.\PhysicalDrive0) in my programs or else it won't work; a path that I can't discover by myself but have to be told to use by scouring the darkest depths of MSDN? What the heck is ipv6-literal.net and why do I have to refer to some weird third-party domain to connect to a file share on IPv6 within my own network? Making these changes would be disruptive for the software that has to make the change, and impossible for the software that can not be modified. Distributions exist that try to improve the hierarchy (GoboLinux) but no one adopted them. We're currently seeing a push to unify the / and /usr hierarchies, so at least in the future things will get a bit simpler here.

2. Legacy and convention. In the days when storage was scarce, you could keep the contents of /usr/share and /usr/lib on central file servers; a single export for the former could serve all your clients, and you'd only need a single instance of the latter for each architecture in use, rather than having a separate copy on each machine. Besides, even in a world where each program lives in its own directory, as soon as you want one program to install a component for the other to consume, you have to bring in a package manager to remember the fact that /app/A installed a plugin into the /App/B/Plug-Ins directory... not to mention the unusably long PATH environment variable that would result... I find the package manager approach is overall superior to the unreproducible crap-fest you get when applications arbitrarily dump files all over the system.

3. Perhaps I'm in the minority, but I've never had problems relying on glibc via dynamic linking; you just have to build against the oldest version that you want to support, which is a bit of a pain but it's hardly the end of the world. The tradeoff with musl is that you can no longer rely on dynamically loaded NSS and gconv modules. If you don't need these, fine, knock yourself out--but you should be aware of the tradeoffs when you switch your libc implementation out; namely that you can no longer use mdns, myhostname, ipv6literal, winbind, LDAP, etc. for looking up hosts, users, groups and so on.

4. System-wide configuration is better kept in text form where it can be read by a human, contain comments, and be kept in Git. Desktop programs can store their config in dconf, or otherwise do whatever they want as long as it lives in ~/.config and I don't have to care about it.

5. As long as you use “set -eu” with an understanding of its shortcomings, and know how to quote variables properly, writing small and medium systems in shell is a great tradeoff between development speed and robustness. Components can be rewritten in a real systems programming language once they stabilize or require better integration than can be had by parsing the output of other commands.
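
A minimal example of that style (the script's job is illustrative):

    #!/bin/sh
    set -eu                        # abort on errors and unset variables
    src=${1:?usage: $0 SRC DEST}   # fail fast with a usage message
    dst=${2:?usage: $0 SRC DEST}
    cp -a -- "$src" "$dst"         # quote expansions; -- guards odd names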


> What the heck is ip6-literal.net

So, I was looking this up as I haven't used Windows in forever. I then did a `whois ipv6-literal.net` and was surprised that Microsoft doesn't own it. That seems weird for them to use it in such a way?


I made a typo, it should have been ipv6-literal.net, which they do own. Still sucks for the rest of us who have to interoperate, though at least there is an NSS module available to make it a bit easier (https://www.samba.org/~idra/code/nss-ipv6literal/README.html). Not that IskKebab will be using it with musl... :_)


This is what I see:

    Domain Name: Ipv6-literal.net
    Registry Domain ID: 1915314004_DOMAIN_NET-VRSN
    Registrar WHOIS server: whois.NameBright.com
    Registrar URL: http://www.NameBright.com
    Updated Date: 2015-09-26T00:00:00.000Z
    Creation Date: 2015-03-31T18:16:33.000Z
    Registrar Registration Expiration Date: 2016-03-31T00:00:00.000Z
    Registrar: DropCatch.com 577 LLC
    Registrar IANA ID: 2057
    Registrar Abuse Contact Email: abuse@NameBright.com
    Registrar Abuse Contact Phone: +1.720.496.0020
    Domain Status: clientTransferProhibited
    Registry Registrant ID:
    Registrant Name: lirong shi
    Registrant Organization: www.Juming.com


The answer for most of these: legacy. Changing all these is very hard, as it'd be difficult to change the \ to / under Windows, etc.


Where does using / in paths rather than \ not work in Windows? Sorry to pick on your example, but from what I can see, they've already done that.


If you are using / in paths you are limited to paths of at most 260 characters.

If you want paths up to 32k you need to use backslashes.

I think this is the reason they are not switching e.g. Visual Studio over to max 32k paths.


It clashes with command line switches. dir /w does not list the contents of the directory called w.


On Windows, you've really got to be quoting your paths everywhere anyway, since spaces abound, and if you do that, then dir "/w" works as expected.


Considering that most file systems these days allow spaces in paths, I'd guess you can safely remove the »On Windows« there. The amount of shell and build scripts on Unix-likes that die horrible deaths when encountering spaces is not funny. And well, yes, in a way that probably means that spaces in paths are not »supported« there, but you could then say the same about Windows. As well as using non-ASCII in paths.


> The amount of shell and build scripts on Unix-likes that die horrible deaths when encountering spaces is not funny.

That's so accurate. I used to write shell scripts with

    command $1
until I got a few too many nasty surprises with dashes in filenames.


Always use "$@"

    foo() {
        for arg in "$@" ; do
            echo "arg is \"${arg}\""
        done
    }

    foo 'bar baz' 'spaces in filename.txt'
> dashes in filenames.

When writing shell scripts it's a good idea to use the -- option whenever possible

    stupid_backup() {
        cp -a -- "$@" /stupid/backup/dir/
    }
 
    # copies 2 files, literally named "--files" and "-with -leading -dashes"
    stupid_backup --files '-with -leading -dashes'


What does the -- option do? I haven't encountered that before.


It means "everything past here should not be parsed as a - or -- flag". If you have a file named "-l", then ls -- -l will show you that file, instead of doing a long listing.


It should be noted that not all flag parsers support it. But most people use getopt so it's not a big deal.


Thanks - that's really useful.


Not everything's a script though. I avoid using spaces, so I'm in the habit of (outside of a script) not quoting; if I bump into a space within a path then by that point it's just quicker to escape it.


I've been able to successfully use / in the Win32 API with the exception of CreateProcess. I think the reason is that when you start an executable you may pass command line arguments starting with /.


Respecting legacy decisions is very important. Often in software design there are many ways to do the same thing. Unless there is an important reason to favor one of the possibilities, the correct answer is almost always to pick the legacy version.

Compatibility is important, and not just for existing tools. Choosing something different has a learning cost, and if you have to care about both versions you have ongoing mental effort costs as well.

There may be better names for "/etc", but it's not worth the effort.


>Programs are mushed together and scattered through the filesystem rather than stored in separate locations. This basically means applications are install-only. Yeah, package managers try to keep track of everything, but that is just hacking around the problem, and most developers don't want to spend hours creating 5 different distro packages.

What about things like Encap/GNU Stow, which symlink files from a single package dir?


> 1. Unnecessary and confusing directory structure.

"Unnecessary" for you maybe, but have you considered the possibility that there is a reason why the directory structure is the way it is - and that the problems originally addressed may still be relevant? Here is a hint: partitions. Partitioning the files roughly by usage pattern allows you to tune performance, safety and security in a way that you would have a very hard time doing otherwise. Your suggested names make me think that you aren't very clear on the directory's actual purpose [0]. While longer names may have helped you out in understanding their purpose, once you actually learn it you're stuck with an unnecessarily long PWD that wraps each prompt.

> 2. Programs are mushed together and scattered...

Again, consider why it is the way it is. Do you really want to have a PATH that includes every directory for every binary on the system, or manage individual file permissions within all those directories? You want to do that with libraries as well? Just consider the complexity of what you're proposing and how you'd address: system defaults, setuid, per user preferences, dependencies, build environments... Years ago I basically did what you're suggesting, when Redhat was my daily driver. I'd build from source and install into ~/bin. Try it out for a while, you'll hate it.

> 4. Emphasis on text-based configuration files... hard to integrate with GUI tools...

What are you suggesting, a windows like registry? The text-based configs are no more difficult to use through a GUI than a binary representation, they're both a library function call away. Unless you are suggesting a windows like registry... but you've already pointed out how developers don't want to spend time on portability - so that can't be it.

> 7. General bugginess... Windows is a lot more reliable than Linux...

Ah, well try out an OS that is closer to Unix than Linux - maybe one of the BSDs. I'd put my Freebsd workstation against any flavor of Windows in a contest of uptime and performance, that is a bet I'd be happy to take. As far as Windows just working, that is true for the majority of tasks for the majority of users. But if you fall outside of that happy band of the target market, you are SOL. Consider the whole Windows telemetry issue. Also, I just noticed in my network logs that Windows update is sending out IPv6 dns requests despite the fact that I've disabled it on the network interface and tweaked several registry variables... there is nothing I can do about it. It would be a pretty simple fix for any opensource OS though.

[0] https://www.freebsd.org/doc/handbook/dirstructure.html


> Do you really want to have a PATH that includes every directory for every binary on the system, or manage individual file permissions within all those directories?

No way. But I'd love to get rid of the plain PATH, replacing it with a hierarchical PATH with a convention. Maybe with every 'bin' directory inside that path getting in the search path, or maybe something that lets me nest things deeper.

One can not do this in Unix, and that's the point.


> One can not do this in Unix, and that's the point.

One certainly can, very easily actually: edit your shell rc file in /etc to modify your PATH based on PWD, boom - hierarchical PATH the Unix way. Don't want it system-wide? Edit your shell rc in the user home directory. Want something more complex? The POSIX shell source code is a lot simpler than you'd think. I wanted the same fancy git repo status PS1 stuff found in bash rc scripts, but without the performance impact - and in tcsh. It only took an hour of work to integrate libgit2. I don't think it would have been as easy with cmd.exe.
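
For instance, the parent's "every bin directory" convention is a couple of lines in an rc file (the ~/apps layout is an assumption):

    # put every */bin directory one level below ~/apps on the search path
    APPS_PATH=$(find "$HOME/apps" -mindepth 2 -maxdepth 2 -type d -name bin | paste -sd: -)
    export PATH="${APPS_PATH:+$APPS_PATH:}$PATH"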


In fact, you are right.

And I'm hierarchising my ~/bin :)


I personally implemented my PROMPT generation's git commit and branch checks using zsh (just shell scripting) because compiling and dealing with a divergent version of my shell is just too much of a pain given how many machines I have to deal with.


That would have been the easy way to do it, if tcsh allowed dynamically generated prompts (outside of a few stock flags). On Freebsd it makes sense to run your own packaging build server once you start running custom compiles on more than a few machines, so it is no big deal to compile once and `pkg install` everywhere you want it. I guess the downside of that is that it makes it easy to be lazy and not push upstream, which I'm pretty sure is the primary motivation for a lot of code contributions :)


> `/etc`? Why not `/config`?

/etc contains more than just config files so /config would be a misleading (or at least overly specific) name. If we went that route we'd need dozens of top level directories to cover everything. I do prefer OSX's more user friendly directory layout but the traditional directory structure has been around for decades. It works fine.


/etc contains more than just config files because it's called '/etc' and not '/config'.

And why would you need dozens of top level directories? I can't imagine that you could name even one dozen completely orthogonal aspects of program and system configuration that can't be put into _some_ meaningful hierarchy.


Linux != UNIX


As for 1. and 2., I kind of like how Apple solved it. When moving Xcode from one Mac machine to the other, I only needed to copy the /Applications/Xcode.app directory to the other machine and everything magically worked; configuration files are kept with their applications rather than littering the filesystem.


  > How about defending these?
  > 1. Unnecessary and confusing directory structure. `/etc`? Why not `/config`? 
  > `/usr` instead of `/system`, `/var` instead of ... well who knows. The maximum
  >   directory name length is no longer 3 characters.
I know yrro already explained the ludicrously inconsistent nature of the OS you're apparently defending, but I'll add in:

Why do I need to edit c:\Windows\System32\Drivers\etc\hosts (in what way is hosts related to bit-length or a driver?)

Why .htm rather than .html?

Why programiwanttorun.exe rather than programiwanttorun?

/system is as overloaded a word as any in IT. /config doesn't accurately reflect what /etc is about (but in any case, the latter is not a great barrier to entry)

If you're complaining about limits on directory entries... you're skating around on very thin ice if you're on the NTFS lake (compared to any of xfs, btrfs, reiserfs, ext2/3/4fs, etc)

  > 2. Programs are mushed together and scattered through the filesystem rather
  > than stored in separate locations. This basically means applications are
  > install-only. Yeah, package managers try to keep track of everything, but
  > that is just hacking around the problem, and most developers don't want to
  > spend hours creating 5 different distro packages.
14,000 registry entries for one suite of software... how is that not 'mushed together and scattered'?

Applications are not install-only. Because package managers (or, rather, distributions) managed to solve this problem elegantly more than a decade ago, I don't know how you can credibly make this claim.

Is 'mushed together and scattered' a contradiction?

Building packages for a variety of distros is a solved problem (again, it has been for a decade or more).

  > 3. Not strictly Unix, but the mess of glibc with respect to ABI compatibility,
  >  static linking, etc. is ridiculous. Musl fixes most of this fortunately.
It's hard to not sarcastically comment with the observation that the phrase DLL Hell did not originate within the *nix world.

More pragmatically, I rarely (in twenty years) have had glibc issues. I think perhaps 3 times. All easily solved.

  > 4. Emphasis on text-based configuration files. This is often ok, but it does
  >  make it hard to integrate with GUI tools, hence the lack of them.
The biggest complaint with Win95 was the move away from .ini files (text-based).

GUI tools do not intrinsically have an issue dealing with text-based configuration files.

I posit that the problem you're describing is the fact that text-based configuration files often contain useful human-readable components (which non-text-file config systems, such as 'the registry', lack), which are slightly harder to maintain using automated tools. But only slightly. As noted elsewhere, these are typically only a library call away.

  > 5. Emphasis on shell scripts. Fortunately this is starting to change, but
  > doing everything with shell scripts is terribly bug-prone and fragile.
Doing anything badly is fragile. Shell scripts aren't intrinsically bad - as evinced by the success of shell scripts.

I don't know many people who have significant experience with apt|rpm && sccm (for example) ... but I know a couple, and the fragility of shell scripts leads to fewer expletives than the alternative.

  >  6. X11. 'nuff said about that. When is Wayland going to be ready again?
Are you suggesting that a system that completely disavows any network-awareness is preferable to one, designed >20 years ago, that does it fairly well?

What problems have you had with X11 that aren't dwarfed by citrix / rdp / single-user consoles / etc?

  > 7. General bugginess. I know stuff works 90% of the time, but that 10% is
  > infuriating. Windows is a lot more reliable than Linux at having things
  > "just work" these days.
Sounds like hyperbole. If things in the GNU/Linux world broke 10% of the time there'd be a lot fewer people using it - and if Microsoft Windows was more reliable than GNU/Linux, there'd be a lot more people moving towards it rather than away from it.


X11 is inherently insecure. With Wayland, the compositor itself is privileged but all clients get access to only their frame buffers and event streams.

Network transparency is mostly irrelevant on modern desktop systems, X11 remotes modern apps poorly at best, and if you really need remote desktop access you know where to find RDP and SPICE.

The Wayland switch will be a win because X11 is almost pure cruft. The good parts are what was kept in Wayland.


> X11 is inherently insecure.

That depends on your perspective. Why are you accepting connections from malicious X clients?

> Network transparency is mostly irrelevant on modern desktop system

For you, maybe. That's an opinion that many of us do not share.

> X11 is almost pure cruft

It's only cruft if you limit your use cases to stuff like GTK+ that decided to only use X as a dumb framebuffer.

> The good parts are what was kept in Wayland.

Except for a long list of features, such as network transparency, support for copying PRIMARY selections in addition to CLIPBOARD, or overriding input events of arbitrary programs. Until Wayland supports these, it isn't compatible with a lot of my tools.

Just because something is "old" or has features that you personally don't use doesn't mean those are "bad" features that should be removed. There are more use cases than those in your definition of "desktop".


>it does make it hard to integrate with GUI tools, hence the lack of them.

I don't understand this point. Care to explain, please?


My guess is that GUI tools often clobber configuration files once they touch them, because it's easier to write code that stores configuration as a hashmap, and when it saves it back the output is not going to preserve things like user comments, the order of things, etc.


Is it really though? And why are we just calling out GUI tools? Command-line tools also must read in a config file, and they manage with text. JSON, INI, YAML and XML parsers and writers exist in every language. There really isn't an excuse not to use a text-based config.


If the Windows registry + INI files + config files anyway is the alternative, I'll take the text files please.


I think command line tools (like say, git-config) manage one option at a time.

GUI tools tend to manage everything at once. Imagine the giant options box from MS Word 2003.


It still has to read in the current config, mutate it, and write it out.


I guess they mean that because GUI tools often have, well, a GUI to change options, they tend to write their configuration files, whereas for command-line tools it's rare that they provide methods for changing their options, so they tend to only read their config, never running into the problem of clobbering user edits.

Mind you, there are plenty of GUI applications that get this right anyway. But usually, for the vast majority of users, there is never a need to mangle configuration files by other means.


OS X is actually certified as a Unix, unlike many Linux distros.


Do you mean POSIX? In which case you should be aware that it actually costs money to get licensed as a POSIX-certified OS. If you actually meant UNIX, then of course GNU (GNU is not UNIX) isn't UNIX. It's in the name. At best the Linux kernel is a cousin of UNIX.


I meant Unix, but I should have said unlike all Linux distros except Inspur K-UX. I thought there were more but WP doesn't list any and seems comprehensive otherwise.

My point was really that some/most of those complaints about "Unix" don't apply to OS X for most people and sound more like the pain that users running Linux have to deal with, even though it's not typically an official Unix. I hope this was more clear.



