Hacker News

I've been tempted to write rants like this before. Ryan's point seems particularly centered around Unix, which makes sense. My experience of trying to get stuff done in Unix has taught me that it is a really powerful, extremely well-thought-out way to waste my fucking time.

All of it. Down the fucking toilet, and for stuff I don't give a shit about. Every single goddamn thing I try to accomplish while setting up a server involves a minimum of 1 hour of googling and tinkering. Installing PHP? Did you install it the right way? Did you install the special packages that make it secure and not mastodon slow? You want to create a daemon process? Hope you found the right guide! Setting up a mail server? Kill yourself.

For some people, this is not the case. They have spent multiple decades breathing in the Unix environment, and are quite good at guessing how the other guy probably designed his system. And they don't mind spending the majority of their productive hours tinkering with this stuff. But I don't have time. I don't care. I don't have time to read your 20-page manual/treatise on a utility that doesn't explain how to actually use the thing until page 17. I don't want to figure out why your project doesn't build on my machine because I'm missing some library that you need even though I have it installed but some bash variable isn't set and blah blah blah blah.

The problem with Unix is that it doesn't have a concept of a user. It was not designed that way. It was designed for the people who programmed it. Other pieces were designed for the people who programmed them. If you are using a piece that you built, then you are a user. Otherwise you are a troublesome interloper, and the system is simply waiting in a corner, wishing you would go away.

And yet...we put up with it. Because there isn't a better option. Because it's our job. Because we'd rather just bull through and get things done than spend an infinite amount of time fixing something that isn't fixable. Life sucks, but NodeJS is pretty cool.

"Because it's our job." Well probably not. I don't spend hours on fixing my own car. I leave that job to the ones who know about cars. Install a new server: ask someone who knows, give him your specs and he will set up the rig.

That's the funny thing about the internet: everybody can get the knowledge he needs, but that doesn't mean he can apply that knowledge or knows all the consequences.

Long live the PaaS mechanics!


Have your car fixed, pay by the hour, brilliant idea!

Exactly. As a friend of mine once said, "being in shape and being able to get into shape aren't the same thing".

He mentions things in that rant that I have no idea about, but I can get plenty of stuff done on Debian (as an example) - coding command line apps, creating services, deploying webapps...

I mean, it sounds like he's doing some pretty fiddly stuff - really getting in there and hacking. I don't see how that could ever be simple, and it certainly isn't anything 'end user' facing.

End users check their email online and work in spreadsheets occasionally. They don't develop server-side js frameworks. I guess the argument is that if things were simpler, then maybe they could do those things? But I don't buy it.

(I get the frustration, when you're held up for 45 minutes googling because some library is missing or a string isn't formatted just so, but that sort of thing only happens when you're literally hacking things up. Which isn't end user behavior, and I can't see how it could ever be a whole lot simpler. Maybe I'm just short-sighted.)

End users do NOT only "check their email online and work in spreadsheets occasionally". They use complicated workflow systems, data analysis environments, resource-hungry media editing applications, and very complex yet almost undocumented scientific instruments. And the mountain of domain knowledge they have is no smaller than a Unix systems programmer's, so they just don't have room for the latter.

I don't think you understood the post correctly. Your comment is the same point that he is making, that end users never see any of the stuff he needs to fiddle around with while developing. Why does the developer need to learn all of that stuff when it makes no difference to the end user?

He's saying that the development side could be simplified as long as the end result (what the user sees) stays the same, since the user doesn't care how the product was developed.

The dev side is horrendously complicated, which is why he says he hates almost all software.

Here's an example: I use Snort, and wanted to set up Snorby because BASE is ancient and creaky and doesn't work well. It's written in Ruby, so it should be easy, right? Just get a package with the correct version of Ruby, then gem install until I have the prerequisites.

Nope. I gave up trying to install it months ago, but it required many external programs at versions too recent to be included in distro repositories, and which as far as I could tell were mutually incompatible. Obviously, people have gotten it to work, because it's a pretty popular front-end, but I never did.

I agree with robomc, but it appears that most people are mixing up 'not buying' with 'not understanding'.

Sure, nodejs would be easy to pick up if it had no dependencies and its binaries were contained in a folder - i.e. portable - but assuming nodejs could be of any interest to the end user is a bit of an exaggeration IMHO. Therefore I don't buy it either.

On a practical note, I've seen the install situation improve in the last 5 years thanks to two ubuntu features:

- the ease of apt-get install (inherited from debian)

- ubuntu LTS releases

I now use availability of apt-get instructions for an LTS as one of the measures of the maturity of a software package.

So.. things that have been in Debian for years.

NO - this is about the availability of DOCS for setting up 3rd-party software.

10 years ago, Debian may have had apt-get and stable, but a lot of software setup docs were written for building from source, leading to possible compile issues and version conflicts for libraries.

Now, many setup docs are written as apt-get of binary packages for the last Ubuntu LTS.
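For comparison, the two styles side by side (package names here are purely illustrative); the apt-get form pulls tested binaries and their dependencies straight from the LTS repositories:

```shell
# New style: binary packages from the distro's LTS repositories,
# with dependencies resolved by apt. (Package names are illustrative.)
sudo apt-get update
sudo apt-get install -y nginx postgresql

# Old style: fetch a tarball and hope the build finds the right libraries.
# wget http://example.org/foo-1.0.tar.gz
# tar xzf foo-1.0.tar.gz && cd foo-1.0
# ./configure && make && sudo make install
```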

Unix is just an example. Doing something non-trivial in Windows is usually even more complicated and involves a good amount of black magic. The post is about how tool complexity vastly exceeds complexity of the problems those tools solve.

    Because we'd rather just bull through and get things done than spend an infinite
    amount of time fixing something that isn't fixable.
This is a lie. Most developers don't even try to make things simple, and then say it's an unfixable issue to give themselves an excuse. I see this happen pretty much every day. Yes, simple is hard. Yes, designing simple systems requires doing more work, and sometimes re-doing your old work, but it does not require an "infinite amount of time". It's perfectly doable, and there is a return on investment in the long run, unless you're solving fake problems in the first place.

Have you looked at http://www.turnkeylinux.org/ ?

Thus Heroku.

Exactly, and Heroku isn't simple.

It's just simple for you, because it places you in an actual end user role. (Which you pay for).

Heroku is no doubt complicated as hell for the people who developed it. So if the argument is that developing node.js should have been like deploying to a paid managed hosting environment with fairly tight requirements, then ok. But that's a weird thing to assert.

Isn't the entire point of the op about making software simple for the end user? Obviously Heroku is complex underneath, but they've created a beautiful abstraction that makes an extremely complicated stack very simple to interact with. It doesn't solve every system administration problem in the world, but they're doing their part to make software more pleasant to work with.

Yeah, I see what you're saying. I guess my angle is that the original Heroku (before they had the cash and time to add more support) had a tight, limited, costly scope - it concealed the complexity of deploying a rack app, and charged $50-100 a month for that concealment.

And I'd say that if all you want to do is deploy a rack app, you can do that relatively simply in Debian too, for free. (I'd do it with rvm and perhaps build nginx from source and such, but you could do 90% of it straight from apt-get, rubygems and editing ~three config files).

The sorts of things that are really tricky, and have you sweating over the sorts of things the original post is complaining about, are just hard original work (and free, and you don't have to wait years for them to support your pet language or framework or whatever).

It's an acceptable trade-off, an issue internal to the concerns of developers - not end users, and not a reason to hate on unix, is all I'm saying. Not at all hating on the service Heroku provides.

I looked at the Heroku sign-up page and got intimidated as heck by all the terminology they use. I can't imagine what the setup and admin process is like... it could be simple, who knows, but their pre-sales page confuses me...

For anyone who isn't familiar with git, I can see how that could be confusing. Other than that, the "How It Works" page (which is what I assume you mean by signup page) is mostly just explaining the secret sauce, and doesn't really matter that much to the end user (developers). All that matters is gem install heroku, heroku create, git push heroku master (etc).
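To make that concrete, the entire developer-facing workflow at the time was roughly three commands (the app name is illustrative):

```shell
# Heroku's whole surface area for a basic deploy, circa this thread:
gem install heroku       # the CLI shipped as a Ruby gem back then
heroku create myapp      # provision the app and add a 'heroku' git remote
git push heroku master   # deploy: Heroku builds and runs what you pushed
```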

And now you know why I use a Mac. :)

Unix is there, if I want it and thank the gods there are package managers.

Imagine the amount of hacks and abstractions that go into making a GUI work on top of UNIX. The complication is compounded; it is easier for the end user (for some tasks, not all), but the complexity he is talking about isn't solved by fancy widgets.

It depends what kind of user you are. Few users require fussing around with D-Bus etc.

Unix is very easy if you take time to learn it. Most things behave the same way.

Demonstrably untrue based on two and a half decades of experience. At least chasing lib dependencies for 18 hours during the config/make/make install process is mostly a thing of the past.

Do you have any broad advice on easing the config/make/make install process?

apt-get install

or yum install, where yum automatically resolves the dependencies for you. No more make && make install.

Nothing to do.

  Most things behave the same way.
Ha ha. You jest.

  bbot@bbot:~/foo$ ls
  bar
  bbot@bbot:~/foo$ ls bar
  bbot@bbot:~/foo$ mv bar baz
  bbot@bbot:~/foo$ ls
  baz
  bbot@bbot:~/foo$ cp baz bar
  cp: omitting directory `baz'
  bbot@bbot:~/foo$ ls
  baz
There are hundreds of things like this. dd's hilarious syntax. The spotty usage of --help. Inconsistent behavior when you invoke a command with no arguments. The dazzling array of contradictory option flags for ls. Everything about vi. Etc etc etc.
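The mv/cp asymmetry above is easy to reproduce: mv relocates a directory silently, while cp refuses unless you remember -r (the exact error wording varies between coreutils versions):

```shell
#!/bin/sh
# mv treats a directory like any other file; cp demands a flag for it.
mkdir -p /tmp/unixdemo/bar
cd /tmp/unixdemo

mv bar baz            # succeeds, no output
cp baz bar || true    # fails: cp omits directories unless -r is given
cp -r baz bar         # the required incantation; now both exist
```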

He said "Unix", not "Linux".

I'm confused. Which of these don't apply?

You just proved the authors point.

Free Your Technical Aesthetic from the 1970s: http://prog21.dadgum.com/74.html

"And yet...we put up with it. Because there isn't a better option." vs. "this is needless" (from the article) doesn't really gel. If it's actually needless, then you know a better way of doing it - so publish it!

"The problem with Unix is that it doesn't have a concept of a user." Nope. It doesn't have a concept of a *naive* user. Table saws also don't have a concept of a naive user, but people don't bitch about folks trying to use a table saw without having to learn how first.

> Nope. It doesn't have a concept of a *naive* user.

I forget who originally said it, but your comment reminded me of this quote: "UNIX is user friendly, it's just choosy about who its friends are."

And occasionally even your best friends get on your nerves, I find.

That's one brilliant thought.

When someone inexperienced tries to use a chainsaw, he may cut his hand off. Nobody blames the tool - it's obvious that if a green guy gets hurt, it's because of his own foolishness.

Sadly, in IT it's the opposite. People bitch about the tools, paradigms, and philosophies without really doing their homework. Hey, once upon a time it took a lifetime to master a specific craft. Let's be decent and maybe a bit humbler.

I have a theory about why that is, by the way. In conventional crafts everything is physical, touchable, solid. In IT everything is abstract and prone to easy judgement and mindless relativism. Please, let's bring craft back to hacking!

Challenge accepted!

- Here's how needless Unix 'users' are:

Every fresh server install, I have to make up a meaningless string called the 'login' of an 'admin user' who belongs to a 'group' called 'admin'. Once upon a time, I could use the well known admin login 'root'. Now that's not allowed. I have to make up a name, remember this name when you connect and then remember to prefix every command with sudo.

- Here's a better way of doing it:

Give me a server distro where I don't need a 'login'.

Meanwhile, why is apache pretending to be a 'user' called 'nobody'/'http' and not using some 'capabilities' or some shit like that?!!!!

> Every fresh server install, I have to make up a meaningless string called the 'login'

If you are only doing the occasional install then this really shouldn't be a great hardship. If you are installing many servers you should have this part automated. And it shouldn't be meaningless either. I think you are doing it wrong.

> Once upon a time, I could use the well known admin login 'root'.

You still can. root login can always be reenabled if you want it that badly. There is also "sudo su" (unless explicitly disabled by your admins) to avoid repeated invocations of sudo while you are performing a long admin task.

> - Here's a better way of doing it:

> Give me a server distro where I don't need a 'login'.

No, no, and thrice no. Far too many newbies will leave it in that state and get hacked to buggery in short order. Even if it is only for local console logins, I'd consider it a bad idea.

No matter how inconvenient it is, a server install should never default to an insecure state, and allowing access without authentication is such a state.

Live CDs often do this, but they are not production systems.

> Meanwhile, why is apache pretending to be a 'user'

Well, that much is a valid point. There has been work in this area, but none of it has made its way into the default setups of most unix-alike systems.

> There is also "sudo su" (unless explicitly disabled by your admins) to avoid repeated invocations of sudo while you are performing a long admin task.

You can also do 'sudo -s', which keeps you in your normal user's shell. It's pretty slick.
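The variants differ mainly in which shell and environment you end up with; a quick comparison (behavior can vary slightly with sudo configuration):

```shell
sudo su    # run su as root: root's shell, a partially switched environment
sudo -s    # your own $SHELL, running with root privileges
sudo -i    # simulate a full root login: root's shell, root's environment,
           # starting in root's home directory
```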

You learn something new every day. Thanks for the tip, I'll give that a try at some point.

Here's a simple thought experiment:

    Imagine a distro that changed the terms 'login/password' to 'password1/password2'
No commands you type would change, but you'd wonder why the password is in 2 parts. That's how redundant the 'user' is!

There's a security problem with that:

  $ chpasswd maybeUniqueString
  Error: maybeUniqueString already in use.

  $ su maybeUniqueString
So username and password are not totally redundant.

Well, as the old saying goes, those who do not know Unix are doomed to reinvent it.... poorly. I am rarely surprised anymore by the bad ideas I see people come up with.

Sounds like you've misunderstood - either password1 or password2 will get you in!

I didn't change any behavior - just the UI strings. So you need both!


    $ su maybeUniqueString1
    Enter Password2:

Isn't that exactly the amount of stuff they would already have to guess?

No, you can have many users, and this way you try to guess everyone's password at the same time. Also, sadly, passwords tend to be repetitive, so now someone can accidentally guess someone else's password.

Cute, except that password1 has to be unique. Hope you like confusing some of your users.

You'd be better off getting rid of the login altogether and using a GUID. People still share computers, you know.

Uniqueness is not an issue - see my original point - we only create one admin user on servers!

Imagine a team of 3 people running a SaaS webapp on 3 web servers & 1 db server. I guarantee no one will waste their time creating 3 'users' on each server i.e. 3x4 = 12 'users' on that cluster.

> I guarantee no one will waste their time creating 3 'users' on each server i.e. 3x4 = 12 'users' on that cluster.

It sounds — and I don't mean to be rude — like you have not been involved in a "real" production environment.

Modern Unix environments are automatically managed with modular configuration systems such as Puppet or Chef. Sysadmins have little or no need to log into servers to configure them; they just hook the server into Puppet (for example), and Puppet will do everything required to mould the server into its correct state: Create users, install public keys, install packages, write config files etc.

Puppet in particular is so simple that you would want it even if you were managing a single box. Why? Because if that single box dies/is wiped/whatever, you just get a new box and point it at Puppet, and it will become identical (minus whatever data you lost) to the old one. Or need to buy more hardware? Just point the new box at the Puppet server, and you have two, or three, or ten identically configured boxes.

So yes, in a sense you're right; sysadmins won't waste their time creating a bunch of users, because they will let the configuration management system do it. :-)
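As a flavor of what that looks like, a minimal manifest can be applied locally without any Puppet master; the resource names below are illustrative, not taken from the thread:

```shell
# Declare desired state in a manifest, then let Puppet converge to it.
cat > site.pp <<'EOF'
user { 'deploy':
  ensure     => present,
  managehome => true,
}
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
EOF

sudo puppet apply site.pp   # idempotent: re-running changes nothing
```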

> Puppet in particular is so simple that you would want it even if you were managing a single box.

You know... that's so blindingly obvious, that it had never even occurred to me. I'm in the middle of a home "IT refresh" right now and I'm trying to update the obscene amounts of documentation needed on how to configure every little thing.

Your comment just gave me a "the code is the documentation" kind of moment; realized I'd much rather have all that documentation checked into a Git repo somewhere as automateable configs. Thanks!

That's exactly the point. Glad to help. :-)

It's really nice having separate users in production for a server. That way, you log sudo and know who issued an admin command. But in a larger system, you don't worry about setting them up on each server; instead, you rely on LDAP.

Unix is the LISP of server operating systems. It's a multiplier. In return, it demands much more from the operator. This is not ideal for a desktop system. It's amazing when you have an admin who knows his shit.

LDAP is a sensible suggestion.

Why can't we still kill the local login, i.e. directly map LDAP user -> permissions instead of LDAP user -> local 'user' -> permissions?

> Why can't we still kill the local login

If LDAP ever goes down you might want to retain the ability to login to your box.

I ran into a problem similar to this on a recent DR exercise.

Active Directory (an LDAPish service) was down. Was going to be down for a while. If I could get into my three Windows hosts I could re-jigger the service account for $APP from the AD user to a local account, start things up.

I couldn't login to the servers: my .admin account was in AD. No one had any idea what the local administrator account could be. We were just .. stuck .. until AD came up.

I could have booted the system with a rescue disk (linux) and edited the SAM to change the password. Didn't happen then for complicated reasons. And one shouldn't have to resort to heroic methods to get local access back.

And can you imagine doing that for hundreds of hosts?

> And can you imagine doing that for hundreds of hosts?

This is why you:

- Always provide redundant, network-local LDAP servers so that LDAP doesn't go down.

- Wire up remotely accessible serial consoles that provide emergency-level local root access.

You can attach a modem to the serial console systems, or a hardline (which is what I did at a previous job) between your data center and offices.

We had a fixed 'role' account for the serial console systems, but it existed only for the purpose of emergency access, could only be accessed from specific local networks (we divided different classes of employees into different VLANs), and the knowledge of the password could be constrained to those that needed rare "server fell over" access.

The serial consoles won't work if the parent poster removes all local accounts and goes 'LDAP only'.

Unless I've misunderstood something about that. It happens.

We do the redundant Active Directory thing. It didn't help during the DR exercise when the AD guy did something foolish (don't remember what) and the AD / DNS host went down for a few hours.

Single host because the DR was limited in scope.

I was fine with my Solaris hosts - had local root access via serial and SSH. I was simply locked out of my Windows hosts, and could not reconfigure those services to work without AD.

> Unless I've misunderstood something about that. It happens.

You just maintain a local/serial-only root account for that eventuality.

[Edit] And make sure internet-facing production services don't rely on administrative LDAP.

divtxt is proposing that exactly those things be removed.

Yes, and that's stupid, and I'm explaining how we made accounts work fine for multiple users (in production, across 100+ servers).

Today, it could probably be done. I'll have to think about the implications.

This is why we have cfengine (or similar). Because yes, we do have NxM accounts (double and triple figures respectively), and we can all passwordless-ssh to any box we have to and have our dotfiles set up just the way we like them.

But no, we don't "waste our time" creating these accounts. We have tools to do this for us. Revolutionary, I know.

> I guarantee no one will waste their time creating 3 'users' on each server i.e. 3x4 = 12 'users' on that cluster.

You're just wrong. Plenty of people will do this. If one of those three people leave the company, you can disable that account without having to change the password for the other two. If you insist everyone use sudo, you get logs of all the commands run via sudo, and that includes who ran it.

This is all really useful. You don't understand why it's useful, but lots of people do understand it.

To join the rest of the people: We create separate accounts on all of our production and staging servers for every developer.

We switched to this after I had to stay about 5 hours late one night to switch all of our passwords on all our servers because someone quit.

We don't, however, do it manually. We have tools setup to do it for us (chef, in our case)

Wasting their time creating 3 users? Just sync /etc/passwd and authorized_keys between them. Zero time wasted.
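A crude sketch of that sync, assuming root SSH access and illustrative hostnames (in practice you'd reach for LDAP or a config-management tool instead of shipping whole files around):

```shell
# Push account files and keys to every box in a small cluster.
# Hostnames are illustrative; this clobbers machine-local accounts,
# so it only works if the machines are otherwise identical.
for host in web1 web2 web3 db1; do
  rsync -a /etc/passwd /etc/shadow /etc/group "root@$host:/etc/"
  rsync -a ~/.ssh/authorized_keys "root@$host:/root/.ssh/"
done
```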

Replying to multiple comments:

1) Automation: Yes to automation, but if I'm arguing that a task is needless, automating it doesn't change that.

2) Authentication: Yes to authorized_keys, auditing, LDAP, etc. I'm killing the local 'login' - not trying to kill security.

You've already received a few answers here, but one that seems to have been missed:

Use something other than ubuntu. Although there may be others out there, I'm unaware of any other distro that disables root. Complaining about disabling root is an ubuntu-specific complaint - it doesn't apply to linux in general, let alone unix.

Also, if you don't like using passwords, ssh-copy-id is your friend.

As for apache, I don't play with it much so I can't comment there. It certainly scares me :)

Nobody forces you to use a login; you can use the root account with a blank password on your server if you like. It will do exactly what you want...

You don't even have to do that. You can replace init with whatever you want and you'll never see a login prompt.

Aside from security, I think what this indicates (and maybe this is fundamental to computers?) is that Unix doesn't understand that sometimes users will lie. Some of the time when I say `rm -rf` (or run a process which contains that command somewhere) I want to delete the indicated directory, but some of the time I actually don't. I'm lying.

Unix knows who I am, and it knows what I want to do, but it has no way of knowing how much.

The way we get around this is by inventing an imaginary person called "root" who always actually wants what they say they want. On the other end, the imaginary person "nobody" almost never actually wants to do anything. This is obviously a half-solution, and it shouldn't be surprising that it causes weird workflow problems.

Here's an example showing how a human user does not require a local unix user on each server:

Jack starts Apache on one of the web servers:

    $ ssh jack@web4.example.com
    Password: secret123
    [_x_@web4] $ sudo /usr/bin/apachectl start # or similar
    [_x_@web4] $ logout
Now, there are 2 possible values for '_x_':

A) 'jack' - because there's a unix user 'jack' (what we have today)

B) 'sysadmin' - because there's no unix user 'jack' - only an entry in /etc/sshpasswd

B is the same as A as long as you update auditing to trace the Apache start to the jack/secret123 combo.

Sidenote: wow this thread blew up!

Actually, in one of Neal Stephenson's articles/booklets, he actually compares Unix to a very large drill (working the metaphor in some comic depth). When said drill is told to turn, it turns, consequences be damned. ("In The Beginning Was The Command Line")

You don't know the tools, and don't want to take the time to learn them, and yet you have a difficult time using them.


It's not really because you don't care, etc, it's just because you're not good enough. Some people are knowledgeable, others are doomed to be Ruby programmers forever, but that's life.


This feeling that an appropriate amount of punishment must be exchanged for everything of value that you get back from your computer is the attitude that leads to unusable software that is configured to be insecure by default.
