All of it. Down the fucking toilet, and for stuff I don't give a shit about. Every single goddamn thing I try to accomplish while setting up a server involves a minimum of an hour of googling and tinkering. Installing PHP? Did you install it the right way? Did you install the special packages that make it secure and not mastodon slow? You want to create a daemon process? Hope you found the right guide! Setting up a mail server? Kill yourself.
For some people, this is not the case. They have spent multiple decades breathing in the Unix environment, and are quite good at guessing how the other guy probably designed his system. And they don't mind spending the majority of their productive hours tinkering with this stuff. But I don't have time. I don't care. I don't have time to read your 20-page manual/treatise on a utility that doesn't explain how to actually use the thing until page 17. I don't want to figure out why your project doesn't build on my machine because I'm missing some library that you need even though I have it installed but some bash variable isn't set and blah blah blah blah.
The problem with Unix is that it doesn't have a concept of a user. It was not designed that way. It was designed for the people who programmed it. Other pieces were designed for the people who programmed them. If you are using a piece that you built, then you are a user. Otherwise you are a troublesome interloper, and the system is simply waiting in a corner, wishing you would go away.
And yet...we put up with it. Because there isn't a better option. Because it's our job. Because we'd rather just bull through and get things done than spend an infinite amount of time fixing something that isn't fixable. Life sucks, but NodeJS is pretty cool.
That's the funny thing about the internet. Everybody can get the knowledge they need, but that doesn't mean they can apply it and foresee all the consequences.
Have your car fixed, pay by the hour, brilliant idea!
The problem with Unix is that it doesn't have a concept of a user.
Nope. It doesn't have a concept of a naive user. Table saws also don't have a concept of a naive user, but people don't bitch about folks trying to use a table saw without having to learn how first.
I forgot who originally said it, but your comment reminded me of this quote: UNIX is user friendly, it's just choosy about who its friends are.
And occasionally even your best friends get on your nerves, I find.
When someone inexperienced tries to use the chainsaw, he may cut his hand off. Nobody blames the tools - it's obvious that if a green guy is hurt, it's because of his own foolishness.
Sadly, in IT it's the opposite. People bitch about the tools, paradigms, and philosophies without really doing their homework. Hey. Once upon a time it took a lifetime to master specific crafts. Let's be decent and maybe a bit humbler.
I have a theory about why that is, by the way. In the conventional crafts everything is physical, touchable, solid. In IT everything is abstract and prone to easy judgement and mindless relativism. Please, let's bring craft back to hacking!
- Here's how needless Unix 'users' are:
Every fresh server install, I have to make up a meaningless string called the 'login' of an 'admin user' who belongs to a 'group' called 'admin'. Once upon a time, I could use the well-known admin login 'root'. Now that's not allowed. I have to make up a name, remember it when I connect, and then remember to prefix every command with sudo.
- Here's a better way of doing it:
Give me a server distro where I don't need a 'login'.
Meanwhile, why is apache pretending to be a 'user' called 'nobody'/'http' and not using some 'capabilities' or some shit like that?!!!!
If you are only doing the occasional install then this really shouldn't be a great hardship. If you are installing many servers you should have this part automated. And it shouldn't be meaningless either. I think you are doing it wrong.
> Once upon a time, I could use the well known admin login 'root'.
You still can. root login can always be reenabled if you want it that badly. There is also "sudo su" (unless explicitly disabled by your admins) to avoid repeated invocations of sudo while you are performing a long admin task.
> - Here's a better way of doing it:
> Give me a server distro where I don't need a 'login'.
No, no, and thrice no. Far too many newbies will leave it in that state and get hacked to buggery in short order. Even if it is only for local console logins, I'd consider it a bad idea.
No matter how inconvenient it is, a server install should never default to an insecure state, and allowing access without authentication is such a state.
Live CDs often do this, but they are not production systems.
> Meanwhile, why is apache pretending to be a 'user'
Well that much is a valid point. There has been work in this area but none of it has made its way into the default setups of most Unix-alike systems.
You can also do 'sudo -s', which keeps you in your normal user's shell. It's pretty slick.
Imagine a distro that changed the terms 'login/password' to 'password1/password2'
$ chpasswd maybeUniqueString
Error: maybeUniqueString already in use.
$ su maybeUniqueString
LOGIN OK, PROCEED TO EXPLOIT ME
I didn't change any behavior - just the UI strings. So you need both!
$ su maybeUniqueString1
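The flaw in the single-string scheme can be sketched in a few lines of JavaScript (a toy model of the thought experiment, not any real login system):

```javascript
// Toy model: if the login string is itself the only secret, then
// duplicate detection at signup leaks valid credentials to anyone
// who tries to register. This is a sketch, not a real auth scheme.
const secrets = new Set();

function createAccount(secret) {
  if (secrets.has(secret)) {
    // The attacker now knows a credential some other user chose.
    return 'Error: already in use';
  }
  secrets.add(secret);
  return 'OK';
}

createAccount('hunter2'); // a legitimate user signs up
console.log(createAccount('hunter2')); // Error: already in use
```

That collision error is exactly the "already in use" message in the transcript above, which is why a second, public string (the login) is needed to namespace the secret one.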
You'd be better off getting rid of the login altogether and using a GUID. People still share computers, you know.
Imagine a team of 3 people running a SaaS webapp on 3 web servers & 1 db server. I guarantee no one will waste their time creating 3 'users' on each server i.e. 3x4 = 12 'users' on that cluster.
It sounds — and I don't mean to be rude — like you have not been involved in a "real" production environment.
Modern Unix environments are automatically managed with modular configuration systems such as Puppet or Chef. Sysadmins have little or no need to log into servers to configure them; they just hook the server into Puppet (for example), and Puppet will do everything required to mould the server into its correct state: Create users, install public keys, install packages, write config files etc.
Puppet in particular is so simple that you would want it even if you were managing a single box. Why? Because if that single box dies/is wiped/whatever, you just get a new box and point it at Puppet, and it will become identical (minus whatever data you lost) to the old one. Or need to buy more hardware? Just point the new box at the Puppet server, and you have two, or three, or ten identically configured boxes.
So yes, in a sense you're right; sysadmins won't waste their time creating a bunch of users, because they will let the configuration management system do it. :-)
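For the curious, the kind of manifest Puppet uses for this is short and declarative. A minimal sketch might look something like the following (the resource names here are illustrative, not from any real setup):

```puppet
# Hypothetical node definition: declare the desired state,
# and Puppet converges the machine to match it.
user { 'deploy':
  ensure     => present,
  managehome => true,
  groups     => ['admin'],
}

package { 'ntp':
  ensure => installed,
}

file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```

Point a fresh box at a manifest like this and it converges to the declared state; point ten boxes at it and you get ten identical servers.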
You know... that's so blindingly obvious, that it had never even occurred to me. I'm in the middle of a home "IT refresh" right now and I'm trying to update the obscene amounts of documentation needed on how to configure every little thing.
Your comment just gave me a "the code is the documentation" kind of moment; realized I'd much rather have all that documentation checked into a Git repo somewhere as automateable configs. Thanks!
Unix is the LISP of server operating systems. It's a multiplier. In return, it demands much more from the operator. This is not ideal for a desktop system. It's amazing when you have an admin who knows his shit.
Why can't we just kill the local login, i.e., directly map LDAP user -> permissions instead of LDAP user -> local 'user' -> permissions?
If LDAP ever goes down you might want to retain the ability to login to your box.
I ran into a problem similar to this on a recent DR exercise.
Active Directory (an LDAPish service) was down. Was going to be down for a while. If I could get into my three Windows hosts I could re-jigger the service account for $APP from the AD user to a local account, start things up.
I couldn't login to the servers: my .admin account was in AD. No one had any idea what the local administrator account could be. We were just .. stuck .. until AD came up.
I could have booted the system with a rescue disk (Linux) and edited the SAM to change the password. That didn't happen, for complicated reasons. And one shouldn't have to resort to heroic methods to get local access back.
And can you imagine doing that for hundreds of hosts?
This is why you:
- Always provide redundant, network-local LDAP servers so that LDAP doesn't go down.
- Wire up remotely accessible serial consoles that provide emergency-level local root access.
You can attach a modem to the serial console systems, or a hardline (which is what I did at a previous job) between your data center and offices.
We had a fixed 'role' account for the serial console systems, but it existed only for the purpose of emergency access, could only be accessed from specific local networks (we divided different classes of employees into different VLANs), and the knowledge of the password could be constrained to those that needed rare "server fell over" access.
Unless I've misunderstood something about that. It happens.
We do the redundant Active Directory thing. It didn't help during the DR exercise when the AD guy did something foolish (don't remember what) and the AD / DNS host went down for a few hours.
Single host because the DR was limited in scope.
I was fine with my Solaris hosts - had local root access via serial and SSH. I was simply locked out of my Windows hosts, and could not reconfigure those services to work without AD.
You just maintain a local/serial-only root account for that eventuality.
[Edit] And make sure internet-facing production services don't rely on administrative LDAP.
But no, we don't "waste our time" creating these accounts. We have tools to do this for us. Revolutionary, I know.
You're just wrong. Plenty of people will do this. If one of those three people leave the company, you can disable that account without having to change the password for the other two. If you insist everyone use sudo, you get logs of all the commands run via sudo, and that includes who ran it.
This is all really useful. You don't understand why it's useful, but lots of people do understand it.
We switched to this after I had to stay about 5 hours late one night to switch all of our passwords on all our servers because someone quit.
We don't, however, do it manually. We have tools setup to do it for us (chef, in our case)
1) Automation: Yes to automation, but if I'm arguing that a task is needless, automating it doesn't change that.
2) Authentication: Yes to authorized_keys, auditing, LDAP, etc. I'm killing the local 'login' - not trying to kill security.
Use something other than ubuntu. Although there may be others out there, I'm unaware of any other distro that disables root. Complaining about disabling root is an ubuntu-specific complaint - it doesn't apply to linux in general, let alone unix.
Also, if you don't like using passwords, ssh-copy-id is your friend.
As for apache, I don't play with it much so I can't comment there. It certainly scares me :)
Unix knows who I am, and it knows what I want to do, but it has no way of knowing how much.
The way we get around this is by inventing an imaginary person called "root" who always actually wants what they say they want. On the other end, the imaginary person "nobody" almost never actually wants to do anything. This is obviously a half-solution, and it shouldn't be surprising that it causes weird workflow problems.
Jack starts Apache on one of the web servers:
$ ssh firstname.lastname@example.org
[_x_@web4] $ sudo /usr/bin/apachectl start # or similar
[_x_@web4] $ logout
A) 'jack' - because there's a unix user 'jack' (what we have today)
B) 'sysadmin' - because there's no unix user 'jack' - only an entry in /etc/sshpasswd
B is the same as A as long as you update auditing to trace the Apache start to the jack/secret123 combo.
Sidenote: wow this thread blew up!
I mean, it sounds like he's doing some pretty fiddly stuff - really getting in there and hacking. I don't see how that could ever be simple, and it certainly isn't anything 'end user' facing.
End users check their email online and work in spreadsheets occasionally. They don't develop server-side js frameworks. I guess the argument is that if things were simpler, then maybe they could do those things? But I don't buy it.
(I get the frustration, when you're held up for 45 minutes googling because some library is missing or a string isn't formatted just so, but that sort of thing only happens when you're literally hacking things up. Which isn't end-user behavior, and I can't see how it could ever be a whole lot simpler. Maybe I'm just short-sighted.)
He's saying that the development side could be simplified as long as the end result (what the user sees) stays the same, since the user doesn't care how the product was developed.
The dev side is horrendously complicated, which is why he says he hates almost all software.
Nope. I gave up trying to install it months ago; it required many external programs at versions too recent to be included in distro repositories, and which as far as I could tell were mutually incompatible. Obviously people have gotten it to work, because it's a pretty popular front-end, but I never did.
Sure, nodejs would be easy to pick up if it had no dependencies and its binaries were contained in a folder - i.e. portable - but assuming nodejs could be of any interest to the end user is a bit of an exaggeration IMHO. Therefore I don't buy it either.
- the ease of apt-get install (inherited from debian)
- ubuntu LTS releases
I now use availability of apt-get instructions for an LTS as one of the measures of the maturity of a software package.
10 years ago, Debian may have had apt-get and stable but a lot of software setup docs were for building from source leading to possible compile issues and version conflicts for libraries.
Now, many setup docs are written as apt-get of binary packages for the last Ubuntu LTS.
> Because we'd rather just bull through and get things done than spend an infinite amount of time fixing something that isn't fixable.
It's just simple for you, because it places you in an actual end user role. (Which you pay for).
Heroku is no doubt complicated as hell for the people who developed it. So if the argument is that developing node.js should have been like deploying to a paid managed hosting environment with fairly tight requirements, then ok. But that's a weird thing to assert.
And I'd say that if all you want to do is deploy a rack app, you can do that relatively simply in Debian too, for free. (I'd do it with rvm and perhaps build nginx from source and such, but you could do 90% of it straight from apt-get, rubygems and editing ~three config files).
The sorts of things that are really tricky, and have you sweating over the sorts of things the original post is complaining about, are just hard original work (and free, and you don't have to wait years for them to support your pet language or framework or whatever).
It's an acceptable trade-off, an issue internal to the concerns of developers - not end users, and not a reason to hate on unix, is all I'm saying. Not at all hating on the service Heroku provides.
Most things behave the same way.
bbot@bbot:~/foo$ ls bar
bbot@bbot:~/foo$ mv bar baz
bbot@bbot:~/foo$ cp baz bar
cp: omitting directory `baz'
Unix is there, if I want it and thank the gods there are package managers.
It depends what kind of user you are. Few users require fussing around with D-Bus etc.
I hate all cars, especially my own. I hate that heavy, dangerous, gas-guzzling honda civic with an over-sensitive brake pedal and enormous, completely pointless blind spots over both shoulders. I hate filling it up with gas, which is expensive, smelly, and bad for the environment. I hate the dishes that I have to wash every day after I use them. I hate my Aeron chair that I sit in all day long. I hate peeling grapefruit. I hate the sound of my central air conditioning fan powering up. I hate how I'm either sore from working out or depressed from not working out.
There's nothing wrong with a rant now and again but let's recognize it for what it is.
Life is pain, Highness. Anyone who says differently is selling something.
Blaming "human ineptitude" is pessimistic. Sure, the fact that humans can't all manipulate computational machines directly and require layers of abstraction to effectively model problems can, technically, be called ineptitude, but really-- why be so down about it? That's the way things are and there's a lot of good that comes from software if you think about it for more than 30 seconds.
Human ineptitude is a part of our physical environment. We're just animals. Clever ones, but not perfect.
Furthermore, there's a physical limit on how much software you can write (and have it work). If you can get something that "mostly works" by building on top of yesterday's cruft, then you do it, since the alternative is starting over from scratch and not being able to finish.
One of the more intriguing comments I've seen on HN. Care to elucidate?
For example my brand new Hyundai Sonata has pretty shitty rear visibility due to its 'sleek' styling and therefore smallish rear window. I could cite many more.
It's more than the mirrors, and unless you have transparent pillars on top of the car (giving up the structural integrity of the cabin) it's going to have blind spots.
You should actually have them a lot farther out such that visibility in your side mirror coincides with losing rear-view visibility. That position is a lot farther out than most people think and is tricky to do the first few times.
Never fails to amaze me what users will do with a software tool.
I've seen experienced devs and support staff run a C program written to parse some weird data against another data set in the vain hope that it would parse the new data set into something usable.
I've seen MBAs who could barely tell you what a variable is write visual basic macros in Excel to do hardcore data management.
Game devs who almost seemed to frickin' think in OpenGL.
It is a big ball of mud (turtles all the way down, eh?), but on a good day, I listen to a hacker talk about finally getting that little piece of code beat into submission and it's very satisfying just to see that gleam in their eye.
Just how much reliance we put on autoconf really makes me shudder.
Programming languages, their frameworks, their libraries, their petty concerns are a mere vanity folly, riddled with re-invention, abstraction arcana, and deus-ex-machina hoopla. We have lost our way, straying so far from the path of the UNIX philosophy such that I must now 'whole-stack' an application instead of using the pipe character. A pox on the whole damned lot of it!
Some days I just despair of all the time I've wasted bustling and jostling, crushed by the sweaty masses in the ghetto. But if I'm honest with myself, I must confess I love it too. I love my programming languages, my libraries, the eight different ways I know to full-text search, to regex, to parse, to lock, to async. I love the smell and heat of the coal-face, the futility of it all. Stockholm Syndrome indeed!
There is so much more that could be done with operating systems in any direction you want to go. I'm thankful that doing things on my iPad doesn't involve messing with command lines. But for when I want hackability I'd rather have what a Lisp machine could have become than a silly way to do functional programming in shell.
This sharing creates new abstraction boundaries, increases the number of concepts and moving parts, and there are lots of compromises involved in reusing a common part compared with crafting something small and simple specific to the task at hand. But if you didn't do this, you'd have lots of duplication of similar, but not quite identical work, like a pre-industrial society; a massively inefficient use of human labour.
You can't pause the world while you rebuild everything; it would take far too long to get to something better than what you're trying to replace. You can only repair one or two things at a time, and hopefully leave the world better for it; but the mindset espoused in the rant is more likely to result in a half-baked start on something new, but abandoned when the scope of the whole problem is fully perceived.
Just about everybody knows that all our software is imperfect crap on top of imperfect crap, from top to bottom. Everybody, when met with a new codebase above a certain size, thinks they could do better if they started over and did it "properly this time". Everybody can look at a simple thing like a submit form in a web browser, and sigh at the inefficiencies in the whole stack of getting what they type at the keyboard onto the wire in TCP frames, the massive amount of work and edifices of enormous complexity putting together the tooling and build systems and source control and global coordination of teams and the whole lot of it, soup to nuts, into a working system to do the most trivial of work.
But this is not a new or interesting realization by any means. It's not hard to point almost anywhere in the system and think up better ways of doing it. Pointing it out without some prescription for fixing it is idle; and suggesting that it will be fixed by wholesale replacement by another complex system is, IMO, fantasy.
Unless the new thing isn't Turing-complete and can't be implemented with a Turing-complete system, it will be abstracted away at first just so we have an environment to start building with, and can start using it without reinventing every single wheel we have.
Without radically changing the paradigms on such a fundamental level a start-over just wouldn't happen.
I don't think Ryan Dahl is at all naive for wanting something like this. I also think something like this is totally possible.
"For example, essentially all of the standard personal computing graphics can be created from scratch in the Nile language in a little more than 300 lines of code. Nile itself can be made in little over 100 lines of code in the OMeta metalanguage, and optimized to run acceptably in real-time (also in OMeta) in another 700 lines. OMeta can be made in itself and optimized in about 100 lines of code."
and, btw: https://github.com/tristanls/ometa-js-node
Most people won't change their mind about anything, unless everyone else already did. Therefore, (Kay concludes at 24:00), truly new ideas take at least 30 years to become popular.
STEPS is too young. At this pace, wait for at least 20 years.
Is there some similar proof for OMeta?
- TCP-IP in 200 lines. Current C implementation use 10Kloc (50 times more).
- Most of Cairo's functionality in 500 lines. And it's fast enough. Cairo on the other hand weighs about 40Kloc. (Again, about 50 times more code.)
And that's for functionality they couldn't scrap altogether, or merge those with similar capabilities. For instance, you don't want to send emails, or publish a web page, or print a PDF, or, goodness forbid, a Word document. You just want to handle a fucking document. Send it, publish it, whatever, this is all glorified text (you do need the glorification, though).
The bottom line is, Alan Kay and his team rule.
He's right about the fight against unnecessary complexity. He's right that ultimately the end user's experience is king. But he's objecting to a lot of the complexity that lies behind that UX facade. Because that's exactly what that UX is: a facade. It's an abstraction. And one that sometimes leaks. The iPhone is loved because of its UX. But inside, behind the screen, it's not a box of mostly empty air and perhaps a little magical fairy who blows kisses and VOILA! the UX is delivered. It doesn't work like that. There are moving parts, both physical and virtual, a lot of them, that must be complex because they have real-world constraints they MUST satisfy which your own mental model or messy subconscious human desires don't have to satisfy.

The little girl wants a pony and SHE WANTS IT RIGHT NOW, DADDY! But her father lives closer to reality. He can't just wave a magic wand and give her a pony. It takes time. It takes money. You have to find a pony. Get it. Where do you keep it? Who feeds it? Shelters it? Can we afford it? Or are we just going to let it starve after the little girl gets bored playing with it? These are all the niggling little details that lie around the edges and behind the scenes when trying to satisfy this little girl's desire for a pony immediately.

It is good to satisfy and deliver a desired experience. It is dumb and naive to think it only takes the wave of a magic wand or the press of a button. Yes, we can provide a button you can press to make that pony appear. We can. That's just straightforward engineering and entrepreneurship. But there's going to be a lot of complexity and ugly moving parts, some with sharp edges, or unpleasant chemical properties, or esoteric technical jargon, under the hood, to make that button press deliver.
(I honestly am not trying to imply that that is the case; I'm just musing.)
Designing and programming a tool that abstracts the details away from users is not more difficult, but it is very tedious. Just giving out meaningful and accurate error messages has a huge effect.
Systems programming has always been the code that most people won't tackle because the problems are ugly (thus the label systems programming). I really dislike autotools but I am not really up to resolving that problem, so I'll leave it to those that do. Pretty simple conclusion. When people with the guts to go in and replace these tools come around, I try to support them, but bashing others doesn't magically make that happen.
The claim is that people who build on top of these systems are making problems worse. You could say the same thing about the users of that software, then. There should be no hate for the act of construction. Destructive negativity is just a waste of time unless you want to lead people somewhere to construct again, and this post doesn't do much but hate. I'd favor suggestion over damnation. Don't hate people for building, encourage them to build something better!
This isn't really rage and hate. You're taking the words too literally. It's the frustration of being able to feel clearly that there ought to be a simpler way, that there is a simpler way, while at the same time being caught in a sticky spider web and unable to do much about it.
You know what I bet is driving this? The realization that Node.js itself has turned out way too complicated. It ought to be a nice library to provide non-blocking I/O and networking APIs to V8 apps. Now it's becoming Rails at one end and an operating system at the other.
(I don't mean to pick on Node. It's valuable and I use it. My point is that we are all the sorcerer's apprentice, and runaway complexity will always be the default unless ruthlessly counteracted. It wasn't counteracted in Node's case, and since Ryan is a true hacker I imagine that he has the taste to know it. Indeed he says as much in the OP.)
Which means rewriting crufty pieces of your stack when certain thresholds occur. 'There will come a point where the accumulated complexity of our existing systems is greater than the complexity of creating a new one' -- is something that happens in motion, iteratively, and which you do when you have time at all levels of the evolution that we call development.
Anyway, that's my two cents. Nice others are on the same wavelength, I think.
To be regarded the same way as a long relationship with a cranky friend, maintaining an aging specimen tree, or a historic house.
Do you cut the branch off, or just prune it back a little?
Recognizing and curtailing this impulse leads you toward enlightenment.
If every reason for every fix based on an unanticipated logic path was commented, there would be 10x more comments than code.
7. Codethulu looks on, and says: "Now you have become one of us."
In this rant he didn't say Node.js was the solution, or better than any of these crappy abstractions.
Don't assume that. He might not say he doesn't like node.js, but it doesn't mean he is happy with it.
But I do feel that node.js is Ryan's attempt to enlighten me and hopefully others. He doesn't try to hide everything like ports and the underlying C code. It's all there, and best of all in a language that is familiar (at least for web developers).
If you want people to get the performance benefits from using non-blocking libraries, you care.
But there's no jQuery, etc. It's a somewhat nicer way to work with Java since you aren't forced into I-don't-care-about-it-exception-catching hell and Map<Map<Map<...>>> madness, but compared to Jython or Clojure it doesn't match up. You can get a headless jQuery working with Rhino, though it's not as simple as it should be.
It's a decent language, though. Here's a comparison of JS to Ruby for some of the stuff people generally love Ruby for. JS comes out looking pretty decent.
None of that is true for JS. I don't think Crockford thought his remark through.
1. Modern JS runtimes have JITs that can generate code almost as fast as C++.
2. You can write purely functional code in JS.
3. JS has arguably fewer features than R5RS Scheme, which is itself hard to fit on half a page.
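As a quick illustration of the purely-functional point, here's a small sketch in plain JS (the helper names are made up):

```javascript
// Purely functional style in plain JS: first-class functions,
// closures, and no mutation anywhere.
const compose = (f, g) => x => f(g(x));

const double = x => x * 2;
const inc = x => x + 1;
const doubleThenInc = compose(inc, double);

// map/reduce without mutating the input array.
const sum = xs => xs.reduce((acc, x) => acc + x, 0);
const squares = xs => xs.map(x => x * x);

console.log(doubleThenInc(5));        // 11
console.log(sum(squares([1, 2, 3]))); // 14
```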
I don't like magic in programming, yet nowadays there seems to be a move (especially in Ruby, with the [over]use of method_missing) that encourages it.
Every level of abstraction above binary code, from assembly, to C, to Ruby, to Rails DSL's--each works by creating magic incantations that let you run larger functionality with a new shorter series of magic words.
Are you really against magic, or is it that you are against black magic (which I would classify as leaky abstractions)?
var app = new Mars.UIApplication().init(1280, 720);
var scene = new Mars.UIScene().init(app);
var surface = new Mars.UISurface().init(app);
var texture = surface.getTexture();
texture.setPixel(150, 100, 0x000000FF);
Yesterday I was looking at a Chartbeat gem that accesses the Chartbeat REST API. The entire class is 40 lines of code; however, it's coded so weirdly that you have to read the source in order to use it. Every API call goes through method_missing, so you can't discover the interface by doing (in irb)
puts Chartbeat.new(:apikey => 'a', :host => 'b').public_methods
The code does look magical, and kudos to the developer who wrote it for the ingenious use of method_missing, but IMHO it's a bit too magical for my tastes. I like to look at a library's documentation and instantly know what methods I'm allowed to call and what exceptions/results I'm going to get back.
 Edit: Added explanation since, on a second read, it seems like I'm digressing from the topic.
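Ruby's method_missing trick has a rough analogue in JavaScript's ES6 Proxy, which makes both the appeal and the discoverability problem easy to see (the client and endpoint names below are made up for illustration):

```javascript
// A rough JS analogue of Ruby's method_missing, using an ES6 Proxy:
// any method name you invent is intercepted by the `get` trap and
// turned into a request path, so no method is ever really "defined".
function makeClient(base) {
  return new Proxy({}, {
    get(_target, name) {
      // Every property access becomes a callable "API method".
      return (...args) => `${base}/${String(name)}?args=${args.join(',')}`;
    },
  });
}

const client = makeClient('https://api.example.com');
console.log(client.dashapi(1, 2)); // https://api.example.com/dashapi?args=1,2
```

The flip side is exactly the complaint above: because no method actually exists until you call it, introspection tools can't enumerate the API for you.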
I might be naive, but Go is the only thing that has given me some hope for the future of software development in recent times. It means there is a chance that maybe some day I will be able to write software in an environment, and with tools, that are not byzantine, hideously insane piles of layers of gratuitously complex crud.
Hell, with Go you completely bypass even libc (but unlike Plan 9, you still can take advantage of the hardware and software support of existing operating systems/kernels, that sadly are an unavoidable mess, which is one of the things that made it impossible to adopt in practice).
And that is precisely what gives me hope, Go is going against the trend of almost every other language.
The OS is not wrong; what is wrong is you imagining that every system should be as simple as "right click / start". If you want that, take the Heroku/<your PaaS here> route and you'll be happy. But the day you have 5000 customers connecting in the same second and your environment collapses because you don't have the flexibility to tune it, don't come crying.
On the other hand, it's arrogant for one to think that he or she could do it that much better than the next guy. Writing efficient, maintainable, and "simple" software requires adding layers of indirection and complexity. You have to use your best judgment to ask whether the new layer you're adding will make things ultimately cleaner and simpler for future generations of programmers, or will hang like a millstone around their necks for years to come.
Let's try a little thought experiment: go back a few decades to the early 80s. Propose to build node.js as a tool to make it much easier for developers to write real-time network applications. You'll need to design a prototype-based dynamic language, itself an extremely difficult (and dare I say complicated) task. The implementation will need a garbage-collector, a delicate, complicated, and cumbersome piece of code to write. To make it acceptably fast, you'll need to write a JIT, which traces and profiles running code, then hot-swaps out JITted routines without missing a beat. You'll need to write a library which abstracts away event-based IO, like the "libev" node.js uses. That will require kernel support.
Frankly, even forgetting about relative CPU power at the time, I think you'd be laughed out of the room for proposing this. All of these things, for production systems, were extremely speculative, "complicated" things at the time they were introduced. People can't predict the future, and they obviously have difficulty predicting what tools will become useful and simple, and which will become crufty tarpits of painful dependencies and incidental complexity. No one in 1988 could say "a dynamic, prototype-based, garbage-collected language paired with a simple event model will allow developers to create simple real-time network applications easily in 2011". Many of them probably had high hopes that C++ templates would deliver on the same vision by then. But, instead, we have Boost.
Further, it's extremely arrogant of Dahl to create a dichotomy between those who "just don't yet understand how utterly fucked the whole thing is" and those, like him, with the self-proclaimed power of clear vision to see what will help us tame and conquer this complexity. Who knows, maybe in 15 years we'll be saddled with tons of crufty, callback-soup, unreliable node.js applications we'll all have to maintain. I don't think James Gosling envisioned the mess that "enterprise Java" became when he designed the initial simple language. Most developers do many of the things he cites, like adding "unnecessary" hierarchies to their project, because they believe it will help them in conquering complexity, and leave something simple and powerful for others to use down the line.
Ryan is right. Most of the software we use is crap. That's because Worse is Better.
"Node is fucked too. I am also one of these people adding needless complexity. ... The entire system is broken - all the languages and all the operating systems. It will all need to be replaced."
I am not a node.js user.
Oh, hell, just watch the Mother of all demos (1968): http://www.youtube.com/watch?v=JfIgzSoTMOs
It's not arrogant of him to think like this. It's more like Steve Jobs, circa 2006 thinking phones sucked and deciding to do something about it. Or maybe it's like Steve Ballmer thinking he could take over the phone market. I think it's too early to say for sure, but the early signs are promising.
Other than that, I agree with pretty much everything you said. It's easy to forget how reliant we are on things like industrial-strength GC, multithreading, and JIT compilation, and how young those things really are.
tsort. There we go. I don't hate tsort. pbcopy and pbpaste.
Edited to add: Git. Git is also a thing of beauty. Who knew revision control could be made to not suck? Sure, SVN was a welcome relief from the unrelenting stone faced hell that was CVS, but that's damning by faint praise.
I love git, and use it for all of my projects. But my love for git might also be because I have spent so much time on learning it.
ed went on to influence ex, which in turn spawned vi. The non-interactive Unix command grep was inspired by a common special use of qed and later ed, where the command g/re/p means: globally search for the regular expression re and print the lines containing it. The Unix stream editor sed implemented many of the scripting features of qed that were not supported by ed on Unix; sed, in turn, influenced the design of the programming language AWK, which in turn inspired aspects of Perl.
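The g/re/p idiom is small enough to demonstrate directly. Here is the same "globally search and print" operation sketched in Ruby rather than ed (the grep_lines helper name is mine, not a standard method):

```ruby
# g/re/p: globally search for a regular expression, return the matching lines.
def grep_lines(text, re)
  text.each_line.select { |line| line =~ re }
end

doc = "foo\nbar\nfoobar\n"
grep_lines(doc, /foo/)   # => ["foo\n", "foobar\n"]
```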
And I love AWK too.
I like and use grep. But, as a programmer, I like ack much better than grep.
Ask me what git merge does, or how to branch, or even how to delete a file in a repository (w/o deleting it from your local), or how to revert back to another version, or how to even check out, and I'll say I dunno. I read the documentation, but am still confused. I sometimes feel like I'm dumb because I feel everyone loves git and everyone gets it. Glad to hear that I'm not alone.
I'm not an iPad user but it makes a lot of people pretty happy.
Chrome is usually a pleasure to use. Firefox used to be that way, and they are starting to regain my trust again. I feel good about using Firebug as well.
Skype has some irritations, but for the most part it's still marvelous to open up a video chat to fucking Zanzibar whenever I feel like it.
vim is not always easy, but wow is it rock solid.
Upgrading Debian works very well for me at least, thanks to the miracle of apt-get. Debian itself, well, it's not winning usability awards, but still....
And don't even get me started about C the language. It makes me want to go find Ritchie and punch him in the junk.
Why is rsync not a binary linked to a useful librsync? Why STILL TO THIS DAY a grep/egrep difference? And why did Apple integrate Interface Builder -- their best piece of software, IMO -- into Xcode?
Let's not even get into Google.
Games, a game delivery system like Steam, or perhaps VLC for playing my favorite music; but even then, it's the music that makes me happy, not the program playing it...
On the other hand, there is a lot of software that I actually hate.
The list of software that I actively despise, however, is finite, but unbounded.
Since he has to think about this stuff all day, every day, however, it's understandable that Ryan would come to despise it. On the other hand, as a web developer who uses the result of his hard work, I am not affected by it at all, so the complexity of my work is substantially reduced.
The solution is to delete nearly as much code as we write. Put differently, the solution is small programs, ruthlessly pursued. The reason we don't do this is that it's totally absent from (nearly all) software culture -- absent as in blank stares and why-you-talk-crazy-talk if you bring it up -- and by far the number one determinant of what people do is what other people do.
There are a few points of light. Chuck Moore and Arthur Whitney come to mind. From everything I've heard, their programs are small enough not to have these problems. And in case anyone is wondering "If this is so much better how come we don't all work that way?" - the answer to that conundrum hit me the other day: sample size. Statistically speaking, it's never been tried.
BC Is that advice you would give to practitioners: to throw out more?
AW Yes, but in business it's hard to do that.
BC Especially when it's working!
AW But I love throwing it all out.
But why do you say they are working in well-defined problem spaces? No more well-defined than most, I would have thought. Certainly Moore was a pioneer of iterative development and evolving a program (and notation) in the direction of the problem. That's why he invented Forth in the first place.
Edit: Oh, having looked up the Medawar reference I realize you probably mean "well-defined problem space" in the way a mathematician would: a problem space narrow enough to be effectively studied but rich enough to produce meaningful results. Certainly most software projects do not start out in such a space. On the other hand, we don't try to learn enough about our problems to find such spaces. We merely add code. One might almost say we excrete it.
Overall, I don't agree with this.
Complexity arises because what we want to do is complicated. I don't think there's a way around that. Sometimes too much cruft builds up in an area, but that leads to redesigns of specific components. For example, client-side configuration of LDAP and Kerberos has been unreasonably complex for a long time. That didn't lead to people ditching them; it led to https://fedorahosted.org/sssd/. It's likely that one day we will decide it's best to replace LDAP, just like was done with NIS. However, it won't mean we have to throw out all of Linux.
The "users don't care" argument doesn't appeal to me. I don't care what tools the architects used when they designed my apartment building, but if learning some complex math and geeking out over slide rules enabled them do it, I'm all for it. Being told there's something wrong with me because I've changed the settings in my text editor is insulting.
The actual solution is a desktop client and a web application used for simple CRUD purposes, each with around 10 screens / pages.
We have a huge suite of tests. We have a large amount of different layers. Gigantic amounts of interfaces inheriting from interfaces, and being passed around as parameters. Partial classes, with implementation spread out all around the application. Everything grandly designed according to design patterns, and every piece of code positioned in the smallest possible unit. Everything in the front end is a user control.
In theory this gives us extreme extensibility, flexibility, and code reuse. From an academic standpoint, it's well designed according to best practice.
In reality, it completely and utterly obfuscates the actual code that gets things done. Adding another DB field to the UI requires modifying the data-access layer and the business object layer, changes to 2-3 different types of interfaces, additional code in a type conversion class, initialization code in the front end, additional display logic in a user control, extra custom validation logic, etc., etc.
I really feel for the author, and can unfortunately confirm that it's often the same shit no matter what software you are dealing with.
If you don't take the time to configure your editor properly I do not want to collaborate with you on anything. Ever.
There's a proper way to tailor a suit, but no one expects an off-the-rack suit to fit everyone.
Some people want computers and software to be just like household appliances, but seem oblivious to the rants people have about appliances (stupid settings, no way to configure anything, lowest-common-denominator, one-size-fits-none, yadda yadda yadda).
Yeah, I get pissed at computers and software at times, but other times I'm awestruck at just what amazing things we can do.
Overall, we're winning.
(That seems like it should have a "confucius say" or something prefixing it).
Ryan Dahl will regret this post for years to come.
Alan Kay once said, "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves."
NB I've been to the center of Khafra's pyramid at Giza and I'm pretty sure that I wouldn't have done this if it hadn't exhibited rather a lot of structural integrity.
PS. yes, yes, yes, there is irony in Dr. Kay's quote in that the Giza pyramids are of almost magical construction. He meant it metaphorically.
I think we should be looking for the arch, and not be happy with piling rocks in a heap.
The Egyptian pyramids are quite incredible bits of engineering - hardly "rocks in a heap".
No I was nitpicking about some factual inaccuracies about Egyptian pyramids - I wasn't making any comment whatsoever about software.
I suspect I do. Funny you bring up a Kay quote I've heard that he regrets, but anyways...
The same day this came onto hacker news, Jamis Buck also posted an unintentional counterpoint to Ryan's rant: http://www.jamisbuck.org/presentations/rubyconf2011/index.ht...
90 years ago, I think Jamis could have written a similar deck about why cranking a Model-T while pumping the throttle was just the hard work necessary to enjoy driving.
Edit: Funny you bring up a Kay quote I've heard that he regrets
If he regrets such an important and honest quote, he's off my Christmas list! :-)
I feel like these points are contradictory. If it is "all about the user experience" and you're advocating using more sophisticated and cognitively intensive, but conceptually cleaner and more repeatable, processes like the arch... then shouldn't algorithms be important?
It's all about the user experience, right? While our job may be complicated and involve a lot of math; at the end of the day we're to present an approachable front and clean interface to the intent of this complexity.
Or are we just raging because software is hard? I have no sympathy for people who aren't constantly improving themselves. Writing software will probably never be easy; and nothing Ryan has said (or that Node.js does) changes that.
Large systems are almost impossible to create and maintain. Imagine if we could build dog houses and, if we're careful, houses, but nothing bigger without it being under constant threat of falling apart. Imagine news headlines of "Chartres Cathedral collapsed again today." And then imagine a response of, "Well, that's because it's hard to build."
What Dr. Kay said, and I'm sure he only backed off of it because it's tiring making these arguments over and over again to people who look on with open mouths, is that there is the equivalent of the arch waiting for us. You, Dave, may not believe it. You may say anyone who complains about houses falling in on themselves just doesn't know it's hard work. I'm saying it's hard work to make them out of toothpicks and dental floss.
The reason I get annoyed in my comments (and had to apologize) is that I've spent 15 years working on a solution and have had nothing but resistance from those who can't see past the state of the art.
You have no sympathy for people who aren't constantly improving themselves. I have no sympathy for an industry that isn't.
As programmers, it is easy for us to get wrapped up in the act of programming, and to forget about the point of programming: to solve problems as quickly, cheaply, robustly, and maintainably as possible.
Software development is all about tradeoffs, and some amount of environment configuration is undoubtedly a good thing. Just like some abstractions are good, some design patterns are good, etc.
But you have to be honest with yourself about whether the investment you are making learning and building additional complexity is really paying dividends, or if it's just fun to play with.
An alternate interpretation is that nobody should need to configure an editor. Editors should work already. It's 2011. If you like doing this and don't see that you should be spending your time more productively, you are part of the problem.
Software needs to be complicated because the tasks it performs are complicated. The only way human programmers can deal with this is abstraction on abstraction on abstraction. This will only become worse as software handles more "real world" things, such as formerly hardware tasks.
Unnecessary software complexity is added during maintenance, when the maintainers add extra complexity because they aren't able to integrate their changes into the current design and/or don't understand it well enough. A better job could probably be done with better tools/documentation in this case. Not by ranting at developers, though.
His rant about UNIX is crazy. Any full-featured operating system necessarily is complex under the hood. If something randomly doesn't work in his favorite OS he also has to spend hours googling, diving into obscure settings managers, etc.
I get why complexity is disliked(/feared?) by some people, but unless you've got a better workable solution that you're ready for me to try out, your rant is just noise to me.
I've often found myself begrudged by the complexity of a piece of software, but that doesn't make me think we should throw the entire program out. How about we make it easier to use instead?
Pointing your finger and making noise will draw attention to the issue, but isn't likely to fix it on its own.
bash tab completion
the details of programming languages
formatting source code
configuring window managers
What he's saying is, "Where are better abstraction mechanisms?" And that's a tremendously important question (whether you get it or not).
Should no programmers have to know about inter-process communication? Is dbus a bad IPC mechanism? Is IPC itself a flawed concept?
Those are interesting questions, because we can ask why and look for alternatives. Ryan's post and your response, not as interesting.
DBus as an implementation of messaging largely sucks. It uses an ad hoc protocol, there's little security, the C library implementation of it is a big mess, the socket interferes with remote X, introspection doesn't really work, and it uses far too much XML.
There are better abstraction mechanisms.
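For a sense of what the minimal alternative looks like: the plainest Unix IPC primitive, a connected socket pair, fits in a few lines of Ruby. Everything DBus layers on top (a bus daemon, XML introspection, its own wire protocol) starts from something this small. This is a sketch, and Unix-only because it relies on fork:

```ruby
require "socket"

# Minimal message-passing IPC over a UNIX socket pair (Unix-only: uses fork).
parent_io, child_io = UNIXSocket.pair

pid = fork do
  parent_io.close
  msg = child_io.gets.chomp
  child_io.puts(msg == "ping" ? "pong" : "?")  # reply to the parent
end

child_io.close
parent_io.puts "ping"
reply = parent_io.gets.chomp   # => "pong"
Process.wait(pid)
```

Whether a richer mechanism earns its complexity on top of this baseline is exactly the kind of question worth asking of DBus.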
Even the simplest and most straightforward development work is matched to a pattern and implemented as a design pattern.
I think there should be a rule that says if your feature or a particular problem that you are trying to solve does not exceed XYZ lines of code, then it should never be implemented as a design pattern.
Yet to determine XYZ. I would guess XYZ = 200?
Right. Because current attitudes are that if you aren't developing everything purely object oriented with a design pattern or five using nosql for your data store you're a fucking imbecile. I shit you not I've seen a coworker spend half a day adding 800+ lines of get() and set() methods to a 150 line email script. The truly bizarre part is, it's not like he's stupid, or fresh out of school. The guy's a certifiable genius with six or seven years of industry experience under his belt.
This kind of cargo cult bullshit is, in my opinion, the single largest recurring (recursive?) problem in our industry. Unfortunately this isn't a new problem. Every generation of programmers finds some set of development concepts to enshrine as the gospel.
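The getter/setter anecdote is especially absurd in a dynamic language, where the whole exercise collapses to a single declaration. A sketch with made-up class and field names (the coworker's actual language and script aren't specified here):

```ruby
# Hand-rolled Java-style accessors -- the 800-line approach, one field of many:
class EmailJobVerbose
  def get_recipient; @recipient; end
  def set_recipient(v); @recipient = v; end
end

# The idiomatic equivalent: one line generates a reader and writer per field.
class EmailJob
  attr_accessor :recipient, :subject, :body
end

job = EmailJob.new
job.recipient = "ops@example.com"
job.recipient   # => "ops@example.com"
```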
Doesn't he hate that a 50,000 LOC VM linked to C++ libraries is more popular than an 8,000(?) LOC language that solves the same problems and more?
It doesn't matter for most end users but it sucks to be the one to deal with V8's GC, lack of continuations, design by committee language, etc. But there are more bodies in his corner, dealing with that complexity.
You can see the difference between a chair made by hand by a carpenter who wanted to make the perfect chair and one made by a carpenter who wanted to get it over with.
Now on the other hand, you have two extreme ends:
1) The 'architect' who creates 4 layers of class hierarchies and factory-factories
2) And the guy who doesn't indent his code and types all of it in notepad.exe
I guess the key is to take pride in and put thoughtfulness into what you do, without losing sight of the fact that there's an end user at the end who needs to use your work.
I think a few months ago, this wouldn't have made sense to me, but now I totally get what he's saying.
On my new app, I'm still using express to do most of the connect-ey stuff, but i've definitely decided that most MVC-ey frameworks are a premature optimization (for me). I'd rather just start with node + express, add in whatever DB I need (Redis / Mongo preferably) and build small and progressively.
My lesson learned... would love to hear other opinions.
There is very little value in trying to draw generalisations from past experiences when it comes to deciding whether to use a framework, and if so, which framework to use.
Unless you're reinventing the wheel, each project will confront you with a unique set of challenges. More often than not, with a framework, these problems are awkwardly solved with code written by other people to solve other people's problems.
In my opinion all of this has less to do with what's trying to be achieved than it does with what you personally want to take away from the experience. If your motivation is to make money quickly to feed your family then you would be silly not to jump on something like Rails and ride on the shoulders of giants. On the other hand, if you want to become the most proficient programmer you can become then this path will probably lead you astray.
A quick look around the internet reveals both tiny projects that fail, as well as enormous projects that succeed, on full-stack frameworks such as Rails.
Needless to say, if you're hacking on Node you are of the latter category: the developer striving to broaden her horizons by exposing herself to the new and unknown. This won't help you learn to work in a team on a large project. It will not teach you to control complexity. In fact, it will probably lead you to believe you're learning all these things when you're actually becoming comfortable with the complete opposite (working alone, hacking in anything, anywhere you feel like).
Generalisations really piss me off.
I wholeheartedly encourage anyone who is open-sourcing node.js code to continue doing so. Even if it is yet another MVC framework. Just take note of what ry is getting at here by keeping it mean and lean.
Operating Systems make all of this much simpler by setting it up for you. Try to use only what they give you and you will go a lot farther with less effort. Buck the system and you're in for a world of hurt.
There's a better way to go about it. It's called: PAY FOR YOUR SOFTWARE. Then you might get support too. You want it for free, you bet your ass it's going to be painful to use.
By the way, I don't know who this Ryan Dahl guy is, but it strikes me as very naive to consider that grokking the entire inner workings of the complete organization of an operating system - from the development tools to make it to the execution and use of its applications - should somehow be simple for anyone. I wonder if he'd bitch that the kernel is hard to modify without affecting another component, or that different versions of software may not have been written to be completely backwards compatible with one another?
This is the real world. This shit is complicated because it evolved that way. It's almost infinitely flexible and powerful and gives you everything you need to do what you have to do - and you complain that it's complex? Grow up.
I don't know a whole lot about rails, so this is conjecture, but I imagine this is why ruby on rails is so popular: you don't need to know very much to get it going.
>> There will come a point where the accumulated complexity of our existing systems is greater than the complexity of creating a new one. When that happens all of this shit will be trashed.
Amen! (Include with complexity above the complexities of real life like project schedules, time-to-market, ..., ultimately economics.)
Ryan claims that the systems are still complex, which suggests that the accumulated complexity (including project schedules) has NOT exceeded the complexities of creating a new one in general.
Having said this, what is Ryan really saying that tells us something we did not know before?
A tacit aspect of the whole argument is that the people are intelligent enough to judge complexity to make rational decisions, and would be able to find a simple solution when creating a new one when even with all the new understanding gained with experience, the new solution will still be very complex (just simpler than the existing one). This is to an extent analogous to the rational market hypothesis, and that I doubt to be true.
Next Ryan may propose a new system that will be written from scratch to satisfy his no-overly-complex goal. Only to find that the new software runs on the top of existing hardware which is immensely complex. Oh, then he thinks about developing hardware again too. Only to find that hardware development is immensely complex (EDA tools for example). Oh, then he thinks about developing them again too. He now concludes that the accumulated complexity hasn't yet become too high after all.
After taking all of that into account and if that is not complex by itself, find something intermediate level (say a programming language) that has less complexity at that level (going deeper would increase complexity) and build something on the top of it. But isn't this what all of us already do?
What does the author of this essay want? A mind-reading machine? I RTFM, so I don't complain. On my Arch Linux netbook there is plenty of room for creativity, and amazement, and fun.
I love what I have. Besides that, it's free, and I can hack it.
I usually just start Visual Studio, create a project, import some NuGet packages and off I go.
(No, that wasn't a "my framework/platform is better"-rant - JVM IDEs + Maven can do essentially the same thing)
Point is, I don't even know what autoconf is, and I like to keep it that way.
Only a tiny fraction of us have to make Node/Java/.NET/Ruby+Rails+Rack+etc. All the rest can just go and solve problems. These tools really do abstract away from accumulated platform complexity. They add a little on their own (like $NODE_PATH), but that's on the platform level too, the level i don't care about anyway. I have npm, you know.
On a platform-related side note: if you're not using UNIX then it's going to be pretty hard to run into the described issues. When you go with a proprietary developer platform like Visual Studio you are outsourcing a lot of little headaches to Microsoft whose job it becomes to make your life easier. The problem is if Microsoft decides to drop their clue off in the weeds somewhere you have to either start writing pleading letters to Ballmer or make a huge jump to some other platform. In UNIX you suffer a steady string of little headaches, but you have the open source ecosystem to back you up, so with enough persistence there is almost nothing that you can't solve.
I know I'm not doing system programming. My point is that a very small minority of programmers is doing system programming. And if you're not part of that minority, software isn't at all as bad as Dahl describes. In fact, it got super many enablers for free, without many corresponding headaches, over the last 10 years or so. There are a lot of real decent high level tools.
Most of the Java ecosystem is open source, btw, and it has comparably easy high level tools as Visual Studio / .NET. Eclipse and NetBeans are real decent and open source, and so are Maven and all tooling for the cool JVM languages (i.e. not Java). I don't think my point applies to proprietary platforms like .NET only.
In fact Node is quickly becoming an equally cool platform. Big load of open source libraries, robust and dependable core engine, thriving community, excellent package management, etc. Nothing there to hate.
I daresay Ryan Dahl hates software for us so we don't have to.
He seems to imply that there could be an easier, more straightforward way to describe things in some more common language. And yet he doesn't give any evidence of how the current ways are overly complex.
Of course, there is broken or outdated software, and some things were crap from the start. Of course, there are always concrete things to improve but you won't get anywhere by dismissing all of it and starting anew.
For me, understanding the current state as part of our culture and our humanity and improving gradually on it, has guided me well in the past.
I think this argument is perfectly sound. As a software developer, there have been times when I wanted to do something simple using a particular framework, and I was faced with a steep learning curve to achieve it. Note that I was not trying to use the fanciest features of the framework, but the simplest.
Or harder yet, diagnosing the issue as the spark plugs to begin with. Cars have been around for a hundred years and they can still present a challenge to even highly skilled and trained mechanics.
Manage your expectations when you try to do something you aren't an expert in and you won't be disappointed.
Ultimately we are developing the software because of one simple thing.
Usability discussions like this invariably fill me with rage because of how oblivious and dismissive some of the comments are.
They might as well say: No Wireless. Less Space Than A Nomad. Lame.
You'd think people would know better after 10 years & $350B in market cap!
The results of modern software development speak for themselves. One of the biggest things I learned from reading a few chapters of The Mythical Man-Month was what software development used to be like.
You're tempted to criticize because someone told you growing up that you're a unique little snowflake and your opinion is worthwhile whether it's qualified or not. This is Ry. And his sentiment is echoed by the greats, like Alan Kay and others. Listen for a second (and you can't listen if you're already babbling your unintelligible knee-jerk response).
Anyway, I see your point.
Some guy releases a library or application, then it gets packaged one way into .debs, another way into .rpms, another into macports. Maybe the author does this work, maybe more likely distribution maintainers do it.
Or in the world of a specific programming language, there is a similar story with a language specific packaging system. Maybe it gets packaged as a gem, or a jar, or an egg, or a module, or maybe the new node package manager.
Often, installing a package involves spreading its contents around. Some of it goes in /var, some goes in /opt, some goes in /etc. Who knows where else?
Many of the reasons for the unix directory layout don't apply for most people today. How many people even know what those directories' purposes are? How many have actually read the Filesystem Hierarchy Standard document?
Typically, those directories were established so that sysadmins could save storage space by sharing files between sets of machines (the word "share" seems to have about a dozen different meanings in these discussions). So you slice it one way so that machines with different architectures can "share" the contents of /usr/share, and you slice it another way so that things that change can be backed up more often, so they get thrown together in /var (and then you can mount /usr read-only!)
Most of these considerations are not worth the effort for most people. I think they are outdated. We don't generally have these directories mounted on separate partitions. We just back up the whole damn hard drive when we need a backup.
Here's an idea: a package should be a directory tree that stays together. Each programming language should not have its own separate packaging system. A package should be known by the url where the author publishes it. That author should also declare his/her package's dependencies by referring to other packages by their urls. Then you don't need dependency resolution systems that function as islands unto themselves (one for debian, another for node etc).
Software is published on the web, in things like git or mercurial or subversion repositories. These have conventions for tagging each version. The conventions are gaining adoption (see semver.org for example) but not fast enough.
Some middle layers just add friction to the process: distributing tarfiles, distributing packages to places like rubygems or cpan or npmjs.org. Developers usually want the source straight from the source anyway -- users might as well use a setup that very closely mirrors developers'.
If you want to add a package into your system, the only piece of information you should need is the url for the project's main repository, with an identifier for the exact release you need. That's a naming system shared by the entire web. If there are issues, that information can go from the user directly to the author, with no layers in between.
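A hedged sketch of what that could look like in practice. The manifest format and the clone_command helper are hypothetical, not an existing tool: dependencies are declared as repository URL plus release tag, and fetching one is just a tagged shallow clone.

```ruby
# Hypothetical url-based dependency manifest: no per-language registry;
# the package's name *is* the url where its author publishes it.
DEPENDENCIES = {
  "https://example.org/git/libfoo" => "v1.2.0",
  "https://example.org/git/barlib" => "v0.9.3",
}

# Build the git command that would vendor one dependency at an exact release.
def clone_command(url, tag, dest_root = "vendor")
  name = File.basename(url)
  ["git", "clone", "--depth", "1", "--branch", tag, url, File.join(dest_root, name)]
end

DEPENDENCIES.each { |url, tag| p clone_command(url, tag) }
```

Because the URL and tag are the entire identity, the same declaration works on Debian, a Mac, or inside a language-specific project; there is nothing for a middle layer to translate.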
Apple has a great install/uninstall process for many applications: you move them to the applications folder, or you drag them out of the applications folder into the trash. We need to strive for this level of simplicity. Deployed software should have the same structure as the package distributed by the developer, in almost all cases.
My approach right now is to manage all my software within my home directory in a way not unlike what GoboLinux is doing. The home directory gets mounted on different machines with different operating systems. So the aim is to gradually work out a software packaging strategy that works well across all the existing OSes.
Similar to Homebrew or GNU Stow, actually. But Homebrew is Mac-specific, and weirdly tied to a GitHub repo.
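The Stow-like scheme above can be sketched in a few lines of Python. The layout is illustrative, not any real tool's: each package lives whole in its own directory, and a "link" step exposes its executables by symlinking them into a shared bin directory, so removal is just unlinking and deleting the package's directory.

```python
import os

def link_package(pkg_dir, bin_dir):
    """Expose a package: symlink every file in <pkg_dir>/bin into bin_dir."""
    os.makedirs(bin_dir, exist_ok=True)
    src = os.path.join(pkg_dir, "bin")
    for name in os.listdir(src):
        target = os.path.join(bin_dir, name)
        if not os.path.lexists(target):
            os.symlink(os.path.join(src, name), target)

def unlink_package(pkg_dir, bin_dir):
    """Hide a package: remove only the symlinks that point into it.
    Afterwards the package directory itself can simply be deleted."""
    src = os.path.join(pkg_dir, "bin")
    for name in os.listdir(src):
        target = os.path.join(bin_dir, name)
        if os.path.islink(target) and os.readlink(target).startswith(src):
            os.remove(target)
```

Because the package tree never gets scattered across /usr, /etc, and /var, the same directory works when the home directory is mounted on a different OS, which is the whole point of the home-directory approach.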
And then he somehow tries to make it "better" by ripping on himself, too, saying he's a part of the problem. Um, no, being self-deprecating in the same way that you're insulting everyone else does not magically make it ok for you to insult everyone else.
I've been using Linux (and a couple UNIXes on and off) for a little over 10 years. So I can get around a UNIX-like system pretty well. A lot of things are easy, and a lot of things aren't. Saying that it's somehow someone's fault is ridiculous. Claiming that all software developers are collectively lazy or don't care about user experience just doesn't hold up.
The funny thing is that he works in a position that naturally involves some difficult stuff. Let's say my favorite language to write software in is called XYZ. Say it's super easy, intuitive, concise, performant, and the method for compiling/deploying/distributing the end result of your hard work is trivial. In all ways, this system is just beautiful to work with.
Great, but I'll bet you the guy who wrote all the development tools and runtime for XYZ had to do a lot of difficult work to make that possible. Dahl is building a runtime for web applications. Unless he's writing it in some high-level language, it's not going to be easy. Supporting every platform he wants to support isn't going to be easy. User interfaces should be as simple as they can be, but often that requires a lot of complexity under the hood.
Go down even further. Let's think about our basic building blocks. Transistors. High and low, ones and zeroes. It's a very simple interface. You construct logical operations using NAND, NOR, NOT, etc. gates, which are built from transistors. Also simple. But the next step for our modern computer is... well... the microprocessor. And while it's made up of these incredibly simple building blocks, the combination of them is extraordinarily complex. So the interface into that mess is also not the most friendly thing to work with: a machine instruction set. So we build things on top of that to make it successively easier: assembly language, C, Ruby.
And the tools that come along with this are only as good as the technologies they're built on. Trade-offs must be made for portability. Yes, all this is a huge mess that "we" have collectively invented over the past 30-50 years or so, but it's simply not possible to go back to the 1970s, know exactly where we're going to be in the 2010s, and design the perfect system, even with foreknowledge. The current state of computing is a product of the evolution of our technology. Often that means doing the best you can today, and hoping for something better tomorrow.
IMO you could simplify things a lot with a distro that only shared, say, the kernel/module/libc layer, plus a package management system. Beyond that, each package would manage its own dependencies, and install them under its own root directory - so you have only the package maintainer to blame if something is missing. This would give an application much more control over how to configure itself. It would also have the added benefit of super simple uninstall - just delete the app's directory, just like on OS X.
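A minimal sketch of that per-package-root idea, assuming a made-up layout (the /opt/apps path and naming scheme are purely illustrative): install copies the whole tree under one directory, and uninstall is a single directory removal, with nothing scattered elsewhere.

```python
import os
import shutil

ROOT = "/opt/apps"  # hypothetical: each package gets ROOT/<name>-<version>

def install(src_tree, name, version, root=ROOT):
    """Copy the package's entire tree under its own root directory."""
    dest = os.path.join(root, f"{name}-{version}")
    shutil.copytree(src_tree, dest)
    return dest

def uninstall(name, version, root=ROOT):
    """The whole point: no scattered files, one rmtree and it's gone."""
    shutil.rmtree(os.path.join(root, f"{name}-{version}"))
```

This is essentially the drag-to-trash model from the Apple comment above, expressed as two filesystem operations; all the hard problems (shared libraries, disk duplication) are deliberately pushed inside each package's own directory.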
So, what is the trolling here? The lack of serious tone in the comment?