I don't know a whole lot about rails, so this is conjecture, but I imagine this is why ruby on rails is so popular: you don't need to know very much to get it going.
>> There will come a point where the accumulated complexity of our existing systems is greater than the complexity of creating a new one. When that happens all of this shit will be trashed.
Amen! (Include, in the complexity above, the complexities of real life: project schedules, time-to-market, ..., ultimately economics.)
Ryan claims that the systems are still complex, which suggests that the accumulated complexity (including project schedules) has NOT exceeded the complexities of creating a new one in general.
Having said this, what is Ryan really saying that tells us something we did not know before?
A tacit premise of the whole argument is that people are intelligent enough to judge complexity and make rational decisions, and that when creating a new system they would be able to find a simpler solution. Even with all the understanding gained from experience, the new solution will still be very complex (just simpler than the existing one). This is to an extent analogous to the rational market hypothesis, and I doubt it is true.
Next, Ryan may propose a new system written from scratch to satisfy his not-overly-complex goal, only to find that the new software runs on top of existing hardware, which is immensely complex. So then he thinks about developing the hardware too, only to find that hardware development is immensely complex (EDA tools, for example). So he thinks about redeveloping those as well. He then concludes that the accumulated complexity hasn't become too high after all.
If taking all of that into account is itself too complex, find something at an intermediate level (say, a programming language) that has less complexity at that level (going deeper would only increase complexity) and build something on top of it. But isn't this what all of us already do?
What does the author of this essay want? A mind-reading machine? I RTFM, so I don't complain. On my Arch Linux netbook there is plenty of room for creativity, amazement, and fun.
I love what I have. Besides that, it's free, and I can hack it.
I usually just start Visual Studio, create a project, import some NuGet packages and off I go.
(No, that wasn't a "my framework/platform is better"-rant - JVM IDEs + Maven can do essentially the same thing)
Point is, I don't even know what autoconf is, and I like to keep it that way.
Only a tiny fraction of us have to build Node/Java/.NET/Ruby+Rails+Rack+etc. All the rest can just go and solve problems. These tools really do abstract away the accumulated platform complexity. They add a little of their own (like $NODE_PATH), but that's at the platform level too, the level I don't care about anyway. I have npm, you know.
On a platform-related side note: if you're not using UNIX, it's going to be pretty hard to run into the described issues. When you go with a proprietary developer platform like Visual Studio, you are outsourcing a lot of little headaches to Microsoft, whose job it becomes to make your life easier. The problem is that if Microsoft decides to drop their clue off in the weeds somewhere, you have to either start writing pleading letters to Ballmer or make a huge jump to some other platform. On UNIX you suffer a steady string of little headaches, but you have the open source ecosystem to back you up, so with enough persistence there is almost nothing you can't solve.
I know I'm not doing systems programming. My point is that a very small minority of programmers is doing systems programming. And if you're not part of that minority, software isn't nearly as bad as Dahl describes. In fact, it has gained a great many enablers for free, without many corresponding headaches, over the last 10 years or so. There are a lot of really decent high-level tools.
Most of the Java ecosystem is open source, by the way, and it has high-level tools comparably easy to Visual Studio / .NET. Eclipse and NetBeans are really decent and open source, and so are Maven and all the tooling for the cool JVM languages (i.e. not Java). I don't think my point applies only to proprietary platforms like .NET.
In fact, Node is quickly becoming an equally cool platform: a big load of open source libraries, a robust and dependable core engine, a thriving community, excellent package management, etc. Nothing there to hate.
I daresay Ryan Dahl hates software for us so we don't have to.
He seems to imply that there could be an easier, more straightforward way to describe things in some more common language. Yet he doesn't give any evidence of how the current ways are overly complex.
Of course, there is broken or outdated software, and some things were crap from the start. Of course, there are always concrete things to improve but you won't get anywhere by dismissing all of it and starting anew.
For me, understanding the current state as part of our culture and our humanity and improving gradually on it, has guided me well in the past.
I think this argument is perfectly sound. As a software developer, there have been times when I wanted to do something simple using a particular framework and was faced with a steep learning curve to achieve it. Note that I was not trying to use the fanciest features of the framework, but the simplest of them.
Or harder yet, diagnosing the issue as the spark plugs to begin with. Cars have been around for a hundred years and they can still present a challenge to even highly skilled and trained mechanics.
Manage your expectations when you try to do something you aren't an expert in and you won't be disappointed.
Ultimately we are developing the software because of one simple thing.
Usability discussions like this invariably fill me with rage because of how oblivious and dismissive some of the comments are.
They might as well say: No Wireless. Less Space Than A Nomad. Lame.
You'd think people would know better after 10 years & $350B in market cap!
The results of modern software development speak for themselves. One of the biggest things I learned from reading a few chapters of The Mythical Man-Month was what software development used to be like.
You're tempted to criticize because someone told you growing up that you're a unique little snowflake and your opinion is worthwhile whether it's qualified or not. This is Ry. And his sentiment is echoed by the greats, like Alan Kay and others. Listen for a second (and you can't listen if you're already babbling your unintelligible knee-jerk response).
Anyway, I see your point.
Some guy releases a library or application, then it gets packaged one way into .debs, another way into .rpms, another into macports. Maybe the author does this work, maybe more likely distribution maintainers do it.
Or in the world of a specific programming language, there is a similar story with a language specific packaging system. Maybe it gets packaged as a gem, or a jar, or an egg, or a module, or maybe the new node package manager.
Often, installing a package involves spreading its contents around. Some of it goes in /var, some goes in /opt, some goes in /etc. Who knows where else?
Many of the reasons for the unix directory layout don't apply for most people today. How many people even know what those directories' purposes are? How many have actually read the Filesystem Hierarchy Standard document?
Typically, those directories were established so that sysadmins could save storage space by sharing files between sets of machines (the word "share" seems to have about a dozen different meanings in these discussions). So you slice it one way so that machines with different architectures can "share" the contents of /usr/share, and you slice it another way so that things that change often can be backed up more often; those get thrown together in /var (and then you can mount /usr read-only!).
Most of these considerations are not worth the effort for most people. I think they are outdated. We don't generally have these directories mounted on separate partitions. We just back up the whole damn hard drive when we need a backup.
Here's an idea: a package should be a directory tree that stays together. Each programming language should not need its own separate packaging system. A package should be known by the URL where the author publishes it. The author should also declare the package's dependencies by referring to other packages by their URLs. Then you don't need dependency resolution systems that function as islands unto themselves (one for Debian, another for Node, etc.).
Software is published on the web, in things like git or mercurial or subversion repositories. These have conventions for tagging each version. The conventions are gaining adoption (see semver.org for example) but not fast enough.
Some middle layers just add friction to the process: distributing tarfiles, distributing packages to places like rubygems or cpan or npmjs.org. Developers usually want the source straight from the source anyway -- users might as well use a setup that very closely mirrors developers'.
If you want to add a package to your system, the only information you should need is the URL of the project's main repository, plus an identifier for the exact release you need. That's a naming system shared by the entire web. If there are issues, that information can go from the user directly to the author, with no layers in between.
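The URL-as-name idea can be sketched in a few lines. This is a minimal illustration, not a real tool: the manifest format, the example repository URLs, and the version tags are all hypothetical.

```python
# Each package is identified by (repository URL, release tag), and declares
# its dependencies the same way -- no per-language registry in between.
MANIFESTS = {
    ("https://example.com/alice/webapp.git", "v1.0.0"): [
        ("https://example.com/bob/httplib.git", "v2.3.1"),
    ],
    ("https://example.com/bob/httplib.git", "v2.3.1"): [
        ("https://example.com/carol/parser.git", "v0.9.0"),
    ],
    ("https://example.com/carol/parser.git", "v0.9.0"): [],
}

def resolve(pkg, seen=None):
    """Depth-first walk: return every (url, tag) needed to install pkg,
    dependencies first."""
    if seen is None:
        seen = []
    if pkg in seen:
        return seen
    for dep in MANIFESTS.get(pkg, []):
        resolve(dep, seen)
    seen.append(pkg)  # dependencies land in the list before the package
    return seen

order = resolve(("https://example.com/alice/webapp.git", "v1.0.0"))
for url, tag in order:
    print(url, tag)
```

Because the identifier is just a URL plus a tag, the same resolver would work for any language and any hosting setup, which is exactly the island-free property argued for above.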
Apple has a great install/uninstall process for many applications: you move them to the applications folder, or you drag them out of the applications folder into the trash. We need to strive for this level of simplicity. Deployed software should have the same structure as the package distributed by the developer, in almost all cases.
My approach right now is to manage all my software within my home directory in a way not unlike what GoboLinux is doing. The home directory gets mounted on different machines with different operating systems. So the aim is to gradually work out a software packaging strategy that works well across all the existing OSes.
It's similar to Homebrew or GNU Stow, actually. But Homebrew is Mac-specific, and weirdly tied to a GitHub repo.
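The Stow/GoboLinux-style layout described above can be mimicked in a few lines: each package keeps its whole tree under its own directory, and "activating" it just plants symlinks into a shared bin/. The package name and paths below are hypothetical; everything happens inside a throwaway temp directory.

```python
import os
import tempfile

# A fake "home": one package tree under pkgs/, one shared bin/.
home = tempfile.mkdtemp()
pkg_bin = os.path.join(home, "pkgs", "hello-1.0", "bin")
os.makedirs(pkg_bin)
with open(os.path.join(pkg_bin, "hello"), "w") as f:
    f.write("#!/bin/sh\necho hello\n")

bin_dir = os.path.join(home, "bin")
os.makedirs(bin_dir)

def stow(package_bin, target_bin):
    """Stow-style activation: symlink each file from the package's own
    bin/ into the shared bin/. The package tree itself stays together."""
    for name in os.listdir(package_bin):
        os.symlink(os.path.join(package_bin, name),
                   os.path.join(target_bin, name))

stow(pkg_bin, bin_dir)
print(os.path.islink(os.path.join(bin_dir, "hello")))  # True
```

Uninstalling is the mirror image: remove the links and delete the one package directory, and no files are left scattered around /var, /opt, or /etc.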
And then he somehow tries to make it "better" by ripping on himself, too, saying he's a part of the problem. Um, no, being self-deprecating in the same way that you're insulting everyone else does not magically make it ok for you to insult everyone else.
I've been using Linux (and a couple UNIXes on and off) for a little over 10 years. So I can get around a UNIX-like system pretty well. A lot of things are easy, and a lot of things aren't. Saying that it's somehow someone's fault is ridiculous. Claiming that all software developers are collectively lazy or don't care about user experience just doesn't hold up.
The funny thing is that he works in a position that naturally involves some difficult stuff. Let's say my favorite language to write software in is called XYZ. Say it's super easy, intuitive, concise, performant, and the method for compiling/deploying/distributing the end result of your hard work is trivial. In all ways, this system is just beautiful to work with.
Great, but I'll bet you the guy who wrote all the development tools and runtime for XYZ had to do a lot of difficult work to make that possible. Dahl is building a runtime for web applications. Unless he's writing it in some high-level language, it's not going to be easy. Supporting every platform he wants to support isn't going to be easy. User interfaces should be as simple as they can be, but often that requires a lot of complexity under the hood.
Go down even further. Let's think about our basic building blocks. Transistors. High and low, ones and zeroes. It's a very simple interface. You construct logical operations using NAND, NOR, NOT, etc. gates, which are built from transistors. Also simple. But the next step for our modern computer is... well... the microprocessor. And while it's made up of these incredibly simple building blocks, the combination of them is extraordinarily complex. So the interface into that mess is also not the friendliest thing to work with: a machine instruction set. So we build things on top of that to make it successively easier: assembly language, C, Ruby.
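The simple-blocks-to-complex-whole point can be made concrete: every gate below is built from NAND alone, and even two layers up you are already composing compositions. This is a standard logic-design exercise, sketched here in Python for illustration.

```python
def NAND(a, b):
    """The single primitive; everything else is built from it."""
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """First rung on the ladder toward an ALU: add two bits."""
    return XOR(a, b), AND(a, b)  # (sum, carry)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

A full adder chains two half adders, a ripple-carry adder chains full adders, and by the time you reach a pipelined microprocessor the stack of "simple" layers is anything but simple to look at from outside.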
And the tools that come along with this are only as good as the technologies they're built on. Tradeoffs must be made to be portable. Yes, all this is a huge mess that "we" have collectively invented over the past 30-50 years or so, but it's simply not possible to go back to the 1970s, know exactly where we're going to be in the 2010s, and design the perfect system, even with foreknowledge. The current state of computing is a product of the evolution of our technology. Often that means doing the best you can today, and hoping for something better tomorrow.
IMO you could simplify things a lot with a distro that only shared, say, the kernel/modules/libc layer, plus a package management system. Beyond that, each package would manage its own dependencies and install them under its own root directory, so you have only the package maintainer to blame if something is missing. This would give an application much more control over how it configures itself. It would also have the added benefit of super-simple uninstall: just delete the app's directory, like on OS X.
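The one-directory-per-app idea boils down to a layout like the one below: dependencies vendored inside the app's own root, so uninstall is a single directory removal. The app and dependency names are made up, and everything runs in a temp directory.

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()

# Hypothetical app "myeditor" with its dependency vendored under its
# own root -- nothing of it lives in /var, /etc, or /opt.
app = os.path.join(root, "apps", "myeditor")
os.makedirs(os.path.join(app, "deps", "libfoo"))
os.makedirs(os.path.join(app, "etc"))
with open(os.path.join(app, "etc", "config"), "w") as f:
    f.write("theme = dark\n")

# Uninstall, OS X drag-to-trash style: one directory removal, and the
# app plus all its private dependencies and config are gone.
shutil.rmtree(app)
print(os.path.exists(app))  # False
```

The trade-off, of course, is disk space and duplicated libraries, which is precisely what the shared /usr layout was originally designed to avoid.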
So, what is the trolling here? The lack of serious tone in the comment?