This of course is nothing new - Alan Kay has been telling us as much for more than three decades, and he also has an enlightening talk addressing the biggest problem facing software engineering.
Before vanishing from the Internet, Node's Ryan Dahl left a poetic piece on how "utterly fucked the whole thing is".
Steve Yegge also dedicated one of his epic blog posts to "Code's Worst Enemy".
More recently, Clojure's Rich Hickey took up the issue in his quintessential "Simple Made Easy" presentation, explaining the key difference between something that is "easy" and something that is truly "simple".
The Viewpoint Research Institute is trying to build an entire computing stack in 20kloc - http://www.vpri.org/pdf/tr2010004_steps10.pdf
The Berkeley Orders Of Magnitude (BOOM) group reimplemented Hadoop and HDFS in a few thousand lines of code whilst remaining API-compatible and achieving similar performance - http://db.cs.berkeley.edu/papers/eurosys10-boom.pdf
Both efforts required questioning the assumption that the complexity of modern systems reflects inherent complexity in the problem. That is the point of this manifesto - to get such drastic improvements we have to step back and rethink our entire approach to building software systems.
I suspect that a large part of the problem is the different response curves for features and complexity. Adding a new feature for a little complexity seems like a good trade. Thousands of people make that decision in their own individual areas and suddenly the cumulative complexity falls off a cliff and we can't handle it anymore. To get simplicity back we have to take a very high-level view of the tradeoffs and demand much more power in return for each unit of complexity we add.
EDIT "Programmers tend to overlook the fact that spring cleaning works best when you're willing to throw away stuff you don't need." - http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.ht...
The BOOM group seem to be using a Prolog variant, and their conclusions argue for declarative, high-level programming languages (I didn't read the paper in full; please correct me if I'm wrong!). But if you take a look at declarative programming languages such as Prolog, or functional languages like Haskell or OCaml (all languages which aim to make complex problems more tractable), this is precisely what many programmers reject as "too complex". It doesn't help that very often, "complex" is used as a synonym for "unfamiliar".
Here is another piece of the puzzle: in another post, Jonathan Edwards claims simplicity led him to reject higher-order functions from the "simple" language he is designing. But higher-order functions are a tool (not necessarily a simple tool) for reducing the complexity of code! It just happens that a programmer unused to them must first learn them before becoming proficient with them.
So the trade-off between complexity and simplicity isn't so easy to define.
Higher-order functions, or higher-order anything (e.g. predicates), are not complicated because they are unfamiliar; they are complicated necessarily: HOFs, for example, add indirection to control and data flow, and now you have to think second- or third-order (hopefully not more) about what your code is doing!
The complexity is right there in the definition of "higher-order." It even shows up directly in our tools for automatically reasoning about program behavior (first order is easier to deal with than second or Nth order). There is a reason first-order logic is so popular: not because people don't get higher-order logic, but because first-order logic is easier to deal with. The problem is, of course, expressiveness (you can't do much with a functional language that doesn't admit HOFs).
It's not surprising you can do a lot with a programming language that doesn't support HOFs (though, of course, there wouldn't be much point in calling it functional) -- you can in fact do everything without them. The key issue here is whether HOFs are a useful tool that, once learned, will help make the complexity of the problem you're attempting to solve more manageable. I believe they are, by helping with modularity and composability.
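For instance (a made-up example, not from the thread), a higher-order function lets a traversal be written once, with only the varying part passed in:

```python
# Two near-duplicate functions, then the shared pattern factored out
# into one higher-order function. Names here are purely illustrative.

def total_price(items):
    result = 0
    for item in items:
        result += item["price"]
    return result

def total_weight(items):
    result = 0
    for item in items:
        result += item["weight"]
    return result

# The traversal is written once; the varying part (which field to
# read) is passed in as a function.
def total_by(key_fn, items):
    return sum(key_fn(item) for item in items)

items = [{"price": 10, "weight": 2}, {"price": 5, "weight": 1}]
assert total_price(items) == total_by(lambda i: i["price"], items) == 15
assert total_weight(items) == total_by(lambda i: i["weight"], items) == 3
```

The modularity win is that new "totals" cost one lambda instead of one more copy of the loop.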
Even if learning to use more powerful tools is more "complex" (that is, less "easy", or "takes more time to get used to"), once you master them, presumably you're equipped to deal with the tasks you want to accomplish.
Maybe some power tools can be made easier to handle. This is a valuable goal. But if your goal is to never make a tool that requires the user to be trained, you're severely limiting yourself.
> ...once you master them...
Arguments of this sort tend to assume that a) the learning time can always be written off and b) that once you master a tool there are no more overheads. a) is false in the case where the learning time is too long or simply not available eg Joe Accountant may benefit from learning to program but he doesn't have the time to spend years learning. b) is false in the case where the abstraction is harder to reason about or makes debugging harder.
There is certainly an economic decision to be made here but it must consider opportunity cost, cognitive overhead and maintenance overhead as well. That decision doesn't always favour the slow-to-master tool.
It's like extolling the virtues of investing in an industrial nailgun to someone who is just trying to hang a photo. Sure, the nailgun will make driving in nails faster but the investment won't pay off for them.
Once you have mastered using HOFs, you have simply become proficient at doing a higher-order analysis in your head, it is still more complicated than a lower-order analysis. Even SPJ's head would probably explode given a function that took a function that took a function that took a function... with HOFs, you really want to keep N (the order) fairly small.
There is actually a whole body of work on point-free programming, where the idea is to make something like f(g) look more like f * g: you aren't passing g into f (which you actually are, but that's an implementation detail), you are composing f and g. The semantics of that composition are then more well-defined (and hence restricted) than naked higher-order functional programming (which is more like an assembly language at that point).
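A minimal sketch of the distinction in Python (Haskell's `(.)` operator is the canonical version): `compose` has fixed, well-defined semantics, while an arbitrary f(g) call can do anything at all with the g it receives.

```python
# compose builds a new function whose meaning is fully determined by
# the composition operator, not by whatever f decides to do with g.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda x: x + 1
double = lambda x: x * 2

# (inc . double)(x) = inc(double(x))
double_then_inc = compose(inc, double)
assert double_then_inc(5) == 11
```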
Jonathan Edwards: "Complexity is cumulative cognitive load. We should measure complexity as the cumulative effort to learn and use a technology."
BOOM: "One simple lesson of our experience is that modern hardware enables real systems to be implemented in very high-level languages. We should use that luxury to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."
> The BOOM group seem to be using a Prolog variant
They are using a variant of Datalog, which is dramatically simpler than Prolog (no unification, no data structures, no semantics-breaking cut, etc). In our (admittedly limited) user experiments we have found that their data-centric approach is easier to teach to non-programmers, since it lends itself to direct manipulation, live execution and computer-assisted debugging (eg we can cheaply record the exact cause of every change to runtime state and make it queryable in the same language). We (Light Table) are working on a similar Datalog variant with a front-end that is about as complicated as Excel.
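As a toy illustration (emphatically not the BOOM/Bloom implementation), the entire evaluation model of a Datalog-like language can be sketched as a monotonic fixpoint over sets of facts - no unification, no cut, no evaluation-order concerns:

```python
# Toy bottom-up Datalog evaluation, illustrative only.
# Facts: parent(a, b) means a is a parent of b.
parent = {("alice", "bob"), ("bob", "carol")}

# Rules: ancestor(X, Y) :- parent(X, Y).
#        ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
ancestor = set(parent)
changed = True
while changed:  # iterate the rules to a fixpoint; rule order never matters
    changed = False
    for (x, y) in list(ancestor):
        for (y2, z) in parent:
            if y == y2 and (x, z) not in ancestor:
                ancestor.add((x, z))
                changed = True

assert ("alice", "carol") in ancestor  # derived transitively
```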
> So the trade-off between complexity and simplicity isn't so easy to define.
I agree that there is often a trade-off between simple and easy, but I don't think our current tools are anywhere near optimal in that sense. For the case of distributed programming, the BOOM tools are both simpler and easier in that they allow you to focus on specifying high-level behaviour rather than worrying about the details that normally make up the bulk of a programmer's mental load.
Yes, but this is accomplished using declarative high-level languages, precisely the kind of languages that many programmers will claim are "too hard" (i.e. sufficiently unlike traditional imperative languages).
I bet almost everyone agrees that the goal of most modern software development should be "[...] to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."
The disagreement here is on whether this can be accomplished by making programming languages simplified and easy to learn as the overriding principle. Sometimes you have to use a complex tool in order to make your work better.
Your argument seems to be that Prolog and Haskell are high-level and Prolog and Haskell are complicated and hard to learn. I agree up to that point. That doesn't imply that any high-level declarative language must be hard to learn. SQL and Excel are declarative high-level languages (more so than Haskell and Prolog in terms of abstraction from the execution model). Both have had much more success with non-programmers than imperative languages have.
Bloom is a much simpler language than, say, Ruby. It doesn't have data-structures. There is no control-flow. Semantics are independent of the order of evaluation. The entire history of execution can be displayed to the user in a useful form. Distributed algorithms can be expressed much more directly (eg their paxos implementation is almost a line for line port of the original pseudo-code definition). We have actually tested Bloom-like systems on non-programmers and had much better results than with imperative or functional languages.
> I bet almost everyone agrees that...
The key part of that quote was "We should use that luxury to implement systems..." ie give up low-level control over performance and memory layout in exchange for simpler reasoning. Note that in both Excel and SQL the user doesn't have to reason about what order things happen in or where things are stored. The same applies to successful dataflow environments like Labview. In contrast, programmer-centric languages like Haskell and Prolog require understanding data-structures and Prolog requires understanding the execution model to reason about cut or to avoid looping forever in dfs.
Excel, SQL and Labview are all flawed in their own ways but we can build on their success rather than writing them off as not real programming.
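As a toy sketch (hypothetical names, nothing like a real spreadsheet engine), the order-independence being praised here is the property that the user declares formulas and the system derives the evaluation order from the dependencies:

```python
# Cell formulas can be declared in any order; "total" references cells
# defined after it, and evaluation is demand-driven.
formulas = {
    "total": lambda c: c["price"] * c["qty"],
    "price": lambda c: 5,
    "qty":   lambda c: 3,
}

def evaluate(formulas):
    cache = {}
    class Cells(dict):
        def __getitem__(self, name):  # look up a cell, computing on demand
            if name not in cache:
                cache[name] = formulas[name](self)
            return cache[name]
    cells = Cells()
    return {name: cells[name] for name in formulas}

assert evaluate(formulas)["total"] == 15
```

The user reasons only about the equations; where and when each value is computed is the system's problem.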
Interesting! Do you have any link to your study? I'm interested to see your methodology and how you handled researcher bias :)
By the way, SQL is a pretty complex formal system well grounded on theory. I would never write it off as "not real". Most people don't understand SQL without training, by the way.
I think lots of people understand, or can understand, the theory of SQL. It's the "install this, go into this directory, edit this config file, make sure it starts with the system by doing this, install these drivers for the database" stuff that stops a lot of people before they start. Same applies to pretty much everything that runs as a service, like webservers.
This isn't an argument against SQL, by the way. It's an argument against the notion that the overriding principle when assessing a tool/language is whether it's simple and easy to learn.
See also: http://www.lighttable.com/2014/05/16/pain-we-forgot/
SQL itself is a complex beast. Approachable in parts and with training, like many complex tools.
Oh we totally didn't :) Our user testing is very much at the anecdotal phase so far. Fwiw though, the researcher bias was in the other direction - @ibdknox was initially convinced that logic programming was too hard to teach. We were building a functional language until we found that our test subjects struggled with understanding scope, data-structures (nesting, mostly) and control flow. That didn't leave a whole ton of options.
1. X is bloated, let's reimplement it under the name Y with all the fat trimmed!
2. Notice that the problem may be a little bit harder than originally envisioned, start to add more and more features
3. Y is impossible to distinguish from X
Not to say that sometimes you shouldn't cast down existing systems, but it's often because the times have moved on (eg, Wayland vs X11).
I'll note that 1. is (involuntarily) in the same category as the discourse of populist politicians who promise simple solutions to complex problems. I am wary of both.
Isn't it more probable that the tools we have today are just inadequate to deal with those problems? And maybe they are still inadequate after all these years because our industry is very stubborn and doesn't learn from its mistakes?
I see nothing complex about drawing interactive elements on the screen. Smalltalk with its Morphic interface offers a much richer and more flexible GUI toolkit, and that was out when, the '70s? How many GUI toolkits have learned from those lessons? And Morphic/Smalltalk was cross-platform before Java, in ways Java isn't to this day. It seems to me that what hampers evolution is the technology we choose (Java, C), not the problem itself (drawing interactive elements on the screen).
For anyone interested in this discussion I recommend Alan Kay's "The Computer Revolution hasn't happened yet": www.youtube.com/watch?v=oKg1hTOQXoY
"[the] intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
In other words, that feeling when you realize that a thing you've rebuilt because you didn't like it was built that way for a reason.
No doubt there is a fair amount of architecture astronautics out there, but thinking you're going to turn complex problems into simple problems by "changing the environment" is most of the time extremely naive. You can push against reality all you want, but reality tends to push back.
More seriously, there are many classes of problems which range from extremely impractical to impossible to tackle without software. The question is also: who is getting the headaches? If you pile up enough man-hours on the most awful software, it will eventually work satisfactorily. Customers don't give a damn about the underlying code, nor should they; they want a product that works.
Suppose you've developed an application that has a graphical user interface. For the sake of simplicity, you rejected all of the overly complex GUI toolkits out there and rolled your own, with the assumption that all of your users can see what you're drawing on the screen. So you didn't implement your host platform's accessibility API (assuming you're running on a host platform with an accessibility API.) Now, a blind person needs to use your software. How do you fix that problem without software?
Beyond that specific example, the point is that, as the GP said, trying to change the environment to eliminate inherent complexity is often not feasible.
But I did something different: rather than wrap high-level APIs in even higher-level APIs, I instead focused on making it "simple to express the math." So for a table, you could use data binding to express some constraints on cell width, height, adjacency, and so on. I never bother using what WPF calls a table (or list or whatever); I just use a canvas and organize the cells with a few lines of math. That is quite liberating!
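A minimal sketch of what "a few lines of math" can mean here (hypothetical numbers, no WPF involved): column positions fall out of an adjacency equation instead of a table widget.

```python
# Hypothetical layout math for a row of table cells.
col_widths = [80, 120, 60]

def col_x(i):
    # adjacency constraint: x_i = x_{i-1} + width_{i-1}, with x_0 = 0
    return sum(col_widths[:i])

assert [col_x(i) for i in range(3)] == [0, 80, 200]
```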
It is not always the case that GUI toolkits have to go in the direction of ever more abstraction.
Maintaining complex software is also infeasible. Maybe we're doomed.
It doesn't mean you shouldn't challenge existing paradigms. Innovate! Blow our minds! But just make sure you understand the problem you're solving first :)
I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal.
Every few years someone comes up with the idea of making programming so simple that anyone can do it. This is not new; the concept has been around at least since the '60s. Look at some COBOL history:
"They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximum use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power." 
Many attempts have been made to achieve this Silver Bullet. The examples that get closest are things like GameMaker: Studio or Unity3D, which are extremely domain-specific (for certain classes of 2D and 3D games, respectively).
So you CAN create a domain-specific language that anyone can use. But every attempt to create a completely general purpose language that anyone can use to do anything has failed -- or has ultimately (accidentally?) produced a domain-specific language that solves the specific problems that the authors are most familiar with.
Excel is used by millions of people and powers large swathes of the world's economy. VB6 is still in use today and powers multi-million dollar companies (http://msdn.microsoft.com/en-us/magazine/jj133828.aspx). COBOL still lurks in the heart of many banks.
These systems have problems that cause untold economic damage. Instead of improving the tools to help people avoid these problems we sit around being smug about our uber-programming skills and declaring that programming is too complex for those people anyway. What many people here don't seem to realise is that the choice is not between the crappy VB6 app and a nicely architected system built by a 'real' programmer, the choice is between the crappy VB6 app and the same people doing the job manually.
Empirically speaking, given tools with an approachable learning curve many people are capable of producing simple programs that make them more effective at their jobs. Rather than lamenting that these people produce bad code we should be improving the tools to lead them down the correct path.
However, I don't think anyone is arguing that nothing should be done about it. Some of us would argue that something must be done, but that the answer is not necessarily simplification, especially simplification of the "make it easy" kind.
Especially since, for many programmers, "simple" & "easy to learn" are actually synonyms for "similar to something I'm already familiar with".
And why is that such a problem? If you can handle excel and vba you can learn to program. A lot of existing languages are easier to learn than excel+vba.
Excel is not easy to use if you want to do anything moderately complex. Even after years of training all through school and in the workplace the average user can't do much programmatically with it.
Excel solved the input/output problem, the availability problem, and the business acceptability problem.
By that I mean business users were doing a massive amount of data entry and data manipulation, mainly in order to do simple calculations and view data logically in tables. They did this with no thought of automation, but it made automation 10x easier.
As a business grunt I can't just install python and start cutting code. If I said "Hey boss I can automate this with python. It will only take me 2 weeks" I've got no hope. For starters corporate IT wouldn't even let me install python. But I can spend a month fiddling around in a spreadsheet to automate the same task and my manager will be happy.
> It's not just programming that is hard, it's editing, debugging, compiling, version control, packaging, deploying, upgrading etc. Existing tools give you a huge amount of flexibility and power (that most end-user tasks don't need) at the expense of a brutal learning curve.
Sure, I agree all of those steps are painful, and I think everyone here agrees that anything that can improve them is welcome! At the same time, some of them only exist in specific contexts. For example, is the guy writing an Excel macro not bothered about version control because Excel is an easy-to-learn tool that achieves simplicity, or because he isn't building something collaboratively with a team of 5 other people, all working on the same task?
By the way, I read your article. I fully agree with this:
"Finally, environments can't be black boxes. Beginners need a simple experience but if they are to become experts they need to be able to shed the training wheels and open the hood. Many attempts at end-user programming failed because they assumed the user was stupid and so wrapped everything in cotton wool. Whenever we provide a simplified experience, there should be an easy way to crack it open and see how it works."
I'd add that they also need to be able to reach for the complex tools once they've mastered the simpler ones.
My point is that for end-user programming to work we need version control, deployment, packaging etc to scale down as well as up. The software industry has a tendency to fetishize scaling up and ignore scaling down.
Experienced professional programmers still struggle with git in their day-to-day workflow (http://www.sussex.ac.uk/Users/bend/ppig2014/13Church-Soderbe...). The average end-user has no chance. Even if the underlying data model is something like git, the experience for the user needs to be more like undo-tree or etherpad.
Excel smooths the learning curve for calculation/simulation - you can get started by just putting in some numbers and learning a few basic functions. If that's all you need, that's all you have to learn. I want to see the same pay-as-you-go approach for all our tools.
It's not just programming that is hard, it's editing, debugging, compiling, version control, packaging, deploying, upgrading etc. Existing tools give you a huge amount of flexibility and power (that most end-user tasks don't need) at the expense of a brutal learning curve. They make complex things possible but easy things tiresome. Consider how long it takes for the average CS undergrad to reach a point where they could build, deploy and maintain the simple webapp I described in that post.
There is definitely room for a simplified set of tools that lets end-users build simple applications with a pay-as-you-go approach to complexity and learning curve.
You want this to succeed, then pick a domain and solve that problem for people.
It's not the source control that is the primary problem. You can do Excel-like things on Google Drive, which has integrated revision control, shared editing, and a dozen other features to help the average user or team get things done.
No, the primary problem is that most people don't know what they want. They don't even know what's possible, much less how to accomplish it. Ask any consultant and they'll agree with me: The first thing you need to do is figure out what the users actually can use based on their needs.
It's like Henry Ford said: If you asked users what they wanted, they'd have told him they wanted a faster horse. 
The problems exist because they aren't easy to solve unless you have the expertise. I'm not poo-pooing Excel or any other domain-specific solution. If you can create a general solution to a problem (say, for example, WordPress), then users can plug together the pieces they need and create an app.
But there is no "language" that will solve the "average person can't program" problem in the general case any more than there's a paintbrush that will solve the "average person can't paint" problem.
I already said that Domain-Specific systems can be successful, and I have nothing against improving domain specific systems.
The original article implied that they were going to fix All The Programming. I'm saying it's been tried, and Every Single Time it fails utterly (or produces a domain specific solution).
If that's the end result of mediocrity, our programming culture could use a whole lot more of that please.
I mean, Unity3D uses C# on top of .NET, which isn't at all what is described in the OP.
That's pretty much my point.
What people need is a full stack to solve their problem, but they need the stack to be able to be customized to their particular variant of the problem.
That full stack will have a lot of code that is beyond 95% of people's ability to create. But even though Unity uses C#, artists and designers I know who are absolutely not programmers (by their own insistence!) can and do successfully create games on top of Unity.
And that is the future of making programming more accessible. Not creating a better language, which is what the original manifesto was about.
Python and Ruby are notable counter-examples to this.
Both Python and Ruby were designed primarily for human-friendliness, as this essay advocates. Guido has described Python as "executable pseudo code". Matz has said that the primary purpose of Ruby is to make programmers productive and happy, and to design more for humans than for the machines.
People have tried to make the performance come. Python is well-loved, which has led to numerous attempts to accelerate it (Psyco, Unladen Swallow, PyPy). Many of these efforts have made notable progress, but none have met the bar of compatibility and performance necessary to displace the relatively slow CPython. Ruby is getting faster, but it's still among the slowest compared with its peers.
Lua on the other hand is probably fast because it is simple. There aren't many other programming languages like that. (C and Scheme maybe?)
Building simple systems whilst keeping performance in mind is a delicate balancing act. Local optimisations tend to add complexity that may prevent later optimisations. Sweeping generalisations that reduce implementation complexity often inhibit later optimisation due to lack of information (eg http://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-to...).
Python has a rather simple syntax indeed, but it has very complicated semantics. `foo.bar` - syntactically a very simple attribute access - has a mountain of semantics behind it.
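A sketch of part of that mountain (these are CPython's standard lookup rules): data descriptors beat the instance `__dict__`, which beats plain class attributes, with `__getattr__` as the last resort.

```python
class Bar:
    @property                     # a data descriptor on the class
    def bar(self):
        return "from property"

class Foo:
    bar = "from class"            # a plain (non-descriptor) class attribute
    def __getattr__(self, name):  # only called when normal lookup fails
        return f"fallback for {name}"

f = Foo()
assert f.bar == "from class"        # class attribute found
f.__dict__["bar"] = "from instance"
assert f.bar == "from instance"     # instance dict shadows class attribute
assert f.baz == "fallback for baz"  # __getattr__ as fallback

b = Bar()
b.__dict__["bar"] = "shadowed?"
assert b.bar == "from property"     # data descriptor beats instance dict
```

All of this (plus `__getattribute__`, slots and the MRO) hides behind one dot.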
> Given that Python and Ruby are both simple to learn and use (as users, not implementors), they are both simple by the definition given in this article.
Good remark. I still tend to disagree though. It's easy to learn and use a very small subset of the language. Not the whole language. Even then, it's practically impossible to limit yourself to a certain subset. Not if you want to use any python libraries.
The problem when you try to make something simple is that carrying the idea across boundaries/subsystems eventually breaks down. If, for example, you want to make a performant language with clean syntax and intuitive behaviour for a multi-processor setup, but the base system/OS/CPU doesn't make it easy, you eventually need to break the simplicity somewhere along the way.
This also touches on the problem that making something simple requires skill and mastery across the whole problem. So perhaps Python/Ruby are the way they are because their authors were great at building an approachable language but not at building the VM. You need to be both.
Contrast with Erlang: amazing VM but wacky syntax. Or Haskell: amazing ideas, but the way they are taught... seriously? A functor, a monad, who talks like that? Having simplicity along the WHOLE path... that is hard!
> We should measure complexity as the cumulative effort to learn and use a technology.
Given that Python and Ruby are both simple to learn and use (as users, not implementors), they are both simple by the definition given in this article.
The thing that no developer really benefits from is added state complexity. But we've spent the last 30+ years coming up with ways to hide that kind of complexity from a programmer. That's /why/ we have our programs and processes divided into boxes (PL, OS, etc.) and /why/ we use things like higher level languages instead of assembly, object oriented programming, and general data hiding. On the other hand, it sounds much less sexy to say "People should do a better job of following best practices, and a lot of the problems in our industry are because people prefer to glue things onto existing code instead of being willing to do a higher level restructuring of a code base when a new need is established."
I'd agree that a lot of programming is not science. We shouldn't be treating it as such. The harder parts of programming are an application of theoretical mathematics (Dijkstra pointed this out years ago, and it's still true), but very little of CS is 'science' the way something like physics or chemistry is 'science'.
http://db.cs.berkeley.edu/papers/cidr11-bloom.pdf - by unboxing distributed systems we can statically predict whether a given program is eventually consistent (http://arxiv.org/abs/1309.3324 - and automatically add coordination protocols where necessary)
http://db.cs.berkeley.edu/papers/dbtest12-bloom.pdf - by unboxing distributed systems we can use static analysis to more efficiently explore the space of message interleavings in unit tests
https://infosys.uni-saarland.de/projects/octopusdb.php - by unboxing storage, databases and application-side queries we can treat the entire pipeline as a single optimisation problem
http://www.openmirage.org/ - by merging the OS box and the PL box we can improve performance and security for server applications
I don't disagree with you exactly, but I would point out that a) boxes are a means of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation, and b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.
>a) boxes are a means of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.
I agree completely. However, the manifesto seems (to me) to advocate all-over unification as the end goal, which I think is naive. I read it as "if a program needs to be divided into separate boxes, perhaps you need to make it simpler", which to me seems like the wrong way to go about things.
> We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D
A lot of the things we do in programming give us power and flexibility at the expense of a steeper learning curve: eg separate tools for version control, compiling, editing, debugging, deployment, data storage etc. IDEs can show all those tools in one panel, but that can't change the fact that they were designed to be agnostic of each other, which limits how well they can interface.
My current day job is working on an end-user programming tool that aims to take the good parts of excel and fix the weaknesses. We unify data storage, reaction to change and computation (as a database with incrementally-updated views). The language editor is live so there is no save/compile step - data is shown flowing through your views as you build them. We plan to build version control into the editor so that every change to the code is stored safely and commits can be created ad-hoc after the fact (something like http://www.emacswiki.org/emacs/UndoTree). Debugging is just a matter of following data through the various views and can also be automated by yet more views (eg show me all the input events and all the changes to this table, ordered by time and grouped by user). We have some ideas about simplifying networking, packaging/versioning and deployment too but that's off in the future for now.
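The names below are hypothetical and the refresh is a full recompute rather than a true incremental update, but as a sketch of the unified model (storage, reaction to change and computation in one place):

```python
class Table:
    def __init__(self):
        self.rows = []
        self.views = []          # dependent views to refresh on change

    def insert(self, row):
        self.rows.append(row)
        for view in self.views:  # push the change to every derived view
            view.refresh()

class View:
    def __init__(self, table, query):
        self.table, self.query = table, query
        table.views.append(self)
        self.refresh()

    def refresh(self):
        # full recompute here; a real system would update incrementally
        self.result = self.query(self.table.rows)

orders = Table()
totals = View(orders, lambda rows: sum(r["amount"] for r in rows))
orders.insert({"amount": 10})
orders.insert({"amount": 5})
assert totals.result == 15       # the view always tracks the base table
```

Debugging-as-views then falls out naturally: a debug query is just another `View` over the same tables.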
Merging all these things together reduces power and flexibility in some areas but allows us to make drastic improvements to the user experience and reduce cognitive load. It's really a matter of where you want to spend your complexity budget and how much value you get out of it. We think that the amount spent on the development environment is not paying for itself right now.
Most of the problems of mathematics arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify mathematics.
Much complexity arises from how we have partitioned mathematics into boxes: <put various branches of mathematics here>; and likewise how we have partitioned mathematics development: <put various phases here, from "Eureka" moment to copy-editing of proof>. We should go back to the beginning and rethink everything in order to unify and simplify. To do this we must unlearn how to do mathematics, a very hard thing to do. (I paraphrased by replacing the original "to" with "how to do".)
Revolutions start in the slums. Most new mathematics platforms were initially dismissed by the experts as toys. We should work for end-users disenfranchised by lack of mathematics expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D. We should take inspiration from end-user tools like <replace with your favorite tools, like pen-and-paper, whiteboard, abacus>. We should avoid the trap of designing for ourselves. We believe that in the long run expert mathematicians also stand to greatly benefit from radical simplification, but to get there we must start small.
I think the problem is that the basic definition of programming relies on editing textual and mainly static source code that describes processes at a low level. As soon as you start to get away from that your tool by definition is not a programming tool and therefore programmers don't want to be associated with it.
It's a problem of a failure of imagination and worldview, and a psychological issue of insecurity. Programmers are like traditional woodworkers who look upon automated manufacturing with disdain.
Ultimately superior artificial general intelligence will arrive and put those mainstream programmers with their overly complex outdated manual tools out of work.
But programmers will hang on to their outdated paradigm until the end of the human age. Which by the way is coming within just a few decades.
There are 100 decisions every minute that require very complicated thought to resolve, and this is as true in manufacturing as it is in software development. Do you make this button out of plastic or metal? What kind of metal or plastic? How big should it be? Should it click or not? Does it have multiple functions based on the state of the vehicle? Does it have a label? Where does the label go? What does it say?
You could write this all into a spec, but the spec would end up being the same as the CAD drawing / program. So you end up relying on the experience of the engineer to make all the decisions without it all being 100% specced out. Maybe it's a button on a top end Mercedes that gets pushed a lot, so it should be metal to complete the feeling of luxury. Maybe it's a button on a mid-end Nissan that gets pushed a moderate amount - make it a decent plastic, etc.
In engineering of any kind, you have functional requirements and you have outputs that you design to satisfy them. In software engineering, you may be trying to help people manage finances; in mechanical engineering, maybe you're trying to remove water from a mine shaft. Your outputs are a graphical computer program and a pump. You have certain resources available, such as operating system X or a steel mill. Your job is to bridge the gap between resources and outputs by creating the minimal viable instruction set that results in the output.
The instruction sets are, respectively, source code and a CAD drawing.
The mechanical engineer has a CAD program that already knows a lot about pumps, and a library of standard screws, threads, and sockets. The program can infer constraints, suggest placement of parts, substitute components based on formulae, read data from Excel or CSV files, simulate stress due to different loadings, and even knows a bit about manufacturing so that it can tell if the part can be made or not. It can help generate drawings that the steel mill can use to create the pump. The mill, too, is pretty smart and has a lot of leeway to make the pump however they want, as long as it works.
All this is basically equivalent to what the software engineer has available: test suites, frameworks, reusable functions and libraries, cross-platform compilers. The difference is workflow:
While working on the design, the engineer can switch between a variety of layouts, color schemes, transparencies, and zoom levels. S/he can visually show constraints (using symbols and colors), directly manipulate parameters and positioning of objects. The engineer doesn't have to memorize commands or syntactic conventions in order to be productive. In a modern system, these kinds of interactive operations are intuitive to perform and distinct from the manipulations of the source object (the CAD file). The equivalent for software engineering doesn't really exist. It's like instead of separating engineering and implementation, you're trying to do everything on the factory floor.
The effect of this is that you have plenty of smart people (in science, business, whatever) frustrated in translating their ideas into "code". In the long run, what we should, and probably will, see is smarter compilers, more sophisticated IDEs -- abstractions above the level of implementation (the problem level) rather than below (the machine level). This is similar to what happened with the process of putting ideas into words for people to read: at first, only scribes could read and write; then, anyone could read and write, but only printers could publish books; now, anyone can read, write, publish, and distribute anything via the internet.
None of this is to say "you're doing it wrong," but rather "we're not there yet." Software engineering is not going away in the same way that mechanical engineering is not going away because of CAD and simulation programs. More sophisticated tools are a path towards "Simple things should be simple, complex things should be possible" (quote from Alan Kay).
 There is a reason it's called code, after all.
Probably the largest disconnect is that while I heartily endorse simplicity and fighting complexity—even if it increases costs elsewhere in the system—I worry that we do not have the same definition of "simplicity". Rich Hickey's "Simple Made Easy"² talk lays out a great framework for thinking about this. I fear that they really mean "easy" and not "simple" and, for all that I agree with their goals, that is not the way we should accomplish them.
How "easy" something is—and how easy it is to learn—is a relative measure. It depends on the person, their way of thinking, their background... Simplicity, on the other hand, is a property of the system itself. The two are not always the same: it's quite possible for something simple to still be difficult to learn.
The problem is that (greatly simplifying) you learn something once, but you use it continuously. It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it! We should not cripple tools, or make them more complex, in an effort to make them easier to learn, but that's exactly what many people seem to advocate! (Not in those words, of course.)
So yes, incidental complexity is a problem. It needs addressing.
But it's all too easy to mistake "different" for "difficult" and "difficult" for "complex". In trying to eliminate incidental complexity, we have to be careful to maintain actual simplicity and not introduce complexity in other places just to make life easier for beginners.
At the same time, we have to remember that while incidental complexity is a problem, it isn't "the" problem. (Is there ever really one problem?) Expressiveness, flexibility and power are all important... even if they make things harder to learn. Even performance still matters, although I agree it's over-prioritized 99% of the time.
Focusing solely on making things "easy" is not the way forward.
¹ Perhaps it's supposed to be amusingly over the top, but for me it just sets off my internal salesman alarm. It feels like they're trying to guilt me into something instead of presenting a logical case. Politics rather than reason.
I also seem to disagree with people who emphasize "expressiveness, flexibility, and power". I think they are mostly a selection effect: talented programmers tend to be attracted to those features, especially when they are young and haven't yet been burned by them too often.
With such fundamental differences we can probably only agree to disagree.
What do you mean? Learning and doing are quite different.
From a professional programmer point of view:
If it takes me 6 months to learn a tool, and then the tool allows me to complete future work twice as fast (or with half as many defects etc) that is a great trade off.
> It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it!
Why is that important? Why can't a tool be simple, expressive, easy to learn and easy to use? What studies do you cite for your viewpoint? There has been a lot of research in this area. Please reference the research that supports your claim.
Reason has been tried by Edwards and many others for decades. It hasn't worked.
Perhaps it can be. But they are all design choices that are often at odds with one another. E.g. I've frequently used software that was easy to learn but hard to use.
Likewise I've used tools that were hard to learn because they had new abstractions but once you understood the new abstractions they were really easy to use. Etc etc etc.
I see people jump to this conclusion on pretty much every post of this type. In this case it is clear from the authors work (http://www.subtext-lang.org/) that his focus is not on making programming familiar/easy to non-technical users but rather on having the computer help manage cognitively expensive tasks such as navigating nested conditionals or keeping various representations of the same state in sync.
> ...you learn something once, but you use it continuously.
Empirically speaking, the vast majority of people do not learn to program at all. In our research we have interviewed a number of people in highly skilled jobs who would benefit hugely from basic automation skills but can't spare the years of training necessary to get there with current tools. There does come a point where the finiteness of human life has to come into the simple vs easy tradeoff.
You also assume that the tradeoff is currently tight. I believe, based on the research I've posted elsewhere in this discussion and on the months of reading we've done for our work, that there is still plenty of space to make things both simpler and easier. I've talked about this before - https://news.ycombinator.com/item?id=7760790
The cost of a barrier to entry is multiplied by everyone it keeps out who could have been productive / creative / or found their passion.
The cost of a limited set of tool features is, arguably, that people will exhaust the tool and be limited. However I have never found this argument convincing given what was achieved with 64kb of memory, or even paper and pencil.
The typewriter, the Polaroid camera, the word processor, email. All are increases in complexity and massive decreases in effort to learn, and they all resulted in massive increases in the production of culture and exchange of ideas. Some inventions are both easier to learn and less complex (Feynman diagrams) but if I had to pick one, I pick easy to learn, every single time.
Not sure if I agree. Steep learning curves significantly hurt user adoption. This is especially true for tools that have lots of alternatives.
- In programming, our job is to move over the "complexity hump." We hear a problem, we analyze it, we code it, then we simplify. Most really bad code comes from programmers never pushing past the hump. They just slash away at whatever problem is in front of them.
- When we move past the hump, we push complexity off. Sometimes this is done by abstraction, sometimes by a reduction in terms. In either case, our job is to make the complexity go away. If we're still dealing with arcane issues a month from now? We're not past the hump.
- Many times we believe we've simplified a problem, only to have the complexities jump out and bite us again later. That's why it's best to "exercise your code" to make sure that your abstractions or re-organizations will hold up in the real world. Use it in different contexts. When we don't exercise our code enough, we get complexity debt. The old "works on my machine" is now "works in these 12 cases I tried"
- Every tool we pick up has some degree of complexity debt depending on how much it has been exercised in various contexts. Stuff like *nix command line programs? They rock. The reason they rock is because they have a billion different scenarios in which they've been proven.
We should work for end-users disenfranchised by lack of programming expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D.
The single biggest win for end-users "disenfranchised by lack of programming expertise" in the past 20 years was probably PHP, because all you really needed to know was HTML plus a tiny smidgen of CSS, PHP and SQL. Anything else could be cobbled together with help from StackOverflow. The results were a mixed blessing: A lot of people built sites to suit their needs, but a lot of those sites degenerated into pretty ugly hairballs. Before that, the biggest win was probably Excel: The world runs on useful (but often buggy) spreadsheets. I have nothing against these tools. They're important and they fill a critical need.
Now compare a tool like Python: it's simple and it scales from novices to experts. But it's mostly limited to programmers and scientists, and unlike PHP, you actually need to learn some basic programming to use it in most cases.
I like using tools designed for professional programmers. It's great if they also work well for novices who are willing to learn a bit of programming. But the stuff I build is a lot bigger than the typical amateur PHP website, and I need tools for managing large amounts of complexity in my problem domain, and for managing requirements that will change significantly over the course of years.
From the point of view of an end-user, using Outlook is vastly simpler than learning to program in order to cobble together their own solution out of command-line tools with harsh learning curves. Even for a professional programmer the tradeoff is pretty clear.
People only have so much cognitive capacity and so much time to live. Every hour spent messing with configuration or customising behaviour has to be weighed against the lifetime gain of that improvement (let alone the lifetime cost of maintaining the custom solution as the environment changes).
That's not to say that I don't want software to be configurable and composable, just that the cost of doing so with our current tools is too high for most users.
The manifesto declares boldly "we are not doing science". Right. This is completely and trivially correct. No one thinks you are.
Something about history and being doomed to repeat it springs to mind.
If the idea is to abstract even more in a simple and powerful way that more people can create more/better software, go for it. But don't dream that all that has been done until this day is complex because nobody cared about simplicity at all. I'm not saying the complexity is completely unavoidable but it takes a lot of work.
I would rather say we are doing science of the artificial---which looks a lot like design.
See The Sciences of the Artificial by Herb Simon (a true legend):
OS = Operating System, DB = DataBase, UI = User Interface, PL = I can't work it out but I'm sure it will be stupidly obvious once it's pointed out...
The corporate IT ecosystem is rife with this around areas like security and network administration.
That was 1994--20 years ago--and my anatomy teacher somehow had a lab of Hypercard computers. He had the entire class design animated anatomy presentations that you still couldn't match today using 2014 mainstream presentation software.
There's a lot of nostalgia in programming, but Hypercard was the real deal.
This isn't to say it definitively can't be done, just an explanation of why it hasn't yet.