
The example from the BOOM group is interesting! However, it's not clear they are arguing for the same kind of "simplicity" as Jonathan Edwards.

The BOOM group seem to be using a Prolog variant, and their conclusions argue for declarative, high-level programming languages (I didn't read the paper in full; please correct me if I'm wrong!). But if you look at declarative languages such as Prolog, or functional languages like Haskell or OCaml (all of which aim to make complex problems more tractable), these are precisely the languages many programmers reject as "too complex". It doesn't help that "complex" is very often used as a synonym for "unfamiliar".

Here is another piece of the puzzle: in another post, Jonathan Edwards claims simplicity led him to reject higher-order functions for the "simple" language he is designing. But higher-order functions are a tool (not necessarily a simple tool) useful for reducing the complexity of code! It just happens that a programmer unused to them must first learn about them before becoming proficient with them.
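
To make that concrete, here is a minimal Haskell sketch (the names are made up for illustration): the first-order version spells the whole traversal out by hand, while the higher-order version reuses map and keeps only the part that varies.

    -- First-order: the list traversal is re-implemented by hand.
    doubleAll :: [Int] -> [Int]
    doubleAll []     = []
    doubleAll (x:xs) = (2 * x) : doubleAll xs

    -- Higher-order: map factors the traversal out once and for all;
    -- only the per-element operation remains to be read and checked.
    doubleAll' :: [Int] -> [Int]
    doubleAll' = map (* 2)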

So the trade-off between complexity and simplicity isn't so easy to define.




> he claims simplicity led him to reject higher-order functions for the "simple" language he is designing. But higher-order functions are a tool useful for reducing the complexity of code! It just happens that a programmer unused to them must first learn about them before becoming proficient with them.

Higher-order functions, or higher-order anything (e.g. predicates), are not complicated because they are unfamiliar; they are complicated necessarily: HOFs, for example, add indirection to control and data flow, and now you have to think 2nd or 3rd order (hopefully not more) about what your code is doing!
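
A tiny Haskell sketch of that indirection (the name is made up for illustration):

    -- `twice` is second-order: you cannot understand its behaviour
    -- without also reasoning about whatever `f` turns out to be.
    twice :: (a -> a) -> a -> a
    twice f = f . f

    -- At each call site the control flow routes through `f` indirectly:
    --   twice (+ 3) 1      == 7
    --   twice reverse "ab" == "ab"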

The complexity is really baked into the definition of "higher-order." It even shows up directly in our tools for automatically reasoning about program behavior (1st order is easier to deal with than 2nd or Nth order). There is a reason 1st-order logic is so popular: not because people don't get higher-order logic, but because 1st-order logic is easier to deal with. The problem is, of course, expressiveness (you can't do much with a functional language that doesn't admit HOFs).


I disagree that higher-order functions are "complicated necessarily". I believe this is actually a side effect of people usually learning to program using languages that don't support them. Weren't there success stories posted here on HN about novices learning to program with functional languages without difficulty?

It's not surprising you can do a lot with a programming language that doesn't support HOFs (though, of course, there wouldn't be much point in calling it functional) -- you can in fact do everything without them. The key issue here is whether HOFs are a useful tool that, once learned, will help make the complexity of the problem you're attempting to solve more manageable. I believe they are, by helping with modularity and composability.
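
For example, a minimal Haskell sketch of that modularity (the names are made up): sortBy takes the ordering as a function argument, so a single sort routine serves every ordering, instead of one hand-written sort per ordering.

    import Data.List (sortBy)
    import Data.Ord  (comparing)

    -- One generic sort; the part that varies (the ordering) is a
    -- function argument rather than another copy of the algorithm.
    byLength :: [String] -> [String]
    byLength = sortBy (comparing length)

    byReverseAlpha :: [String] -> [String]
    byReverseAlpha = sortBy (flip compare)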

Even if learning to use more powerful tools is more "complex" (that is, it's harder, or takes more time to get used to), once you master them, presumably you're better equipped to deal with the tasks you want to accomplish.

Maybe some power tools can be made easier to handle. This is a valuable goal. But if your goal is to never make a tool that requires the user to be trained, you're severely limiting yourself.


I think he is arguing that higher-order functions are more difficult to learn/use than first-order functions because they require higher-order reasoning. That's not to say that they aren't useful or that beginners can't learn them, but just that they impose an overhead and it's worth exploring alternative methods of reducing complexity that may end up having a lower overhead.

> ...once you master them...

Arguments of this sort tend to assume a) that the learning time can always be written off and b) that once you master a tool there are no more overheads. a) is false when the learning time is too long or simply not available, eg Joe Accountant might benefit from learning to program but doesn't have the time to spend years learning. b) is false when the abstraction is harder to reason about or makes debugging harder.

There is certainly an economic decision to be made here but it must consider opportunity cost, cognitive overhead and maintenance overhead as well. That decision doesn't always favour the slow-to-master tool.

It's like extolling the virtues of investing in an industrial nailgun to someone who is just trying to hang a photo. Sure, the nailgun will make driving in nails faster but the investment won't pay off for them.


The assertion is in the definition of higher-order: it is designed to be more powerful by allowing variables that can be bound to procedures. It has nothing to do with the learning curve; there are actually trivial formal proofs that higher-order X is more complicated than lower-order X (proofs that simply unfold the definition of order).

Once you have mastered HOFs, you have simply become proficient at doing a higher-order analysis in your head; it is still more complicated than a lower-order analysis. Even SPJ's head would probably explode given a function that took a function that took a function that took a function... with HOFs, you really want to keep N (the order) fairly small.
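
A contrived Haskell illustration (names made up), purely to show how the types pile up:

    -- Order 1: takes plain data.
    inc :: Int -> Int
    inc = (+ 1)

    -- Order 2: takes a function.
    applyTo2 :: (Int -> Int) -> Int
    applyTo2 f = f 2

    -- Order 3: takes an order-2 function. Every extra level is
    -- another layer of "what might this argument do?" to hold in
    -- your head while reading the call site.
    runOn :: ((Int -> Int) -> Int) -> Int
    runOn g = g inc

    -- runOn applyTo2 == applyTo2 inc == inc 2 == 3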

There is actually a whole body of work on point-free programming, where the idea is to make something like f(g) look more like f * g: conceptually you aren't passing g into f (you actually are, but that's an implementation detail), you are composing f and g. The semantics of that composition are then more well-defined (and hence more restricted) than naked higher-order functional programming (which, by comparison, is more like an assembly language).
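
In Haskell, where composition is written (.), the contrast looks like this (a small sketch with made-up names):

    import Data.Char (toUpper)

    -- Pointed style: the argument x is named and threaded through.
    shout :: String -> String
    shout x = map toUpper (reverse x)

    -- Point-free style: the same function written as a composition;
    -- no argument is mentioned, only the pipeline f . g.
    shout' :: String -> String
    shout' = map toUpper . reverse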


> However, it's not clear they are arguing for the same kind of "simplicity" as Jonathan Edwards.

Jonathan Edwards: "Complexity is cumulative cognitive load. We should measure complexity as the cumulative effort to learn and use a technology."

BOOM: "One simple lesson of our experience is that modern hardware enables real systems to be implemented in very high-level languages. We should use that luxury to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

> The BOOM group seem to be using a Prolog variant

They are using a variant of Datalog, which is dramatically simpler than Prolog (no unification, no data structures, no semantics-breaking cut, etc). In our (admittedly limited) user experiments we have found that their data-centric approach is easier to teach non-programmers, since it lends itself to direct manipulation, live execution and computer-assisted debugging (eg we can cheaply record the exact cause of every change to runtime state and make it queryable in the same language). We (Light Table) are working on a similar Datalog variant with a front-end that is about as complicated as Excel.

> So the trade-off between complexity and simplicity isn't so easy to define.

I agree that there is often a trade-off between simple and easy, but I don't think our current tools are anywhere near optimal in that sense. For the case of distributed programming, the BOOM tools are both simpler and easier, in that they allow you to focus on specifying high-level behaviour rather than on the details that normally make up the bulk of a programmer's mental load.


> BOOM: "One simple lesson of our experience is that modern hardware enables real systems to be implemented in very high-level languages. We should use that luxury to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

Yes, but this is accomplished using declarative high-level languages, precisely the kind of languages that many programmers will claim are "too hard" (i.e. sufficiently unlike traditional imperative languages).

I bet almost everyone agrees that the goal of most modern software development should be "[...] to implement systems in a manner that is simpler to design, debug, secure and extend — especially for tricky and mission-critical software like distributed services."

The disagreement here is on whether this can be accomplished by making simplicity and ease of learning the overriding principle of language design. Sometimes you have to use a complex tool in order to do your work better.


> ...precisely the kind of languages that many programmers will claim are "too hard"

Your argument seems to be that Prolog and Haskell are high-level and Prolog and Haskell are complicated and hard to learn. I agree up to that point. That doesn't imply that any high-level declarative language must be hard to learn. SQL and Excel are declarative high-level languages (more so than Haskell and Prolog in terms of abstraction from the execution model). Both have had much more success with non-programmers than imperative languages have.

Bloom is a much simpler language than, say, Ruby. It doesn't have data structures. There is no control flow. Semantics are independent of the order of evaluation. The entire history of execution can be displayed to the user in a useful form. Distributed algorithms can be expressed much more directly (eg their Paxos implementation is almost a line-for-line port of the original pseudo-code definition). We have actually tested Bloom-like systems on non-programmers and had much better results than with imperative or functional languages.

> I bet almost everyone agrees that...

The key part of that quote was "We should use that luxury to implement systems...", ie give up low-level control over performance and memory layout in exchange for simpler reasoning. Note that in both Excel and SQL the user doesn't have to reason about what order things happen in or where things are stored. The same applies to successful dataflow environments like LabVIEW. In contrast, programmer-centric languages like Haskell and Prolog require understanding data structures, and Prolog additionally requires understanding the execution model to reason about cut or to avoid looping forever in depth-first search.

Excel, SQL and LabVIEW are all flawed in their own ways, but we can build on their success rather than writing them off as not real programming.


> We have actually tested Bloom-like systems on non-programmers and had much better results than with imperative or functional languages.

Interesting! Do you have any link to your study? I'm interested to see your methodology and how you handled researcher bias :)

By the way, SQL is a pretty complex formal system, well grounded in theory. I would never write it off as "not real". And most people don't understand SQL without training.


> By the way, SQL is a pretty complex formal system, well grounded in theory. ... And most people don't understand SQL without training.

I think lots of people understand, or can understand, the theory of SQL. It's the "install this, go into this directory, edit this config file, make sure it starts with the system by doing this, install these drivers for the database" stuff that stops a lot of people before they start. The same applies to pretty much everything that runs as a service, like web servers.


Really? I agree installation is a major hurdle, but in my experience -- that is, coworkers and students -- most people are helpless with SQL beyond very basic queries unless they've been trained in it and truly used it. Most people don't understand the relational algebra it's based on, either. To make it clear this is not some kind of elitist statement: I struggle with SQL, too, because so far I haven't needed to write truly complex queries in my day job.

This isn't an argument against SQL, by the way. It's an argument against the notion that the overriding principle when assessing a tool/language is whether it's simple and easy to learn.


Ok sure. My point is that there should be a clear path from "I might want to use this thing, what is it about and can I give it a spin" to "ok I get it, where can I learn more." Not to say that everyone will _instantly_ understand the concepts, but that time spent on incidental tasks is pretty much wasted.

See also: http://www.lighttable.com/2014/05/16/pain-we-forgot/


UK schools teach 15-16 year old kids to build simple CRUD apps in MS Access. So the basics are definitely somewhat approachable.


But the basics of almost everything are approachable. The basics of Prolog, Haskell [1], BASIC, etc., are all approachable. I learned programming with GW-BASIC back when I was a kid; it's learnable, but I wouldn't enjoy building complex software with it.

SQL itself is a complex beast. Approachable in parts and with training, like many complex tools.

[1] http://twdkz.wordpress.com/2014/06/26/teenage-haskell/


We were also taught Prolog, assembly and (weirdly) Pascal. We didn't manage to actually produce anything interesting in any of those. The Access CRUD apps and the little VB6 games were the only working software that came out of that two-year class.


> I'm interested to see your methodology and how you handled researcher bias :)

Oh we totally didn't :) Our user testing is very much at the anecdotal phase so far. Fwiw though, the researcher bias was in the other direction - @ibdknox was initially convinced that logic programming was too hard to teach. We were building a functional language until we found that our test subjects struggled with understanding scope, data-structures (nesting, mostly) and control flow. That didn't leave a whole ton of options.



