That holds water only at the most sugary syntactic level.
At the most simplistic semantic level, a binary addition operator is a simple mapping onto what the machine wants to do (take two registers and put their sum in an accumulator register).
Let's expand that simple piece of code. Now you have more than two numbers to sum:
2 + 3 + 40
you are forced to encode both stages: summing the first pair of numbers, then summing that result with the third value. Contrast this with a Lisp:
(+ 2 3 40)
Lisp takes care of all the details for me; I merely need to supply the operands, and the language worries about mapping them to the underlying operations as needed.
Clearly, here is a situation where Lisp notation frees me from having to think about the machine.
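(To make the two-stage point concrete, here's a sketch in Python, which I'm using only because its named functions force the pairwise form; `reduce` plays the role of Lisp's variadic `+`:)

```python
from functools import reduce
from operator import add

# Infix forces the machine-level pairing on you: (2 + 3) + 40.
step1 = add(2, 3)             # first stage: 5
total_infix = add(step1, 40)  # second stage: 45

# Lisp's variadic (+ 2 3 40) folds that pairwise operation for you;
# reduce is the closest Python analogue.
total_prefix = reduce(add, [2, 3, 40])

assert total_infix == total_prefix == 45
```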
This is a simple expression. How about writing (1+2) * ((3 + 6) / (1+2)) in prefix notation without losing readability?
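For what it's worth, the prefix form is (* (+ 1 2) (/ (+ 3 6) (+ 1 2))). A quick Python sketch (using the `operator` module's named functions, which are inherently prefix) shows the two forms agree:

```python
from operator import add, mul, truediv

# Infix: (1 + 2) * ((3 + 6) / (1 + 2))
infix = (1 + 2) * ((3 + 6) / (1 + 2))

# The same expression in prefix form, mirroring
# (* (+ 1 2) (/ (+ 3 6) (+ 1 2)))
prefix = mul(add(1, 2), truediv(add(3, 6), add(1, 2)))

assert infix == prefix == 9.0
```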
When precedence is to be overridden, the infix form can be easily understood by anyone with basic knowledge of arithmetic. Prefix, on the other hand, can be confusing. This is the same reason BigDecimal arithmetic in Java is cumbersome compared to languages that allow operator overloading.
I was also skeptical regarding the readability of prefix notation vs. infix for large problems. However, when I tried it, I actually found it easier to handle. For example, here's a very obnoxious heat transfer correlation that I typed in Common Lisp:
Now, I admit that my sense of whitespace is pretty inconsistent, but still: here is a real case of a very annoyingly nested expression in both infix and prefix notation. While I think it would take anyone quite a while to sort out what the hell is going on in either of these, regardless of experience (versus, say, rendered LaTeX), I would put forth the suggestion that the Common Lisp is actually easier to understand.
Bullshit. Everyone is taught the first way, everyone writes the first way by hand. The first way is the "human" way. The second way is the obscure theory-of-computation way. Your post seems like pure rationalization. Even if it applies to you, I can't see how you can claim with a straight face that prefix math is more "human" in general.
Math notation has gone through a decent bit of change over time, and has proven amenable to change to suit the needs of specific environments. Lisps don't force you to use "(+ 2 3 4)" instead of "2 + 3 + 4" to be contrary, it's there to preserve a cornerstone of the language family: the power gained by easy manipulation of syntax trees.
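A tiny sketch (mine, not the parent's) of what "easy manipulation of syntax trees" buys you: because (+ 2 3 4) is already a tree, an evaluator or a macro can walk it as plain data. In Python, with nested tuples standing in for s-expressions:

```python
from functools import reduce
import operator

# Map the operator symbols used below onto Python functions.
OPS = {"+": operator.add, "*": operator.mul}

def evaluate(expr):
    """Evaluate a Lisp-style nested tuple like ("+", 2, 3, 4)."""
    if not isinstance(expr, tuple):
        return expr  # a bare number is already a value
    op, *args = expr
    # Fold the binary operation over the (recursively evaluated) operands.
    return reduce(OPS[op], (evaluate(a) for a in args))

# (+ 2 3 4) as data: trivially built, inspected, or rewritten by a macro.
tree = ("+", 2, 3, 4)
assert evaluate(tree) == 9
# Nesting works the same way: (* (+ 1 2) 5)
assert evaluate(("*", ("+", 1, 2), 5)) == 15
```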
Also, math notation is pretty much the only bad syntax example when comparing with other programming languages. It becomes literally the only example when you leave hard asceticism and take the approach that Clojure does of providing literals for vectors, maps, and sets.
So, yeah, maybe dig a little deeper next time instead of dismissing tools out of hand? Many programs never directly perform arithmetic; are Lisp dialects unsuitable for those as well?
I understand why it does it that way; I've written programs in Clojure and quite like it (with math expressions I've been using "let" a lot to break them up and make them more readable, but this also makes them a lot more verbose). But contrary to the GP, I still see prefix math as a cost, not a benefit.
edit: I love Clojure's literals but I still think even with editor help, nested parens are a less readable form of structure than c-style blocks. I understand why it's done that way (uniformity for macros), but that doesn't mean I have to like it in and of itself.
Obviously neither one is more "human"; humans don't know the first thing about math until they're educated. This has everything to do with what's familiar and nothing to do with what's "natural" to the human mind.
If everyone is educated the same way, that's the human way of doing it, at least in the sense of human usability. It's irrelevant if we could theoretically do it another way. Language is learnt too, but that doesn't mean that (for English speakers) writing our programs with kanji keywords would be just as usable and readable as writing them with English ones.
To summarize your posts in this thread: suppose you have a representation R and you discover a new representation R' that is provably better than R. R', however, has a cost associated with converting to it; let's call it the impedance.
My reading of your posts is that you advocate ignoring any R' that has an impedance greater than zero. Surely it would be wiser to make the switch to R' whenever the value of R' less its impedance is positive?
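Spelling the criterion out (symbols are mine: V(R') for the value gained over R, I(R') for the impedance of converting):

```latex
\text{switch to } R' \iff V(R') - I(R') > 0
```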
No, it's neither fair nor accurate. Anyway, I see no point in continuing this conversation against a bunch of Lisp fans who will accept any old nonsense if it's pro-Lisp.
Everyone isn't educated the same way, sorry. That invalidates your entire argument, doesn't it?
"One day I asked him, 'Roger [Hui], do you do math in English or Cantonese?' He smiled at me and said, 'I do it in Cantonese because it's faster and it's completely regular.'" - "A Conversation with Arthur Whitney"
I think you're defining "human" based on the norm. I would prefer to define "human" based on a human's ease of use given familiarity with the notation. I prefer this definition because, with what I take to be your definition, you get oddities like Roman numerals being more human than what we use today. Given my definition, I'm not sure you can say he is purely rationalizing his choice, since he gives a clear example of a situation in which he finds the notation easier to use.
Base 12 is better than base 10, but that doesn't necessarily mean it's worth the cost of using it in our programs (I realize that people do use other bases in some contexts, but that's actually an example of coding to the computer's standards).
I don't write my equations by-hand the same way I do on the computer. For example:
( (1 + 2)/(3 + 4) )**2
vs.
            2
  / 1 + 2 \
  | ----- |
  \ 3 + 4 /
The difference may seem subtle, but it's shockingly important. A better example may be
  ,-, b
   \
    \   x + 1 dx
     \
  '--' a
instead of something like
trapz(a, b, @(x) x+1);
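(For comparison, here's a minimal trapezoid-rule sketch in Python; this `trapz` is my own toy helper, not MATLAB's builtin, whose actual signature takes sampled y-values rather than a function handle:)

```python
def trapz(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    # Endpoints get half weight; interior points get full weight.
    total = (f(a) + f(b)) / 2
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

# Integral of x + 1 from 0 to 2 is exactly 4 (x^2/2 + x evaluated at 2),
# and the trapezoid rule is exact for a linear integrand.
approx = trapz(lambda x: x + 1, 0.0, 2.0)
assert abs(approx - 4.0) < 1e-9
```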
The truly human way of writing down math requires much more flexibility than a typical text editor can really afford. This isn't to say that some sort of pseudo-visual programming is Teh Futar though.
(I do think a LaTeX --> solved expression tool would be awexome though.)