How I lost my faith (in Lisp) (groups.google.com)
72 points by bootload on Feb 1, 2008 | 97 comments

Without being able to see this code that was easier to write in Python than Lisp it's impossible to say for sure, but it seems likely that the differentiating factor was libraries, not the core language.

For me, the Python module system and documentation win hands down. I keep telling myself that at some point I'm going to dive into Lisp and never look back, but Python's module system is very easy to use, and this lends itself to a very nice standard library as well as easy-to-install libraries. Not to mention that every module is well documented, often with real-world examples a Google search away.

With Lisp, I have honestly tried to find good documentation on its module system, and I have tried my best to understand the workings of ASDF, but when it comes down to it, I just don't have the time to muck around with stuff that is so much more difficult. The simple fact is that if I need to get something done, it's going to be in Python for the time being, and who knows when I'll get to the point where I'm not coding against a deadline, the point where I'll be able to waste some time really learning Lisp.

Lisp code is written to be easily parsed and manipulated by software. Python is written to be easily parsed and manipulated by humans.

When I first came to Python, I didn't need to be an expert to understand Python source code without much effort. But I still haven't gotten to the point of looking at a piece of Lisp code and right away having an idea of what it does. Eventually your brain recognizes the patterns (but heck, if you write assembly long enough you start to see the patterns of common C control structures).

But I think it all depends on the problem at hand. Certain things I can write faster in Java than Python (occasionally static typing and interfaces can make large programs easier to keep in one's head and work with). Most things I can write easier in Python. And a few things I can write easier in Lisp (primarily writing programs that manipulate trees and essentially create mini-languages).

Python is written to be easily parsed and manipulated by humans

... used to C

You sound like an English speaker claiming that English is easier for people to understand than other languages. There's nothing intrinsically human about infix syntax. It's just a question of what you're used to. I honestly find prefix syntax (or lack thereof) much easier to read, because that's what I'm used to.

Lisp expressions are different from what most people are taught in early math classes.

As a result, more people find y = m*x + b easier than (let ((y (+ (* m x) b))) y) until they get the chance to spend some quality time with a REPL.

it's not just prefix vs infix.

a.b().c().d().e() is easier to read to me than (e (d (c (b a))))

I also prefer object.method() instead of (method object). Now granted the OO model used in Lisp allows for some really powerful things (like methods that specialize on more than one object and adding before, after and around methods to modify code).
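A runnable aside in Python (my example, not from the thread, using string methods as stand-ins for a.b().c()…): the two orderings spell the same pipeline, differing only in reading direction.

```python
# Chained method calls read left-to-right; nested calls read
# inside-out. Both denote the same computation.
s = "  Hello World  "

chained = s.strip().lower().replace(" ", "-")
nested = str.replace(str.lower(str.strip(s)), " ", "-")

assert chained == nested == "hello-world"
```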

To each his own :) After almost 15 years of OO in C++ I jumped on the functional bandwagon (Haskell, Common Lisp, etc.)

I sure find (e (d (c (b a)))) easier to read.

Here's why: in your first example, "a.b().c().d().e()", you have to read to the end to know what you are "really" doing, i.e. calling e(), whereas in "(e (...))" I know the most important part up front: I'm calling function "e" on something.

The funniest part is that even if you consider it from the object perspective, the message is the more important part, at least in my view; even OO's father Alan Kay thinks so (see "The Computer Revolution Hasn't Happened Yet").

As someone said before, what we know conditions what will be easier for us to read; to that I'd like to add that it is also "how we understand things". Sure, I know C++/Java syntax better, but the way I conceptualize OO is more through messages than objects...

Think about it this way: do you like it when you speak to someone and they start with a very lengthy introduction, leaving you with no idea where it's leading, only to realize at the end that they were trying to sell you something? Or do you prefer to know up front, and then listen for as much detail as you need before deciding whether you're interested?

Where is the evidence that "Python is written to be easily parsed and manipulated by humans", as opposed to a subset of humans with certain (changeable) habits of mind?

It seems silly to demand science in a discussion of programming languages, but you're making some big assumptions about how the human mind works. Considering how recent the discipline of programming is, and how fast it's evolving, what makes you think the designers of Python have discovered some immutable laws of the mind--ones that go against common sense, no less? (Common sense being that fewer tokens is better.)

Because it's pseudocode that runs. Look at how many times non-python programmers have written code examples and someone says to them that "that's almost identical to python."

This is a very easily testable theory.

Yes, it's easily testable. Just write a pseudocode switch statement.

That's almost identical to python, but how do you make it exactly identical? Nobody knows.


Tested. Found false.

Now how about Scheme? I recently tested that myself by using a case expression, a syntax I rarely use. I didn't have to look in R5RS. I just put the parens where they would naturally go for the most obvious syntax tree, and it worked.

For someone steeped in Algol-like languages, Python will be easy to get started with. Scheme/Lisp takes longer to get started, but once you get it, you don't have to keep going back to the documentation. It just makes sense.

Beyond idiotic. Clearly I'm not drunk enough for that inanity.

  case beercount
       0 1      "inanity"
       2 3 4    "good sense"
       else     "genius"
Drink all you want. The pseudocode above still translates into Scheme in a straightforward way. The Python community still can't agree how to translate it.

The python way is to use a dictionary, or in simpler cases, if statements.
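A runnable sketch of both idioms mentioned (my wording of the translation, not a community-blessed one), applied to the beercount pseudocode above:

```python
# Dictionary dispatch: several keys map to one value, with a
# default standing in for the "else" arm.
def inanity_level(beercount):
    table = {0: "inanity", 1: "inanity",
             2: "good sense", 3: "good sense", 4: "good sense"}
    return table.get(beercount, "genius")

# The same function with plain if statements.
def inanity_level_if(beercount):
    if beercount in (0, 1):
        return "inanity"
    if beercount in (2, 3, 4):
        return "good sense"
    return "genius"

assert inanity_level(1) == inanity_level_if(1) == "inanity"
assert inanity_level(4) == inanity_level_if(4) == "good sense"
assert inanity_level(9) == inanity_level_if(9) == "genius"
```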

Your example falls short because case statements aren't such an obvious method of solving the problem as you think. They are nothing more than an artifact of the machine that C pushes up to you.

Some functions map multiple elements of their domain to single elements of their range. My pseudocode is a concise way to express such a function. It is not an artifact of the machine.

It's still absolutely no more difficult with if statements, and with if statements you don't have to worry about error prone fall through. There is no lack of consensus on how to do this. It's very obvious.

Concise? I've got some perl code for you.

There's no error-prone fall through in the code I posted. You would advocate the following pseudocode?

  if beercount is 0 or 1 return "inanity"
  if beercount is in 2,3,4 return "good sense"
  return "genius"
I wouldn't advocate that. I think my first version is more concise in a clear way, not in a line-noise way.

your formatting fucking blows.

That's because pseudocode conventionally uses indentation to denote structure, and Python does too. This explanation is trivial because pseudocode is trivial. The sort of code that can be meaningfully written as pseudocode is easy to understand in any language, and so it doesn't matter.


The question is whether we can say anything relevant at all by means of self observation. It's very hard to do anything more than just observing existing habits. Even the idea that fewer tokens are better breaks down sometimes. Just look at Perl or complex regular expressions.

It seems to me that what counts is not necessarily tokens of the programming language but rather "tokens" of the mental model into which a syntactical expression is translated. But what are the fundamental characteristics of those mental tokens and are they distinct tokens in the first place? We probably need to look to cognitive sciences to find out more about that.

What is pretty clear to me (admittedly through self observation) is that the brain likes to take shortcuts based on the context we perceive ourselves to be in. We see things in one context that we don't see in others even though they are there. So, basically, we form context-specific mental models that filter the world and create a vocabulary of shortcuts that work efficiently in that particular context. Syntax can be a visual cue to invoke that context-switching facility of the brain, for better or worse.

in lisp you write trees.

in other languages, you write weird things that get turned into trees, seemingly by magic.

other languages seem easier to understand, to most people. this is because they have a lot of experience with all that magic -- it has become a tradition. if you're better at normal languages, go ahead and use them. i don't care. but they are not objectively easier to read, they are much harder and more complicated.

You haven't taken this thought process far enough. In assembly, according to you, you don't write trees. I guess this means you write sequences of commands (procedures), and you store data in a state system. Well, in that case, Lisp gets turned into procedures and a state system, seemingly by magic.

Maybe people have an easier time understanding other languages because they work in the same way the computer does.

you're saying lisp is harder to understand b/c it's higher level and has more abstraction away from the way computers really work and away from assembly language (which are easy to understand, or something)?

you don't need to know what your lisp gets turned into. you do need to know something about what tree your lisp, or infix math, or chained C functions, get turned into. you have to actually know which tokens go into which branches. you have to know what functions are being called on what tokens, in what order.

Nah, I really wasn't making any claims. I just didn't buy the argument.

Personally, I find Lisp easier to write but much harder to read. The syntax certainly plays some role in that, but I don't think it's the key. The problem, I think, is that it's just too dense. When you use a lot of intermediate variables, you get to name them something relevant. But Lisp code (or at least my Lisp code) usually doesn't have all of this context floating around, so it's harder to figure out what's going on.

I agree throwing in a lot of variables with names can make code easier to read. You can also achieve the same effect by making one-line functions that just call a few other functions.

It's quite possible that the prevalent style of writing Lisp code is in some ways worse than the dominant style for C/java/etc. But that is a different issue than the language itself, which can, for example, create a bunch of named intermediate variables, if you want them.

Curi is on to something here. For a period of a couple years I was TA for an intro to programming course using Scheme. One of the questions we always asked students was about the advantage of certain refactoring styles, to which half of the answer was "giving names to things." One of the great powers of Lisp is that the path of least resistance is to write many small expressions where each one must have a name, even just an iteration over a collection. This is great because we always know what operation we're doing. There is the same type of power in imperative languages to a lesser degree, but it manifests in naming data. The path of least resistance in C-like languages, BASIC, and P languages encourages having named accumulators and containers (writing lots of chained expressions creates code that's more concise but less readable). This is great because we always know what we're operating on.

The flexibility of Lisp probably makes it more suited to adapting to encourage naming data, over imperative languages encouraging MSFs. let/let* is a little clunky for simple things -- it reminds me a little of Pascal's variable declaration blocks. It would be better to do a sort of: (lambda (a b) (A <- foo a) (B <- bar A b) (Z <- baz B) Z) or optionally (return Z) instead of just Z, which macros to (lambda (a b) (baz (bar (foo a) b)))

It seems like the sort of thing that someone would already have implemented, but I don't recall it being a core feature anywhere.
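The nesting-vs-naming tradeoff is easy to see in Python (a sketch with hypothetical foo/bar/baz helpers standing in for the combinators above):

```python
# Hypothetical helpers, purely for illustration.
def foo(a): return a + 1
def bar(x, b): return x * b
def baz(x): return x - 2

a, b = 3, 5

nested = baz(bar(foo(a), b))   # (baz (bar (foo a) b)), read inside-out

step1 = foo(a)                 # each intermediate gets a name,
step2 = bar(step1, b)          # as with let/let* or the <- sketch
named = baz(step2)

assert nested == named == 18
```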

Yeah, I know the Lisp-trees thing is something that gets said a lot, but I really don't see how the transformation of other languages into trees is inexplicable. Ultimately, in Lisp you have the most flexibility in manipulating the tree, but with a little study of compilers and formal languages it's obvious that all languages describe trees.

Objectively easier to read? That's a strange road to go down. There's no reason that languages with grammars that generate strange and wacky trees might not better suit a human's ability to describe formal solutions. I think the reason people find languages like C easier to understand is all in the state handling anyway.

> but a little study in compilers and formal languages and its obvious that all languages are describing trees.

That's not the issue - the issue is that lisp folk can easily write programs that manipulate their code. Other folks have to get a parser involved.

In most cases, that means that people who write programs in "not lisp" rarely have automated methods for manipulating their code. They don't define dsls. They have macros weaker than regular expressions.

Programs have a lot of structure that can't be exploited without being able to manipulate code.

I agree completely. However, the idea that only Lisp describes trees, or that the describing is where Lisp is powerful, is wrong. All languages describe trees and the process is not that obscure. Like I said elsewhere, I could write a Lisp-like language with no lists, no lambdas and no macros and it would very clearly describe a tree. It wouldn't be a powerful language.

the reason people put so much state and local variables in their program is that non-lisp languages get hard to read if you nest a lot, so they have to.

and that is basically what i mean about lisp being objectively easier to read: the notation actually works better. so you can nest a lot and it doesn't get horrible. but also, you don't have to, and then it's about the same.

I think people tend to think in terms of explicit state as a default rather than being forced to by poor syntax. I've had experience teaching amateurs and this really tends to be borne out in what I've seen. Students tend to find functional languages very easy to use until the problems start to involve things that need to change over time, and then they can't get past the idea of storage.

nothing about s-expressions is bad for using state and little nesting.

something about C/etc is bad for nesting heavily.

Edit: example. both lisp-style versions are nicer. if C has advantages, it isn't in making state work better.

  (def avg (numbers)
    (/ (sum numbers) (count numbers)))
  (def avg (numbers)
    (= total-number (sum numbers))
    (= total-count (count numbers))
    (= result (/ total-number total-count)))

  function avg(numbers) {
    /(sum(numbers) count(numbers))
  }

  function avg(numbers) {
    =(total-number sum(numbers))
    =(total-count count(numbers))
    =(result /(total-number total-count))
  }
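For comparison, a runnable sketch of the same two styles in real Python (using Python's built-in sum/len rather than the hypothetical sum/count above):

```python
# Direct expression style, like the nested version above:
def avg(numbers):
    return sum(numbers) / len(numbers)

# Named-state style, like the version with intermediate variables:
def avg_with_state(numbers):
    total = sum(numbers)
    count = len(numbers)
    result = total / count
    return result

assert avg([1, 2, 3]) == avg_with_state([1, 2, 3]) == 2.0
```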

State in procedural languages has more to do with side effects than nice syntax for it. This article has a nice description of why explicit state is sometimes good in procedural languages: http://prog21.dadgum.com/3.html

lisp can do side effects. as i demonstrated. what's the problem?

"... some people prefer not to commingle the functional, lambda-calculus part of a language with the parts that do side effects. It seems they believe in the separation of Church and state." --Guy Steele

BTW, tell me which of these makes more sense:

(+ 3 4 (- 1 (/ 3 4)) (* (- 9 2) 8 3))

+(3 4 -(1 /(3 4)) *(-(9 2) 8 3))

why oh why would you want a function to be in the same grouping (set of paren) with arguments to another function, instead of grouped with its own arguments?

the less you nest, the less it matters. but it is not a matter of taste which way makes more sense and scales better.

I realize this is an aside, but when I started thinking about learning lisp, I really wished the latter notation existed. After thinking about it, I realized why: it's in essence the Excel syntax, something that a lot of computer-literate people are familiar with.

Most people find infix easier to read. I really think the idea that lisp's advantage comes from having operators in the same group as parameters is ridiculous. To me lisp's power is in the orthogonality of statements, functions (and anonymous functions), data and lists.

You could make a language that had very Lisp-like syntax but didn't support things like lists and lambda functions, and it would equally clearly define a tree. That language would be just about completely useless.

infix and whether to call functions like foo(x y) or (foo x y) are separate issues.

if infix is C's advantage, that is pretty silly. because first of all there is only a limited number of infix operators, and when you make your own constructs they are prefix, but different from s-expressions in the kinda silly way i illustrated.

anyway, you can put infix into a lisp. people don't usually want to because it kinda sucks. unless you're working in certain domains.

infix means:

- memorizing order of operations and remembering it whenever reading code (does == or && have higher precedence? they didn't drill that one into us in middle school, so it doesn't feel quite so obvious as arithmetic order of operations)

- losing characters for use in identifiers

- commas in argument lists

- paren for changing order of operations

- functions that take a predefined number of arguments that has to be 1 or 2

- infix only works with a limited number of built in things, it doesn't scale nicely. i suppose you could change this, e.g. make # a special character, and then you can define infix foo then write arg1#foo#arg2. but like, that's ridiculous. no one wants to do that with functions in general. they only want to do it with math because they hate math and don't want to have to understand anything about it, they just want to use it mechanically like they memorized in school.

- infix is an approach with less generality

People prefer infix with arithmetic operators. Why? Probably because that's what they know. If your example used named functions it would look like this:

(plus 3 4 (minus 1 (divide 3 4)) (times (minus 9 2) 8 3))

plus(3, 4, minus(1, divide(3, 4)), times(minus(9, 2), 8, 3))

Anyway none of this has anything to do with the assertion that people only use explicit state in procedural languages because the languages don't support nesting well. I prefer functional programming to procedural but that doesn't ring true for me at all. As a matter of fact I prefer functional languages that don't have lisp's bracketing syntax.

Where on earth did you get those function names? Perhaps you meant

  (sum 3
       (difference 1 (division 3 4))
       (product (difference 9 2) 8 3))
Moving the function name outside the parens makes as much sense as moving a verb outside a sentence.

P.S. The way to pronounce the < function is "ascending".

You are the first person I've ever heard to pronounce < as "ascending". Really, gt/lt functions are my biggest tripover point when doing prefix-everything expressions, because they work in the opposite way of how we were taught as children to interpret them. a<b is true if b is greater than a, and we can evaluate it visually by looking at the expanded side of the operator versus the pinched side of the operator (the former being greater than the latter iff true). In (< a b), the expanded side is pointing at exactly the element which it does not represent, meaning that we can't use those nice optimized mental pathways which are devoted to visually evaluating the expression.

It's logically consistent with how the rest of the system works, but it sucks because having to unlearn anything sucks. Personally, I just imagine it being rewritten as infix.

I'm a big believer in lifelong unlearning.

The "ascending" tip is from experience. It's really easier than moving operators around in your head.

The human mind is good at overloading operators. Especially since the infix < never appears after an open paren, it takes little time to teach yourself to read it as "ascending". It even looks small on the left and big on the right, visualizing an ascending list. Once you learn it that way, shortcuts like (<= 1 n 10) to see if a number is between one and ten come naturally.

True that infix < doesn't appear right after parens, but in the minds of most people, there isn't really a notion of "infix" as an entity unto itself. The alligator just eats the bigger fish.

Really, the problem is that a small amount of whitespace can change the meaning of the code.

(< 1 2) => #t

(<1 2) => #f

...for a convenience function <1 that semantically means "is less than 1". Maybe you wouldn't define such a function, but having to think about it at all or having to mentally redefine < somewhat validates the idea that the syntax here is a stumbling point.

I do remember stumbling on < early on, but now I don't get it wrong any more often than I do with infix.

you could just rename the operators to lt and gt, so there is no visual cue either way.

you could also change the argument order, but that's probably not a good idea :)

regarding state, see the other example above. then post saying you meant infix math, not state.

edit: and state is not only used to avoid nesting. that is just common. i do it myself in ruby. too much chaining stuff is confusing in ruby, even with OO shortcuts (which are how people actually avoid using the crappy function call syntax too much, even more than via infix math). so you save to a variable and split it up.

> infix only works with a limited number of built in things

Some infix languages have a mechanism for defining new infix operators.

read before you post

Imagine for a minute that I did read before I posted, and apply my comment in that new light. :)

I said you can make a mechanism to define new infix operators, but it's not very good. Then you quoted so as to imply I didn't know that, and said that actually some languages have it.

Here's a fuller response. You said:

     infix only works with a limited number of built in 
     things, it doesn't scale nicely. i suppose you could
     change this, e.g. make # a special character, and then
     you can define infix foo then write arg1#foo#arg2. but
     like, that's ridiculous. no one wants to do that with 
     functions in general.
First, there are languages which have infix functions as a first class concept in the language (e.g., J, and I would suppose APL and K), so it isn't that infix doesn't scale. Secondly, even in languages which have the call() convention, user-definable operators can coexist (see logix, for example, which is an infix macro system built on top of python).

Additionally, you don't have to have an arg1#foo#arg2, as long as you're willing to use spaces to separate things, the way that lisps, forths, and so on do.

so you really advocate tokens in the order:

x foo y


foo x y

? well, right or wrong, you are definitely deviating from the mainstream. that is, C and java coders will agree with me that prefix, and unlimited arguments, is better. the only difference they'll do in the general case of function calls is to add commas in the argument lists, and move the open-paren to the right one token.

i think not using infix function calls has nothing to do with why people are put off by s-expressions.

"so you really advocate tokens in the order:"

No. Nowhere am I advocating that. Rather, I'm pointing out that it's not an obviously ridiculous idea -- people have implemented it, and some people like it.

"i think not using infix function calls has nothing to do with why people are put off by s-expressions."

I disagree; I think it is one reason.

but how can it be a reason people dislike lisp when C and Java programmers put their function calls in prefix order, too? the vast majority of programmers do function calls that way, and prefer it.

and infix function calls in general are obviously a bit ridiculous because they don't scale. why would you want all functions to take 2 arguments? or were you going to write

(a b foo x y z)

for a function of 5 arguments? and memorize which side to put the extra one on, and what order they go in, etc?

Curi, I'm not saying it's a good reason to dislike lisp. If I thought it was a good reason, I wouldn't like lisp, and yet lispy languages are my favorites (most of my lisp code has been CL, but Arc is interesting at 0.0).

There are a number of possible solutions to your question about infix functions. In J, insofar as I understand it, all functions take either one or two arguments, but the arguments can be arrays, so you often get the effect of more than two arguments.

For example (it's been a while), the form * / 2 3 4 would have the function *, the modifier /, and an array or list of three integers. J is parsed right to left, so the array is collected first; then the modifier '/' takes two arguments, a function on the left and an array on the right, and inserts the function between the elements of the array (like reduce/fold in Lisps), producing the result. I don't know the details anymore of which symbols are what primitives, but J is interesting, in my opinion.

Alternatively, for your example, you might do

    (a b) foo (x y z) ; or
    (a b, foo, x y z) ; or
    a b Foo x y z     ; if functions have their own
     naming rules like in Erlang, or
    a b foo! x y z;   # where the bang means call, or
something else. Surely you can come up with 10 or 12 yourself. Some of these scale in some ways, and not in others; you could have a rule that you can only have one call per line, to remove ambiguity. That sounds rather restrictive, but so do Python's indentation rules when you first hear of them, and that works out okay, in my experience. I don't think any of them lend themselves to nice general-purpose macros, but I might be wrong.

I know I'm not as smart as you guys, but I very strongly prefer 3 + 4 + (1 - 3/4) + ((9-2) * 8 * 3)

If you code Lisp long enough, does your first expression become as natural to read as my expression is for me?
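Whichever notation reads more naturally, both denote the same tree, so they evaluate alike; in Python (my check, assuming Python 3's true division so 3/4 is 0.75):

```python
# The infix form from the comment above:
value = 3 + 4 + (1 - 3/4) + ((9 - 2) * 8 * 3)
assert value == 175.25
```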

Prefix arithmetic is easier to read:

  (+ 3
     4
     (- 1
        (/ 3 4))
     (* (- 9 2)
        8
        3))
You can see straight away that the whole thing is one big addition; that the third summand is a subtraction, etc..

Also, a lot of math is prefix: f(x,y) ; d/dx (...) etc..

I never understood why this was an issue. People are really used to writing prefix functions in all languages: def foo(bar, baz):

And I've never really had a hard time reading arithmetic, so the debate really leaves me scratching my head.

I note that infix was so natural that you resorted to parens.

Infix looks reasonable until one has more than 2 operators. Then people start making mistakes. To combat those mistakes, they start parenthesizing. The number of mistakes goes down but there's still confusion. (Some folks know more precedence levels than others and many folks think that they know precedence levels that they don't know.)

Another way to put that is that Lisp is so natural it resorts to forcing parens everywhere even when I don't want them! I'd never deny that confusion about order of operations between &, |, ==, <<, %, and so on, is a major source of bugs. (But I'd blame poor coding style for that, in any language, in the first place.)

However, in every field besides this niche of computer science, including almost all of math, finance, science, and engineering, infix is used. This means that it is at least good enough and I suspect it has advantages.

Many math operations really are just fundamentally unary or binary. Generalizing - or / or x^y or mod to lists is just silly as far as I can see, and adds confusion. I don't need to see parens around the outermost operation. For the two most common associative operations, + and *, order of operations is quite good enough and it has the advantage that everyone since grade 6 has been working with it.

(I'll give you that there are very many cases where list notation is great, but I don't think it's common that they help very much in science, business, or engineering.)

> However, in every field besides this niche of computer science, including almost all of math, finance, science, and engineering, infix is used.

The reason is that the "reader" in all of those domains is another human and humans do error correction almost without thinking.

Also, each of those domains has a very small number of operators - programming languages have lots of operators.

Feel free to demonstrate that you know the precedence/associativity rules for your favorite programming language by typing them without looking them up. (I know two people who can do that for C; the vast majority can't.)
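A concrete illustration of the point (my example, not the commenter's): the same token string parses to different trees in Python and C.

```python
# In Python, bitwise & binds tighter than ==,
# so this parses as (2 & 1) == 0, which is True:
python_reading = (2 & 1 == 0)
assert python_reading is True

# In C, == binds tighter than &, so `2 & 1 == 0` means
# 2 & (1 == 0) = 2 & 0 = 0, i.e. false -- the classic C precedence wart.
# Simulated here with explicit parens:
c_reading = 2 & (1 == 0)
assert c_reading == 0
```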

I'm only trying to defend a very narrow point, that when coding mathematical expressions, infix isn't bad.

Code is read by human beings too (perhaps just the person who wrote it), and most find infix arithmetic more natural-looking.

I see the appeal in the idea that arithmetic is really just a very special case of a programming structure and should be treated as such, but on the other hand, I and many others can instantly see what a + bc/d - d(e+f)*g means and would like equation-heavy pieces of my programs to somewhat resemble equations everywhere else in life.

Do you have the quadratic formula memorized in list notation? How about the sum of an arithmetic or geometric series, or a formula for an inverse-square-law force?

> I'm only trying to defend a very narrow point, that when coding mathematical expressions, infix isn't bad.

Programming isn't math.

> Code is read by human beings too (perhaps just the person who wrote it) and more find infix arithmetic more natural looking.

And that's how infix causes bugs. The human reader error corrects and the compiler doesn't.

My goal is correct programs. What's yours?

Lisp notation eliminates a whole class of bugs.

Bugs are expensive - what are you getting for the ability to have more of them?

> I and many others can instantly see what a + bc/d - d(e+f)*g means

Really? It has at least two meanings. Which one is correct?

Yes, I do memorize formulas in a form that doesn't allow for precedence/associativity errors. Why should I prefer a form that does allow for such errors?

Programming isn't math, but mathematical expressions are often found in programs, more or less so depending on the domain. My preference is to have readable mathematical expressions in programs that resemble the forms in which I see or use them elsewhere.

In the general case of complex logical and bitwise expressions, order of operations can cause a tremendous number of bugs. I like to use parentheses to make these cases absolutely unambiguous anyway. But I can't remember introducing a serious bug because I messed up order of operations among +, -, and *. Anyway, if it's such a problem, there's nothing to say you can't put parentheses around every operation in infix notation, at least in any language that I know of!

Maybe my preference relates to having a fairly visual memory for formulas and such. If I want to find a root of a quadratic equation, I do (-b + sqrt(b * b - 4 * a * c))/(2 * a), and that is easy, and anyone with high school math sees that in someone's code they know what it is. (I probably have to check for a zero denominator and also look at the discriminant unless I'm directly using complex numbers, and there's also the conjugate root, but that doesn't change much.)

If I have to write (/ (+ (- b) (sqrt (- (* b b) (* 4 a c)))) (* 2 a)) then I can do it, but it takes a lot of thought and it doesn't go along with the way I think about the quadratic formula. Granted, this is because I learned it the way I did, but I also know I'm not the only one.

A footnote to that is that to my mathematical sensibilities, in Lisp notation, using the same symbol for negation and subtraction is hideous!
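As a sketch of the comparison above, here are both spellings of the quadratic formula as Python functions. The function names and coefficients are made up for illustration, and Python's stdlib `operator` module stands in for Lisp's prefix calls, since every operation then appears as an explicit named application.

```python
import math
from operator import add, mul, neg, sub, truediv

def quadratic_root(a, b, c):
    # The infix form from the comment:
    # (-b + sqrt(b*b - 4*a*c)) / (2*a)
    # Relies on * binding tighter than + and -, plus explicit parentheses.
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def quadratic_root_prefix(a, b, c):
    # The fully parenthesized prefix form from the comment,
    # (/ (+ (- b) (sqrt (- (* b b) (* 4 a c)))) (* 2 a)),
    # written as nested calls: no precedence rules are involved at all.
    return truediv(add(neg(b), math.sqrt(sub(mul(b, b), mul(mul(4, a), c)))),
                   mul(2, a))
```

Both give the same root, e.g. for x^2 - 3x + 2 the "+" root is 2.0; the disagreement in the thread is only about which spelling a reader can verify faster.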

> Anyway, if it's such a problem, there's nothing to say you can't put parentheses around every operation in infix notation, at least in any language that I know of!

Most of us have to read code written by other people. Those other people don't have exactly the same precedence defense habits that we do.

No, we don't have to end up in the nasty middle ground where the parenthesization is inconsistent and buggy, but we do. Since the "infix is good" theories and arguments predict otherwise, how much weight should we give them?

Forgive my total ignorance; do the kids today not use HP calculators? When I was in college (+/- 1990), we all had HP calculators, and so thinking in terms of (+ 1 2 3 4) felt pretty natural.

Nope. TI graphing calculators are basically standard. TI-83 is standard in high school and a TI-89 is preferred by those who know how much symbolic manipulation and calculus can improve their grades. They both do infix order of operations.

I believe what is popular right now with new calculator buyers are the versions of the TI-83 and TI-89 with USB connectors and faster processors, I think they're called the TI-84 and TI-89 Titanium.

not only that, 2 of the 3 sets of paren were not needed, the order of operations already would have gotten it right

Not necessarily. Infix precedence/associativity is so "natural" that different languages have different precedences or associativities for the same operator.

If one works in multiple languages, the only safe thing is to ignore precedence and associativity and parenthesize everything.
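One concrete instance of the same token sequence parsing differently across languages (not the basic arithmetic operators, but the same phenomenon): in C, `==` binds tighter than `&`, while in Python `&` binds tighter than `==`. A small Python sketch, with the value of `x` chosen arbitrarily:

```python
x = 6

# Python's parse: & binds tighter than ==, so this asks "is x even?"
python_reading = (x & 1) == 0   # (6 & 1) == 0  ->  0 == 0  ->  True

# C's parse of the same tokens x & 1 == 0: == binds tighter than &,
# so it means x & (1 == 0), i.e. x & 0, which is 0 (falsy).
c_reading = x & (1 == 0)

assert (x & 1 == 0) == python_reading      # Python agrees with its own parse
assert bool(c_reading) != python_reading   # the two parses disagree for x = 6
```

The bare expression `x & 1 == 0` is thus true in Python and false in C for the same even `x`, which is exactly the cross-language trap the comment describes.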

cool. which of the basic arithmetic operators have different precedence in which languages?

what i am used to is * and / first, then + and -.

apl for one.

And then there's associativity. It doesn't much matter for * and +, but it matters a lot for / and -, and if you think that * and / have the same precedence, it matters for expressions that mix them.

And, you're continuing to ignore the fact that there are far more infix operators. Even if infix worked for + - * /, that doesn't tell us that it works when there are ., ->, <, &, ?, %, $, #, @, ~, ^, |, \, =, and so on (such as digraphs).

No language that uses infix has resisted the temptation to extend it past the point where it causes more problems than it solves.

I think you may have mixed people up, I've been arguing against infix, so I'm not continuing to ignore that ;p

I believe I posted a comment pointing out that lots of people aren't completely sure, offhand, if && or == has higher precedence.

And even in this branch of the thread, when I said he could have omitted some of the parentheses, I didn't mean to say infix is powerful because it can use fewer parentheses. It gets rid of them with a dirty trick that doesn't scale. What I was pointing out is that people don't actually know the infix precedence rules by heart (like they try to say they do. they say it's so natural...), so they end up putting in parentheses frequently just because they aren't sure.

It becomes just as natural, and it's easier to debug. You can drop down lines and autoindent to see the structure of the mathematical formula. I don't know of any editors that will do that sort of thing for infix.

not if you used infix notation for + - * / as a child e.g. in school and learned Lisp after age 16 -- at least that is my experience.

I very, very strongly agree with you. A couple years ago I went through the exercises in SICP, and I remember being shocked at how hard it was for me to look at and understand Scheme code after 10 years of looking at C, C++, Java and C#. It felt really weird, because I remember being an undergrad and thinking that Fortran and Pascal and C looked really hard, but that the Scheme code in SICP felt very intuitive.

"Anyone can learn Lisp in a day, but if he'd previously been exposed to C, it'd take 3 days..."

"... it seems likely that the differentiating factor was libraries, not the core language. ..."

Absolutely the libs make the difference. [0] But are there other forces going on that limit language choice and adoption?

I'm thinking of, say, Perl. Perl has CPAN, which kicks Python's pants hands down when it comes to the variety of usable, tested modules. Is it the Perl syntax or core language that scares off the hackers?

My own pet theory is that it's not the hackers who ultimately decide the language(s) they use at work. Additional libs don't matter as much as the initial (fad) language the code base was written in. It is the companies (and PHBs) that hackers ultimately work for who determine language choice. In the end hackers just give up.

One more reason to "start a startup". You can choose the best tool(s) for the job.

[0] A friend of mine, a strictly "ANSI C" man, wondered how I could code the rings off him. Then I showed him the Python libs I could choose from. He had to download the source code, compile it, learn how to use it at the C level and then do the job. For example, regular expressions.
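To illustrate the footnote: with Python's stdlib `re` module, the regular-expression case is a few lines with nothing to download or compile. The log line and pattern here are made up for illustration.

```python
import re

# Pull the date, level, and message out of a log line with one stdlib call,
# versus fetching, building, and wrapping a C regex library by hand.
log_line = "2008-02-01 12:34:56 ERROR disk full"
match = re.match(r"(\d{4}-\d{2}-\d{2}) \S+ (\w+) (.*)", log_line)
date, level, message = match.groups()
```

The "batteries included" point is that this works out of the box on any Python install, which is the gap the ANSI C friend had to close manually.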

Maybe the hacker who can produce error free code in Perl does not exist? If even hackers sometimes make programming errors, perhaps most of them were smart enough to eventually figure out they could minimize that problem by moving away from Perl.

Most companies I know actually asked their developers which language they should choose. Typically there is a decision eventually to concentrate on a limited set of languages, but choosing that or those languages is not usually being done without consulting the developers.

"... perhaps most of them were smart enough to eventually figure out they could minimize that problem by moving away from Perl. ..."

It does require devs to be better programmers. There are plenty of ways to make Perl code safe. Maybe you have to work harder at it?

"... Typically there is a decision eventually to concentrate on a limited set of languages, but choosing that or those languages is not usually being done without consulting the developers ..."

I agree with the limiting of languages. But I've yet to see languages chosen purely for good technical reasons. [0] And this is one reason I think Lisp gets pushed out.

[0] PHBs don't like you re-writing code from scratch, so you could be forced to work with whatever was previously chosen ~ http://www.joelonsoftware.com/printerFriendly/articles/fog00...

Libraries are important but code has to be easy to read. You have to be able to easily read the example code for the library you want to use or to read other's code and learn from it. That's what scared me away from Perl and really why I still mainly use Python over Lisp. But each person finds different things easy or hard to comprehend. It mainly depends on your background and what kind of patterns the brain becomes accustomed to.

Maybe there is a relationship between code being easy to approach and friendly even to non-brilliant-hackers, and those hackers going on to produce libraries that, while they may not be brilliant, do manage to do useful things, thus making the language more appealing for future decision makers, who are then even likelier to choose the more approachable language with more libraries. If there is also a culture of quality, it will keep the non-brilliant coders honest, and from adding actual bad code.

Funny. I read this piece and was waiting for the profound statement that was going to convince me. But at the end it sounded like the message was "Google guys and gals are productive in other languages than LISP". Huh!? Not very convincing... Why do I care about them? From my point of view it's me and the machine. Google is just some company.

C is beautiful for what it is. Pure. Running on the metal.

And LISP too is beautiful for what it is. Running on abstractions.

The rest are all kind of in between.

I'd say that the conclusion (I came to) wasn't just that googlers were more efficient with other languages, but that after that many years of lisp he's already more efficient with python.

Haha, I agree. It started really well, but he got tired at the end and didn't elaborate much on how he lost his religion.

For those who missed the context of this post, its author is the same Ron Garret who recently wrote "My take on Arc" (http://rondam.blogspot.com/2008/01/my-take-on-arc.html, posted on Hacker News at http://news.ycombinator.com/item?id=107623). Those who noticed the name of this post's author, Erann Gat, could easily be confused: he changed his name to Ron Garret several years ago (http://www.flownet.com/ron/eg-rg-faq.html).

Augh, all that teasing and no meat. I want to know specifically why the author thought Python was better than Lisp.

For the record, I did not submit this article, and specifically declined a request to do so. It's not that I don't stand by what I wrote (I do) but it was written for a specific audience at a specific time and I don't think it deserves the attention that it's getting now.

Nice to see you here, Lisper. Thank you for a nice writeup. If anything, it only heated up my desire to finally learn the language. Somehow it seems that getting disappointed in Lisp is very hard: people get disappointed in their other favorite languages much more easily and they sound more convincing in their blogs. I wish you had your contact information in your profile, I would have asked a couple of questions.

How about if we request you to give us some real examples? Please... Pretty please... :)

Or you could write an update for this new audience you've suddenly found.

Working on it.

"... I want to know specifically why the author thought Python was better than Lisp ..."

Just ask ~ http://news.ycombinator.com/submitted?id=lisper

"... So I can't really go into many specifics about what happened at Google because of confidentiality, but the upshot was this: I saw, pretty much for the first time in my life, people being as productive and more in other languages as I was in Lisp. What's more, once I got knocked off my high horse (they had to knock me more than once -- if anyone from Google is reading this, I'm sorry) and actually bothered to really study some of these other languges I found #myself# suddenly becoming more productive in other languages than I was in Lisp. For example, my language of choice for doing Web development now is Python. ..."

This was written in February 2002 and pre-dates the reddit rewrite from Lisp to Python.

Sounds like he's lost his faith in Lisp uber-alles, but not necessarily in high-level languages or dynamic languages.
