Programming Languages as Boy Scouts (willcrichton.net)
53 points by wcrichton 472 days ago | 84 comments



Sorry to self-promote a little bit, but I just published a dev diary [1] for Eve, where I talk about a lot of the same ideals described here. For instance, the author writes:

> For brownie points, your compiler should explain why the programmer has encountered an error, and for the gold medal it should propose a solution.

This is exactly what we're trying to do with Eve. In one of the examples [2], I show an error that is met with a full explanation of what went wrong, a possible fix, and even a button that offers to fix the error.

On our philosophy toward errors, we've identified that they should be understandable (written in full plain English sentences), relevant (point to the root cause of the error, rather than incidental errors), and actionable (we provide the reason for the error, offer to automatically fix the error where possible, and teach the user why the error occurred and how to avoid it in the future.)

We're still in the early stages of handling errors, but this is at least how we're thinking about it.

[1] http://incidentalcomplexity.com/2016/08/03/july/

[2] http://incidentalcomplexity.com/images/errorExample.gif


One advantage of non-human-readable error messages is that they are easier to Google. Microsoft's compiler warning and error messages all have unique error codes like C1234, which make them easier to find on sites like MSDN or Stack Overflow.

https://msdn.microsoft.com/en-us/library/aa249862(v=vs.60).a...


Well, the two aren't mutually exclusive.

Throw a unique error code in there, as well as a description on how to fix it.


ERROR 832A: Line 42, missing semicolon; you probably want a semicolon here:

  foo();
       ^
Or something of the like. If the compiler knows the error, it knows what things might be missing. In the case of limited options, it could predict the missing token and continue parsing. Then, with a more complex error, that error code is actually useful for googling or reviewing the compiler documentation.


Or one could just apply semicolon inference and emit a warning, allowing the code to still run, at least.


That would probably be a step backwards. I'm all for a debug mode that can let bad code run in a development environment. It should not pass a build, though.


Depends if you are compiling/executing the code continuously or not during development. Old-style batch programming environments are already plenty of steps backwards.


Fair. I actually loved the feature of Eclipse where bad code would pseudo compile. I just worry heavily about anything that emits warnings. Most warnings, it seems, either should have been failures, or needed more powerful tooling to know why it was ok.


Warnings are basically errors that don't halt compilation. You shouldn't have any of either in production code.


My point, exactly. I don't want more things that don't halt compilation.

If you have a special environment that can fake it, fine. But the general build run of compilation should fail much faster than it typically does.


Sure. But most of the time you are writing/developing/debugging code, it is not running in production at the same time.


This is true, but at the same time I don't think having to resort to Googling an error is a good user experience in the first place. With Eve, we want to try and avoid that as much as possible.


Which is helpful iff they ever get around to documenting the error codes. I've had the unfortunate experience of dealing with a number of APIs that have that kind of an error code scheme, except the doc was autogenerated minimal shit, and there's apparently about a dozen people in the world that work on this stuff, so the google-fu doesn't turn up much.


I sat down to learn Python a few years back to add another language to my tool belt. I ended up hating it in the first 10 minutes and haven't really had the need to use it since. Why? My sample program looked perfect. It was indented properly without braces - losing braces was the first concession the language asked me to make - but it wouldn't run because of one line which was indented several levels. Three of those indents were 4 spaces each, but one of them was a tab. The interpreter chose not to tell me this fact. It merely said my program had illegal indentation or something. Looking at it I didn't see any problem. Solving this mystery was an unnecessary waste of 10 minutes of my life, and I decided on the spot that Python was too opinionated for my tastes and not helpful at all. Then I read that the designers of Python are forcing coders to change their "print" statement when upgrading from Python 2 to Python 3, and I think my first impression was right.

In terms of Boy Scouts, Python is the kid with strict dietary requirements who insists the other boys eat his crappy food, just because.


Py3 refuses mixed indentation so that problem can't happen anymore. The change to print is easy enough to deal with:

  > 2to3 -w foo.py
Moreover much of the new Py3 syntax has been usable in 2.x for 10+ years with __future__ imports. There's little excuse for not taking advantage of it.
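For example (a minimal sketch; the print function has been importable since Python 2.6):

  # works the same under Python 2.6+ and Python 3
  from __future__ import print_function
  print("hello", "world")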


If you think it should be acceptable to mix tabs and spaces in a source file, then one of the attractive features Python offers its users is the assurance that they'll never find themselves working on a project with you.


Spoken like a true believer. Did you know some editors offer the use of tabs for indentation and spaces for alignment? The chances of a space getting into the left-hand indentation are pretty high in such a system, which I imagine is dealt with harshly in your world. I am happy to self-select out of it.


Admittedly, that part stinks. It's been 20 years and they still haven't fixed some of those basic things.

BUT - Python is a great language for scripting. If you need to parse some stuff, produce a result ... Python is it.

The documentation bites, I'm not a big fan of how it scales, packages are clumsy, deployment can be clumsy, but if you can get past those things, script away.

As far as brackets: in the long run I think it's better. I was very resistant to it at first. But then I came to prefer it.

I actually wish Java had a bracket-less option.

If Java had that, plus simpler file access, plus amazing array and string access ... man, that would be great.

Even though those are 'small things' - in reality, a lot of scripting code is filled with that, so it really adds up.

Going one step further ... maybe heretical - but having worked with a lot of Javascript ... I suggest that some loose data typing options would be great as well.

JSON in Java is a goddam pain. JSON in JavaScript is seamless.

A lot of front-end programming is very data centric, but in the 'micro sense', i.e. passing around a lot of little bits of data. Also, it's highly event driven and async, which JS does well. So - also add more seamless async (Java Lambdas approach this) - and voila, Java could be better for UX type things as well.

But languages don't evolve very well, sadly :) it usually takes someone to come along and reinvent it using best practices and maybe a big company to support it, and it takes years to get to maturity.

Either that or to create a highly 'proprietary' version of something, a-la Microsoft.


>BUT - Python is a great language for scripting. If you need to parse some stuff, produce a result ... Python is it.

That's really interesting to me. I feel almost exactly the same way about Perl. It has a heap of rough edges and things that are "imperfect", but when I need something done quickly (i.e. a quick script to parse some text) I reach for Perl. Strange how conceptually they are very different languages but have converged...


Almost, but not quite would be http://www.groovy-lang.org/


> I actually wish Java had a bracket-less option

Jython is much older and more stable than Apache Groovy, and is bracket-less so there's no "almost but not quite" about it. It only exists for Python 2.x, not Python 3, which is a result of its stability.


Given that JSON stands for JavaScript Object Notation, I would expect nothing less than seamless integration in JavaScript.

Languages that have a more strict and less dynamic structure would likely have more difficulty working with JSON. On the other hand, give me an xsd and I'll generate some not awful Java. Consider the joys of JavaScript where one has to validate against an xsd.

Might I introduce you to Groovy? http://groovy-lang.org/json.html


> Languages that have a more strict and less dynamic structure would likely have more difficulty working with JSON

Unlike with JavaScript, JSON is not a subset of Apache Groovy syntax, which means Groovy also has some difficulty working with JSON.


I find that abstract data as objects, translation, conversion, etc. works very well in JS. There's a reason node has seen its rise as a middle-tier server servicing front-end APIs as much as it has.


It should be noted that Scout Law varies depending on what country you're in. For example, some countries have laws saying a scout should smile and whistle under all circumstances. I'm not sure how one would work that into a positive property of a programming language.


What is R2D2 written in?


Ada


Who's going to take this a step further, evaluate every programming language against this framework, and tell us who's the Eagle Scout of the group?


If you read between the lines, the author has already chosen the Eagle Scout ;) Although, I'd much appreciate the analysis of "every programming language" too!


That would start a flamewar. Don't forget marshmallows.


Elm wins the gold medal for helpfulness. http://elm-lang.org/blog/compiler-errors-for-humans


Ha. I think Java is the best all round language. Oddly, it's usually not the best language for any specific thing :).

Paradox :)


Reminds me of this quote:

"Perl is like vise grips. You can do anything with it but it is the wrong tool for every job." - Bruce Eckel


I think Lisp is the best all-rounder. If it isn't the best at something, you can MAKE it the best.


Lisp as a 'language' maybe, but the problem is it's more than syntax and compiler. The advantages of the JVM, garbage collection, relative platform neutrality, docs that are really consistent, libraries that are very easily shared, the huge amount of things built into the lang package over time, the number of Q&As on Stack Overflow, resources available, support, and 3rd party libraries - these are things that Lisp really lacks, making it not a very useful thing all around.


It depends on what lisp you pick. And while CL isn't my favorite, if you're a Common Lisp user, some of that stuff's already there for you.

But yeah, the JVM is a great platform. But Java is inseparable from it, and Java isn't a great language.


Lisp and JVM are not mutually exclusive, of course. There were JVM Lisps before Clojure (I don't know any off the top of my head), but Clojure is an extremely nice language to work with, if you've never worked with it.


Could you tell me how to eliminate the notion of object identity from Lisp when it's unwanted? (FWIW, “just don't use eq” isn't good enough advice, because you can just mutate one of two supposedly “equal” objects to observe the difference between them.) This is crucial to make functional programming work.


>Could you tell me how to eliminate the notion of object identity from Lisp when it's unwanted?

So by "object identity," you mean doing equality comparisons with pointers instead of actually value equality. Just don't use eq. Use equal instead.

>FWIW, “just don't use eq” isn't good enough advice, because you can just mutate one of two supposedly “equal” objects to observe the difference between them

Yeah, that's true of every programming language in existence. What, do you want every equal object to be mutated, along with the one you mutated? That's just crazy talk.

>This is crucial to make functional programming work.

Well, if you're doing FP, then you aren't mutating values anyway. So just don't use eq.

At this point, your question utterly baffles me. Come back with a better explanation, or some examples of what you want, and I'll try to give you a better answer.


> So by "object identity," you mean doing equality comparisons with pointers instead of actually value equality. Just don't use eq. Use equal instead.

The problem goes further than that. “equal” doesn't really mean equal - it only means equal until the next mutation. Not very helpful in a language with unrestricted mutation.
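For instance (rough sketch):

  (defvar *a* (list 1 2 3))
  (defvar *b* (list 1 2 3))
  (equal *a* *b*)      ; => T
  (setf (car *a*) 99)  ; mutate one of the "equal" lists
  (equal *a* *b*)      ; => NIL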

> Yeah, that's true of every programming language in existence.

It isn't true in ML or Haskell. In those languages, mutation is strictly opt-in, on a per-type basis. When you don't use mutation, equality is by value. When you use mutation, equality is by object identity. You can't “accidentally distinguish” between two copies of the same immutable value.
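For instance, a rough OCaml sketch (hypothetical types):

  type point = { x : int; y : int }        (* immutable by default *)
  type cell  = { mutable contents : int }  (* mutation is opted into per field *)

  let _ = { x = 1; y = 2 } = { x = 1; y = 2 }  (* true: compared by value *)
  let p = { x = 1; y = 2 }
  (* p.x <- 3  -- rejected at compile time: the field x is not mutable *)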

> What, do you want every equal object to be mutated, along with the one you mutated? That's just crazy talk.

I want to distinguish between objects meant to be mutated and values not meant to.

> Well, if you're doing FP, then you aren't mutating values anyway. So just don't use eq.

If I'm doing functional programming, I want to mutate as little as possible, but certain things require mutation to be done efficiently. What I want is the language to help me distinguish between functional values and imperative objects.

(0) ML: If you attempt to mutate a value, you get a compile-time error.

(1) Racket: If you attempt to mutate a value, you get a runtime error. Pretty sensible in a dynamic language.

(2) Common Lisp: Only primitives are values, and mutating anything else is permitted.


> Pretty sensible in a dynamic language.

For a dynamic language I would expect to be able to change objects, in many ways. Otherwise it would not be 'dynamic', but 'static'.

> (2) Common Lisp: Only primitives are values, and mutating anything else is permitted.

But not necessarily possible. In this Common Lisp example there is no way to change the closure bindings.

    CL-USER 107 > (let ((f (let ((a 10) (b 10))
                             (lambda (x) (+ x a b)))))
                    (* (funcall f 1) (funcall f 2)))
    462


> For a dynamic language I would expect to be able to change objects, in many ways.

That objects must support mutation was never in question. All general-purpose languages have mutable objects. The point is whether you also want to have compound values. I think the answer should be “yes”.

> In this Common Lisp example there is no way to change the closure bindings.

Point taken. But I hope you aren't suggesting me to Church-encode compound values. Even ignoring the usability issues of Church encoding, there's still the problem that Common Lisp will treat two otherwise extensionally equal functions as different objects. :-p


...but, the functions are side-effectless, and guaranteed not to change, so you can just equal? the results. Basically, you can use a function as a const.


> ...but, the functions are side-effectless, and guaranteed not to change, so you can just equal? the results.

Just because (f 1) and (g 1) are equal, it doesn't automatically follow that f and g are equal. Functions shouldn't be comparable for equality at runtime, because extensional equality of functions is undecidable.


*sigh* not what I meant.

I MEANT that you can model immutable variables using functions.


Will `equal` work correctly on immutable values modeled as functions? No? I guessed just as much.


well, you'd have to unwrap the values first. The call would be:

  (equal variable (constant))
whether you think that's acceptable or not is up to you.


The encoding of compound values as functions is actually more complicated than that:

    ;; pair constructor
    (defun make-pair (x y)
      (lambda (f)
        (funcall f x y)))
    
    ;; projection from pairs
    (defun former (x y) x)
    (defun latter (x y) y)
    
    ;; test pairs
    (defvar foo (make-pair 1 2))
    (defvar bar (make-pair 1 2))
    
    ;; equality test
    (defun equal-pairs (x y)
      (and (equal (funcall x #'former) (funcall y #'former))
           (equal (funcall x #'latter) (funcall y #'latter))))
    
    ;; try it
    (equal-pairs foo bar)
And this was just for pairs of primitive values. You don't even want to imagine how painful it is to represent arbitrarily complex compound values this way.


> For a dynamic language I would expect to be able to change objects, in many ways. Otherwise it would not be 'dynamic', but 'static'.

That's not what "dynamic" vs "static" means. Are you suggesting that "static" languages don't allow you to change objects?!


>The problem goes further than that. “equal” doesn't really mean equal - it only means equal until the next mutation. Not very helpful in a language with unrestricted mutation.

Yeah, once again, true in all programming languages that allow any mutation.

>It isn't true in ML or Haskell. In those languages, mutation is strictly opt-in, on a per-type basis. When you don't use mutation, equality is by value. When you use mutation, equality is by object identity. You can't “accidentally distinguish” between two copies of the same immutable value.

So you want eq if the object has been mutated, and equal otherwise. Easy enough. The following example is in CHICKEN, a Lisp I am more familiar with, but translation to Common Lisp should be simple:

  (define (fn-equal? a b)
    (if (or (get a 'mut) (get b 'mut))
        (eq? a b)
        (equal? a b)))

  (define-syntax fn-set!
    (ir-macro-transformer
      (lambda (exp inject compare)
        (let ((var (cadr exp))
              (val (caddr exp)))
          `(begin
             (set! ,var ,val)
             (put! ,var 'mut #t))))))
now, just use fn-equal? and fn-set! instead of the built-ins, and it should keep track of what vars have been mutated. This doesn't protect against other setters: They would also have to be wrapped.

>I want to distinguish between objects meant to be mutated and values not meant to.

I don't know if Common Lisp has const. If it doesn't, you could always just create functions that return the immutable value. It's not fast or pretty, but it works.

>Racket: If you attempt to mutate a value, you get a runtime error. Pretty sensible in a dynamic language.

No, it's not. Scheme does the best by having mutation separate from single assignment, but pushing immutability on everybody by default is crazy. Which is why, save cons cells (which are bad enough), Racket doesn't do so. Observe:

  (define x 7)
  (set! x "oh noes!") ;no error
  (display x);prints "oh noes!"


> So you want eq if the object has been mutated, and equal otherwise. Easy enough. The following example is in CHICKEN, a Lisp I am more familiar with, but translation to Common Lisp should be simple:

I want the language to make it impossible to use eq on values. When I'm manipulating values, I don't care about their location in memory, and I don't want it to be possible to care.

> If it doesn't, you could always just create functions that return the immutable value. It's not fast or pretty, but it works.

This is actually even worse, because Common Lisp will gladly tell me two function objects are different, even if they are otherwise extensionally equal.

> No, it's not. Scheme does the best by having mutation separate from single assignment, but pushing immutability on everybody by default is crazy.

I said Racket for a reason.

> Which is why, save cons cells (which are bad enough), Racket doesn't do so. Observe:

`x` is a variable, not a value. The value 7 isn't being mutated, 7 is always 7, of course! What has changed is that the variable `x` now stands for a different value.
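A rough sketch of the distinction:

  (define x 7)
  (define y x)
  (set! x 8)  ; rebinds the variable x
  y           ; => 7, because the value 7 itself was never touched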


>I want the language to make it impossible to use eq on values. When I'm manipulating values, I don't care about their location in memory, and I don't want it to be possible to care.

Oh. Well why didn't you say so?

In SBCL:

  (sb-ext:unlock-package :common-lisp)
  (fmakunbound 'eq)
Note that this will probably break a ton of stuff.

In Scheme:

  (set! eq? #f)
>`x` is a variable, not a value. The value 7 isn't being mutated, 7 is always 7, of course! What has changed is that the variable `x` now stands for a different value.

Hey, watch this!

  (define x '(hello world))
  (set! x (cons 'goodbye (cdr x)))
  x
  =>(goodbye world)


> Note that this will probably break a ton of stuff.

To me, the correct way to interpret “It's possible. Just do this thing that breaks pretty much all existing code.” is “It's not possible.” And your solution still doesn't fix the lack of compound values.

> Hey, watch this! (snippet)

(0) I said Racket for a reason.

(1) A better objection would've been using `set-car!` to replace "hello" with "goodbye" in-place. But Racket doesn't have `set-car!` or `set-cdr!`.


>To me, the correct way to interpret “It's possible. Just do this thing that breaks pretty much all existing code.” is “It's not possible.” And your solution still doesn't fix the lack of compound values.

Oh, it just breaks stuff that uses eq. Which you didn't want anyway, right?

>I said Racket for a reason.

Last I checked, that's totally valid Racket. And it has the same effect as using set-car!.


> Last I checked, that's totally valid Racket. And it has the same effect as using set-car!.

No. There's an important difference. This is valid Scheme and Racket:

    (define x (list "hello" "world"))
    (define y x)
    (set! x (list "goodbye" "world"))  ;; y is still (list "hello" "world")
But this isn't valid Racket:

    (define x (list "hello" "world"))
    (define y x)
    (set-car! x "goodbye")  ;; now y is (list "goodbye" "world") too
Racket has a separate (dynamic) type of mutable cons cells, together with `set-mcar!` and `set-mcdr!` operations. In Racket, cons cells are bona fide compound values, but the object identity of an mcons cell is a primitive value.
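For instance (rough sketch):

    (define p (cons 1 2))    ; immutable pair: a compound value
    (define m (mcons 1 2))   ; mutable pair: an object with identity
    (set-mcar! m 99)         ; fine
    ;; there is no set-car! to mutate the immutable pair p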


Are there people saying that you can't implement Common Lisp in OCaml/Haskell, with heterogeneous structures, dynamically typed and so on? You certainly can, but that's not an easy task.

If your language does not natively support the set of rules you would really like to enforce, you can still work in a style you like, provided you establish conventions and abstractions around it. So if you were actually interested in developing with, say, purely functional collections in Common Lisp, you would use the FSET[0] library:

    (equalp (fset:bag 1 2)
            (fset:bag 1 2))
    T

    (equalp (fset:bag 1 2)
            (fset:bag 2 1))
    T

That's exactly the same approach that people use when they do "C with classes", by reifying the abstraction into the host language. Unsurprisingly, data-structures in FSET are implemented as Common Lisp structures, which allows having read-only slots as well as type specifiers. Since we only want to have read-only slots, let's define a wrapper around defstruct (http://pastebin.com/30D6BnRg):

    (deftype tree () '(or node null leaf))

    (deft leaf data)
    (deft node
      (left nil :type tree)
      (right nil :type tree))

    (equalp (node (node (leaf 3) (leaf 4)) nil)
            (node (node (leaf 3) (leaf 4)) nil))
    => T

    (compile nil
             (lambda ()
               (setf (data (leaf 3)) 0)))

    ; compilation unit finished
    ;   Undefined function:
    ;     (SETF DATA)
    ;   caught 1 STYLE-WARNING condition

    (compile nil
             (lambda ()
               (node "strange" "node")))

    ; caught WARNING:
    ;   Constant
    ;     "strange" conflicts with its asserted type
    ;     (OR COMMON-LISP-USER::NODE NULL COMMON-LISP-USER::LEAF).
    ;   See also:
    ;     The SBCL Manual, Node "Handling of Types"
The above messages are caught while compiling the functions, not when calling them. The language specification does not give you another way to mutate structure slots other than opt-in accessors. Granted, your implementation is probably smart enough to let you mutate it with (SETF SLOT-VALUE), but as a CL developer, you know that it's bad to use it and that you should rely on accessors provided by the API.

The advantage of Lisp at this point is that you can wrap your abstraction so tightly that it is indistinguishable from core features.

But maybe that's not enough for you, because there are not enough checks and you have your own idea of how things should work and you think you should do more compilation and less interpretation.

The compiler is available right here too, so you can even use it from your own compiler (during macroexpansion if you need) and do the analyses you like. Or, you can even output a completely different executable file. Then you code a compiler in your new language because that's what grown-up languages do.

So, in order to do ML in Lisp, you develop your own DSL, an interpreter and later you bootstrap a compiler which is then self-sufficient to exist alone. That's roughly what happened decades ago: "Edinburgh LCF, including the ML interpreter, was implemented in Lisp"[1][2]. ACL2, an applicative, side-effect free subset of Common Lisp used for first-order theorem proving (Boyer-Moore), shares the same approach.

But the language that exists now, even though it does perfectly what it was designed for, is not Lisp anymore, and can't do exactly the things we actually like to do in Common Lisp.

[0] https://common-lisp.net/project/fset/Site/index.html

[1] https://ocaml.org/learn/history.html

[2] https://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.pdf


> If your language does not natively support the set of rules you would really like to enforce, you can still work in a style you like, provided you establish conventions and abstractions around it.

That only works if the language provides some way to define abstract types. For instance, ML doesn't have any concrete type only inhabited by balanced binary trees, but an abstract type can provide operations that never destroy balance. All is well as long as clients don't inspect the internal representation, which ML can and does enforce. However, unenforced conventions are just wishful thinking.

> but as a CL developer, you know that it's bad to use it and that you should rely on accessors provided by the API.

As a C programmer, I know that it's very bad to use memory after freeing it.

> But maybe that's not enough for you, because there are not enough checks and you have your own idea of how things should work and you think you should do more compilation and less interpretation.

More precisely, I think computers are several orders of magnitude better than humans at applying formal logic, which ultimately boils down to mechanically interpretable rules of inference. I want a division of labor between humans and computers that leverages the strengths of both to the maximum possible extent. Humans shouldn't be in the business of performing perfectly automatable trivial checks.

> and can't do exactly the things we actually like to do in Common Lisp.

Actually, mutable objects, even objects whose class (not to be confused with type!) can change at runtime, are easy to implement in ML. The main reason why this programming style isn't as popular in ML-land is because it results in code that's harder to understand both for humans and for machines.

Metaprogramming itself is admittedly not ML's strength, though. Here I look up more to Racket's `syntax-parse`, which is both friendlier and more scalable than `defmacro`.


>Metaprogramming itself is admittedly not ML's strength, though. Here I look up more to Racket's `syntax-parse`, which is both friendlier and more scalable than `defmacro`.

...No, not really. syntax-parse and defmacro are good at different things. While unhygienic macros are useless, procedural macros have their place.

>unenforced conventions are just wishful thinking.

Tell that to everyone developing java. :-).

But one of the good things about lisp is that using syntactic extensions, we CAN enforce our own conventions. Just so long as you remember to unbind the native functions...

>Actually, mutable objects, even objects whose class (not to be confused with type!) can change at runtime, are easy to implement in ML. The main reason why this programming style isn't as popular in ML-land is because it results in code that's harder to understand both for humans and for machines.

Look, you're clearly a hardcore ML fan. You're not going to like Lisp.

Lisp and ML are polar opposites of each other. Lisp is hyper-dynamic, ML is incredibly static. If you're the kind of person who can't last 5 minutes without typechecking and immutability guarantees, you'll either flee lisp quickly, or build your own bizarre ML inside of it.


> syntax-parse and defmacro are good at different things.

Yes, `syntax-parse` is good at being compositional - writing macros that don't step on each other's toes, just like normal functions.
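A minimal sketch (a hypothetical swap! macro):

  (require (for-syntax syntax/parse))

  (define-syntax (swap! stx)
    (syntax-parse stx
      [(_ a:id b:id)  ; both arguments must be identifiers
       #'(let ([tmp a]) (set! a b) (set! b tmp))]))

The `a:id`/`b:id` annotations produce a real error message when someone passes a non-identifier, and hygiene keeps the introduced `tmp` from capturing a caller's `tmp`.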

> Tell that to everyone developing java. :-).

There's a reason why writing Java programs is so painful. The language is like a third world country's bureaucracy: it doesn't do anything useful by itself, and gets in the way of anyone who wants to do anything useful.

> Look, you're clearly a hardcore ML fan. You're not going to like Lisp.

It's not a matter of what I think. You said: “If it [Lisp] isn't the best at something, you can MAKE it the best.” All I did was give an example of something you can't make Lisp do.

> Lisp is hyper-dynamic, ML is incredibly static.

Every programming language has both static and dynamic parts. For example, lexical scope is intrinsically a static notion. And ML's exception construction is dynamic. The question is which particular combination of static and dynamic parts is most profitable for one's purposes.

In any case, I really didn't want to turn this into a debate about types. My original objection was that Lisp doesn't have compound values. A dynamically typed language can have compound values.


>Yes, `syntax-parse` is good at being compositional - writing macros that don't step on each other's toes, just like normal functions.

That's hygiene. defmacro COULD have hygiene, and indeed, some Schemes provide ER macros, which are defmacro with hygiene. What I was talking about was the fact that defmacro is imperative, and syntax-parse is declarative. The lack of hygiene in defmacro is actually awful.

>There's a reason why writing Java programs is so painful. The language is like a third world country's bureaucracy: it doesn't do anything useful by itself, and gets in the way of anyone who wants to do anything useful.

I'll agree with that, but pretty much every java program out there is an example of guarantees maintained by the programmer, not the language. Not saying it's a good thing, am saying it's possible.

>It's not a matter of what I think. You said: “If it [Lisp] isn't the best at something, you can MAKE it the best.” All I did was give an example of something you can't make Lisp do.

Okay, but I didn't get it, because I'm still not sure what it was...

>In any case, I really didn't want to turn this into a debate about types. My original objection was that Lisp doesn't have compound values. A dynamically typed language can have compound values.

Do you mean ADTs? because if not, I'm not sure what you mean.


> Okay, but I didn't get it, because I'm still not sure what it was...

Manipulate compound values directly!

> Do you mean ADTs? because if not, I'm not sure what you mean.

A compound value is built from simpler values. Examples of compound values include lists, trees, graphs, etc. To clarify, by “list”, I mean the actual sequence of elements, not object identity of the first cons cell.


>Manipulate compound values directly!

I fail to see how Lisp can't do this. What does set-cdr! do if not directly modify a cons cell, which is a compound value? what about hash-table-set! and vector-set!?


> What does set-cdr! do if not directly modify a cons cell, which is a compound value? what about hash-table-set! and vector-set!?

These functions are precisely why lists and vectors are not first-class values in Lisp. When you have a “list”, what you actually have is a mutable object whose current state is one list, but at a later point in time, its state might be a different list! You can't bind a list value to a variable - you're only binding the identity of an object that temporarily holds the value.

As the Buddhist saying goes: “The finger pointing at the moon is not the moon.” The identity and the state of an object are different things. The identity is first-class but it's not compound. And the state is compound, but it's not first-class.


>These functions are precisely why lists and vectors are not first-class values in Lisp. When you have a “list”, what you actually have is a mutable object whose current state is one list, but at a later point in time, its state might be a different list! You can't bind a list value to a variable - you're only binding the identity of an object that temporarily holds the value.

...And thus we get to the crux of the matter. You seem to think that an object must be immutable to be first class. This is wrong. I usually don't like to say that people are wrong, I just make my case for my opinion, but that is objectively wrong. First-class objects, as defined by most people, have only a few properties:

-They can be created anonymously

-They act as an lvalue in assignment

-They can be given to a function as an argument

Lisp's lists, maps, and vectors are ALL first class objects, by that definition, which is the most common one.

I give up. You probably have a reasonable idea you're trying to express, but you seem to be at an utter loss to express it in a way people like me can understand.


> ...And thus we get to the crux of the matter. You seem to think that an object must be immutable to be first class.

No, that's not what I'm saying. You're still not getting the distinction I'm making:

(0) A value is something you can bind to a variable. (At least in a call-by-value language, which Lisp most definitely is.) It doesn't matter, or even make sense, to ask whether “this 2” is different from “that 2”. There is always one number 2, regardless of how many times the number 2 is stored in physical memory. Nor does it make sense to ask whether 2 will suddenly become 3 tomorrow. 2 and 3 are always distinct values.

(1) An object is an abstract memory region. Every object has an identity and a current state. The identity is always a primitive, indecomposable value. The state may be compound, but it isn't always a value. (This depends on the language.) Even if the current state is a value, the state at a later point in time might be a different value. Objects with immutable state are largely impractical - why bother distinguishing the physical identities of entities that never change?

The benefits of values and objects are largely complementary:

(0) Values are easier to reason about. Algebraic laws can be stated and proven to hold for large classes (not in the CLOS sense) of values. It's very convenient to delimit these classes using static types, although this isn't strictly necessary. Values shine as the backbone of data structures and algorithms.

(1) Objects can evolve over time, by mutating their slots, or even by changing their class (in the CLOS sense). Using static types to describe object structure is counterproductive. (Hence the pain of programming in Java!) Objects are very convenient when implementing the parts of a program that must adapt to changes in the environment.

> Lisp's lists, maps, and vectors are ALL first class objects, by that definition, which is the most common one.

Of course Lisp has first-class compound objects. But objects are not values! Practical functional languages have both compound values and compound objects.


Okay, thank you, I get it.

However...

>Practical functional languages have both compound values and compound objects.

Lisp isn't functional. Not even close. You can DO functional programming in lisp, of course, but lisp itself isn't functional. That is why it doesn't have your compound values.

The other reason it doesn't have compound values is that compound values, from a low-level perspective, don't exist: They are an abstraction that doesn't make sense in a language that doesn't have strong typing and/or strong immutability guarantees. Most lisps have neither.

And Lisp's opinion on physical identity is that it only matters if you want it to: If you don't want to use eq, then don't use eq. You can even remove eq from scope, making it a runtime error to call any function that uses eq.

>Values shine as the backbone of data structures and algorithms.

Then treat the data there as values! Lisp isn't stopping you from not mutating your values, and it doesn't stop you from using equal on them. OTOH, it doesn't stop you from mutating them either.

Lisp isn't big on guarantees and limits.


> Lisp isn't functional. Not even close.

This much is fine.

> You can DO functional programming in lisp, of course, but lisp itself isn't functional. That is why it doesn't have your compound values.

Even non-functional languages can benefit from compound values. Rust is a clear example of this.

> The other reason it doesn't have compound values is that compound values, from a low-level perspective, don't exist:

From an even lower-level point of view, not even primitive values exist - all memory is addressable! Values are a useful abstraction in high-level programming, though.

> They are an abstraction that doesn't make sense in a language that doesn't have strong typing and/or strong immutability guarantees. Most lisps have neither.

Immutability is a red herring - you're still thinking in terms of memory locations. The entire point to using values is not worrying about memory locations when you don't need to!

> And Lisp's opinion on physical identity is that it only matters if you want it to: If you don't want to use eq, then don't use eq.

That makes it no different from Java: “You don't have to use == on value objects! Just use .equals() and you'll be fine!” The harsh reality is that programming with make-believe values is painful. I've been a longtime C++ programmer, and I know only too well how this “let's use a good subset of the language” story ends.

> You can even remove eq from scope, making it a runtime error to call any function that uses eq.

But I don't want to program only with values. A practical language needs objects too.


Okay then, I'm still not sure what you want lisp to do.

>Rust is a clear example of this.

No, it isn't. I entirely fail to see where rust has compound values.

I think you want the programming language to decide what equality operator to use for you. In which case, I wrote the code for that above. You have to wrap all mutators in macros for it to work, but you can fairly trivially write a macro to generate those.

>That makes it no different from Java: “You don't have to use == on value objects! Just use .equals() and you'll be fine!” The harsh reality is that programming with make-believe values is painful. I've been a longtime C++ programmer, and I know only too well how this “let's use a good subset of the language” story ends.

I fail to see how values are any more "make-believe" in Lisp than in any other language. This isn't Java. eq and equal are equally first class, unlike == and .equals(). Just know what kind of comparison you want, and choose accordingly. Is that really so hard?


> No, it isn't. I entirely fail to see where rust has compound values.

    struct Vec2(usize, usize);
    let origin = Vec2(0, 0); // value
    let center = Vec2(0, 0); // same value
    let faraway = Vec2(123,456); // another value
> I think you want the programming language to decide what equality operator to use for you.

I want the programming language to let me compare the values I care about. I don't want to worry about irrelevant details, like the location, physical or abstract, where a value's representation is stored. Of course, I do care about the physical identity of objects. But objects are not values!

>I fail to see how values are any more "make-believe" in Lisp than in any other language.

The language forces me to care about a low-level implementation detail. Namely, that the compound values in my problem domain are realized as compound objects in Lisp. It fits the definition of “make-believe”. In any case, it wasn't my intention to single Lisp out. Most languages out there only have make-believe compound values.


here's something fun about that rust example:

  #[derive(PartialEq)]
  struct Vec2(usize, usize);
  let a = Vec2(0, 0);
  let b = Vec2(0, 0);
  a == b // true (structural comparison; needs the PartialEq derive)
  &a as *const Vec2 == &b as *const Vec2 // false: different addresses
And now the (near enough for this example) equivalent Scheme:

  (define-record vec2 x y)
  ;;There's no anonymous values in the default scheme record implementation, but it makes no difference for our purposes
  (define A (make-vec2 0 0))
  (define B (make-vec2 0 0))
  ;;No constants at a language level, but constant values are uppercase by convention.
  (equal? A B) ;true
  (eq? A B) ;false
True, the syntax is a bit different (rust doesn't like identity checks), but the semantics, save the above notes, are identical. What you call "values" in one language are identical to what you call "objects" in another. Just use equal like you would in any other language.

>I want the programming language to let me compare the values I care about. I don't want to worry about irrelevant details, like the location, physical or abstract, where a value's representation is stored.

Then just use equal.

>The language forces me to care about a low-level implementation detail

You need only care about it if you want to. If you don't, just use equal.

Are you starting to see a pattern?


> here's something fun about that rust example:

    &a as *const Vec2 == &b as *const Vec2 // false
Yes, in Rust, `a` and `b` are objects with identity. But their state is a compound value that can be easily copied to a different object, passed by value to a function, etc.

> You need only care about it if you want to. If you don't, just use equal.

A high-level language is supposed to provide abstractions that don't leak.


>Yes, in Rust, `a` and `b` are objects with identity. But their state is a compound value that can be easily copied to a different object, passed by value to a function, etc.

And you can do the same in Lisp, but, like Python and other languages, it doesn't happen for you.

>A high-level language is supposed to provide abstractions that don't leak

That's not a leak: It's an escape valve. eq is an escape from the world of values for when you have to care about objects, the same way C has the & unary operator, and Rust has the *const cast, and Ruby has the equal? message, and Python has the is operator. The only reason Haskell doesn't have the same is that it's pure, at least in theory, so there isn't a meaningful distinction. But in languages where state is mutable, you need eq, or an equivalent, for when you have to care about object identity.


> And you can do the same in Lisp, but, like Python and other languages, it doesn't happen for you.

No, the state of a Lisp or Python object isn't a value. You can't pass it around without manually deep-cloning everything. Cloning a value doesn't even make sense - there's always only one of it!

> The only reason Haskell doesn't have the same is that it's pure, at least in theory, so there isn't a meaningful distinction.

ML isn't pure, but it confines physical equality to mutable objects, and not everything is a mutable object. This discussion has absolutely nothing to do with purity.

> But in languages where state is mutable, you need eq, or an equivalent, for when you have to care about object identity.

Haskell and ML have lots of mutable state, and don't need a distinction between eq and equal. The single equality testing operator does the right thing on values of different types.


>Haskell and ML have lots of mutable state, and don't need a distinction between eq and equal. The single equality testing operator does the right thing on values of different types.

No, it doesn't. If you truly have a single equality operator, no pointer casting (like C or Rust have to solve this problem), and mutability, then you can't have object identity as needed. And I think Haskell actually does have generic pointer types, so you can effectively do pointer casting, so it does, in effect, have eq.

>No, the state of a Lisp or Python object isn't a value. You can't pass it around without manually deep-cloning everything. Cloning a value doesn't even make sense - there's always only one of it!

Values are purely abstract. We can compare them, create them, mutate them, etc., but they're all objects under the surface. It doesn't make sense to treat objects as values consistently, except in functional languages. Everywhere else, it's expensive, impractical, and annoying (sometimes you want eq without jumping through hoops to get it). You seem to think that this has nothing to do with mutability, but if your data's mutable, your proposed semantics require a deep copy on funcall by default, which is unacceptable. So it has everything to do with mutability.


This is brilliant!

> Reverent. Whether you belong to the Kingdom of Nouns or the Church of Lambda, be respectful of all faiths. Better yet, take the best of all worlds when designing your language (see: my previous post). Don't force users to pick one paradigm over the other, but rather be flexible enough to accommodate all walks of life.

If only.

But, that being said, I think all languages are coming around to adding first class functions.


> But, that being said, I think all languages are coming around to adding first class functions.

Object-oriented languages have always been higher-order. Functional programming is great, but object-oriented languages have always had first-class procedures, whether they compute mathematical functions or not. And not all procedures compute mathematical functions in functional languages either.

What object-oriented languages lack is an emphasis on values, rather than the memory locations (aka object identities) that contain said values. The benefits of not worrying about object identities cannot be overstated: Removing the spurious distinction between memory blocks that hold equal values automatically endows the language with a richer equational theory that can be used both by language users (for reasoning about correctness) and language implementors (for optimization purposes).


What's funny is that, to me, F# and JavaScript would rank pretty highly. JavaScript does allow for more varied paradigms, though it's not so friendly.


Comparing JS and F# is interesting... I would be very psyched if web browsers implemented F# rather than JS. Type safety, with inference, would be a big win.


I saw trustworthy, loyal, and my mind immediately filled in the rest. I haven't had anything to do with Boy Scouts in over 20 years and I still remembered it flawlessly.

It's a shame the clean rule isn't found in programming languages more. I really like the low-noise syntax of Python, CoffeeScript, and Elm, but it seems like it is not really being adopted.


> Such a language should have ASLR enabled

Or, you know, just be memory safe.


All valid points. C# seems like it fits the criteria best out of the languages I have used.



