
Technical Issues of Separation in Lisp Function Cells and Value Cells (1988) - adgasf
http://www.nhplace.com/kent/Papers/Technical-Issues.html
======
taeric
My favorite quote:

    
    
        We feel that the time for such radical changes 
        to Common Lisp passed, and it would be the job 
        of future Lisp designers to take lessons from 
        Common Lisp and Scheme to produce an improved 
        Lisp. 
    

I don't know if it is truly indicative of a general outlook, but the
implication I read is that future Lisps are expected and encouraged.

An open question I have (and if this is covered somewhere already, I'd love to
see it): it is interesting to me that while Lisp was looking to condense
namespaces dramatically, many other languages seem to have gone the opposite
route. Is that a trend I just don't understand, or is there something inherent
in Lisp that favors fewer namespaces?

~~~
netsettler
I dunno about expected and encouraged. Neither really, nor discouraged though.
Really the point was that this was written in the context of a particular
standardization effort and was to say "this is out of scope". As it happens,
RPG and I disagreed completely on whether Lisp1 or Lisp2 was a good idea. Most
readers read this like a Rorschach, assuming that the article confirms their
own beliefs, not stopping to think that the article just presents two sides of
an issue. In the context of the design process, Lisp2 won out (and I happen to
be happy about that). But it is reasonable and appropriate for both to thrive
if there are users who like those sorts of things.

I created the Lisp1/Lisp2 terminology as a dodge because we started out
writing this paper using terms something akin to "Scheme-like" and "CL-like",
and Scheme was winning for reasons unrelated to the issue at hand. I wanted
people to separate their warm fuzzies for Scheme from the particular design
choice, as I really think there are very strong and reasonable reasons to have
a Lisp2. The net result was that in the context of CL, those of us liking
Lisp2 successfully argued that it was an unnecessary change to the language
from a stability standpoint.

This paper is called "Technical Issues..." because the committee document
(titled "Issues...") was longer and went into other issues that RPG didn't
think were pertinent to formal publication.

I frankly don't understand (or perhaps just don't agree with) any claim that
Lisp was looking to condense namespaces. By convention and John McCarthy's official
request, Lisp is the name of a family of languages, not of a particular
language. Lisp has no preference. It spans languages with broadly varying
points of view.

Ironically, CL is really at least a Lisp4, by the way. Block tags and go tags
are legitimately different namespaces. They just messed up the discussion so
we left those out. Whether you also count the type/class homes as a namespace
is something that doesn't fit neatly into the terminology so is the reason I
say "at least a Lisp4". Left to personal subjectivity. -kmp
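
(A minimal sketch of one symbol occupying several of those namespaces at once;
the definitions are hypothetical toys:)

    
    
      ;; One symbol, X, with simultaneous top-level bindings in the
      ;; variable, function, and type namespaces:
      (defvar x 10)                  ;; value namespace
      (defun x (n) (* n n))          ;; function namespace
      (deftype x () 'integer)        ;; type namespace
      (list x (x 3) (typep 4 'x))    ;; => (10 9 T)
    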

~~~
aerique
I'm not trying to start a flamewar here but could you go into why your
preference is with Lisp2 instead of Lisp1?

A lot of commenters here have a strong preference to Lisp1's so I'd be happy
to hear a different point of view.

(Disclaimer: I'm mostly a CL user and appreciate it being a Lisp2.

Also, nice to see you here! I always really liked your posts in
comp.lang.lisp.)

~~~
Grue3
Not having to name your list variables "lst" is a big one.

~~~
netsettler
Yes, I am with Grue on this one. The overcrowding of namespaces means the very
common case of receiving a thing that is named by its type (a list as "list",
a string as "string") occludes a common constructor that you might want and makes you
spell it badly. I really very strongly prefer proper spellings of things, but
moreover I think this is "natural" in a way that is neglected in this
discussion.
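
A tiny sketch of that occlusion (the function name pair-up is hypothetical):

    
    
      ;; Common Lisp (Lisp-2): the variable LIST and the function LIST
      ;; live in different namespaces, so this works as expected.
      (defun pair-up (list)
        (list (first list) (rest list)))
      (pair-up '(1 2 3))  ;; => (1 (2 3))
    
      ;; Scheme (Lisp-1): the parameter shadows the procedure, so the
      ;; same definition fails, and programmers reach for "lst" instead:
      ;; (define (pair-up list)
      ;;   (list (car list) (cdr list)))  ;; error: list is not a procedure
    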

Simplicity can be measured in different ways, and there is a tendency in the
Scheme community to think of simplicity as a measure of the size of a formal
semantics. But two things come to mind about that metric: (1) I have often
claimed that small languages make big programs, and big languages make small
programs. So to some degree it's the case that tiny languages mean you have to
laboriously reconstruct, as program or library code, what the language did not
let you do in its zeal to not offer functionality. (2) Human beings are not
designed in the Scheme way. Among the many hundreds of natural languages,
there is not one in which words have only one meaning and enjoy no
contextual distinctions. So in my view, simplicity can also measure the lack
of dissonance between the program model and the brain model. My brain is, I
believe, well-adapted to understand word meanings differently for noun and
verb, and Scheme affirmatively chooses not to rely on that, leaving my brain
bothered by the lack of ability to use its natural mode and forced to use what
seems more cumbersome. I don't want to reach for more words, I want to use
obvious context. And I claim that this is at least a valid way of thinking, if
not uniquely valid. I'm not trying to disallow the way others think (or think
they think, we being poor at actually introspecting sometimes), but to say
that the way others think is no more or less valid than my own.

This is, as Aerique implies, somewhat a religious matter and not to be warred
over. So I don't want to provoke debate on whether my way is THE right way,
only A right way among MANY simultaneously right ways.

At another level, though, this is the very essence of what it is to be a CL
user, or a member of the CL community, at least in my mind: The language
expressly accommodates and encourages a pluralistic society in which multiple
paradigms are simultaneously supported. I don't want to accuse the Lisp1
community of being a bunch of intolerant folks, but I will note that in my
mind it's hard to escape the sense that it bears a striking resemblance to
that. It's a community that wants people to learn the Scheme way, not a
community that wants to accommodate the various ways people naturally think.

I keep coming back to the old joke "There are two kinds of people in the
world: People who think there are two kinds of people in the world, and people
who don't."

CL has preferred casifications, but expressly goes out of its way to
accommodate others. It has ways of thinking about loops in various paradigms.
It has macros and read syntax things for letting you override almost any
decision that we could figure a way for you to override. There are places
where we do a poor job or an incomplete job, but that's more an artifact of
the energy and funding than of design. There was pressure to remove the
redundancy, and we opted not to.

People think in multiple namespaces. We know that when you license a
production you can create a license (British respelling notwithstanding, it is
possible to use the same word in different contexts without confusing whether
it is a noun or a verb). In Spanish, a normal speaker would not flinch at the
sentence "Como como como." (I eat how I eat, where the middle "como" is "how"
and the outer two are "I eat".) These are natural, and so simplicity in this
context, at least for me, is being able to write things the way I think.

The imagined need to crowd out these names doesn't really come up in CL. Nouns
and verbs mostly operate in different orbits and don't interact, and that
feels pretty natural. Languages are more about ecologies than about à la carte
features, and there's a danger in liking a feature and thinking it can just be
injected into an ecology and will behave either as expected or even just
pleasantly.

~~~
kazinator
Although I'm with you on the issue of some Lisp-1 people being parochial, we
can argue that "como como como" is an example of something bad that we don't
necessarily want in a programming language. Not everything natural, in natural
language, is good in a computer language. Never mind that a word can be a noun
and verb in different ways: in natural languages, even if the word has the
same role, like noun, it can have different "bindings" at the same time in
that same space due to homonyms. Do you want the same symbol to have several
completely unrelated global bindings in the same space, the dispatch being
resolved based on semantic context (perhaps not even known until run-time)?
Ouch.

Simply the fact that you have exactly two namespaces, in each of which there
can only be exactly one binding for a symbol at a given lexical level, is
different from natural languages.

I'm convinced that both Lisp-2 and Lisp-1 have merit, and I made a Lisp
dialect that offers both, in a reasonable way that manages to be relatively clean.

An ideal Lisp dialect supports the reasonable request of the programmer who
wants (list list) to Just Work, and also the programmer who has a
function-valued variable f and just wants (f x y) to work. That ideal, if
taken too literally, is contradictory, but an acceptable compromise is to have
[f x y] work, where [] changes the evaluation of atomic forms that are
bindable symbols to Lisp-1 style (utterly, with deep support from the macro-
expander and evaluation semantics).

~~~
netsettler
> Do you want the same symbol to have several completely unrelated global
> bindings in the same space

Absolutely. Yes.

~~~
kazinator
Are you sure? Global, in the same space? As in X having three different
bindings as (say) a variable, at the same environmental level ("top-level")?

------
DonaldFisk
I read the article some time ago.

I have my own Lisp, Emblem, which I released well over ten years ago. (There
were few takers. I have improved it considerably since then, but not made a
re-release.) I decided that it shouldn't be gratuitously different from Common
Lisp. However, it is a Lisp 1. The reasons are: (1) my programming style was
never inconvenienced by the lack of separate function and value cells when
programming in Scheme instead of Common Lisp; and (2) Emblem uses a world
(i.e. an image) to store its compiled code in, and an extra, rarely used, cell
on every symbol increases the image size.

------
kazinator
In TXR Lisp, I implemented a design which achieves a harmony between the
separation of function/value bindings, and their union. You can program in
both styles in the same scope.

http://www.nongnu.org/txr/txr-manpage.html#N-02DC9E04

TL;DR: The underlying Lisp dialect is a Lisp-2. However, when a form is
denoted by square brackets, Lisp-1-style evaluation applies to each of its
positions: every position is evaluated as a form, and any form which is a
symbol is evaluated in a conflated single namespace. This rule does not
recurse into the forms: any nested forms also have to use square brackets to
work the same way.

Thus we can do:

    
    
      [mapcar cons '(1 2 3) '(a b c)] ;; no (fun cons)
    
      [f arg] ;; f is a variable
    

And so on.

This works by making [...] a syntactic sugar for (dwim ...) where dwim is a
special operator. This operator is deeply integrated into the language (both
macro-expansion semantics and evaluation), which is why it is able to change
the semantics of name lookup.

A rule I imposed is that macros are not allowed in this notation:

    
    
      [let ((a 3)) ...] ;; nonsense: let not recognized as operator here
    

so it _actually_ implements the Lisp-1 mantra _literally_ : "evaluate all
positions of the form equally". There is no exception for recognizing a
function-like macro in the leftmost position of a form, but not doing so in
other positions.

~~~
kazinator
It gets tricky. Bracket/dwim forms can designate access to objects, because
sequences are funcallable. So we can do:

    
    
      (set [a b] c)
    

What does that look like?

    
    
      1> (sys:expand '(set [a b] c))
      (let ((#:g0138 (sys:lisp1-value b))
            (#:g0137 c))
        (sys:lisp1-setq
          a (sys:dwim-set (sys:lisp1-value a)
                          #:g0138 #:g0137))
        #:g0137)
    

The a and b forms turned into (sys:lisp1-value a) and (sys:lisp1-value b).
These secret operators bring in the name lookup semantics anywhere it is
required, disembodied from the dwim operator. Also sys:lisp1-setq is used to
store the updated sequence back into a. sys:setq can't be used because a could
be a function binding in the given scope.

There are hacks you can do in Common Lisp to achieve a bit of a Lisp-1 style,
but not to such detail. See this piece of documentation under labels and flet:

http://www.nongnu.org/txr/txr-manpage.html#N-0209307D

"Furthermore, function bindings introduced by labels and flet also shadow
symbol macros defined by symacrolet, when those symbol macros occur as
arguments of a dwim form."

A test case for this is the following, which must produce (1 1 1):

    
    
      (let ((g (lambda (x) 0)))
        (symacrolet ((f g))
          (flet ((f (x) 1))
            [mapcar f '(a a a)]))) ;; Lisp-1 f: flet shadows symbol macro
    

However, this must produce (0 0 0):

    
    
      (let ((g (lambda (x) 0)))
        (symacrolet ((f g))
          (flet ((f (x) 1))
            (mapcar f '(a a a))))) ;; now Lisp-2 var reference

------
chriswarbo
For those who are more used to non-lisp scripting languages, I've found the
awkwardness of functions in a lisp-2 (Emacs Lisp) compared to a lisp-1
(Scheme) to be similar to the awkwardness of functions in PHP compared to e.g.
Javascript or Python.

~~~
aerique
Why would functions in a lisp-2 be more awkward than in a lisp-1?

PHP has been too long ago for me to understand your comparison.

Would you please explain it differently?

~~~
chameco
In any language with multiple namespaces, anything inhabiting some non-
primary namespace (i.e., not the one containing normal variables) feels
second class. Emphasis on "feels", and hence "awkward" over "technically
limiting".

Additionally, it makes the transition to lambda calculus more difficult to
grok, which somewhat hinders understanding. It's pretty trivial to make some
simple rewrite rules from (most of) Scheme to lambda calculus: it's a two-hour
project at most. Doing this for CL is much less intuitive/elegant, possibly
making it more difficult for those with that sort of theoretical background.
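
For concreteness, a small sketch of the friction (apply-twice is a
hypothetical name): in a Lisp-2, a function passed as a value must be marked
with #' and invoked with funcall, while in a Lisp-1 the call position is
ordinary evaluation.

    
    
      ;; Common Lisp (Lisp-2): #' reaches into the function namespace,
      ;; and FUNCALL invokes a function held in a variable.
      (defun apply-twice (f x)
        (funcall f (funcall f x)))
      (apply-twice #'1+ 3)  ;; => 5
    
      ;; Scheme (Lisp-1): no marker, no FUNCALL; (f x) just evaluates f.
      ;; (define (apply-twice f x) (f (f x)))
      ;; (apply-twice add1 3)  ;; => 5
    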

~~~
netsettler
It is expressly not the goal of CL to be all things to all people, and yet to
be many things to many people. Scheme has a kind of fixed set of things it
definitely wants to be and sacrifices others; it just does so in a different
shape, so that the particular examples you pick are easy.

One way I sometimes conceive it is that in any given language there are a
certain number of small expressions and a certain number of large ones.
Differences in semantics don't make things non-computable (which is why Turing
equivalence is boring) but they change which expressions will be easily
reachable. There are certain things Scheme wants to be able to say in not many
characters and different things CL does. Neither is a flawed design. But they
satisfy different needs. It's possible to dive into either and be fine. As
others have pointed out here, it's not as big a deal in practice as it seems
like in theory. What matters in practice is to have an intelligible and usable
design, which both languages do. But to assume that the optimal way to say
something in one language should stay constant even if you change the syntax
and semantics of the language is to not understand why you would want to
change the syntax and semantics of the language.

