
Local Variables - matt_d
http://www.craftinginterpreters.com/local-variables.html
======
ahaferburg
For those wondering, it looks like this is the final version of the latest
chapter of a book that's a work in progress. Fresh off the press, if you
will.

[https://twitter.com/munificentbob/status/1109876226104057856](https://twitter.com/munificentbob/status/1109876226104057856)

------
TheAsprngHacker
The chapter raises the question of the semantics of

    
    
        {
          var a = "outer";
          {
            var a = a;
          }
        }
    

and

    
    
        {
          var a = a;
        }
    

. Some programming languages have the idea of "let rec" versus "nonrecursive
let." In Haskell, `let` is recursive, and because Haskell is lazy, `let a = a`
is a nonterminating computation. In OCaml, `let` is non-recursive and `let
rec` is recursive, and only certain recursive definitions are allowed. (You
can do `let rec a = 0::a`, but not `let rec a = a`.)

Scheme has `let`, where none of the variables are in scope for any definition,
`let*`, where each variable is in scope in the successive definitions, and
`letrec`, where all of the variables are in scope for all definitions.
According to
[http://www.r6rs.org/final/r6rs.pdf](http://www.r6rs.org/final/r6rs.pdf),
Scheme initializes letrec'd variables to a "black hole."

~~~
AnaniasAnanas
> and because Haskell is lazy, `let a = a` is a nonterminating computation

I am pretty sure that laziness has nothing to do with this.

~~~
TheAsprngHacker
Maybe I worded that wrong. In OCaml and other strict languages, `let (rec) a =
a` doesn't make sense, because `a` doesn't have a value when you are defining
it to be itself. When I write `let a = a` in Haskell, what happens is that `a`
is set to a thunk whose code pointer points to its own evaluation code. When
the thunk gets forced, it tries to evaluate itself.

After doing some research, I was wrong about the "nonterminating computation"
part; when the thunk gets forced, it gets marked as a "black hole," which
detects the infinite recursion in this case.

------
smueller1234
I thought the choice to implement locals and all of their semantics using just
the main stack was both curious and surprising. (I'm most familiar with Perl's
implementation, which requires a bunch of stacks flying in formation to
implement its complex scoping semantics.)

In the book, Bob says he designed the language such that it would be possible
to use just one stack. What does that mean? It sounds very discouraging for
newbies because it implies a lot of foresight or trial and error.

Overall, while this is among the most interesting subjects covered, I find the
chapter a bit lacking in one of two ways: either it is for beginners, in which
case it feels very magical, or it is for intermediates, in which case some
extra verbiage on the design choices that led there would be appropriate.

Nitpick: I would totally have stooped low and done the -1 initialization trick
as well. But yuck. ;)

~~~
Drup
The constraint on the language for this to work well is fairly natural and is
actually described in the article: the language should follow lexical scoping.

Unfortunately, since some dynamic languages do not respect lexical scoping,
programmers in these languages tend to think of local variables and scoping
as something very complicated. It doesn't need to be.

~~~
chrisseaton
You can have plain lexical scoping but still be unable to use a simple stack
in all cases, due to closures.

The author describes this additional constraint that they have on top of
lexical scoping.

> We have to be OK with only allocating new locals on the top of the stack,
> and we have to accept that we can only discard a local when nothing is above
> it on the stack.

~~~
Drup
Well, there are lots of well-known compiler techniques to handle closures in
that context. It's a bit out of scope for the article, but it's not very
complicated either.

~~~
chrisseaton
I know that, but you said the constraint needed was lexical scoping - that
constraint is insufficient and you need additional constraints on the language
design.

~~~
klmr
If you’re happy for the compiler to copy variables out of the closed-over
scope when returning the closure, this can still be handled with a single
stack. That’s what C++ lambdas do: “Closures” in C++ are locally-allocated
structures that hold used variables as local (stack-allocated) member
variables. Creating a closure copies closed-over variables (or
pointers/references to them). Returning a closure from a function returns a
logical copy (which can be optimised away) of the structure.

(Don’t get me wrong, this obviously still implies additional constraints, but
it gets fairly close to universal closures.)

~~~
chrisseaton
I don't really understand that point of view - you can use a single stack as
long as you actually use the heap in addition to a single stack?

~~~
kccqzy
C++ closures do not, by themselves, use the heap. Every closure in C++ gets
translated by the compiler to a unique type that contains either copies of or
references to the objects being closed over. If you choose to use copies, then
whether or not anything is allocated on the heap depends on the copy
constructor; if you use references then there is no copying, but it's up to
you to ensure lifetime.

~~~
jdmichal
This is roughly how Java works with anonymous types closing over variables
also. That's why the variables must be declared `final`. It just copies the
local values over into the anonymous type and calls it a day. Of course, since
the only thing allocated on the stack are primitives and pointers, and
everything on the heap is subject to garbage collection, this is a pretty
straightforward operation.

I don't know if lambdas work the same way. I know in some ways they work like
anonymous types, and not in others.

~~~
_old_dude_
Yes, it works the same way with lambdas; the lambda proxy (the class that
implements the functional interface) contains a copy of the local values.

Here is the code that generates the constructor of a lambda proxy:
[http://hg.openjdk.java.net/jdk/jdk/file/3cabb47758c9/src/jav...](http://hg.openjdk.java.net/jdk/jdk/file/3cabb47758c9/src/java.base/share/classes/java/lang/invoke/InnerClassLambdaMetafactory.java#l356)

------
imAsking9836
I've been following this book for a bit, but I can't quite find a good way to
learn from it. Should I read each chapter and create my own language?
Implement his language using a language other than Java? Type by hand his
source code following along and trying to comprehend the details?

~~~
bibyte
> Type by hand his source code following along and trying to comprehend the
> details?

Isn't that the only way to learn from any programming book? At least that's
what I have always done. I am curious as to what other ways people learn.

~~~
dahfizz
I've found that it's easy to zone out if I'm just typing exactly what's on the
page in front of me. Following along but in a different language can keep you
more engaged and make you think more about what's going on.

~~~
imAsking9836
It also becomes a bit of a battle against the desire to copy/paste the code
(it's even more tempting since it's an e-book with the code already
formatted), not to mention that some parts are tedious to type, like
enumerators with many string elements.

------
pinjasaur
I found the page layout extremely well-suited to grokking the content.
The right "sidebar" (in quotes because it's not a dedicated column, just
emulated using CSS positioning) containing notes for the text & code along
with figures worked great. I may have to look into recreating this kind of
layout for my blog or other documentation.

