
History of T (2001) - tosh
http://www.paulgraham.com/thist.html
======
skrebbel
Being an extremely young programmer (I'm 35), I had never quite understood
what "dynamic scoping" is - the whole idea that scoping can somehow be done
differently than "like in all the languages" was foreign to me until a few
minutes ago. If you're an ignorant youth like me, here's some clarification.

Wikipedia's description is horrid, but C2's[0] is a gem:

"In _dynamic_ scoping, by contrast, you search in the local function first,
then you search in the function that _called_ the local function, then you
search in the function that called _that_ function, and so on, up the call
stack. "Dynamic" refers to _change_ , in that the call stack can be different
every time a given function is called, and so the function might hit different
variables depending on where it is called from."

Sounds totally insane to me and I wonder how people made working software at
all in the '80s. That said, I get happy when I remember that people often
complain about how horrible it is that $OBSCURE_AWESOME_TECHNOLOGY never made
it and how $TRENDY_FAD solves the same problem so much less elegantly. I mean,
apparently an $AWESOME_TECHNOLOGY sometimes does win, against all odds - in
this case lexical scoping. Yay Guy Steele!

[http://wiki.c2.com/?DynamicScoping](http://wiki.c2.com/?DynamicScoping)

EDIT: I just realized that shells copying all environment variables into child
processes is basically a present-day example of dynamic scoping (although,
fortunately, without mutation - we'd have entire data centers _on fire_ if you
could mutate parent process envs). It gives me a slightly better idea how this
stuff could be used to make working software.
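For what it's worth, the analogy is easy to demonstrate: the child sees a binding added for just that one call, and nothing it does can leak back into the parent (`GREETING` is a made-up variable name here):

```python
import os
import subprocess
import sys

os.environ.pop("GREETING", None)  # ensure the parent doesn't have it

# Add one variable for the dynamic extent of a single child process,
# without touching the parent's own environment.
child_env = dict(os.environ, GREETING="hello")
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['GREETING'])"],
    env=child_env, capture_output=True, text=True,
).stdout.strip()

print(out)                       # hello
print("GREETING" in os.environ)  # False - the parent is untouched
```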

~~~
kragen
I don't think dynamic scoping is a good idea, but I wanted to point out that
it does have some advantages:

\- Sometimes you actually do want to change one of a large number of
parameters for the dynamic extent of a single function call without having to
make all of them parameters to that function. Classic examples are turning
logging on for a single function call, changing the pen color or clipping
region for a call to a subroutine that draws some kind of graphic, and running
a function with the output redirected to a memory buffer instead of stdout.
Dynamic scoping works great for this. My .emacs.d/init.el has an example where
I temporarily set the "deactivate-mark" variable to a true value while I
invoke some search and replace functions which would normally deactivate the
mark, and another example where I temporarily set the "case-fold-search"
variable to a false value so that the searches invoked from there will be
case-sensitive.

That said, PostScript solves this problem differently, using a graphics
context stack and gsave/grestore operators to enable you to make local changes
to that context. Languages that support exceptions for normal control flow (as
opposed to aborting the job or whatever) need some kind of
defer/RAII/context-manager approach to making sure you don't forget to call grestore during the
exception unwinding.
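A minimal sketch of that context-manager approach in Python (the `gstate` dict and its keys are made up; PostScript's real graphics context is far richer):

```python
from contextlib import contextmanager

# Stand-in for PostScript's graphics context (keys are invented).
gstate = {"pen_color": "black", "clip": None}

@contextmanager
def saved_gstate():
    # Like gsave/grestore: snapshot on entry, restore on exit - even
    # when the body raises, so exception unwinding can't leak changes.
    snapshot = dict(gstate)
    try:
        yield gstate
    finally:
        gstate.clear()
        gstate.update(snapshot)

with saved_gstate() as g:
    g["pen_color"] = "red"    # local change for this dynamic extent
    # ... draw something ...

print(gstate["pen_color"])    # black - restored on exit
```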

\- As the article explains, dynamic scoping can be more efficient, but it
doesn't explain why. We usually read variables more frequently than we supply
them with values as parameters. In a single-threaded environment, dynamic
scoping allows you to compile a read of a variable as a load from the fixed
memory location of that variable, even if it's a local variable, even if your
functions are recursive. Static scoping, by contrast, requires you to compile
reads of your local variables as indexed loads from a frame pointer which
points at your function's activation record, again unless you forbid
recursion. The indexed load involves an addition of a constant to a register
to compute the address to fetch from, and until the 1980s this actually made
your program run slower. (Nowadays it just makes your program toggle more
transistors, since the addition can nearly always be computed in parallel with
other operations, and it probably isn't a significant contribution to total
power consumption.)
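This fixed-cell scheme is usually called "shallow binding": every read is a load from one fixed location; entering a binding saves the shadowed value and restores it on exit. A rough sketch, with all names invented (the variable name nods to the Emacs example above):

```python
# Shallow binding: each dynamic variable has a single fixed cell, so a
# read is one plain lookup. Binding saves the old value on a stack and
# restores it when the dynamic extent ends.

cells = {"case-fold-search": True}
saved = []

def bind(name, value):
    saved.append((name, cells[name]))   # remember the shadowed value
    cells[name] = value

def unbind():
    name, old = saved.pop()
    cells[name] = old                   # restore on exit

def search():
    # The read never walks a call stack - it's a fixed-location load.
    return cells["case-fold-search"]

bind("case-fold-search", False)
print(search())   # False - bound for this extent
unbind()
print(search())   # True - old value restored
```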

Carrying this logic to the extreme, Multics Emacs (and I think GNU Emacs as a
spiritual descendant) would swap in the values of not only function-local
variables in this way but all the buffer-local variables whenever you switched
buffers. The logic was that Lisp code would read the variables considerably
more often than you switched buffers, so it was worth making a buffer switch
slower to make the Lisp code run faster.

(There's another way to implement dynamic scoping called "deep binding", but I
don't know of real systems that used it.)

Dynamic binding originated as a bug in the Lisp interpreter — you'll note that
it's in both McCarthy's original paper and the Lisp 1.5 metacircular
interpreter — but the divergence of its semantics from the static-scoping
semantics of the λ-calculus wasn't appreciated until later.

Also, as a side note, it's pretty fucking sad that it's 2018 and we're still
dealing with language implementations on a daily basis that do things like box
all their fucking fixnums, incurring not only unnecessary inefficiency but
also unnecessary memory allocation and nondeterminism, so that adding two
small integers can throw an out-of-memory error or produce a timing
information leak that reveals secret information to an attacker. CPython, I'm
looking at you, you fucking piece of shit.

~~~
cryptonector
Dynamic scoping is useful for things that resemble shell I/O redirection,
where you want to be able to refer to some resource in an indirect / implicit
manner such that an actual resource can be injected at any time. For
everything else it's just no good.
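Python's `contextlib.redirect_stdout` is a concrete example of exactly this pattern: the callee refers to "the output" only implicitly, and the caller injects the actual resource for one dynamic extent:

```python
import io
from contextlib import redirect_stdout

def report():
    # Refers to the output resource only implicitly, like a shell
    # command writing to whatever stdout happens to be.
    print("status: ok")

buf = io.StringIO()
with redirect_stdout(buf):
    report()            # the caller injects a memory buffer instead

captured = buf.getvalue().strip()
print(captured)         # status: ok
```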

------
lisper
> Another implementation feat of T's was that it allowed interrupts between
> _any_ two instructions of user code.

That was the goal, but the T3 compiler actually turned out to have a very
subtle bug that manifested itself as the single hardest bug I have ever dealt
with in my career. We used T to write research code for rovers and robotic
arms at JPL. This code ran on two different machines: Sun3s running Solaris,
and brandless embedded systems running vxWorks, but both running the same T
compiler targeting the 68020 processor.

On the embedded systems we would get random crashes, but the same code running
on Solaris ran flawlessly. Post-mortems revealed massive and random heap
corruption, so the crash was manifesting itself long after the root cause.
Because the problem was random and could not be reliably reproduced, it was
impossible to find the root cause. And that is how matters stood for nearly a
year until someone (I don't remember who, but it wasn't me) finally figured
out how to reproduce the problem reliably. Then I spent a couple of days
single-stepping through machine code until I finally found the problem: the
stack pointer was being decremented while a live value was still on the stack.
That value was accessed by the very next instruction. Those two instructions
were simply out of order.

On Solaris this didn't matter because all the code ran in user space and was
single-threaded. But vxWorks does not have a protected kernel address space.
When an interrupt happens, that interrupt is processed on whatever process
stack is currently active. So if you got an interrupt exactly between the
stack pointer decrement and the subsequent load, the value was corrupted, and
a few thousand instructions later everything would go kablooey.
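A toy model of the race, with the stack as a plain Python list and an assignment standing in for the interrupt handler (the layout and values are invented; the real bug was in generated 68020 code):

```python
# Toy model of the T3 codegen bug: the compiler popped a stack slot one
# instruction before the load that still needed its value. On vxWorks an
# interrupt runs on the current task's stack, so anything beyond the
# stack pointer is free to be scribbled on at any moment.

stack = [0] * 8
sp = 4
stack[sp] = 42        # live value at the top of the stack

sp += 1               # pop: stack[4] is now beyond SP, i.e. free
stack[4] = 0xDEAD     # ...an interrupt fires here and reuses that slot...
value = stack[4]      # ...before the load that should have come first

print(value == 42)    # False - corrupted; the crash comes much later
```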

~~~
x1798DE
I'm curious - what was the solution to the problem?

~~~
lisper
The solution was trivial: just reverse the two instructions in the code
generator part of the compiler.

------
sctb
A couple of recent related discussions:

[https://news.ycombinator.com/item?id=17716056](https://news.ycombinator.com/item?id=17716056)

[https://news.ycombinator.com/item?id=15054404](https://news.ycombinator.com/item?id=15054404)

------
modells
Stanford has/had a strong AI group, SAIL, that got passed around and is now
affiliated with the GSB. When I was at SMI/BMIR, they had an old SAIL sign
over in MSOB.

------
FullyFunctional
Anyone have more detail on Douglas W. Clark's GC? Ideally the cited article or
a description of the algorithm.

------
teejmya
Poor title.

