Hacker News new | past | comments | ask | show | jobs | submit login

I've heard that dynamic scoping was considered faster than lexical for a time. Anyone know of any papers that would justify such a stance?

I thought the main reason was ease of interpreter implementation, not speed (which is why some early LISPs used dynamic scoping in interpreter mode and lexical scoping in compiled mode) ... just have a hashmap in each activation record, and a pointer to the parent activation record's <hashmap, parent_pointer> tuple, and walk the chain until you find the first matching entry.
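A minimal Python sketch of that "deep binding" scheme (not any particular Lisp's implementation): each activation record holds a dict plus a pointer to its parent, and lookup walks the chain until it finds an entry.

```python
class Frame:
    def __init__(self, parent=None):
        self.vars = {}        # this frame's hashmap of bindings
        self.parent = parent  # pointer to the caller's frame

def lookup(frame, name):
    # Walk the chain of activation records until the first entry.
    while frame is not None:
        if name in frame.vars:
            return frame.vars[name]
        frame = frame.parent
    raise NameError(name)

# The caller binds x; the callee's own frame has no x, so the
# lookup walks up to the parent record.
outer = Frame()
outer.vars["x"] = 42
inner = Frame(parent=outer)
print(lookup(inner, "x"))  # 42
```

The cost of a read is proportional to the depth of the chain, which is what makes this slow compared to a fixed offset.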

With lexical scoping, each variable is at a fixed offset from your global data pointer, your frame pointer, your closure pointer, or a fixed number of machine words back up the call stack in a parent caller's activation record.
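To illustrate the contrast, here is a hypothetical sketch (the names `compile_ref` and `load` are my own): with lexical scoping the compiler can resolve each variable to a fixed (depth, slot) pair at compile time, so the runtime access is plain indexing with no search.

```python
def compile_ref(scopes, name):
    # Compile time: scopes is a list of name lists, innermost last.
    # Resolve a name to (depth, slot) -- how many frames up, which slot.
    for depth, names in enumerate(reversed(scopes)):
        if name in names:
            return (depth, names.index(name))
    raise NameError(name)

def load(env, depth, slot):
    # Run time: env is a list of frames (value lists), innermost last.
    # No searching, no hashing -- just two fixed index operations.
    return env[len(env) - 1 - depth][slot]

scopes = [["g"], ["a", "b"]]   # outer scope, inner scope
ref = compile_ref(scopes, "b") # resolved once, at "compile" time
env = [[10], [1, 2]]           # runtime values for those scopes
print(load(env, *ref))         # 2
```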

You could store dynamically scoped variables in fixed locations and have each function restore previous values upon leaving the function. This makes reading as fast as lexically scoped variables, but adds significant write overhead.
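A sketch of that trick, assuming a single fixed cell per dynamic variable: reads become one fixed-location access, while every rebinding function pays to save on entry and restore on exit.

```python
cells = {"x": 0}   # one fixed storage location per dynamic variable

def with_binding(name, value, body):
    saved = cells[name]      # write overhead: save the old value...
    cells[name] = value      # ...install the new one on entry...
    try:
        return body()
    finally:
        cells[name] = saved  # ...and restore the old one on exit

def reader():
    return cells["x"]        # a read is a single fixed-location access

print(with_binding("x", 7, reader))  # 7
print(cells["x"])                    # 0 -- previous value restored
```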

Without the above global-emulation trick you can't pre-calculate variable locations -- which binding a reference will see depends on the runtime call chain, which is statically undecidable in general (a halting-problem-style argument) -- so searching up the call stack is necessary.

The only case I can think of where searching up the dynamic scope chain would be faster than accessing memory at a fixed offset is if accessing stack memory is much faster than accessing general RAM and the average lookup chain is short. For example:

- Maybe you're memory constrained and swapping a lot.

- Maybe you're on a stack-based processor that has a special cache, or higher caching priority, for the stack.

- Maybe your processor has a very small TLB (address translation cache) and accessing RAM is likely to cause a TLB miss, whereas the top of your stack is very likely to be in the TLB (or your architecture avoids address translation for stack accesses entirely).

- Maybe your OS allocates huge pages (if supported by your architecture) for stack pages, but your malloc() implementation rarely does.

In any case, it would seem to need some kind of rare case where dynamically searching for an offset is faster than using a compile-time statically determined offset.

The classic (or maybe time-honored) technique for implementing dynamic binding in Lisp interpreters was simpler yet. When the reader produced list structure, that was in the form of cons cells and "atoms" with string names. The current value was part of the "atom" data structure -- code that rebound it would push the "saved" value on the stack, alter the value to suit, and then pop the initial value off the stack and restore it when done. (This is sometimes referred to as "shallow binding".)

Since the interpreter was interpreting the source-code-as-list-structure directly, it could always get the current value by a one-step pointer indirection, with no chasing down hashmaps at all. (The Lisp 1.5 manual documents a slightly more elaborate version, with the current value stored as an entry on a property list, but the idea was again to limit pointer-chasing by the interpreter.)
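A hedged sketch of that technique in Python (not the actual Lisp 1.5 code): the current value lives on the "atom" itself, so a read is one indirection from the symbol the reader produced, and rebinding is a push/alter/pop around the body.

```python
class Atom:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value   # current dynamic value stored on the atom

def rebind(atom, new_value, body, stack):
    stack.append(atom.value)   # push the "saved" value on the stack...
    atom.value = new_value     # ...alter the value to suit...
    result = body()
    atom.value = stack.pop()   # ...and restore it when done
    return result

x = Atom("x", 1)
stack = []
# Reading x during the body is just x.value -- one indirection,
# no hashmap chasing.
print(rebind(x, 99, lambda: x.value, stack))  # 99
print(x.value)                                # 1 -- restored
```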

Mostly in North America. I recall my old prof (a French guy; this was probably at INRIA) saying something like: "and then this young American man who had made quite a name for himself overseas came to the university. He was shocked to hear we were using lexical scoping. We were equally shocked to hear that we had been so Wrong for the last couple of years. It turned out he held the same beliefs we had had a couple of years prior. They apparently were standard at American unis."

In my mind I hope that American man was Olin Shivers, who has written about how he came to France and learnt that lexical scoping was faster, but I don't know when any of these events happened.

This reads to me just like one of those lovely historical anecdotes about great mathematicians and their travels and learnings. The evolution of computer science is really no different.

I've heard this too. One possible root of this claim is Stallman, with regard to Emacs Lisp. I think he made a post where he said that dynamic scoping was faster than lexical scoping for elisp (though this was later shown to be mistaken: https://emacs.stackexchange.com/questions/2129/why-is-let-fa...).

Sorry that I don't have any sources to support this. I can't find the post, and it may have been secondhand info anyway.

I thought he meant lexical scope was unfit for, if not incompatible with, an extensible program. Meaning you had to loosen up the binding to allow user code to hook in. Lexical scope made this "impossible", or maybe possible but requiring a radically different way to program, probably using a faux-monadic style of Lisp.

Stallman famously worked on specialized Lisp hardware at MIT. I wonder if these machines had special optimizations for stack access that made dynamic scoping faster on that architecture, and Stallman over-generalized.
