With lexical scoping, each variable is at a fixed offset from your global data pointer, your frame pointer, your closure pointer, or a fixed number of machine words back up the call stack in a parent caller's activation record.
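To make that concrete, here is a hypothetical sketch (not any particular compiler's layout): under lexical scoping the compiler can resolve every variable reference to a (frames-up, slot) pair, so a read is a fixed walk plus a fixed offset, never a search.

```python
# Hypothetical sketch: under lexical scoping the compiler resolves each
# variable reference to a (frames_up, slot) pair at compile time; at run
# time a read follows a statically known number of parent links and then
# indexes at a fixed offset.

class Frame:
    def __init__(self, slots, parent=None):
        self.slots = slots    # fixed-size activation record
        self.parent = parent  # lexically enclosing frame

def load(frame, frames_up, slot):
    # No searching: both numbers were fixed when the code was compiled.
    for _ in range(frames_up):
        frame = frame.parent
    return frame.slots[slot]

outer = Frame([42])               # outer function: x lives at slot 0
inner = Frame([7], parent=outer)  # nested function: y lives at slot 0

assert load(inner, 0, 0) == 7     # y: own frame, slot 0
assert load(inner, 1, 0) == 42    # x: one frame up, slot 0
```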
You could store dynamically scoped variables in fixed locations and have each function that rebinds one save the previous value on entry and restore it on exit. This makes reads as fast as for lexically scoped variables, but adds significant write overhead on every rebinding.
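This save-on-entry/restore-on-exit scheme is usually called "shallow binding". A minimal sketch (the `cells` table and `dynamic_let` helper are hypothetical names, not a real API):

```python
# Hypothetical sketch of shallow binding: each dynamic variable lives in
# one fixed cell, so a read costs the same as a global; every function
# that rebinds it saves the old value on entry and restores it on exit.

import contextlib

cells = {"verbosity": 0}  # one fixed storage cell per dynamic variable

@contextlib.contextmanager
def dynamic_let(name, value):
    saved = cells[name]      # write overhead: save the old value...
    cells[name] = value      # ...install the new one...
    try:
        yield
    finally:
        cells[name] = saved  # ...restore on exit, even if an error occurs

def current_verbosity():
    # A read is a single fixed-location access -- no search.
    return cells["verbosity"]

with dynamic_let("verbosity", 1):
    assert current_verbosity() == 1  # callee sees the caller's rebinding
assert current_verbosity() == 0      # previous value restored on exit
```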
Without the above global-emulation trick, you can't pre-calculate variable locations: which binding is live depends on the run-time call chain, which is statically undecidable in general (by reduction to the halting problem), so searching up the call stack is necessary.
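That search is usually called "deep binding". A hypothetical sketch of what it costs (the `binding_stack` and helper names are illustrative, not a real implementation):

```python
# Hypothetical sketch of deep binding: each call pushes its rebindings
# onto a binding stack, and a read must search from the newest entry
# back -- the search that can't be precomputed away.

binding_stack = []  # list of (name, value) pairs, newest last

def lookup(name):
    # Search up the dynamic chain, innermost binding first.
    for n, v in reversed(binding_stack):
        if n == name:
            return v
    raise NameError(name)

def with_binding(name, value, thunk):
    binding_stack.append((name, value))
    try:
        return thunk()      # callee sees this binding
    finally:
        binding_stack.pop() # unwound automatically on return

# The innermost rebinding shadows the outer one:
result = with_binding("x", 1,
                      lambda: with_binding("x", 2,
                                           lambda: lookup("x")))
assert result == 2
```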
The only cases I can think of where searching up the dynamic scope chain would be faster than accessing memory at a fixed offset all require that accessing stack memory be much faster than accessing general RAM and that the average lookup chain be short:

- Maybe you're memory constrained and swapping out a lot.
- Maybe you're on a stack-based processor that has a special cache, or higher caching priority, for the stack.
- Maybe your processor has a very small TLB (address-translation cache), so accessing RAM is likely to cause a TLB miss, whereas the top of your stack is very likely to be in the TLB (or your architecture avoids address translation for stack accesses).
- Maybe your OS allocates huge pages (if your architecture supports them) for stack pages, but your malloc() implementation rarely allocates huge pages.
In any case, it would take some rare situation like these for dynamically searching for a binding to beat using an offset determined statically at compile time.
Since the interpreter was interpreting the source-code-as-list-structure directly, it could always get the current value by a one-step pointer indirection, with no chasing down hashmaps at all. (The Lisp 1.5 manual documents a slightly more elaborate version, with the current value stored as an entry on a property list, but the idea was again to limit pointer-chasing by the interpreter.)
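The one-step indirection can be sketched like this (a hypothetical reconstruction of the idea, not the Lisp 1.5 code): each symbol object carries its own value cell, rebinding saves and restores the cell's contents, and every read is a single dereference.

```python
# Hypothetical sketch of the value-cell idea: each symbol object holds
# its current value directly, so the interpreter reads a variable with
# one pointer indirection -- no hash map, no search up a chain.

class Symbol:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value  # the value cell

def bind(sym, new_value):
    # Entering a function that rebinds sym: stash the old cell contents.
    old, sym.value = sym.value, new_value
    return old

def unbind(sym, saved):
    # Leaving that function: put the previous value back.
    sym.value = saved

x = Symbol("x", 10)
saved = bind(x, 99)
assert x.value == 99  # a read is just sym.value: one indirection
unbind(x, saved)
assert x.value == 10  # outer binding restored
```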