i have no idea what subtle or nuanced distinction you're trying to strike so what exactly do you imagine is the difference between a lisp repl and a python repl?
Edit: people who aren't familiar with python (or with how interpreters work in general) don't seem to understand that being able to poke and prod the runtime is entirely a function of the runtime, not the language. In cpython you can absolutely do anything you want to the program state, all the way up to and including manually pushing/popping the interpreter's value stack (to say nothing of moving up and down the frame stack), mutating owned data, redefining functions, classes, modules, etc. You can even, again at runtime, parse source to an AST and compile it to get macro-like functionality. It's not as clean as in lisp but it 100% gets the job done.
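For anyone curious what that parse-to-AST-and-compile route looks like in practice, here is a minimal sketch (the LogAdds transform and the logged_add helper are made up purely for illustration):

import ast

class LogAdds(ast.NodeTransformer):
    # rewrite every `a + b` in the parsed source into `logged_add(a, b)`
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            return ast.Call(func=ast.Name(id="logged_add", ctx=ast.Load()),
                            args=[node.left, node.right], keywords=[])
        return node

def logged_add(a, b):
    print(f"adding {a!r} + {b!r}")
    return a + b

src = "def foo(a):\n    return a + 21\n"
tree = LogAdds().visit(ast.parse(src))
ast.fix_missing_locations(tree)
ns = {"logged_add": logged_add}
exec(compile(tree, "<generated>", "exec"), ns)
print(ns["foo"](20))   # logs the addition, then prints 41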
CL-USER 43 > (+ 1 (foo 20))
Error: Undefined operator FOO in form (FOO 20).
1 (continue) Try invoking FOO again.
2 Return some values from the form (FOO 20).
3 Try invoking something other than FOO with the same arguments.
4 Set the symbol-function of FOO to another function.
5 Set the macro-function of FOO to another function.
6 (abort) Return to top loop level 0.
Type :b for backtrace or :c <option number> to proceed.
Type :bug-form "<subject>" for a bug report template or :? for other options.
CL-USER 44 : 1 > (defun foo (a) (+ a 21))
FOO
CL-USER 45 : 1 > :c 1
42
Note that we are not in some debug mode to get this functionality. It also works for compiled code.
Lisp detects that FOO is undefined. We get a clear error message.
Lisp then offers me a list of restarts: ways to continue.
It then displays a REPL one level deep in an error.
I then define the missing function.
Then I tell Lisp to use the first restart, to try to invoke FOO again. We don't want to start from scratch, we want to continue the computation.
Lisp is then able to complete the computation, since FOO is now available.
Hmm, what advantage does Lisp offer here over Python?
>>> 1 + foo(20)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
>>> def foo(a):
... return a + 21
File "<stdin>", line 2
return a + 21
^
IndentationError: expected an indented block
>>> def foo(a):
... return a + 21
...
>>> 1 + foo(20)
42
>>>
Mind the hilarious indentation error, as I had not touched the old-school REPL in ages.
In normal day-to-day operation, I do the same thing daily with Jupyter Notebooks. I get access to as much state as I need.
With the notebook workflow it is normal to forget to define something and then define it in a later cell, or to redefine function signatures, etc. Ideally you then move the cells into the correct order so the notebook can be used with Run All.
I "feel" ridiculously productive in VS Code with full Notebook support + Copilot. I can work across multiple knowledge domains with ease (ETL across multiple database technologies, NLP/ML, visualization, web scraping, etc.)
Underneath, it is the same as working in the old-school Python REPL, just with more scaffolding.
I have been playing again with CL recently and am doing some trivial web-scraping of an old internet forum. I don't use a REPL directly, but just have a bunch of code snippets in a lisp file that I tell my editor to evaluate (similar to Jupyter?). I haven't bothered doing any exception (condition) handling, and so this morning I found this in a new window:
Condition USOCKET:TIMEOUT-ERROR was signalled.
[Condition of type USOCKET:TIMEOUT-ERROR]
Restarts:
0: [RETRY-REQUEST] Retry the same request.
1: [RETRY-INSECURE] Retry the same request without checking for SSL certificate validity.
2: [RETRY] Retry SLIME interactive evaluation request.
3: [*ABORT] Return to SLIME's top level.
4: [ABORT] abort thread (#<THREAD tid=17291 "worker" RUNNING {1001088003}>)
plus the backtrace. This is in a loop that's already crawled a load of webpages and has accumulated some state. I don't want a full redo (2), so I just press 0. The request succeeds this time and it continues as if nothing happened.
You got a lot of correct but verbose responses. Put in layman's terms, you had to run 1 + foo(20) again. If 1 + foo(20) were replaced by a complex and long-running function, you would have lost all of that state and needed to run it all again. What if 1 + foo(20) had to read several TB of data in a distributed manner? You would have to do all of that again.
There are ways around this, and of course you could probably develop your own crash-loop system in Python, but in Lisp you simply continue from where it failed. It's already there.
You mention doing things in Jupyter and ETLs, which are often long-running. This could be hugely beneficial to you.
From what I see in your example, you invoke the form again. In Common Lisp you don't need that. You can stay in the computation and fix and resume from within.
You are not fixing the issue in the dynamic context of a running program. Doesn't matter in this trivial example but is very noticeable when you have a loaded DB cache and a few hundred active network connections.
The advantage is that Python has just diagnosed the error and aborted the whole thing back to the top level, whereas in Common Lisp the entire context where the error happened is still standing. There are things you can do like interactively replacing a bad value with a good one and re-trying the failed computation.
In lispm's example, the problem is that there is no foo function, so (foo 20) cannot be evaluated. You have various choices at the debugger prompt; you can specify a different function, to which the same arguments will be applied. Or just specify a value to be used in place of the nonworking function call.
Being able to fix and re-try a failed expression could be valuable if you have a large running system with hundreds of megabytes or even gigabytes of data in the image, which took a long time to get to that state.
> Hmm, what advantage does Lisp offer here over Python?
In lisp, I never edit code at the REPL, yet the REPL is what enables me to edit code anywhere. I edit the source files and have my editor eval the changes I made in the source. This gets me the benefit that should my changes work, I don't have to retype them to get them into version control. This works because the Lisp REPL is designed to be able to switch into any existing package, apply code there, and also switch back to the CL-USER package after. My editor uses the same mechanism and only has to inject a single prefix (`in-package :xyz`) before it pastes the code I've selected for eval.
In Python, editing a method in a class inside some module (i.e., not toplevel) is less easy. At least, I haven't found any editor support for it. What I did find is the common advice to just reload the whole module/file.
Okay, so let's reload the whole module, then? Well, Python isn't really built for frequent module reloads, and that can sometimes bite. In Common Lisp, the assumption that any code may be re-eval-ed is built in. For example, there are two ways of declaring a global value in CL: defvar and defparameter. The latter is simply an assignment of a value to a variable in the global scope, but the former is special. By default, `defvar` defines a variable only if it's not already defined, so that a CL source file may be loaded and reloaded any number of times without resetting its global variables.
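To make the contrast concrete, here is roughly what a defvar-like guard across reloads looks like on the Python side (a minimal sketch; the toymod module and its counter are invented for the example):

import importlib, pathlib, sys, tempfile

# a toy module: the guard on its first line is the defvar-style idiom --
# only bind the name if it isn't already bound, so a reload keeps old state
src = (
    "counter = globals().get('counter', 0)\n"
    "def bump():\n"
    "    global counter\n"
    "    counter += 1\n"
)
d = tempfile.mkdtemp()
pathlib.Path(d, "toymod.py").write_text(src)
sys.path.insert(0, d)

import toymod
toymod.bump(); toymod.bump()
importlib.reload(toymod)   # re-executes the module body in the same namespace
print(toymod.counter)      # 2 -- without the guard this would be reset to 0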
Then there's classes. Oh my. Common Lisp has the most powerful (in terms of flexibility) OO system I know of. Not only can you redefine functions and methods, you can even redefine classes dynamically. Adding a property to a class adds that property to all existing objects of that class. Removing a property from a class removes it from all existing objects of that class. This feature is no longer CL-exclusive, but it is sufficient to offer a massive advantage over Python. I don't need to talk about method combinations, multi-methods and the many other cool features of the Common Lisp Object System here.
Then there's the debugging system. In Python, when an exception is thrown, it immediately unwinds the stack all the way up until it is first caught. So not only do you need to know beforehand where to catch what exception, if you get it wrong you cannot inspect the site of the error. In CL, a condition ("exception") does not unwind the stack until a restart is chosen. Not when it is caught, but rather when, after being caught, a resolution mechanism has been chosen. This allows interactive debugging (another cool CL feature) to inspect the stack frames at (and above) the site of the error and redefine whatever code needs to be corrected, all before the error is allowed to unwind and destroy the stack. You still need to set up handlers (and restarts) before the error happens, but you can be wildly lax and use catch-all handlers anywhere on the stack and restarts that take absolutely anything (even functions) at debug time, so you don't really need to be prescient with your error-handling code, unlike in Python.
I'm sure there's more, but I think this is pretty sufficient.
>Note that we are not in some debug mode, to get this functionality.
Jesus Christ I swear it's like you ascribe mysterious powers to the parens. Do you think the parens give you the ability to travel through time or reverse the pc or what? Okay it's not in a debug mode but it's in a "debug mode". Like seriously tell me how you think this works if it's not effectively catching/trapping some sigkill or something that's the equivalent thereof?
I have never in my life met this kind of intransigence on just manifestly obvious things.
Common Lisp programs run by default in a way that calls to undefined functions are detected.
Here the Lisp simply tries to look up the function object from the symbol. There is no function, so it signals a condition (aka exception). The default exception handler gets called (without unwinding the stack). This handler prints the restarts and calls another REPL. I define the function -> the symbol now has a function definition. We then resume and Lisp tries again to get the function definition. The computation continues where we were.
That's the DEFAULT behavior you'll find in Common Lisp implementations.
>Common Lisp programs run by default in a way that calls to undefined functions are detected.
Cool so what you're telling me is that by default every single function call incurs the unavoidable overhead of indirecting through some lookup for a function bound to a symbol. And you're proud of this?
I thought you knew Lisp? Now you are surprised that Lisp often looks up functions via symbols -> aka "late binding"? How can that be? That's one of the basic Lisp features.
Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
At no point in time did I claim to know lisp well. I stated my familiarity at the outset. But what you all did was claim to know a lot about every other interpreted runtime without a grain of salt.
>Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
But compilers I am an expert in, and what you're implying is impossible - either you have dynamic linkage, which means symbol resolution is deferred until call (and possibly guarded), or you have the equivalent of RTLD_NOW, i.e. early/eager binding. There is no "optimization" possible here because the symbol is not Schrodinger's cat - it is either resolved statically or at runtime - prefetching symbols with some lookahead or caching is the same thing as resolving at call time/runtime because you still need a guard.
What you're missing is that, unlike any other commonly used language runtime, compilation in CL is not all-or-nothing, nor is it left solely to the runtime to decide which to use. A CL program can very well have a mix of interpreted functions and compiled functions, and use late or eager binding based on that. This is mostly up to the programmer to decide, by using declarations to control how, when, and if compilation should happen.
It should also be noted that, by spec, symbols in the system package (like + and such) should not be redefined. Redefining them is "unspecified" behavior, which lets the system make optimizations out of the box.
Outside of that, you can selectively optimize definitions to empower the system to make better decisions, at the cost of runtime protection or dynamism. However, these are all compiler-specific.
To be fair, any dynamic language with a JIT will mix interpreted and compiled functions, and will probably claim as a strength not leaving to the programmer the problem of which to compile.
You are incorrect; optimizations are possible in dynamic linking by making first references go through a slow path, which then patches some code thunk to make a direct call. This is limited only by the undesirability of making either the calling object or the called object a private, writable mapping. Because we want to keep both objects immutable, the call has to go into some privately mapped jump table. That table contains a thunk that can be rewritten to do a direct call to an absolute address. If we didn't care about sharing executables between address spaces we could patch the actual code in one object to jump directly to a resolved address in the other object. (mmap can do this: MAP_FILE plus MAP_PRIVATE: you map a file in a way that you can change the memory, but the changes appear only in your address space and not the file.)
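To make the shape of that mechanism concrete, here is a toy model of the patch-on-first-call thunk, sketched in Python rather than a real linker (the dict stands in for the privately mapped jump table; everything here is invented for illustration):

plt = {}   # stands in for the per-object jump table

def make_resolver(name):
    def resolver(*args):
        target = globals()[name]   # slow path: resolve the symbol once
        plt[name] = target         # patch the table entry to a direct call
        return target(*args)
    return resolver

def call(name, *args):
    return plt.setdefault(name, make_resolver(name))(*args)

def foo(a):
    return a + 21

print(call("foo", 20))     # 41: first call resolves foo and patches the entry
print(plt["foo"] is foo)   # True: later calls bypass the resolver entirely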
Okay well when pytorch, tensorflow, pandas, Django, flask, numpy, networkx, scipy, xgboost, matplotlib, spacy, scrapy, selenium get ported to lisp, I'll consider switching (only consider, though, since there are probably at least another 20 python packages that I couldn't do my job without).
i said ported not implemented; the likelihood that any of those libraries sprout lisp bindings is about as likely as them being rewritten in lisp. so it's the same thing and the point is clear: i don't care about some zany runtime feature, i care about the ecosystem.
Stop moving the goalposts: your answer to a commenter who stated that Common Lisp was faster than Python (a fact) was a list of packages, many of which are (1) not even written in Python and (2) some of them actually do have Common Lisp bindings.
Firstly, functions that are in the same compilation unit and refer to each other can use a faster mechanism, not going through a symbol. The same applies to lexical functions. Lisp compilers support inlining, and the spec allows automatic inlining between functions in the same compilation unit, and it allows calls to be less dynamic and more optimized. If f and g are in the same file, where g calls f, then implementations are not required to allow f and g to be separately redefinable. That is to say, if only f is redefined, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.
Lisp symbol lookup takes place at read time. If we are calling some function foo and have to go through the symbol (it's in another compilation unit), there is no hashing of the string "foo" going on at call time. The calling code hangs on to the foo symbol, which is an object. The hashing is done when the caller is loaded. The caller's compiled file contains literal objects, some of which are symbols. A compiled file on disk records externalized images of symbols which have the textual names; when those are internalized again, they become objects.
The "classic" Lisp approach for implementing the global function binding of a symbol is to have a dedicated "function cell" field in the symbol itself. So the compiled module from which the call is emanating is hanging on to the foo symbol as static data, and that symbol has a field in it (at a fixed offset) from which it can pull the current function object in order to call it (or use it indirectly).
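A toy model of that function-cell scheme, sketched in Python (the Symbol class and names are invented; the point is just that the call site holds the symbol object, so no string hashing happens at call time):

class Symbol:
    def __init__(self, name):
        self.name = name
        self.function = None        # the "function cell"

FOO = Symbol("FOO")                 # interned once, when the caller is loaded

def caller(x):
    return 1 + FOO.function(x)      # pull the current definition out of the cell

FOO.function = lambda a: a + 21     # (defun foo (a) (+ a 21))
print(caller(20))                   # 42
FOO.function = lambda a: a * 2      # redefine; existing callers pick it up immediately
print(caller(20))                   # 40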
Cross-module Lisp calls have overhead due to the dynamism; that's a fact of life. You don't get safety for nothing.
(Yes, yes, you can name ten "Lisp" implementations which do a hashed lookup on a string every time a function is called, I know.)
> If f and g are in the same file, where g calls f, then implementations are not required to allow f and g to be separately redefinable. That is to say, if only f is redefined, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.
That depends. The Common Lisp standard says nothing on the subject. CMUCL[1] and its descendant SBCL[2] do something clever called local call. It's not terribly difficult to optimize hot spots in your code to use local call. Outside of the bottlenecks, the full call overhead isn't significant for the overwhelming majority of cases. It's not like a full call is any more expensive than a vtable lookup anyhow.
Do you think Python or Ruby or PHP are any different? And yet, not one of them actually chose to use this in a sane way, where a simple lookup error doesn't have to crash the whole program.
Restarting from the debugger keeps state without the third-party Python hacks that you mention. In this example Python increments x twice, Lisp just once:
>>> x = 0
>>> def f():
... global x # yuck!
... x += 1
...
>>> def g(y):
... h()
...
>>>
>>> g(f())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in g
NameError: name 'h' is not defined
>>>
>>> def h(): pass
...
>>> g(f())
>>>
>>> x
2
Versus:
* (setf x 0)
* (defun f() (incf x))
* (defun g(y) (h))
* (g(f))
debugger invoked on a UNDEFINED-FUNCTION in thread
#<THREAD "main thread" RUNNING {1001878103}>:
The function COMMON-LISP-USER::H is undefined.
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE ] Retry calling H.
1: [USE-VALUE ] Call specified function.
2: [RETURN-VALUE ] Return specified values.
3: [RETURN-NOTHING] Return zero values.
4: [ABORT ] Exit debugger, returning to top level.
("undefined function")
0] (defun h() nil)
; No debug variables for current frame: using EVAL instead of EVAL-IN-FRAME.
H
0] 0
NIL
* x
1
Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?
I think that is the big difference.
It’s a triple-edged sword bonded to a double-barreled shotgun though, and the very antithesis of the idea of functional programming vs mutable state.
>Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?
The answer to all of these things, at least in python, is emphatically yes. I do this absolutely all the time. You can debug from one process to another if you've loaded the right hooks. You don't need to take my word for it or even try to do it; you just need to reason a fortiori: python can do it because it's an interpreter with a boxed calling convention and managed memory, just like lisp interpreters.
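For what it's worth, one minimal sketch of that kind of hook (assuming a Unix process with a usable terminal; the industrial versions go through a debug server rather than a signal):

import code
import signal

def open_repl(signum, frame):
    # frame is whatever the process was executing when the signal arrived;
    # the REPL gets read/write access to its globals and locals
    code.interact(banner="live REPL", local=dict(frame.f_globals, **frame.f_locals))

signal.signal(signal.SIGUSR1, open_repl)
# later, from a shell: kill -USR1 <pid> drops this process into a REPL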
It's amazing: people will die on this hill for some reason but lisp isn't some kind of mysterious system that was and continues to be beyond us mere mortal language/runtime designers. the good ideas in lisp were recognized as good ideas and then incorporated and improved upon.
The answer to all these things should be "it just doesn't work in practice", not for real programs anyway. Unlike Lisp, Python doesn't lend itself well to this mode of development.
Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.
>Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.
I swear you people are like ostriches in the sand over this - Django, pytest, fastapi, pytorch, Jax, all use these features and more. I work on DL compilers and I use those features every day - python is a fantastic edsl host for whatever IR you can dream of. So just because you're in some sector/area/job that doesn't put you in contact with this kind of python dev doesn't mean it's not happening, doesn't mean that python doesn't support it, doesn't mean it's an accidentally supported API (as if such a thing could even be possible).
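To give a flavor of what "eDSL host" means here, a made-up miniature of the operator-overloading trick that tracers in these frameworks build on (capturing an expression graph, i.e. IR, instead of computing values):

class Node:
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __add__(self, other):
        return Node("add", self, other)
    def __mul__(self, other):
        return Node("mul", self, other)
    def __repr__(self):
        return f"({self.op} {' '.join(map(repr, self.args))})" if self.args else self.op

def var(name):
    return Node(name)

def f(x, y):                    # ordinary Python code...
    return x * y + x

print(f(var("x"), var("y")))    # ...traced into IR: (add (mul x y) x)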
Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over. So thanks!
> Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over.
No, you keep on misunderstanding what people are trying to tell you. It’s a communication failure. The thing that you think you are doing in Python is not the thing that people are doing in Lisp.
As an example, I suppose that when you’re developing code in Python’s pseudo-REPL you often reimport a file containing class definitions. When you do that, what happens to all the old objects with the old class definition? Nothing, they still belong to the old class.
If you did this on a REPL connected to a server, what would happen to the classes of objects currently being computed on? Nothing, they would still belong to the old class.
In Lisp, it’s different. There is a defined protocol for what happens when a class is redefined. Every single object belonging to the old class gets updated to the new class. You can define code to get called when this happens (say, you added a new mandatory field, or need to calculate a new field based on old ones — and of course ‘calculate’ could also mean ‘open a network connection, dial out to a database and look up the answer’ or even ‘print the old object and offer the system operator a list of options for how to proceed’). And everything that is currently in-flight gets updated, in a regular and easy-to-understand way.
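To see the Python side of this concretely (a throwaway example; the Point class is made up): redefining a class at the REPL, or reloading its module, leaves existing instances attached to the old class object.

class Point:
    def __init__(self, x): self.x = x
    def describe(self): return f"old point {self.x}"

p = Point(1)

class Point:                    # "redefined", e.g. after editing and re-evaluating the file
    def __init__(self, x): self.x = x
    def describe(self): return f"new point {self.x}"
    def norm(self): return abs(self.x)

print(p.describe())             # "old point 1" -- p still belongs to the old class
print(isinstance(p, Point))     # False -- the name Point now refers to a different class
# p.norm()                      # AttributeError: old instances never get the new method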
People are telling you ‘with Lisp central air conditioning, I can easily heat my house in the winter’ and you are saying ‘with Python, I can easily build a fire whenever my house gets cold too!’
As others here, I don't understand how those features (seamless continuation of a program with the exact state) could possibly work in Python.
What I do know is that the Python community has a propensity for claiming that approximations of complex features by gigantic hacks work and are sound, while they are not.
The Python community also has an extreme tolerance for unsound and buggy software that is propped up by censoring those who complain. Occasional complaints are offset by happy (and selected) marketing talks at your nearest PyCon.
I think in just about every response you left in these threads, you misunderstood what was being said. Possibly through impatience, or just plain arrogance. I really encourage you to spend some time trying to understand how interactivity/restartability (as in Lisp restarts, not process restarts) is built into the language. Especially if you're specializing in the compilers of dynamic languages.
You might also check out Smalltalk, which has a similar level of dynamism.
Listen, I've been on many sides: Lisp stuff, Python stuff, C stuff, etc. I don't think that "something has to be learned". Lisp has many good ideas; Python has good ideas. But REPL-driven development is not one of Python's. Let me explain.
You see, it's not that the REPL in Python simply does not allow something (even though it is rather primitive). Python makes it super hard to tweak things, even if you can change a certain variable in memory. Here's why.
Think about Lisp programs, including the OOP flavours. These fundamentally consist of two things: a list of functions + a list of variables. If you replace a function or a variable, then every call will go through it. And that's it. You change a function - all calls to it will be routed through the new implementation. Because of the REPL-centric culture, people really do organise their programs around this style of development.
Python was developed with a dynamic OOP idea in mind where everything is an object, everything is a reference. Endless references to references of references to references. It's a massive graph, including methods and functions and objects and classes and metaclasses. There is no single list of functions where you can just replace a name-to-implementation mapping.
TL;DR Replacing a single reference doesn't change much in the general case. It does work in some cases. But that's not enough for people to rely on it as the main development driver.
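A tiny illustration of that difference (names made up): a call that goes through the module-global name picks up a redefinition, but any reference already stashed in the object graph keeps pointing at the old function.

def f():
    return "old"

def caller():
    return f()          # f is looked up in the module globals at call time

handlers = [f]          # a reference captured in a data structure

def f():                # "redefine" f
    return "new"

print(caller())         # new
print(handlers[0]())    # old -- the graph still points at the original object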
Python fundamentally makes a different tradeoff than your average lisp.
I've mentioned this in a sibling thread, but it's interesting to compare this to Ruby. Ruby does support the sort of redefinition you're talking about. And yet REPL-centric development isn't primary there, either. Yes, there are very good REPL implementations, but I don't know of anyone who develops at the Ruby REPL the same way you would in a Lisp REPL. Maybe it's a performance thing? Maybe it's the lack of images?
BTW, you mentioned that classes can be redefined in Ruby.
How does this work for existing class instances? Anonymous pieces of code, methods, etc.? Even Lisp itself does not save you from all the corner cases; it's the dev culture that makes all these wonderful things possible.
The first time you do `class A....end` you're defining the class. Instances when they are created keep a reference to that class - which itself is just another object, an instance of the class `Class` which just so happens to be assigned to the constant `A`. If you later say `class A... end` and redefine a method, or add something new, what you're actually doing is reopening the same class object to add or change its contents, so the reference to that class doesn't change and all the instances will get the new behaviour. If you redefine a method, calls to that method name will go to the new implementation.
So in that sense it works like you'd expect, I think. As I said, Ruby is very lispy - Matz lists Lisp as one of the inspirations, and I think I'm right in saying he even cribbed some implementation details from elisp early on.
It just shows that there's no understanding of the depth of the problem.
Years ago I tried doing something like this (redefining functions, classes, etc.) in the dev environment of an MMO game. This would have been crazily useful, as the env took 5-10 mins to boot. And game logic really needs tweaking A LOT.
I really wanted this to work. After all, it really feels as if python has everything for it. Banged my head against the wall for weeks, failed ultimately and gave up on live development in python completely.
In contrast, as a heavy emacs user, I tweak my environment a couple of times a day. I restart this lisp machine a couple of times a month.
No, you're fine. For certain things Python REPL-like live development is ok indeed. Say, if your program boils down to a list of functions. Think request handlers or something.
I need to be very clear so that no one misunderstands: this is not proprietary pycharm functionality - this is all due to sys.settrace and the pydev debug protocol.
>The answer to that question is the differentiating point of repl-driven programming. In an old-fashioned Lisp or Smalltalk environment, the break in foo drops you into a breakloop.
do you want me to show you how to do this in a python repl? it's literally just breaking on exception...
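Roughly, that hook looks like this (a minimal sketch of breaking on exception via sys.settrace; real debuggers such as pydev layer a protocol and UI on top of the same mechanism):

import sys

def break_on_exception(frame, event, arg):
    if event == "exception":
        exc_type, exc_value, tb = arg
        # the raising frame is still alive here, so it can be inspected
        print(f"break: {exc_type.__name__}: {exc_value} in {frame.f_code.co_name}",
              "locals:", frame.f_locals)
    return break_on_exception

sys.settrace(break_on_exception)

def boom():
    x = 41
    return x / 0

try:
    boom()
except ZeroDivisionError:
    pass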
Since Smalltalk was mentioned, please consider the following points:
1. Smalltalk has first-class, live activation records (instances of the class Context). Smalltalk offers the special "variable" thisContext to obtain the current stack frame at any point during method execution.
If an exception is raised, the current execution context is suspended and control is transferred to the nearest exception handler, but the entire stack remains intact and execution can be resumed at any point or even altered (continuations and Prolog-like backtracking facilities have been added to Smalltalk without changing the language or the VM).
2. The exception system is implemented in Smalltalk itself. There are no reserved keywords for handling or raising exceptions. The implementation can be studied in the live system and, with some precautions, changed while the entire system is running.
3. The Smalltalk debugger is not only a tool for diagnosing and fixing errors; it is also designed as a tool for writing code (or, put differently, revising conversational content without having to restart the entire conversation including its state). Few systems offer that workflow out of the box, which brings me to the last point.
4. I said earlier that Racket is different from Common Lisp. It's not only about language syntax, semantics, its implementation or other technicalities. It is also about the culture of a language, its history, its people, how they use a language and ultimately, how they approach and do computing. Even in the same language family tree you will find that there are vast differences, if you take said factors into account, so it might be worthwhile to study Common Lisp with an open mind and how it actually feels in use.
No, it’s not: an exception unwinds the stack all the way up to where the exception is caught. By the time the enclosing Python pseudo-REPL sees the undefined function error, all the intervening stack frames have dissolved. The way it works is that a function tries code, and catches exceptions.
In Lisp (and I believe Smalltalk), it doesn’t work that way: there is an indirection. Rather than try/except, a function registers a condition handler; when that particular condition happens, the handler is called without unwinding the stack. That handler can do anything, to include reading and evaluating more code. And re-trying the failed operation.
It would be possible to implement this in Python, of course, but it doesn’t offer the affordances (e.g. macros) that Lisp has, and it’s not built into the language like it is in Lisp (e.g., every single unoptimised function call in Lisp offers an implicit ‘retry’).
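For the record, a rough sketch of what that indirection could look like in Python (all names invented; a real condition system is far richer): the handler runs at the point of the error, before anything unwinds, and can hand back a value so the computation just continues.

handlers = []           # innermost handler last, like a dynamically scoped stack

def signal_condition(condition):
    for handler in reversed(handlers):
        result = handler(condition)
        if result is not None:      # handler chose a "use this value" restart
            return result
    raise condition                 # no handler took it: unwind, Python-style

def lookup_price(item):
    prices = {"apple": 3}
    if item not in prices:
        # the stack is still intact here; the handler decides what happens next
        return signal_condition(KeyError(item))
    return prices[item]

def total(items):
    return sum(lookup_price(i) for i in items)

handlers.append(lambda cond: 0)     # policy: treat unknown items as price 0
print(total(["apple", "durian"]))   # 3 -- execution continued from the error site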