I also think it’d be a bad idea to make it too easy to write emacs extensions in anything but a Lisp, because that would fragment the ecosystem. If we’re going to replace elisp, it should be with a better Lisp, not with a lesser language (e.g. Lua, Python, Ruby … whatever).
The great thing about a JIT is that we get a speedup without changing the actual implementation language or fragmenting the ecosystem.
Take that, put a little work into it, and one could presumably replace the C core of Emacs with Common Lisp.
But in our actual world it's also legal, though publishers may play dirty and somehow claim rights because you published something based on their edition?
Just thinking out loud.
Maybe not totally, but mostly. Beyond that, he does not care about it and isn't interested in it.
I code in C regularly, yet I do not care and am not interested in C after C90. Only some library features of C99 and that's about it.
Stallman's concepts of what is Lisp, how to use Lisp, how to implement Lisp, were formed long before Common Lisp.
If you're in that position, it's easy to have allergic reactions to requirements you don't agree with.
Maclisp and its successor, Lisp Machine Lisp, were probably his biggest influences. Emacs Lisp is based on Maclisp, without the features of Lisp Machine Lisp such as keyword arguments, Flavors as the object system, and so on.
> So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large.
I'd hardly say he hates it based solely on that essay.
I kept keyword arguments out of TXR Lisp; that is, out of the function call mechanism. It doesn't seem like a good idea to deal with a dictionary-like mechanism in function calls.
I invented something called "parameter list macros". A parameter list macro can be written which adds keyword argument ability to a function (and such a macro is provided). You can do (lambda (:key a b c -- x y) ...) and now you have keyword parameters x and y. They support the (sym dfl-val present-p) syntax and all. The :key parameter macro implements them via a source transformation. :key is able to rewrite both the parameter list and the body of the function to make this work.
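For readers more at home in Python, the effect of such a parameter list macro can be loosely mimicked with a decorator that rewrites the function's calling convention. This is only an analogy, not TXR's mechanism, and all names below are invented for illustration:

```python
# Loose Python analogy to a ":key" parameter list macro: a decorator
# that grafts keyword parameters onto a purely positional function.
def keyify(*names):
    def wrap(fn):
        def inner(*args, **kw):
            # Append the keyword values in declared order, the way the
            # rewritten parameter list would receive them.
            return fn(*args, *(kw.get(n) for n in names))
        return inner
    return wrap

@keyify("x", "y")
def f(a, b, c, x, y):          # fn itself stays positional
    return (a, b, c, x, y)

f(1, 2, 3, y=4)                # -> (1, 2, 3, None, 4)
```

The decorator plays the role of the source transformation: callers see keyword parameters, while the underlying function body is unchanged.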
One reason Common Lisp programmers use keyword parameters is that there is no way to invoke the default value of an optional parameter which precedes another optional parameter to which an argument is being specified. If a function has optional arguments X and Y, the caller cannot default X if it specifies Y.
I fixed that in TXR Lisp: the symbol : (colon) can be passed to an optional parameter to activate its default value. So if a binary function f has two optional parameters we can invoke (f : 3) to default the first optional, and specify the second one as 3. This special feature is only supported in function calls, not in macro/destructuring parameter lists.
That : symbol also serves as the &optional keyword, separating the required parameters from the optionals. It harmonizes with the consing dot that demarcates the &rest parameter: (lambda (x y : z w . r) ...): x y required, z w optional, r rest. The : symbol is nothing more than the symbol named "" (empty string) in the keyword package. It provides a very useful third value alongside the nil and t duo. Common Lisps have this symbol! Unfortunately, they tend to print it back at you funny, like :||, and neglect to get any mileage out of its notational convenience.
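To make the defaulting convention concrete, here is a rough Python sketch of the same idea, using a sentinel object where TXR uses the : symbol. The names are illustrative, not from TXR:

```python
import inspect

DEFAULT = object()   # stands in for TXR's ':' symbol (illustrative)

def colon_defaults(fn):
    """Let callers pass DEFAULT to an optional parameter to say
    'use this parameter's declared default'."""
    sig = inspect.signature(fn)
    def wrapper(*args):
        params = list(sig.parameters.values())
        fixed = [p.default if a is DEFAULT else a
                 for p, a in zip(params, args)]
        return fn(*fixed)
    return wrapper

@colon_defaults
def f(a, b=10, c=20):
    return (a, b, c)

f(1, DEFAULT, 3)     # -> (1, 10, 3): first optional defaulted, second set
```

The point is that the earlier optional is defaulted positionally, without any keyword machinery.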
As for those member functions, I provide a memq, memql and memqual that use the three different equalities. The more general member function takes a key and test function, but not as keyword arguments but positional optional parameters. If you want the default test, but custom key: [member foo bar : mykey].
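A minimal Python sketch of such a member function, with test and key as positional optional parameters rather than keywords (my own illustration, not TXR's implementation):

```python
# member returns the tail of the sequence starting at the first match,
# in the style of Lisp's member. test and key are positional optionals.
def member(item, seq, test=None, key=None):
    test = test or (lambda a, b: a == b)   # default: plain equality
    key = key or (lambda v: v)             # default: identity
    for i, v in enumerate(seq):
        if test(item, key(v)):
            return seq[i:]
    return None

member(3, [1, 2, 3, 4])                    # -> [3, 4]
member("b", ["A", "B"], None, str.lower)   # default test, custom key
```

Passing None for test while supplying key is the Python analogue of [member foo bar : mykey].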
The reason for Guile Emacs is that Guile is the official extension language of the GNU project. It is a very capable implementation, and its elisp implementation is a lot cleaner than the one in Emacs.
You would also gain a lot of features not available in elisp: proper threading (with both pthreads and fibers), delimited continuations, and much more.
Not only that, you would get a runtime for elisp that works outside Emacs. Imagine being able to write programs and just load guile-org-mode and be able to script the org process without invoking Emacs.
Weren't people saying the same about Python and Java long ago?
How do you arrive at that conclusion?
> Indeed, elisp’s weaknesses are IMHO mainly where it’s most akin to Scheme (e.g. its flat namespace)
Racket at least wasn't flat, IIRC; is Guile Scheme's?
> The great thing about a JIT is that we get a speedup without changing the actual implementation language or fragmenting the ecosystem.
> Weren't people saying the same about python and Java long ago?
I spent roughly a decade as a professional Python developer, and in retrospect I’d probably agree that it’s not well-suited to large applications. Why, exactly, is for another discussion.
As for Java — it’s certainly not meant for building small ones!
> How do you arrive at that conclusion?
The same way that the Scheme committee did, when they came up with R6RS: by trying to use Scheme for large systems.
It lacks features which aid building large systems (e.g. namespaces); it lacks features which enable building industrial-strength systems (e.g. the Lisp condition system or CLOS); it has features which make code more complex and tend to hinder performance (e.g. call/cc); and it even has a broken feature (dynamic-wind).
Then there are things like: conflating functions, variables & all other names; breaking NIL into NIL, () & #f. Those are partly a matter of taste, but I think also an indication of Lisp’s pragmatic nature: in practice, doing things the Lisp way is better, even though in theory doing them the Scheme way is.
> Racket at least wasnt flat iirc, is guile scheme?
Racket isn’t Scheme anymore. That’s not a bad thing, and indeed Racket is pretty amazing. I wish that the same amount of effort had been expended making Lisp better, but everyone’s gotta scratch his own itch.
The only thing missing is really the condition system, but that could be bolted on (that is the correct term) using continuations (preferably delimited). There is even a suggestion for r7rs-large to include them. I hope it gets voted in.
I’m sorry to be that guy, but did you ever try it out for yourself?
Just starting Emacs took as long as a full regular Emacs elc-build, if not longer.
So not exactly faster by a universal standard.
I see no technical reason why guile-elisp should be any slower than normal elisp. It's just that some of the optimizations have not yet been put into place. After a switchover all the improvements from Guile will come for free in the future.
See the branches named "master", "lightning" and "wip-elisp" here: http://git.savannah.gnu.org/cgit/guile.git
It doesn’t take much to terminally derail an effort driven by only a single contributor.
Good on Aleksey for picking it up and modernizing libjit to this point.
And for everyone who thinks of LLVM in this context, here's a thread with some "primary sources".
 - http://lists.gnu.org/archive/html/dotgnu-libjit/2004-05/msg0...
Richard> I don't think a 3% speedup is worth those drawbacks.
Richard> Or even a 10% speedup.
Richard> A really big speedup would justify the costs.
Tom> It is 3x, not 3%.
In some simple benchmarks, it is about 3x faster than the bytecode interpreter.
I'm always skeptical of statements like these, because workloads vary so much.
JITs seem to do well for numerical benchmarks, e.g. summing a list of numbers or the mandelbrot fractal.
They seem to do worse with string-based workloads, because the bottleneck is in memory allocations, and I have yet to see a JIT that does anything about that (i.e. analyzing code to reduce allocations).
I imagine that ELisp is used mostly for string workloads and not numeric workloads, so I won't be surprised if the 3x number doesn't hold up. I'm interested in hearing more details and happy to be corrected.
From what I gather, it's a lot more important in PyPy because integers and floats are boxed!
You mean you've never seen a JIT that does anything about memory allocations for ELISP? Or do you mean you've never seen a JIT do anything at all about memory allocations?
Because removing memory allocations through escape analysis and scalar replacement is a key feature of any sophisticated JIT, and there are definitely many JITs which do this.
The JIT for Ruby I work on will effectively remove the allocation of string objects.
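For readers unfamiliar with the technique, here is scalar replacement written out by hand in Python. The "after" version is roughly what a JIT's escape analysis lets it derive from the "before" version; this is a hand illustration, not any particular JIT's output:

```python
# Before: every call allocates a Point that never escapes the function.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def dist2_alloc(ax, ay, bx, by):
    p = Point(ax - bx, ay - by)   # allocation on every call
    return p.x * p.x + p.y * p.y

# After scalar replacement: since p never escapes, its fields become
# plain locals and the allocation disappears entirely.
def dist2_scalar(ax, ay, bx, by):
    px, py = ax - bx, ay - by     # no allocation
    return px * px + py * py
```

The two functions compute the same result; the second is simply what the first becomes once the compiler proves the object cannot be observed outside the call.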
I have been reading some papers on JITs and I don't see escape analysis mentioned that often. In the PyPy paper (which is over a decade old) they mention it as future work.
Still, I actually tried PyPy on a string-based workload and it was slower than CPython and used more memory. I don't know why but that contributes to my feeling that JITs are bad for string-based workloads.
I'm interested in seeing any pointers to benchmarks that show the improvements resulting from escape analysis in JITs. I haven't seen anything like that and I've done a decent amount of research.
A cursory look at this blog post makes me think it's not super straightforward:
That post is less than a year old! The fact that V8 has been around for 10+ years and they're still updating escape analysis makes me wonder what the issue with it is: is it hard to implement, or does it not produce that much speedup? I appreciate any pointers.
If you've done a decent amount of research in the field of JITs and you aren't aware of what escape analysis achieves in practice then I'm very surprised.
That paper is relatively recent, so you can follow the chain of papers from its references.
Now that I'm implementing an interpreter for a relational language, here is what I know about why Python is slow: it's challenging to be dynamic and also fast. So you need to design the language/runtime with performance in mind, or at least minimize what could be very slow (that is what I'm trying to do).
Stuff like that adds up. It's why Lua is so fast compared to JS, too.
(defun get-x () x)
(defun foo () (let ((x 7)) (get-x)))
(foo) ;; => 7 (not shadowing x)
(let ((x 4)) (foo)) ;; => 7 (is shadowing x)
(setq old-get-x (symbol-function 'get-x))
(defun get-x () (let ((x 10)) (funcall old-get-x)))
(foo) ;; => 10
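The dynamic-scoping behavior above can be emulated in Python with an explicit binding stack, which may make the example clearer to non-Lispers (all names invented for illustration):

```python
# Rough emulation of elisp's dynamic scoping: a single "x" variable
# implemented as a stack of active bindings.
_x_bindings = []

def get_x():
    return _x_bindings[-1]        # sees the innermost active binding

def with_x(val, fn):              # plays the role of (let ((x val)) ...)
    _x_bindings.append(val)
    try:
        return fn()
    finally:
        _x_bindings.pop()

def foo():
    return with_x(7, get_x)

foo()              # -> 7
with_x(4, foo)     # -> 7: foo's own binding shadows the caller's
with_x(4, get_x)   # -> 4
```

As in the elisp snippet, the value get_x sees depends on whoever bound x most recently on the call stack, not on where get_x was defined.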
As an aside, I see there's a GitLab project for the benchmarks game, but I was surprised that no one appears to have put together an automated, open, bring-your-own-language/code/patches version. Maybe with some community voting (e.g. prefer idiomatic vs. max speed)?
Why haven't you? That's probably why no one else has :-)
Instead be surprised that someone has continued to push the benchmarks game along, and that people have continued to contribute programs gratis.
I suspect it might have to do with resources - dedicating multiple cores to benchmarking isn't going to be easy to do for free. Might be feasible for low cost, though.
Of course it is. That guy's an amazing wizard.
Applying some commonly recommended optimizations, you can cut Emacs init time to about 2 seconds from whatever your previous init time was (7 in my case, but I have also read about 60s -> 2s improvements).
Init time alone greatly affects how a given system is perceived.
If you then write your init file(s) in a certain way (use autoloads) you can maintain this performance while retaining all your customizations.
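The autoload idea, sketched in Python terms: a cheap stub stands in for the real function until the first call. This is illustrative only; Emacs's actual autoload mechanism resolves names via its load path, not Python modules:

```python
import importlib

def autoload(module_name, func_name):
    """Return a stub that imports the real module only on first call."""
    def stub(*args, **kw):
        fn = getattr(importlib.import_module(module_name), func_name)
        return fn(*args, **kw)
    return stub

sqrt = autoload("math", "sqrt")   # "math" is not imported yet
sqrt(9.0)                         # -> 3.0; module loaded on first use
```

Startup stays fast because defining stubs is nearly free; the cost of loading is deferred until a feature is actually used.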
The reason almost nobody does this (I don't, for example; my Emacs starts up in many seconds) is that the workflow for those who use Emacs is not to restart it all the time.
I don't reboot my computer either every time I need to open a new browser tab, and I don't need to restart Emacs every time I need to open a new file, so I really don't care if it takes 10 seconds to start it.
Sure thing, but most real-world setups have multiple heavy packages installed.
> If you then write your init file(s) in a certain way (use autoloads) you can maintain this performance while retaining all your customizations.
Worth noting that autoloading is useful only up to a point: past a threshold, the gains are only theoretical (your reported startup was 0.1s, but immediately after startup Emacs froze for 2 seconds because all the autoloads were now doing their work).
In the end when I open Emacs I want a bunch of stuff to happen (open files, color them with syntax, etc) - some work has to be performed sooner or later.
> the workflow with Emacs for those who use it is not to be restarting it all the time.
I start it a few times a day, so as not to accumulate state, particularly state related to Clojure nREPL connections. Not a costly operation.
Sure, I wouldn't use it in this mode, but this line to the effect of "it takes seconds to start up" is usually uttered by people who are more used to the likes of nano or vi. You can also use those command-line options to start Emacs in that sort of one-off mode.
> your reported startup was 0.1s, but immediately after Emacs startup, it froze 2 seconds.
That's odd, I can start `emacs --no-init-file --no-site-file --no-splash -nw <file>` and write something to the file, C-x C-s C-x C-c to write + exit with no more noticeable delay than doing the same with mg, nano or vim on my system. This is with Emacs 25.2.2.
> I want a bunch of stuff to happen (open files, color them with syntax, etc).
Opening a file and having it syntax-colored takes less than a second (feels like at most 1/8 of a second) in that mode.
> I start it a few times a day[...]
Doesn't Clojure have some equivalent of M-x tramp-cleanup-all-connections? Occasionally I'll mass-close buffers, but the uptime of my Emacs tends to match my laptop's, usually about a month (until a reboot for security updates and the like).
I believe this mode of use is more typical than restarting it this often.
> Opening a file...
Maybe my example wasn't clear enough. When I open Emacs a lot of functionality will be 'there', as it happens with an IDE. Think of a project tree, terminal, misc functionality, and unavoidable dependencies.
> Doesn't Clojure...
The Clojure(Script) tooling stack has many layers from different authors, so it's reasonable to not trust things to work after several connect/disconnect cycles.
My emacs gets restarted when my laptop reboots. Which happens on a timescale of months. I honestly don't care if it takes minutes to start, so long as it's super responsive once it's running.
I feel like maybe something's been missed if you're starting emacs all the time. But then, I feel the same way about Linux, and clearly many many people disagree with me.
So the comment in that thread about AOT compilation was extra interesting for me :-)
A clear example being restarting your computer. Sometimes I do it for no good reason other than knowing that state will be pristine when next time I boot it.
git clone git@github.com:emacs-mirror/emacs.git
git checkout feature/libjit
# Instructions for macos from
brew install autoconf automake texinfo
Richard> I don't think a 3% speedup is worth those drawbacks [ed: added complexity]. Or even a 10% speedup. A really big speedup would justify the costs.
Tom> It is 3x, not 3%.
Bytecode is still useful even with a JIT (just like in Java): without it, the JIT would need to redo the heavy parsing, semantic analysis, etc. every time it starts.
However, with a JIT it can execute the bytecode faster than an interpreter can, because it can keep more intermediate values directly in registers (e.g. in an xmm register instead of on the heap) and can do a tiny bit of CPU-specific optimization (like checking whether AES-NI is available, etc).
> To replace an interpreter by a JIT compiler means more complexity and
> also more possible problems. (For example, if there are platforms
> someday that libjit does not support.) Reading a Lisp interpreter is
> very useful for learning.
> If the plan is to add a jit and keep the Lisp interpreter as well,
> we don't lose its advantages for study, and we can still support all
> the platforms -- but we add complexity even more.
> I don't think a 3% speedup is worth those drawbacks. Or even a 10%
> speedup. A really big speedup would justify the costs.
Why do they still let RMS have a say on technical matters today?
No wonder Emacs is losing users.
Jitting engines bring extra dependencies and one more layer of complexity to the C core of the editor. And not too many editors support a full language of their own. Notice that Emacs is mostly a volunteer effort and the resources are very limited.
For example this particular engine seems to offer a good speed boost on numeric operations but one of the replies mentions that realistic complex code is sometimes slower than a pure byte-code VM. Probably a bug but still...
After all those failed attempts, a certain level of scepticism is understandable.