If you are used to any reasonably modern programming language, writing any serious Elisp is an exercise in frustration. It is hard to even find out how to accomplish basic things, and once you do, there are tons of pitfalls. It is a legacy platform, and it feels like its evolution over time was largely accidental. I am talking about doing basic things: working with lists, maps and strings, opening files, running processes, making HTTP requests, etc.
For example, if you want to use a map, you have three choices: alists, plists or hash tables. There are no namespaces in Emacs Lisp, so for each of the three data types you get a bunch of functions with weird names. For alists, get is assoc and set is add-to-list; for hash tables, get is gethash and set is puthash; for plists, get is plist-get and set is plist-put. For each of those types it is easy to find basic use cases that are not covered by the standard library, plus it is easy to run into performance pitfalls, so you end up rewriting everything several times to get something working. The experience is the same across the board when working with files, working with strings, running external processes, etc. There are third-party libraries for all those things now because using the builtins is so painful. In many ways it feels like programming some old web browser, where you either need to patch the standard library with 15 dependencies or learn complicated incantations for really basic tasks.
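To make the inconsistency concrete, here is a sketch of the same get/set operations across the three map types (the function names are the real built-ins; the data is made up):

```elisp
;; Alist: look up with assoc/alist-get; "set" by pushing a new pair.
(setq my-alist '((a . 1) (b . 2)))
(alist-get 'a my-alist)                 ; => 1
(push '(c . 3) my-alist)

;; Plist: plist-get / plist-put.
(setq my-plist '(:a 1 :b 2))
(plist-get my-plist :a)                 ; => 1
(setq my-plist (plist-put my-plist :c 3))

;; Hash table: gethash / puthash.
(setq my-hash (make-hash-table :test 'eq))
(puthash 'a 1 my-hash)
(gethash 'a my-hash)                    ; => 1
```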
The ideas behind Emacs and Emacs Lisp are great, so I really wish someone would succeed in creating a modern implementation from scratch and that it would gain some traction, basically what Clojure did for Common Lisp. By the way, even Common Lisp is way more modern than Emacs Lisp...
Some of the things you mention are also discussed in the linked paper. For example, quoting from Section 8.11 (Module system):
"... other than inertia one of the main arguments in favor of the status quo was that Elisp’s poor man’s namespacing makes cross-referencing easier: any simple textual search can be used to find definitions and uses of any global function or variable ... "
A few of the issues you mention can likely be improved or resolved by using a better abstraction, or by deprecating features that are no longer needed. The Emacs community is typically very responsive and willing to discuss many issues in great detail. For example, if you run into performance issues or missing features, you can use Emacs itself to file a bug report, using:
M-x report-emacs-bug RET
As an example for switching abstractions and also addressing one of your points, I find that working on strings can be extremely frustrating and error-prone, both in Elisp and also in other languages. In Emacs, it is often much more convenient to work on buffer contents instead. So, to perform complex string operations in Elisp, I often spawn a temporary buffer (with-temp-buffer), insert the string, and then perform the necessary modifications on the buffer content. A significant advantage of this is that I can use common editing operations such as moving forward or backward by several lines or characters. When done, I fetch the whole buffer content (using for example "buffer-string"), and use that as the result. In many cases, and especially if the data are logically structured, it is better to use Lisp forms instead of plain strings.
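A minimal sketch of that workflow (the helper name is made up):

```elisp
;; Sketch: upcase the first word of a string by editing a temporary buffer.
(defun my-upcase-first-word (s)
  (with-temp-buffer
    (insert s)
    (goto-char (point-min))
    (upcase-word 1)        ; an ordinary editing command, used programmatically
    (buffer-string)))

(my-upcase-first-word "hello world")    ; => "HELLO world"
```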
However, some of the basic things you mention are in my experience quite straightforward. For example, with only a few lines of code, I can copy the front page of HN as HTML text (in fact, the whole HTTP response) into the current Emacs buffer:
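The snippet itself was not included here; a sketch along the same lines, using the built-in url library, might look like this:

```elisp
;; Hypothetical reconstruction: insert the raw HTTP response (headers and
;; body) for the HN front page into the current buffer.
(require 'url)
(insert-buffer-substring
 (url-retrieve-synchronously "https://news.ycombinator.com"))
```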
This automatically spawns a TLS connection, and I find this API quite convenient. I can then operate on the response as on any buffer content. Also, I can inspect the connection for example with list-processes, and many other functions. Spawning processes is quite similar, using for example start-process.
To write code that performs decently in Emacs Lisp, you have to work with buffer contents rather than substrings, and that is just another frustration: the primitives for working with buffer contents are low-level and heavily stateful, so you end up writing imperative code that looks a lot like Pascal, with while loops conditioned on regexp searches, tons of setq's, moving point and mark around, etc. It isn't a convenient way of programmatically modifying text. The basic examples of doing search and replace described here:
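(The linked examples are not reproduced here, but the shape of this style is roughly the following, with made-up input and real primitives:)

```elisp
;; Imperative search-and-replace on buffer contents: every "foo" -> "bar".
(with-temp-buffer
  (insert "foo and foo again")
  (goto-char (point-min))
  (while (re-search-forward "foo" nil t)
    (replace-match "bar"))
  (buffer-string))                      ; => "bar and bar again"
```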
I think the issue is not that there are missing features, it is that there are too many features. Elisp would benefit a lot from pruning and standardization of its many, many, many builtins.
This is a very hard problem to recover from, because backwards compatibility is so important for something like Emacs. Breaking old versions would be completely unacceptable, which makes it very difficult to remove any features.
The great irony is that elisp, as with most of the languages of its venerable era, is not very good at processing text.
One of my emacs experiences is playing "escaping the regex" about once annually, where I need to use backslashes in a regex, but first have to escape an emacs string.
I'm not very good at Elisp, so this is what a basic beginner regex looks like to escape a ' in a shell (so, I want the bash command to go from ' to \'). The regex in Emacs I came up with was:
(replace-regexp-in-string "'" "'\\\\''" $)
Now maybe there is some sort of shortcut for passing a string straight to a regex that will make me look silly, but for a text editor with Emacs' power and flexibility to make a regex look that hard brought a smile to my face.
Emacs has dedicated functions that let you handle such cases more conveniently. For example, in your case, the function regexp-quote may be useful:
(regexp-quote "\\") ⇒ "\\\\"
In addition, structured data are often better represented as trees. In Emacs, you can use the function "rx" to translate Lisp forms from a dedicated tree-based mini language into regexp strings. The primary advantage is that you can more readily reflect the logic of the intended regular expression in this way.
For example:
(rx (or "a" "b")) ⇒ "[ab]"
and:
(rx (and (or "a" "b" "test") "c")) ⇒ "\\(?:test\\|[ab]\\)c"
I'd be curious what you think are good languages at processing text. A sibling post points out that you are probably just not using some functions that would help. To that end, look into them. I suspect the problem is that you are wanting tools from your current toolbox to look a lot more familiar, though. Hence why I am asking what languages you think are good at processing text.
Elisp is odd to me, because you need to get used to thinking in terms of buffers of text. Most commands require a trip through a buffer. Which is not at all bad, but is very foreign to my expected view.
That said, once you get used to basically programming how you interact with a buffer, it does get very fast very quickly. And can often read rather nicely.
Well, I suppose there are two ways to be 'good at processing text'. The first way is to quickly and efficiently let the user do what they want, which is how Python approaches the problem, e.g., str[0:len(str)-1] + "'\''". I've only learned Python in the last 12 months, and I need access to a lot less documentation than I need in Elisp to write that.
The second way is to have a model that is so strong that it is worth the user learning that model (this is a rarer approach, but is more common in the lisp families - Clojure, for example, does this with variables). If elisp has this, it would be a good idea for someone to mention that in the documentation, because I don't recall ever seeing anyone say "Wow! This elisp model of processing text is amazing! I've been doing it wrong my whole life!". I have seen variants of "which is not at all bad, but is very foreign".
I don't know of a best language, but manipulating text strings is the foundation of most web serving. So I'd say that the popular languages for that (python, perl, php, etc) are quite a bit better than elisp at processing text.
This actually takes it in a direction I disagree with.
Once you get used to manipulating buffers, elisp actually is easier than most other languages. What it is bad at is processing strings. Because it is good at processing text buffers.
I am on my phone, but if this thread is still going this evening, I'll show roughly what that snippet would look like. Will be, roughly, (forward-word) (move -1) (insert "/"). Obviously, some differences if you were moving words or paragraphs. But basic idea is the same.
That is, don't try and manipulate strings. Manipulate the entire buffer. If needed, build a new buffer with what all you want.
The irony is that elisp has a decent dsl for manipulating text. But you have to embrace the mutable nature of it. Which is unexpected for many working with a lisp.
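A sketch of that buffer-embracing style, applied to the shell-quoting example from upthread (the function name is made up):

```elisp
;; Escape every ' for the shell by editing a temporary buffer,
;; instead of wrestling with doubled backslashes in a regexp.
(defun my-shell-quote-body (s)
  (with-temp-buffer
    (insert s)
    (goto-char (point-min))
    (while (search-forward "'" nil t)
      (replace-match "'\\''" t t))      ; literal replacement, no regexp escapes
    (buffer-string)))
```

Here (my-shell-quote-body "it's") yields the characters it'\''s, with no regexp syntax involved at all.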
TXR Lisp is a dialect that has decent features for munging text. It's part of a two-language combination called TXR; the other language is the TXR Pattern Language: a notation for matching small and large scale structures in text documents and streams.
TXR Lisp allows various classic Lisp operations to also be used on vectors and strings (think car, cdr, mapcar* and numerous others). It has array referencing that support Python's negative index convention. Sequence subranges are assignable places, so if s holds "firetruck" then (set [s 1..-1] "lic") changes s to "flick".
TXR Lisp is loaded with useful features, all packed into a small executable accompanied by a small library of Lisp files.
The phrase "what Clojure did for Common Lisp" reads awkwardly for me. What did Clojure do for Common Lisp?
I think most of these are supported through generic functions. Though, I think moving beyond alists is something you probably only ever have to do once you are writing programs that do some heavy lifting. In which case, I am curious whether Elisp is the best place to write that program.
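The generic functions referred to are presumably the ones in the built-in map.el library (Emacs 25+), which papers over the different map types with one API:

```elisp
;; map.el: one generic interface over alists, plists, and hash tables.
(require 'map)
(map-elt '((a . 1) (b . 2)) 'a)         ; => 1
(let ((h (make-hash-table :test 'eq)))
  (map-put! h 'a 1)                     ; map-put! is available in recent Emacs
  (map-elt h 'a))                       ; => 1
```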
Your list of things that are painful in Elisp: working with lists and maps? For real? Opening files? find-file is hard?
As for it being hard to find out how to accomplish basic things, what about the extensive in-editor help system? C-h f for functions, C-h a for apropos and C-h i for Info? It would be hard to find a better-documented software system anywhere.
Sure you need a few libraries to make things better, but the fact that there is a healthy ecosystem of packages is a good thing.
Maps are fiddly in Common Lisp too, all the data structures feel a bit crufty compared to Clojure.
Elisp is being improved with every release. Lexical scoping was added without much ado; this release got concurrent threads and radix trees. It's not so broken that we need to throw it away and start again. IMHO it's not broken at all.
Rewriting to modernize would effectively kill emacs lisp.
But if you work with file contents as strings in Emacs Lisp, you are going to have a bad time, because the performance of this is bad enough to easily turn into a problem. So instead you insert-file-contents into a temporary buffer and then do a ton of low-level, error-prone imperative processing on the buffer contents itself, using things like goto-char, search-forward, markers, etc. Most of the major Emacs packages are littered with code like this; here is one random example:
That's indeed completely unreadable and I wrote it. But it also isn't typical. In this particular case I replaced an earlier "functional" implementation with this abomination to fix a performance bottleneck. Though I think I might have gone overboard -- unfortunately the commit message isn't very good either and doesn't explain why it was necessary to switch to this style of doing things in addition to what the message actually talks about.
(--map (list (substring it 3)
             (aref it 0)
             (aref it 1))
       (magit-git-items "status" "-z" "--porcelain" "-u" "--ignored"))
Generally speaking you don't have to write code like what you linked to, unless you have a very good reason to do so (I doubt that I had one in this case, but who knows).
> creating a modern implementation from scratch and it gaining some traction, basically what Clojure did for Common Lisp
It's not a new implementation - it's a fully incompatible new language. Zero CL code runs in Clojure because the operators are incompatible/different, and any existing CL code mostly needs to be re-architected, since Clojure has a very different approach (more functional, less object-oriented, less state, lazy data structures, no linked lists, hybrid libraries/language partly using the hosted language, ...).
Well, Guile supports Elisp. The project was to replace the Emacs Elisp runtime with the Guile VM, which would bring some benefits (a runtime that could be used without Emacs, speed, etc.).