
The Emacs dumper dispute - jordigh
https://lwn.net/SubscriberLink/707615/286fbe1405669d74/
======
drfuchs
OK, if you promise to stay off my lawn, I'll explain the history behind
undump. Back in the 70's, the big CS departments typically had DEC 36-bit
mainframes (PDP-10, PDP-20) running the Tops10/Tops20/Tenex/Waits/Sail family
of operating systems. These are what Knuth used to do all of TeX, McCarthy
LISP, and Stallman and Steele EMACS. Not Unix; and Linus hadn't touched a
computer yet.

Executable program files were not much more than memory images; to run a
program, the OS pretty much just mapped the executable image into your address
space and jumped to the start. But when the program stopped, your entire state
was still there, sitting in your address space. If the program had stopped due
to a crash of some sort, or if it had been in an infinite loop and you had hit
control-C to interrupt it, the program was still sitting there, even though
you were staring at the command prompt. And the OS had a basic debugging
capability built-in, so you could simply start snooping around at the memory
state of the halted program. You could continue a suspended program, or you
could even restart it without the OS having to reload it from disk. It was
kind of a work-space model.

Translating into Linux-ish, it's as if you always used control-Z instead of
control-C, and the exit() system call also behaved like control-Z; and gdb was
a builtin function of the shell that you could invoke no matter how your
program happened to have been paused, and it worked on the current paused
process rather than a core file (which didn't exist).

The OS also had a built-in command to allow you to SAVE the current memory
image back into a new executable file. There wasn't much to this command,
either, since executables weren't much more than a memory image to begin with.
So, the equivalent of dump/undump was really just built into the OS, and
wasn't considered any big deal or super-special feature. Of course, all
language runtimes knew all about this, so they were always written to
understand as a matter of course that they had to be able to deal with it
properly. It pretty much came naturally if you were used to that environment,
and wasn't a burden.

Thus, when TeX (and I presume the various Lisps, Emacsen, etc. that were
birthed on these machines) were designed, it was completely expected that
they'd work this way. Cycles were expensive, as was IO; so in TeX's case, for
example, it took many seconds to read in the basic macro package and standard
set of font metric files and to preprocess the hyphenation patterns into their
data structure. By doing a SAVE of the resulting preloaded executable once
during installation, everyone then saved these many seconds each time they ran
TeX. But when TeX was ported over to Unix (and then Linux), it came as a bit
of a surprise that the model was different, and that there was no convenient,
predefined way to get this functionality, and that the runtimes weren't
typically set up to make it easy to do. The undump stuff was created to deal
with it, but it was never pretty, since it was bolted on. And many of us from
those days wonder why there's still no good solution in the *nix world when
there are still plenty of programs that take too damn long to start up.

~~~
agumonkey
Seems like everything was better in the old days.

------
kazinator
I ported undump from Emacs to GNU Make once!

I was working in an organization that developed a big network switch, with a
large C++ application running on it whose non-recursive Makefile took 30
seconds just to load and parse all of the include makefiles throughout the
tree, before actually building anything.

Half a minute of waiting, just for that second it then takes to recompile a
single .cpp to a single .o and link everything.

I got tired and added a "make --dump" option which used the GNU Emacs undump
code to dump an image of make with all the rules loaded from the Makefile.
Then "make --restart" would _instantly_ fire off the incremental rebuilds. (Of
course, any changes to the makefiles or generated dependency makefiles
required a new --dump to be taken to have an accurate rule tree.)

Another idea would be just to add a darn REPL to make, so you can keep it
running and just re-evaluate the rule tree.
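
The "REPL for make" idea can be sketched in a few lines. This is a toy
illustration, not GNU Make's actual internals; the rule table, file names,
and actions are all invented. The point is that the rule tree is parsed
once and stays resident, so each incremental build skips the parse:

```python
import os

# Toy rule tree: target -> (sources, build action). In real GNU Make this
# table would be the result of the expensive 30-second parse of all the
# included makefiles; here it is hand-written and hypothetical.
RULES = {
    "app.o": (["app.cpp"], lambda: open("app.o", "w").write("obj")),
    "app":   (["app.o"],   lambda: open("app", "w").write("bin")),
}

def mtime(path):
    # Missing files sort as infinitely old, so they always get built.
    return os.path.getmtime(path) if os.path.exists(path) else -1.0

def build(target):
    # Depth-first: bring sources up to date, then rebuild the target if
    # any source is now newer than it (make's core algorithm).
    sources, action = RULES.get(target, ([], None))
    for src in sources:
        if src in RULES:
            build(src)
    if action and any(mtime(s) > mtime(target) for s in sources):
        action()

def repl():
    # The "REPL for make": rules stay resident between builds, so each
    # incremental rebuild skips the parse entirely.
    while (line := input("make> ").strip()) != "quit":
        build(line)
```

A real version would also need to notice edited makefiles and re-parse,
which is exactly the staleness problem the --dump approach had.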

~~~
sqeaky
Isn't the ninja build system just a faster replacement for make? I use CMake
primarily; it lets me create makefiles, ninja files, or VS projects. I noticed
that in the general case ninja is 5% to 10% faster than make on builds that
take more than a few seconds.

Then I found that if I did serious file manipulation at build time, like
copying trees of files dependent on other things in the build, I could have
tens of thousands of targets, usually one per file. Ninja might hiccup for a
fraction of a second on these shenanigans, but make often sits and spins for
20 or more minutes.

Unless you want to write the makefile yourself why not use ninja?

~~~
kazinator
Because then I have to ask users to install ninja before they can build my
program. In the projects I'm working on, I don't have any of the issues that
ninja solves.

~~~
viraptor
You can generate both. That gives you ninja for development (where you hit the
problem you described), while the official builds can still use the makefiles,
which should result in the same output.

~~~
kazinator
> _You can generate both._

That means that a user who wants to patch the build rules has to have the
generator, and learn the generating language instead of the Make language he
or she already knows.

Autoconf has this disease. You can build from an official tarball, but touch
anything (or use a git checkout) and you need auto-this to generate auto-that.
Not just any auto-this, but a specific version that is seven releases behind
current, or else three releases ahead of what your distro provides.

> _should result in the same output_

It should; but someone has to ensure that it _does_. That's just another
unnecessary concern that doesn't actually have anything to do with the
functionality of whatever is being built. We would like to spend our QA cycles
validating the program, not three ways of building it.

Best to have just one way to build, and don't require users to install
extraneous tools.

~~~
avar
You've confused autoconf with automake. The output from autoconf is just a
list of variables that are sourced by your handwritten Makefile, which you can
supply yourself if you don't feel like executing autoconf.

It's automake that writes your Makefile for you, but you can just skip using
that. E.g. the Git project uses autoconf optionally but not automake.

~~~
kazinator
Rather, you might be confusing the ./configure script with autoconf (the tool
which generates the script from "configure.ac").

~~~
avar
I know the difference between autoconf and its generated ./configure target.

Your comment overall indicated to me that you were talking about automake, not
autoconf. But if not, fair enough.

E.g. you talk about "learn the generating language instead of the Make
language". I know you were using that as an example, but there's no general
non-horrible replacement for autoconf that you can write by hand, as opposed
to automake, where you can write a portable Makefile.

You can of course write a bunch of ad-hoc shellscripts & C test programs to
probe your system, but this is going to be a lot nastier and buggier than just
using autoconf to achieve the same goal.

You also don't generally _need_ autoconf to build projects you clone from
source control in the same way that you need automake (because that actually
makes the Makefile).

The output of autoconf is generally just a file full of variables the Makefile
includes, if you don't have that file you can just manually specify anything
that differs from the Makefile defaults, e.g. NO_IPV6=YesPlease or whatever.

The Git project, whose autoconf recipe I've contributed to, is a good example
of this. You can "git clone" it and just issue "make" and it works, but if the
default config doesn't work then "make configure && make" generally solves it,
but you can also just e.g. do "make NO_IPV6=YesPlease" if it was lack of IPv6
that was causing the compilation failure. It'll then get your NO_IPV6 variable
from the command-line instead of from the ./configure generated
config.mak.autogen.

~~~
nkurz
_The output of autoconf is generally just a file full of variables the
Makefile includes_

You may just be using terminology I don't recognize, but like 'kazinator', I
think you are missing a step.

_The Git project, whose autoconf recipe I've contributed to, is a good
example of this._

Great, let's use that as a specific example:
[https://github.com/git/git/blob/master/configure.ac](https://github.com/git/git/blob/master/configure.ac).
Line 2 says "Process this file with autoconf to produce a configure script."

I interpret this as saying that autoconf takes configure.ac as input, and
produces a runnable 'configure' script as output. But you are saying that "the
output of autoconf is generally just a file full of variables the Makefile
includes". How can these both be true?

~~~
avar
I was using "autoconf" to mean both the software itself and all its output,
including the generated configure script.

Confusing, sorry about that, but for the purposes of discussing what software
you need to generate the configure variables you ultimately need when cloning
from source control, it makes no difference.

~~~
sqeaky
Chains of discussion like this are why I like CMake. Much less confusion.

Few confuse the output of CMake with things that ought to be committed.

~~~
kazinator
I like something called GNU Make for the same reason.

Few confuse the output of Make (namely your built program) with something that
ought to be committed (like its source code or the Makefile).

------
jmount
Emacs is my primary editor, however Emacs dumper has always been a dumpster
fire.

Basically they coded up so much ill-conceived and inefficient Emacs Lisp that
the editor would never start up in an acceptable time.
around this (lazy loading services, fixing things, not doing things nobody
needs) they had the great idea they could start the editor once and then core
dump the in-memory state of a running editor. Then on later editor starts they
would map-in the core dump and instantly be in a (somewhat) good state. Fails
all kinds of smell tests and really speaks to bad taste having unbounded
consequences. It is an idea that should not work and it is only happenstance
that it ever worked (and it gets harder and harder as we have things like
address space layout randomization, file handles, and so on).

[edited "file" -> "fire", sorry! And yes, I know lispers always dumped, but
they are dumping the C memory environment here, not just their precious Lisp
state. They should have had some appreciation for how the C environment
actually worked, since they decided to use it.]

~~~
fatbird
Sounds like it needs the neovim treatment: neomacs.

~~~
sooheon
God a neomacs written in a faster, more modern lisp would be a dream.

~~~
bryanlarsen
It's called guile emacs.

[https://lwn.net/Articles/615220/](https://lwn.net/Articles/615220/)

[https://www.emacswiki.org/emacs/GuileEmacs](https://www.emacswiki.org/emacs/GuileEmacs)

~~~
fatbird
Interesting. Any idea why it hasn't got more traction? Or is it actually on
track to become the canonical emacs?

Reading that LWN article, it looks like Guile wasn't a great choice from a
community perspective, but more than that, by working internally to the emacs
project it's subjected itself to internal standards of interoperability that
it'll never really be able to hit (whereas a pseudo-hostile fork like XEmacs
might have had sufficient momentum to force some accommodation).

~~~
sedachv
> Reading that LWN article, it looks like Guile wasn't a great choice from a
> community perspective

I think going forward it will be. Between Guile-Emacs, Guix, and GNU Shepherd
you have the best-supported Lisp Machine operating system analogue available
right now:
[https://www.gnu.org/software/guix/](https://www.gnu.org/software/guix/)

I am excited about GuixSD and I think a lot of other people will be.

~~~
armitron
No it won't, at least as far as Emacs is concerned.

Nobody is stepping up to do the work, plus Guile has serious bugs on Windows
and OSX, single-digit number of developers and few, if any, users. So Guile
Emacs is really a pipe dream that people like to bring up from time to time.

~~~
sedachv
> Nobody is stepping up to do the work, plus Guile has serious bugs on Windows
> and OSX, single-digit number of developers and few, if any, users. So Guile
> Emacs is really a pipe dream that people like to bring up from time to time.

You forgot to mention that BSD is dying.

This state of affairs is different from any other Lisp implementation how? And
why would it stop progress from being made?

~~~
armitron
I've been happily running SBCL and CCL on Linux and OSX for years without
issues. CCL on Windows too.

Guile has 1-2 people working on it part-time, and Windows/OSX are not a
priority because:

+ GNU project, duh.

+ Nobody in Guile-land cares about Windows/OSX enough to step up and fix
issues.

+ Even if they did, Stallman would tell them not to.

So the state of affairs is indeed very different from the CL Lisp
implementations. Not to mention, Guile has pretty much no userbase to speak
of. There are commercial entities releasing products with SBCL and CCL in
addition to the very healthy opensource community.

------
rurban
Interestingly Emacs is not the only project affected by this glibc/dumper
dispute.

I added the stone-old patches for perl to my cperl fork, to be able to
dump/compile perl scripts to native binaries the fast way.

Improved dumpers are here:
[https://github.com/perl11/cperl/commits/feature/gh176-unexec](https://github.com/perl11/cperl/commits/feature/gh176-unexec)
Mostly unified error handling and fixes for a few Darwin segment
instabilities. It is very fragile to use with a static library, but OK as a
dynamic library. Emacs uses the dumper in the main exe, not in a library.
Solaris is the easiest to use.

So I know a little bit of the troubles they are talking about here. Dan's
portable dumper would be nice to have, XEmacs had this decades ago, but it
never made it over to Stallman emacs. Wonder why :)

So looking at the new pdump, it really is horrible. I don't think I want to do
that. I'd rather add a proper static malloc to cperl, such as ptmalloc3, which
is better than glibc malloc (i.e. ptmalloc2) anyway. They never switched to
the better version because it had more memory overhead. And I really can make
use of the arena support there. Emacs should try the same. Much easier and
much faster. Goodbye glibc.

------
dzdt
I am actually surprised the dumper paradigm doesn't get more love. Startup
time is an issue for most large programs. The dumper route is a generally
applicable way to drastically improve startup time. Think of it as splitting
your code into two parts: setup phase which is run at compile time (i.e. pre-
dump) and run phase which runs at run time. Undumping substitutes a simple
load of a file for the setup phase. What is not to love?
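
A minimal sketch of that split, using Python's pickle as a stand-in for a
real dumper (the setup function and file name here are invented for
illustration; a true unexec snapshots the whole process image, not just one
data structure):

```python
import pickle

DUMP_FILE = "setup.dump"  # hypothetical snapshot file

def expensive_setup():
    # Stand-in for the costly setup phase: think of TeX preprocessing
    # hyphenation patterns, or Emacs loading piles of Lisp.
    return {n: n * n for n in range(100_000)}

def dump():
    # Run once, at "compile time" / install time.
    with open(DUMP_FILE, "wb") as f:
        pickle.dump(expensive_setup(), f, protocol=pickle.HIGHEST_PROTOCOL)

def undump():
    # Every later start: a single bulk load replaces the setup phase.
    with open(DUMP_FILE, "rb") as f:
        return pickle.load(f)
```

The fragility the article describes comes from dumping at a lower level than
this: raw C heap and allocator state rather than a serialized object graph.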

~~~
tener
> What is not to love?

The complexity of that solution. By nature it is very fragile and leads to
nasty bugs. But I can certainly see the benefit of this approach - it would
be interesting to see it applied to some big frameworks/VMs, such as Java or
.NET.

~~~
mobilerisotto
I've read something about dumping JVM state to decrease Clojure startup time
(I don't recall the details and I'm on mobile right now, so I can't check), so
if I'm not wrong it was tried before...

------
jblow
Zaretskii's stance is weird. If you are going to run out of people who can
work on the core of the editor's source code, then the editor will die. So the
lack of ability to work on the code is the real problem. This is probably
because it has accreted way too much complexity at this point, and way too
many hacks. Shedding some of those hacks is a very good idea.

If you wanted to keep it the old way, and depend on the nuances of how an
allocator stores memory, then ship your own allocator. Video game people do
this as a matter of course; it's not a big deal.

------
Philipp__
Emacs and Vim are great tools. Amazing pieces of software. But in some areas
you really see the passage of time and aging. I used and learned both at some
point in time, liked things from both, stuck with Vim, but I kinda felt the
best text editor would be a hybrid of these two. (Now don't run and tell me to
install Evil in Emacs, I tried that, but modal editing is not the only thing
that gives Vim an edge over Emacs.)

What got my attention recently is Xi (developed by Raph Levien). It is written
in Rust and looks fairly interesting; I can't wait for it to be in a more
advanced state. I really wouldn't mind some nice, modern, terminal text
editor. (I use NeoVim at the moment, and I think it is closest to that, but
VimL :cringe:)

~~~
eridius
Huh, apparently Xi is actually a Google project? I didn't realize anyone at
Google was even touching Rust.

[https://github.com/google/xi-editor](https://github.com/google/xi-editor)

Edit: Well, not actually an official Google project, but it's still on their
GitHub account. The last line of the README says

> _This is not an official Google product (experimental or otherwise), it is
> just code that happens to be owned by Google._

~~~
AceJohnny2
Interesting. For me the killer feature of Emacs is its customizability. That's
not quite the right word, because Emacs's integration with ELisp lets you do
things that almost nothing else does. The ability to modify _existing_
functions live is central to its power, and the boundary between "The Editor"
and "Extensions" is extremely fuzzy.
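
A rough Python analogy of that live modifiability (much cruder than Elisp's
advice mechanism, and the function names here are invented): any function,
including an "internal" one, can be wrapped or replaced at runtime without
restarting.

```python
import functools

def greet(name):
    # Pretend this is a built-in "editor" function.
    return f"Hello, {name}"

def shouting(fn):
    # Wrap an existing function without touching its source, roughly
    # what advising a function does in Emacs.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs).upper() + "!"
    return wrapper

# Redefine the live function; every later caller sees the new behavior.
greet = shouting(greet)
print(greet("emacs"))  # HELLO, EMACS!
```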

Xi seems to take a harder approach to customization, which means that
customizers will always be subject to the limitations of the plug-in
interface. There will always be a hard boundary between "The Editor" and
"Extensions", and I believe that will ultimately limit its usefulness.

~~~
agumonkey
Emacs customizability is something rare. I had the pleasure to run QBASIC and
Turbo Pascal 7 not long ago; and was amazed at the capabilities and speed of
these old IDEs. Yet, they were locked. TP7, which is an epic[1] thing, made me
feel sad, because the editing features are so basic, almost crippling (no
block selection); you physically feel how you miss emacs, where anything is a
few LoC away.

[1] text-based multifile editing with overlapping windows (including .. ascii
window shadowing), invisible compilation times (on a Pentium 2), an exhaustive
help system; all in 800KB.

~~~
krylon
> you physically feel how you miss emacs, where anything is a few LoC away

That is my problem with IDEs, in a nutshell. There is the running joke that
Emacs is a decent operating system in want of a decent editor. The same can be
said - more strongly - about Eclipse or Visual Studio.

Some things these IDEs do spectacularly well, for sure, but when it comes to
basic text editing, I keep thinking how easy this or that would be in emacs.
;-|

~~~
agumonkey
Same, and I started in the Eclipse fad, when Eclipse plugins were a thing,
before I knew how to program emacs (beyond the default config). The day I
realized how general Lisp was and how dynamic Emacs was, I had to pause for a
minute.

Last winter I had to use Eclipse (for Scala); one day of mild use triggered
nasty wrist pain (I play music, I'm used to pushing the mechanics, this was
more). And people say emacs causes RSI ;)

Also the Eclipse crowd is completely off the user side. It's all about tech.
Microsoft might be better, I haven't used VS in ages. IDEA is said to be
really great at ergonomics. But rarely does someone bring a lot to the table.
(The only recent thing I noticed was parinfer, ambitious and useful.) Also,
people underestimate what Elisp can do when used correctly. See yasnippet, or
Fuco's litable.el.

~~~
krylon
> And people say emacs causes RSI ;)

A couple of years back, I actually developed a mild case of emacs pinky. I
used to think that was just a joke.

Then I found out how to remap Caps Lock to be a Ctrl key. Never looked back.
;-)

~~~
agumonkey
I don't even remap. I think my hands ended up morphing into an emacs stockholm
syndrome. Or maybe music did it before that. Still, I was surprised that
Eclipse would revive such painful sensations.

------
AceJohnny2
In a related discussion, it's interesting to read this overview of work
required to get Emacs to support double-buffering on X11. Interestingly, it's
by the same guy who proposed the new dumper patch, Daniel Colascione (who
lurks around here)

[https://www.facebook.com/notes/daniel-colascione/buttery-
smo...](https://www.facebook.com/notes/daniel-colascione/buttery-smooth-
emacs/10155313440066102)

(Notes are Facebook's newish blog format)

HN discussion:
[https://news.ycombinator.com/item?id=12830206](https://news.ycombinator.com/item?id=12830206)

------
e40
_Rather than try to capture the state of the C library's memory-allocation
subsystem, it simply marshals and saves the set of Elisp objects known to the
editor._

This is what Allegro CL (and I presume all the other Common Lisps) have done
for 30+ years. I'm surprised they didn't move to the marshalling idea before,
for fear of the Glibc hacks going away.

EDIT:

Actually, unexec() was used in the early days, so the 30+ years is wrong. It's
been more like 20+ years.

------
chriswarbo
So according to the article there are three approaches:

- The existing implementation, which misbehaves with newer glibc

- The partially-implemented patches, which replace the existing core dump
hacks with a big pile of C

- Some unspecified, potential future optimisations to the elisp loader

To me, that looks more like a roadmap than a dispute.

~~~
korethr
To me, the dispute is coming from what appears to be Zaretskii wanting nothing
less than step 3. I can sympathize; I don't think I would want to receive a
big pile of C were I a maintainer. It seems to me that temporary hacks tend to
become permanent.

However, I think he's mistaken. As you said, that list makes a decent roadmap.
IMO, the best approach to hand is to take the currently offered patch, and
continue to work towards specifying and implementing optimizations to the
elisp loader.

------
AdmiralAsshat
Why is the core of Emacs still written in C? Is it just fear of trying to
rewrite it from the ground up in Lisp, or is there something fundamental to
the architecture that makes Lisp unsuitable?

~~~
AceJohnny2
Because that's the part that needs to interact with the OS at a low-level.
Think of it as the "OS bindings" if you will. C is the lowest-common-
denominator to do that.

Edit: I'm wrong, see avar's response below. It looks like it's for performance
reasons.

~~~
avar
Relatively speaking this really couldn't be further from the truth.

There's plenty of codebases that implement some programming language and only
use C bindings for truly low-level primitives, such as external library calls,
memory allocation etc, leaving any substantial logic that's built on top of
those OS primitives to a higher level language.

Emacs is not such a codebase, most of the C code by volume is things that
could perfectly well be implemented in Emacs Lisp itself, but aren't because
Emacs Lisp is relatively slow.

So while it's fine for "scripting" Emacs itself, things like regexes, anything
that has to do with low-level character handling, most of the GUI layout of
Emacs itself (i.e. the buffer logic etc, not actually calling ncurses or X)
etc. is written in C.

Just browse through the C files in src/, most of this clearly has nothing to
do with interacting with low-level OS primitives:
[http://git.savannah.gnu.org/cgit/emacs.git/tree/src](http://git.savannah.gnu.org/cgit/emacs.git/tree/src)

~~~
torrent-of-ions
Yes. Basically the Emacs "C bit" is not just a Lisp like CLISP or SBCL; it
also has a load of text editor code. Having a "CLmacs", i.e. an emacs written
in Common Lisp, is something a lot of people would really like. The problem,
of course, is the vast amount of elisp that we currently use in our editors.

~~~
kazinator
You'd think that an Elisp implementation plus the necessary APIs could be
provided to make a good bulk of that code work.

------
gonzo
I ported emacs undump to the Convex machines 29 years ago.

[http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-relea...](http://ftp-
archive.freebsd.org/pub/FreeBSD-Archive/old-
releases/i386/1.0-RELEASE/ports/emacs/src/unexconvex.c)

Emacs is old and moldy. Let it die.

------
oelmekki
> it's also the sort of thing that could give vi a definitive advantage in the
> interminable editor wars

Is editors war still a thing? I was under the impression that emacs, vim and
more recent editors (sublime and atom, to name two) each found their core
audience and were quite distinct.

Or maybe it's just that I've been using vim for long enough. I can change my
main language every few years, but I can't see myself changing my editor.
Maybe the "editor war" is more about hesitating between editors at the
beginning.

~~~
flukus
It's still a thing, we're just temporarily united against some common
enemies...

------
dpc_pw
I don't get it. In 2016, on an i7 with 32GB of RAM and 2 striped SSDs, emacs
(spacemacs actually) still takes around 1-2s to start, which forces me to use
emacsclient, and that is already the second stage of its initialization,
started from the snapshot? I'm impressed.

------
gkafkg8y8
I've loved and used Emacs for ~20 years, but if Emacs were to become slow,
then if I were to have a replacement editor that could do the following (w/no
X or window manager) without additional config in Linux, I'd use it instead:

    
    
      arrow keys to move
      add and delete text anywhere
      paste from terminal buffer
      ctrl-s -> search (and continue to find next match)
      ctrl-v -> down
      ctrl-esc -> up
      ctrl-k -> kill line
      ctrl-x ctrl-s -> save
      ctrl-x ctrl-c -> quit
      ctrl-a -> goto beginning of line
      ctrl-e -> goto end of line
    

I don't even use selection anymore, because I can just use the terminal window
copy/paste.

~~~
da4c30ff
If that's all you need, maybe μEmacs[1] could do it?

[1]:
[https://en.wikipedia.org/wiki/MicroEMACS](https://en.wikipedia.org/wiki/MicroEMACS)

~~~
pawadu
Don't forget mg. It seems to be more used than uemacs these days, possibly
thanks to the openbsd project:

[1]:
[https://en.wikipedia.org/wiki/Mg_(editor)](https://en.wikipedia.org/wiki/Mg_(editor))

~~~
gkafkg8y8
It's also available in deb/rpm/pkg:

[https://pkgs.org/download/mg](https://pkgs.org/download/mg)

Note: pkgs.org list is not all-inclusive; there are more Linux distros in
which it's included.

------
yason
Never even heard about the dumper in Emacs. It sounds like a crazy idea that
is prone to break at the slightest change. Any shelving-unshelving
implementation for processes should come from the kernel, which owns the
virtual memory mappings: the kernel can basically already shelve a running
process by just swapping it out to disk completely. And even that scheme is
prone to break as soon as anything does I/O.

FYI, as for Emacs: I just fire up "emacs -nw" inside tmux and let it run for
months. I call make-frame-on-display to add a window on my X session but I can
close that or restart X without having to kill my Emacs inside tmux, along
with a few other long-running processes such as mail reader and IRC clients.

------
gumby
The dumper approach in Emacs predates GNU Emacs; it was how the original TECO
Emacs worked.

The real fix would be to make dynamically linkable compiled elisp files (i.e.
.so files) and let the system linker make Emacs start quickly just like any
other program.

~~~
sedachv
> The real fix would be to make dynamically linkable compiled elisp files
> (i.e. .so files) and let the system linker make Emacs start quickly just
> like any other program.

Dynamic linking is not quick!
[https://en.wikipedia.org/wiki/Prelink](https://en.wikipedia.org/wiki/Prelink)

~~~
gumby
That's just the caching that makes it fast. Obviously it's slower at some
level than a static binary like a dump, but it's no big deal.

------
cmiles74
I'm not sure I understand, but this seems like a lot of work for little
reward. Newer versions of Emacs work with the C library that's missing these
dumper hooks, and if you have a newer C library, you can update to a new
version of Emacs.

This seems like a lot of work for people who want to stick with an older
release.

~~~
massysett
Perhaps a related question: how essential is this functionality? I use Emacs
on Mac OS X, which does not even have glibc. Yet the article suggests that
this is only present on glibc. So should I be appalled at how long it takes my
Emacs to load? Seems to me it's just as fast as it was on GNU/Linux.

~~~
to3m
It is supported on OS X as well: [https://github.com/emacs-
mirror/emacs/blob/master/src/unexma...](https://github.com/emacs-
mirror/emacs/blob/master/src/unexmacosx.c)

(And Windows: [https://github.com/emacs-
mirror/emacs/blob/master/src/unexw3...](https://github.com/emacs-
mirror/emacs/blob/master/src/unexw32.c), [https://github.com/emacs-
mirror/emacs/blob/master/src/w32hea...](https://github.com/emacs-
mirror/emacs/blob/master/src/w32heap.c#L27))

------
gomijacogeo
Why not use LLVM and create a proper .so or binary?

------
branchless
I'm a little confused by this. I can understand why someone might want to
_preserve_ a repl env across invocations however the main reason given is
startup time.

Given emacs daemon, which allows connecting with a thin client, how much
startup time are we talking here? Can they not start once and connect like the
rest of us?

~~~
cmiles74
My understanding is that this dumping of state is done during the Emacs build
process. You build Emacs, initialize it, and then dump out its state. Every
time you launch Emacs, your instance is starting from the dumped state.

~~~
branchless
But why? We have an emacs init process that is fast enough, why not put your
init script in version control and bootstrap emacs this way?

~~~
Jtsummers
Emacs has two init stages, you only see the latter.

The first is what gets it to the dump file.

The second is where your personal .emacs file gets loaded and executed.

The first is the one that takes too long and motivated them to create the
dumper system to begin with, and it needs some resolution (eliminating the
dumper by making that first init process faster, switching to a serialization
of the objects rather than the C program memory, or something else).

~~~
branchless
So am I using this dumper system every time I launch emacs? I start emacs
"normally" in that I simply launch it from the cmd line with the daemon flag.

~~~
Jtsummers
In a sense, yes. You're using the product of the dumper. It's used to create
an image of the state of the running system during the emacs build, and then
delivered to end users like us as part of the emacs installation. That
executable image state is loaded, and then your .emacs is called.

For a comparable model, check out the way Smalltalk images (particularly with
Squeak and now Pharo) are distributed.

For kicks, try running "emacs-undumped". It's the base version without
everything loaded (the dump-file), and part of what's used for creating the
dump file during the emacs build process. At least for me it's pretty much
unusable thanks to the terminal colors that it seems to insist on using for
plain text.

~~~
branchless
Fascinating, thanks. I wonder what the startup ratio is for emacs without the
dump vs emacs -q.

I build emacs from src from gnu savannah and I have no emacs-undumped btw.

~~~
Jtsummers
That was on OS X. No idea if it shows up on other systems. The man page says
it's not meant for end users like us, and that it's to be used with dumpemacs.

------
BipolarElsa
I have a question: what's the benefit of using a text editor when an IDE can
perform compiling/interpreting for you at the ready?

Are Emacs/Vim for coders who don't have to worry about minor errors?

I'm relatively new to programming and I'm curious.

~~~
swolchok
A text editor will always be with you, no matter what you're working on. IDEs
tend to be for a specific host platform, language, and sometimes target
platform. For example, Xcode only runs on a Mac, understands a limited set of
languages, and is best used for building Mac and iOS apps. MS Visual Studio
only runs on Windows, understands more but different languages, and is best
used for building for Microsoft platforms. Android Studio is best used for
Android things. IntelliJ and Eclipse are best used for Java.

If anyone has heard of your programming language, there is probably an Emacs
mode for it.

~~~
CJefferson
Although often it's not very good. The Emacs support for C++11 was awful for
years. It may be better now; I left Emacs because of it.

~~~
kevbin
[https://github.com/Sarcasm/irony-mode](https://github.com/Sarcasm/irony-mode)

~~~
CJefferson
If I ever try Emacs again, I'll have a look. In the past I've found clang-
based plugins seem to have a half-life of about 6 months. While Emacs might
live forever, if your plugins don't, then you are relearning a bunch of things
anyway.

PS: at first I thought your URL was some kind of joke I didn't get, until I
clicked it and found it linked to a genuine project.

------
zeveb
From the comments to the article:

> If Emacs adopted one of the proposals to use a standard Lisp dialect

That should be 'if Emacs adopted _the_ standard Lisp dialect.' There's only
one standard Lisp, and that's Common Lisp[1][2].

I do indeed think that it would be wonderful if the elisp engine were
reimplemented in Lisp, but it's a tremendous amount of work, with a lot of
potential incompatibilities in the short term and very little to show until
the long term.

[1] That doesn't mean that other Lisp-like languages aren't awesome. Racket,
in particular, leaps to mind as something which is massively cool. Clojure
isn't to my taste, but I can understand why some folks like it. Scheme has its
virtues too. But none of them is Lisp, unlike elisp, which really is _a_ Lisp.

[2] There are also EuLisp & ISLISP, but they're effectively dead.

~~~
AceJohnny2
I've been using Emacs for over a decade now. Every now and then (say, once a
week), I feel the need to fiddle with some ELisp code.

I read Stevey's Emergency Elisp guide: [http://steve-yegge.blogspot.com/2008/01/emergency-elisp.html](http://steve-yegge.blogspot.com/2008/01/emergency-elisp.html)

I reached the part about the _then_ clause of if/then/else needing a (progn
...) if it was to be a multi-statement clause, but the _else_ clause doesn't.
But you can avoid the _progn_ if you don't have an _else_ clause by using
_when_ instead of _if_!

I facepalmed so hard I needed reconstructive surgery. I haven't fully
recovered from that. (in reality it's because, though ELisp itself is bad
enough, Emacs' API is daunting)

~~~
qwertyuiop924
That's common in Lisps. Just use cond and when if it bothers you.

~~~
AceJohnny2
Why do Lisps separate these related semantics? Are there implementation
reasons, or is it mostly "the way it's always been"?

Edit: thanks folks! You've helped me get over the facepalm from years ago :)

~~~
zeveb
> Why do Lisps separate these related semantics?

It's not really separating related semantics; it's that an if-then-else form
has to have both a then clause and an else clause. (IF _then_ _else_ ) is a
natural way to express that, but it does mean that if you want to do more than
one thing in the then clause then you'll need to have a PROGN. You _could_
have multiple statements in the else clause (which is what emacs does).

In Common Lisp, the syntax is:

    if test-form then-form [else-form] => result*

In emacs, it's:

    if test-form then-form [else-form]* => result*

I think that the emacs form is weird and annoying, but might make certain
forms of code easier (e.g. check for something and short-circuit, else
calculate something more deeply).

COND is a different beast entirely.

Note that WHEN & UNLESS both have an implicit PROGN.
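
To make that concrete, here's a small Emacs Lisp sketch (the forms and
message strings are illustrative, not from the thread):

```elisp
;; In elisp, `if' takes exactly one then-form, so grouping two
;; side effects in the then branch needs an explicit `progn':
(if (> 2 1)
    (progn
      (message "first thing")
      (message "second thing"))
  ;; ...whereas the else branch is an implicit progn and may
  ;; contain any number of forms:
  (message "not reached")
  (message "also not reached"))

;; `when' (and `unless') wrap their body in an implicit progn,
;; so they are the idiomatic choice when there is no else branch:
(when (> 2 1)
  (message "first thing")
  (message "second thing"))
```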

~~~
AceJohnny2
OK, so the weirdness is really that Emacs' else-form accepts multiple
statements, and/or that there is no implicit progn for both then-form and
else-form.

I.e., I would've expected something like this:

    (if (condition)
      (
         (then do something)
         (and more things)
      )
      (
         (else do something else)
         (and more other things)
      )
    )

(and yeah, I'm definitely showing my C-semantic preferences here aren't I?)

~~~
qwertyuiop924
That's closer to cond:

    (cond
      (<condition1> <body>*)
      (<condition2> <body>*)
      (else <body>))

~~~
kazinator
Not in Common Lisp: there, else is just a variable reference. Unbound, if
you're lucky; bound to a true value if you're somewhat less lucky; and bound
to nil if you're haplessly unfortunate. :)
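
The portable catch-all clause in both Common Lisp and elisp tests the
constant `t`, which is always true, rather than a special `else` keyword.
A minimal sketch (values are illustrative):

```elisp
;; The last clause tests `t', so it acts as the "else" arm.
;; This works identically in elisp and Common Lisp.
(cond ((= 1 2) "impossible")
      ((= 1 3) "also impossible")
      (t "fallback"))
```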

~~~
qwertyuiop924
Ah. I forgot about that.

See, this is what happens when you know Scheme better than CL.

