Btw, this "Hello world" example basically tells you how efficiently memory allocation and the I/O layer are implemented (that's why no one has beaten nginx yet).
A much more interesting comparison would be some "real-world scenario" -- say, "implement an HTTP lookup for some public data set, imported into persistent local storage, say, PostgreSQL" -- and then compare not just throughput rates but also resource usage.
Actually, at least one other guy just beat nginx the other day -- I don't remember the name; it was some Japanese-made server.
Anyway, a lot of the performance work that went into mighttpd2 is automatically shared by all Haskell applications. (Most of it went into optimizing the I/O manager, which handles I/O in the GHC runtime. It's sort of like libev/libuv, only at the VM level instead of as a library.)
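To make the "VM-level event loop" idea concrete: the runtime multiplexes many cheap logical threads over one OS-level readiness loop, so a blocking-looking call doesn't tie up an OS thread. A hedged Python sketch of the same idea using asyncio (an analogy only -- GHC bakes this into the runtime so every Haskell program gets it for free, whereas asyncio is opt-in, which is exactly the contrast the comment draws):

```python
import asyncio
import time

async def worker(i):
    # Stands in for a blocking network call; the event loop parks this
    # task and runs others, the way GHC's I/O manager parks lightweight
    # Haskell threads on epoll/kqueue.
    await asyncio.sleep(0.2)
    return i

async def main():
    start = time.monotonic()
    # 100 "blocking" tasks, multiplexed over a single OS thread.
    results = await asyncio.gather(*(worker(i) for i in range(100)))
    elapsed = time.monotonic() - start
    # Finishes in roughly 0.2s total, not 100 * 0.2s.
    print(f"{len(results)} tasks in {elapsed:.2f}s")

asyncio.run(main())
```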
You can be as edgy as you want and use Lisp, but real-world scenarios take economic factors into account. And surprise, surprise: no company gives a shit about the beauty of a solution.
"built on top of libev"
The Lisp part is a thin FFI wrapper on top of lots of C code.
Not that an FFI wrapper around C code isn't useful, but it's the C code that does all the heavy lifting there.
What libev provides is a very portable wrapper around all the different non-blocking I/O system calls across all the different Unixes. That's the hard part.
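Python's standard selectors module plays the same role for Python programs that libev plays for C ones: it picks the best readiness API the platform offers (epoll on Linux, kqueue on the BSDs/macOS, select as a fallback) behind one interface. A minimal sketch of that portable readiness loop:

```python
import selectors
import socket

# DefaultSelector resolves to the platform's best mechanism,
# much as libev chooses among epoll/kqueue/poll/select in C.
sel = selectors.DefaultSelector()

a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

# Watch one end for readability.
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")                    # makes b readable
events = sel.select(timeout=1.0)   # portable "wait for readiness"
for key, _mask in events:
    data = key.fileobj.recv(16)
    print(data)                    # b'ping'

sel.unregister(b)
a.close(); b.close(); sel.close()
```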
Woo uses fast-http, quri, and http-body, all written in pure Common Lisp and very, very fast.
The real challenge is writing a useful, usable server that still stays fast under load. By contrast, you'd have to write terrible coding horrors for your home-grown static-file web server NOT to be wire-speed :)
(No offence to the writers of this particular server, I haven't looked at the code)
As implied in other answers in the subthread, nowhere in this paradigm does random input data get treated as something that can safely be executed. READ in Lisp Machines is mentioned because reader macros like #. allow data in the form being read to be evaluated, which in this domain is an obvious big no-no.
As early as 1983, I think earlier, it was recognized that things like eval servers were a bad idea if accessible by the outside world.
If you say, this can't be so, you may well be right...
I just searched through sys:network; and sys:ip-tcp; in Genera 8.3 and the only use of READ was in an Eval server.
The reason I noticed was that I had created a package that did not use 'lisp:', and therefore 'nil' was no longer 'lisp:nil', which broke the NFILE client.
Maybe this got fixed sometime between then and 8.3. (Or maybe NFILE was ripped out altogether?)
I also recall that Jeff Schiller found some remote vulnerabilities. I don't know if they involved 'read', but they certainly could have. Again, this was long before 8.3.
Symbols could have been created or looked up by other means...
However, don't dismiss SLIME because of emacs. It has all the nice IDE functions: symbol completion, attaching to (and editing!) long-running processes, function tracing, stack tracing, find-references, etc.
Access to a REPL in a running system image is an amazing and productive experience.
SLIME doesn't have a GUI, but it's pretty good tooling: out of the box it has autocompletion based on the symbols available in your image, xref (who calls this function / who sets this variable / etc.), a menu for managing threads, an object inspector, and much more.