
Modern C++ and Lisp Programming Style - deepaksurti
https://chriskohlhepp.wordpress.com/advanced-c-lisp/convergence-of-modern-cplusplus-and-lisp/
======
lorenzhs
The addone function in C++14 would look something like this:

    
    
        auto addone = [](auto x) { return x+1; };
    

That monstrosity at the top of the article is needlessly complex because C++11
didn't support generic lambdas.

~~~
photon-torpedo
For terseness, it's probably hard to beat Julia:

    
    
        addone1(x) = x+1
    

Completely generic with type inference, and compiles to optimal machine code.
But it also suffers from the "latent type errors deferred to the user" that
the article discusses. Calling it with a type that does not support adding 1
will raise an error for a non-existing "+" method:

    
    
        addone1("0")
        ERROR: MethodError: no method matching +(::String, ::Int64)
    

This is pretty understandable in this case, but we may prefer annotating the
addone function so that it can only be called with types that support adding
1. Using the (very conservative) notion that only numbers can have 1 added to
them, we could use Julia's type hierarchy to restrict the types for which
addone may be called:

    
    
        addone2(x::T) where T<:Number = x+1
    

Calling it with an incompatible type now properly points to the outer level:

    
    
        addone2("0")
        ERROR: MethodError: no method matching addone2(::String)
    

But this kind of dispatch restriction relies on a type hierarchy, causing
similar problems as class hierarchies in OO languages. For example, the user
might have defined their own type that supports addition but not
multiplication. Using addone on this type makes sense, but the type shouldn't
be a subtype of Number. More flexible dispatch scenarios can be achieved
with traits. Although Julia currently doesn't have direct language support for
traits, they can be implemented inside the language, with macros for syntactic
sugar. With traits, the previous example becomes:

    
    
        using SimpleTraits
        @traitdef CanAddOne{T}
        @traitimpl CanAddOne{T} <- issubtype(T,Number)
        @traitfn addone3{T; CanAddOne{T}}(x::T) = x+1
    

Still relatively straightforward to write and read, and the trait-restricted
addone has the same performance as the original one. For incompatible types,
the error message still points to the outer level:

    
    
        addone3("0")
        ERROR: MethodError: no method matching addone3(::Type{SimpleTraits.Not{CanAddOne{String}}}, ::String)
    

Importantly, the user can extend the CanAddOne trait (with additional
@traitimpl lines) to cover their own type, without being forced to make it a
subtype of Number.

~~~
deckiedan
Haskell:

    
    
        add1 = (+1)
    
        > add1 23
        24
    
        > add1 2.01
        3.01
    
        > add1 "0"
        No instance found...
    

But that error is at compile time.

~~~
photon-torpedo
Right, with currying it can be even more terse. And getting the error at
compile time is actually preferable.

My Haskell is very rusty, but IIRC it will deduce that the argument to add1
must be Num, because of the type constraint on (+):

    
    
        (+) :: Num a => a -> a -> a
    

So if I'm not mistaken, your add1 is equivalent to my addone2, i.e.
incorporating a type constraint based on the type hierarchy. I'm curious:
what's the equivalent of my addone3 in Haskell? How would you define add1 so
that it also works on a type T later defined by the user, with T supporting
addition but not being a subtype of Num? Do you need to define your own
addition function (acting as a proxy for (+) for Num types)?

~~~
mmalone
I'm still pretty new to Haskell, but I think there are several ways. The
equivalent of your addone3 would be some sort of Template Haskell incantation.
Template Haskell gives you Lisp-style macros that are expanded and
type-checked at compile time (though things get more involved, given Haskell's
richer semantics and AST).

Haskell also provides a ton of options for generic programming that provide
similar power (still statically typed). GADTs (Generalized Algebraic Data
Types) might be of use here. There are also existential types and type
classes.

What you probably really want, though, is a dependently typed language like
Idris or Agda. If you're not familiar, here's the one-liner: Java generics are
types that depend on types (like a List of Bools). Dependent types are types
that can depend on values (like a List of exactly 3 Bools). So in theory you
should be able to define the type you're looking for here, but I don't know
what the concrete syntax would be.

The problem with dependently typed languages right now is that the surface
languages and concepts tend to be hard for people to grok. Personally I think
meta-programming facilities and DSLs will help with this. There are lots of
languages heading in that direction.

~~~
kazinator
I wrote myself a perfectly functioning multi-entry accounting system over last
weekend in my own dialect of Lisp for my self-employment venture. It's stuffed
full of real data spread over nine accounts and lets me see a ledger report
over any time period at a glance: all the debits and credits with running
balances and so on. I can easily obtain the information to do all taxes and
whatnot. It generates beautiful HTML+CSS invoices with a nice SVG logo. It's
nicely object-oriented with classes and methods for everything: ledger,
account, transaction, increment, invoice. I overloaded a cluster of math
functions in a separate package so they work over a money type. It has self-
checks against accounting errors.

Working code poured out almost as fast as I can type.

If I had to think about some propeller-head academic dependent type nonsense,
I'd still be writing it come March 2018.

> _The problem with dependently typed languages right now is that the surface
> languages and concepts tend to be hard for people to grok._

Anything hard to grok is a regression in tooling.

Why would I want to grok something difficult, when I'm finding programming
easy and don't have any issues with getting the desired behavior out of the
machine?

All I want to be grokking is how the bits of the run-time representation in
the machine are coming together to solve whatever is being solved.

~~~
mmalone
I actually agree, for the most part. I'm not advocating for Haskell or any of
the other languages I mentioned, per se. I've spent time with lots of
languages and there are things I like and don't like about all of them. If you
forced me to write down my top 10 favorite languages, there would undoubtedly
be several Lisps near the top.

If you're programming a single computer the existing languages and tooling are
good. What I'm interested in is distributed compute and compute over
heterogeneous architectures (CPUs, GPUs, FPGAs, microcontrollers, etc.).

For simplicity, let's limit ourselves to the simple case of HTTP-based
service-oriented systems (or microservices). The state of the art has us
spending a lot of time considering serialization, protocols, communication
failures, managing identities, making authorization decisions, routing, etc.
Things that should be completely orthogonal to the problem we're trying to
solve end up being tightly coupled with our business logic. This becomes
exponentially harder to manage as you add languages to your stack.

There is a lot of research around solving these problems. Unfortunately, all
of it starts with a formal system that is "declarative" (or at least
"algebraic") and doesn't mesh well with existing popular languages and
tooling. Personally, I think we need to come up with a solution that lets us
keep the existing tools that are good for programming a single component, and
use some of this newer technology to build a smarter platform / underlay. The
bridge between the existing languages and this underlay would be DSLs. Or,
more precisely, "abstract languages" that can be implemented as DSLs in a
variety of programming languages and may or may not have their own concrete
syntax (think SQL+ORMs).

Sort of vague, I know, but hopefully that kind of makes sense?

------
vvanders
> The assumption is that the machine code of the C++ compiler is significantly
> faster than that emitted by a comparable dynamic language compiler. While
> this may hold true in general, it does not necessarily hold true with Lisp.
> Lisp is a programmable programming languages. If we are inclined to program
> it for speed, we can.

My Lisp-fu isn't as strong as my C++-fu, so someone correct me if I'm wrong,
but isn't the GC an intrinsic part of Lisp? Do more modern Lisps allow you to
mark value types so you can control memory access patterns (which is where the
true speed of C/C++ comes from)?

~~~
pjmlp
Lisp does support more than just lists as data structures.

Arrays are also available, including specialized versions that hold value
types.

[https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node15.html](https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node15.html)

You can also stack allocate if required, via _dynamic-extent_,
[http://clhs.lisp.se/Body/d_dynami.htm](http://clhs.lisp.se/Body/d_dynami.htm)

Also, not all Lisps have a tracing GC; some variants had reference counting
with a tracing GC for collecting cycles.

RAII-like patterns can be achieved via the _with-..._ functions, or macros.

I don't know the actual performance of commercial Lisps like Allegro Common
Lisp and LispWorks, but I imagine it is quite good, given that they stay in
business.

On the other hand, given the amount of money spent on C and C++ optimizers
versus the lack of industry-wide adoption of Lisp, it is probably still not as
good as current leading C++ compilers.

~~~
kazinator
Indeed, those _with-_ macros.

In TXR Lisp, RAII is supported thusly:

This is GC finalization of a struct:

    
    
      This is the TXR Lisp interactive listener of TXR 172.
      Use the :quit command or type Ctrl-D on empty line to exit.
      1> (defstruct animal nil
           (:fini (me) (put-line `@me says good-bye`)))
      #<struct-type animal>
      2> (progn (new animal) nil) ;; make animal without referencing from REPL
      nil
      3> (+ 2 2)
      4
      4> (sys:gc)
      #S(animal) says good-bye
      t
    

OK, now _with-objects_ macro:

    
    
       5> (with-objects ((a (new animal)))
           (prinl a))
       #S(animal)
       #S(animal) says good-bye
       #S(animal)
    

_with-objects_ invokes finalizers explicitly, before objects become
unreachable.

Also, what if a constructor throws? Let's derive _dog_ from _animal_, which
bails at `new` time:

    
    
      6> (defstruct dog animal
           (:fini (me) (put-line `@me: woof woof`))
           (:postinit (me) (error "refuse to construct")))
      #<struct-type dog>
      7> (new dog)
      #S(dog): woof woof
      #S(dog) says good-bye
      ** refuse to construct
      ** during evaluation at expr-6:3 of form (error "refuse to construct")
    

The object instantiation logic catches exceptions and invokes finalizers on a
partially constructed object (in the proper order as you can see: derived,
then base).

~~~
lomnakkus
The with-* style functions do not replace RAII _except_ for the limited case of
_lexically scoped_ resources. Not all resources are scoped lexically -- e.g.
file handles you store in a map. What C++ programmers typically see as the
most important guarantee of RAII is _prompt_ freeing -- that is, as soon
as said example map goes out of _its_ scope (or is freed by its owner), those
file handles will also be freed _immediately_.

Neither the gc-based nor the with-* based solutions handle that.

~~~
pjmlp
Even in C++, RAII requires a lexical scope at some point in the program.

Somewhere in the whole workflow there must be a class allocated in the stack.

You can have destructor like behavior in Lisp as well, just register a file
handle cleanup action by giving a cleanup lambda when creating the map
instance.

The function that removes entries from the map will call the provided lambda.

~~~
lomnakkus
I'm sure I must be missing something, but can you give a concrete example?

Of course instances of classes (not "classes", as you say; classes are an
entirely compile-time construct in C++) must be allocated, but I mean...
there's still heap storage. You know, shared_ptr<T> and all that...

What am I missing?

> You can have destructor like behavior in Lisp as well, just register a file
> handle cleanup action by giving a cleanup lambda when creating the map
> instance.

Well, except you don't know exactly when the _map_ is going to get cleaned
up...? And it could get reused across various sections of the program... Do
you see what I'm talking about?

EDIT: That, and if the map exists for the entire duration of the program, but
it's really important that entries have prompt cleanup when removed... what
then?

(Btw, I _do_ understand that in the general case with weak_ptr, shared_ptr,
unique_ptr, etc. that things get decidedly less deterministic[1], but RAII is
pretty well defined by scope or referenced-to-scope.)

[1] Basically almost as unpredictable as a general purpose GC. I can't recall
the paper title, but I'm sure there _is_ a paper out there detailing this.

~~~
kazinator
TXR Lisp:

    
    
      This is the TXR Lisp interactive listener of TXR 173.
      Use the :quit command or type Ctrl-D on empty line to exit.
      1> (defstruct map nil
           (hash (hash :equal-based))
           (:method lambda (me key) (gethash me.hash key))
           (:method lambda-set (me key new-val)
             (let ((old-val (shift (gethash me.hash key) new-val)))
               (call-finalizers old-val)
               new-val))
           (:fini (me)
             (dohash (key val me.hash)
               (call-finalizers key)
               (call-finalizers val))))
      #<struct-type map>
      2> (defstruct widget nil
           id
           (:static counter 0)
           (:init (me) (set me.id (inc me.counter)))
           (:fini (me) (put-line `widget @{me.id} says bye`)))
      #<struct-type widget>
      3> (defvar map (new map))
      map
      4> (set [map "a"] (new widget))
      #S(widget id 1)
      5> (set [map "a"] (new widget))
      widget 1 says bye
      #S(widget id 2)
      6> (set [map "a"] (new widget))
      widget 2 says bye
      #S(widget id 3)
      7> (set [map "b"] (new widget))
      #S(widget id 4)
      8> (set [map "c"] (new widget))
      #S(widget id 5)
      9> (call-finalizers map)
      widget 3 says bye
      widget 4 says bye
      widget 5 says bye
      10> (with-objects ((m (new map)))
            (set [m "x"] (new widget))
            (set [m "y"] (new widget)))
      widget 6 says bye
      widget 7 says bye
      #S(widget id 7)

------
nkozyra
“Yes. STL is not object-oriented. I think that object orientedness is almost
as much of a hoax as Artificial Intelligence.”

Is hoax a placeholder for "hype" or am I missing something?

~~~
sqeaky
That confused me too; both clearly work, unless there is some definition of
"work" that excludes things which have earned many people many billions of
dollars. Hoaxes don't create sustainable businesses.

~~~
_delirium
If you're thinking of the recent boom in machine learning specifically, there
are plenty of people in ML, even people making lots of money from it, who
think the concept of "artificial intelligence" is a recurring hoax, or at best
an overselling aimed at people who've read more sci-fi than science. Of course
plenty of people think otherwise, too, but it's not a rare view within the
field.

~~~
sqeaky
This is another way of thinking about it that I had not originally considered,
thank you.

------
rurban
I cannot say much about the C++ code, but the lisp examples are very bad
style. One does not write like this.

One writes ordinary generic functions, and then optimized typed methods in
lisp like this:

    
    
        (defgeneric xplusone (x) (1+ x))
        (defmethod xplusone ((x integer)) (1+ x))
        (defmethod xplusone ((x double-float)) (1+ x))
    

The sbcl compiler (called python) even creates the typed methods by itself, so
mostly the defgeneric line is enough. The type hints for args and return types
are purely optional, as the compiler figures it out by itself.

He is right that algorithms and methods trump data structures and objects. You
always write methods with specializations on objects, not the other way round
(classes with specific methods).

~~~
xrange
> (defgeneric xplusone (x) (1+ x))

> The sbcl compiler (called python) even creates the typed methods by itself,
> so mostly the defgeneric line is enough.

Is this some new SBCL extension you are talking about? That `defgeneric` line
is an error in CLOS.

~~~
nuntius
Very much a part of the standard, i.e. not specific to SBCL.

Think of defgeneric as the function signature and defmethod as the template
specialization. Not sure why you say this is an error in CLOS. Looks fine to
me.

That said, most implementations try to auto-infer the generic function
metaobject when you use defmethod without defgeneric. SBCL raises a warning.

Good reads on the topic:

[http://www.gigamonkeys.com/book/object-reorientation-
generic...](http://www.gigamonkeys.com/book/object-reorientation-generic-
functions.html)

[https://mitpress.mit.edu/books/art-metaobject-
protocol](https://mitpress.mit.edu/books/art-metaobject-protocol)

[http://mop.lisp.se/](http://mop.lisp.se/)

~~~
juki
The DEFGENERIC line is wrong. It can't have a body like that. It should be
something like

    
    
        (defgeneric xplusone (x)
          (:method (x) (+ 1 x)))
    

Also, generic functions are slower than regular functions (due to dynamic
dispatch), so using them for type optimization would be rather
counterproductive.

~~~
rurban
Yes, thanks. I only wanted to point out the similarities to abstract classes
and slow generic methods.

------
psyc
Unless I'm missing something, generic algorithms that work across types and
algorithms that work on type internals are just two separate things. And the
latter probably still wants to be encapsulated in the type.

------
leshow
I don't see the relation to Lisp; if anything, that quote about noticing the
semigroup property of parallel fold algorithms speaks to Haskell or ML more
than anything else.

~~~
sedachv
A lot of early research into parallel algorithms and exploiting associativity
for parallelism started at Thinking Machines, a Lisp supercomputer company.
Hillis and Steele's 1986 paper in CACM is still one of the best introductions
to the subject: [http://cva.stanford.edu/classes/cs99s/papers/hillis-
steele-d...](http://cva.stanford.edu/classes/cs99s/papers/hillis-steele-data-
parallel-algorithms.pdf)

~~~
lispm
There was quite a lot of research into parallel, concurrent, and distributed
Lisps beginning in the 80s. Thinking Machines with its SIMD computer was just one
approach. At some point in time there was a lot of money available for that
stuff, including custom hardware. In the US the DoD paid and you can bet that
some military/intelligence applications were based on exotic multiprocessor
machines running Lisp.

[http://www.softwarepreservation.org/projects/LISP/parallel](http://www.softwarepreservation.org/projects/LISP/parallel)

Often parallel computers also had a Lisp implementation; many more than are
listed above.

Parallel Computation and Computers for Artificial Intelligence
[http://link.springer.com/book/10.1007/978-1-4613-1989-4](http://link.springer.com/book/10.1007/978-1-4613-1989-4)

Parallel Lisp: Languages and Systems
[http://link.springer.com/book/10.1007/BFb0024148](http://link.springer.com/book/10.1007/BFb0024148)

Parallel Symbolic Computing: Languages, Systems, and Applications
[http://link.springer.com/book/10.1007/BFb0018643/page/1](http://link.springer.com/book/10.1007/BFb0018643/page/1)

------
cassowary
As I was reading some of the illustrations, I began to wonder why anyone would
prefer to use Lisp instead of Haskell. Can anyone mention a few advantages?
Mostly, I find Haskell attractive because of the fantastic type system, which
makes code fail while I'm writing or changing it, rather than later. But I
worry that I'm overlooking something, because I can't understand why some
people prefer Lisp.

~~~
keymone
Ever since I learned Lisp I feel very annoyed by languages having so much
syntax. Just yesterday I was looking at a tiny piece of Haskell code and it
gave me a headache. Once you realize how much more productive you can be
without the cognitive load of juggling dozens of syntactic constructs in your
head, it's hard to go back. And then you have the benefits of homoiconicity on
top of that.

------
vkazanov
Can't say Common Lisp is intuitive, but in comparison with C++ it looks...
cleaner.

------
turnipping
What is Stepanov's beef with AI's weak foundation?

------
Animats
_" Modern C++ has shifted focus from an emphasis on type (objects) which
accommodate algorithm to an emphasis on algorithms parametrized over types."_

That may be a bug, not a feature. The Boost crowd won the battle, making
extremely complex templates an essential part of the language. But they may
have lost the war, as C++ loses market share.

LISP backed into typing, and it shows. Both typed variables and objects are
painful in LISP. By the time LISP got both, the era of LISP was over. LISP is
really dead now; there hasn't been a release of GNU CLISP ("clisp") in 7
years.

~~~
hughw
I recently came back to write a small experiment in C++. I programmed it
professionally for a decade, ending in 2002, and was pretty good at it. After
the last 16 years working in Java, Scala, Clojure, and JavaScript, I have two
observations on C++11:

a) Grateful for the extensive access to algorithms, lambdas, type inference;

b) Astounded at the complexity of template meta programming.

I imagine I'll get better at it with practice. But I'm operating at about 15%
of the time efficiency of, say, Scala.

~~~
AstralStorm
Why do you really need to do template metaprogramming?

Generally that is not required even in performance-sensitive code. Perhaps a
few conditionals à la enable_if or direct use of SFINAE... but most everything
else, not really.

~~~
mannykannot
Sooner or later you will likely be faced with having to understand code that
was written by someone who had just learned about template metaprogramming and
was determined to put his new-found knowledge to use.

~~~
hughw
that's gonna be the next guy who looks at this code i'm writing...

Edit: Clarification: I'm not actually doing original metaprogramming myself,
but in order to diagnose problems implementing the custom iterators, I have
needed to delve into, and understand, the meaning of the Thrust library code.

~~~
mannykannot
You ask way too many pertinent questions to be the sort of person who does
what I am thinking of.

