> This means that there is absolutely no need to maintain a compiled C++ part for your bindings, and so the problem of keeping the interface up to date is greatly mitigated
I tend to disagree. I would never consider the raw (swig or cling) 1-to-1 bindings of C++ code satisfactory for end-user use in Python. Ideally (in my subjective opinion and previous experience) Python-side bindings would closely mirror the C++ API, to the point where downstream code in either language looks very similar, but they don't reference any C++ stuff, be it vectors, or maps, or template arguments, or anything else. This implies you would have to maintain a set of higher-level bindings on top of the swig/cling ones anyway -- and these are the ones that'll break as the code evolves and that you'll have to maintain manually. As such, I'd rather maintain one set of bindings than two.
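To make the "higher-level layer" point concrete, here's a minimal sketch of what such a hand-maintained layer might look like. All names here are invented for illustration; `_RawParamStore` stands in for what a swig/cling-style 1-to-1 binding of a `std::map<std::string, double>` might expose:

```python
# Hypothetical hand-written layer over raw 1-to-1 bindings.

class _RawParamStore:
    # Stand-in for a raw binding: C++-flavoured accessors (size/keys/at)
    # rather than normal dict behaviour.
    def __init__(self):
        self._m = {"threshold": 0.5, "gain": 2.0}
    def size(self):
        return len(self._m)
    def keys(self):
        return list(self._m)
    def at(self, key):
        return self._m[key]

class ParamStore:
    """What downstream Python code actually imports: no C++ types leak out."""
    def __init__(self):
        self._raw = _RawParamStore()
    def as_dict(self):
        return {k: self._raw.at(k) for k in self._raw.keys()}
    def __getitem__(self, key):
        return self._raw.at(key)

params = ParamStore()
assert params["gain"] == 2.0
assert params.as_dict() == {"threshold": 0.5, "gain": 2.0}
```

It's exactly this `ParamStore`-style layer that breaks as the underlying C++ evolves, regardless of how the raw bindings underneath it were generated.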
I was being quite literal when I wrote "no need to maintain a compiled C++ part": of course, you probably do want to maintain /some/ extra layer! And, in that sense, I do think that cppyy lets you maintain just one set of bindings (not two); and, as in pybind11, the ultimate aim is to transparently translate any "vectors, or maps, or template arguments" into idiomatic Python: this is why cppyy has a 'Pythonization' API.
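The mechanics behind a "Pythonization" hook can be illustrated in plain Python (this is not cppyy's actual API, just a sketch of the idea): a callback runs once per bound class and patches C++-ish methods into Python protocols, so callers never touch `size()`/`at()` directly.

```python
# Plain-Python illustration of the "pythonization" idea (not cppyy's API):
# a hook applied to each freshly bound class.

def pythonize(klass):
    # If the class exposes vector-like size()/at(), graft on the Python
    # sequence protocol so iteration and indexing just work.
    if hasattr(klass, "size") and hasattr(klass, "at"):
        klass.__len__ = lambda self: self.size()
        klass.__getitem__ = lambda self, i: self.at(i)
    return klass

@pythonize
class DoubleVec:  # stand-in for a raw std::vector<double> binding
    def __init__(self, values):
        self._values = list(values)
    def size(self):
        return len(self._values)
    def at(self, i):
        return self._values[i]  # raises IndexError past the end

v = DoubleVec([1.0, 2.0, 3.0])
assert len(v) == 3
assert v[1] == 2.0
assert list(v) == [1.0, 2.0, 3.0]  # iteration via the sequence protocol
```

The appeal is that the hook is written once per pattern (vector-like, map-like, ...) rather than once per class, which is how the automatic translation can stay a single set of bindings.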
Perhaps there are just two slightly different niches: cppyy is good when you need a more interactive interface to C++, for prototyping or exploration (for example), because of its JIT nature; and pybind11 is good for building something more static in the longer term, where you don't mind the cost of keeping the compiled part up to date with the relevant C++ API.
It's certainly an interesting space at the moment, and I do hope both projects keep the momentum up and keep innovating!
The big dependency is LLVM, no longer any CERN code (there's some left, but it takes nowhere near the disk space or compilation time that the patched version of LLVM does). The CERN code exists b/c LLVM APIs are all lookup-based; the bit of leftover code merely turns that into enumerable data structures. Once pre-compiled modules can be deployed, everything can be lookup-based. That will greatly reduce the memory footprint, too, by making everything lazy.
It is hard to trim down the source of LLVM, but trimming the binary is easier to achieve, and that's what I'm working on right now. The end result should be a binary wheel of less than 50MB that is usable across all python interpreters on your system, and that would be updated something like twice a year. Since that gets it down to a level where even an average phone won't blink, pushing it beyond that leads to rapidly diminishing returns, and I'll leave it at that unless a compelling use case comes along.
That said, there is an alternative: on the developer side, you can use cppyy to generate code for CFFI (http://cffi.readthedocs.io/en/latest/). The upshot is that LLVM only has to live on the developer machine and would not be part of any deployed package. Of course, w/o LLVM, you have to do without such goodies as automatic template instantiation.
Finally, note that cppyy was never designed with the same use case as e.g. pybind11 in mind. Tools like that (and SWIG, etc.) are for developers who want to provide python bindings to their C++ package. The original idea behind cppyy (going back to 2001) was to allow python programmers who live in a C++ world access to those C++ packages, without having to touch C++ directly (or wait for the C++ developers to come around and provide bindings). Hence the emphasis on 100% automation (with the limitations that come with that). The reflection technology already existed for I/O, and by piggy-backing on top of it, at the very least a python programmer could access all experimental data and the most important framework and analysis classes out-of-the-box, with zero effort.