
Nemo: computer algebra package for Julia - winestock
http://nemocas.org/
======
Cyph0n
Holy shit! Those benchmarks look absolutely incredible, outperforming Magma by
a factor of 5 on average. Impressive work. This is the kind of work that makes
Julia really look like the next step for scientific computing.

[http://nemocas.org/benchmarks.html](http://nemocas.org/benchmarks.html)

~~~
rp21a
I'd like to know which CPU was used, for comparison. For example, the first
one takes 5 seconds on a Core i5 4570 at 3.2 GHz in Maple.

The second problem is also not clear. Is there a Kronecker substitution on the
input? If so, what are the degrees? I'd like to see the code used.

~~~
wbhart
It was on a single core of an Opteron K10 6174 at 2.2GHz. I'm not sure what
you mean by KS on the input. Nemo doesn't use KS on that benchmark (except
possibly in Flint for the lowest level). This is comparable to what the other
systems are doing, except Pari which uses a recursive sparse representation.

The code is in the Benchmarks test in test/Benchmark-test.jl.

It's not surprising the first benchmark is faster in Maple, since Maple can
make use of a true sparse representation, possibly quite a bit of
vectorisation on the processor you have, and possibly multiple cores. The
benchmark here uses only a dense representation and a single core, and there
is no explicit vectorisation (possibly none at all).

Piranha, for example, will do both of the first two examples in a fraction of
the time that Nemo will. But again, it uses a sparse representation, and again
it can use multiple cores.

We'll do a sparse representation in Nemo later on, perhaps even wrap Piranha.

The main purpose of the benchmark is actually to show off what the really fast
Julia generics do for us, not to do this particular computation as fast as is
humanly possible. To make the comparison as fair as possible, we deliberately
use univariate polynomials over other univariate polynomials in all systems
(except Pari, as noted), rather than dedicated multivariate polynomial rings.

~~~
rp21a
For that CPU the times are good. My understanding is that Pari uses a
recursive dense representation, that Nemo also uses one, and that you
instructed Magma to do the same. I'll have to download Nemo and try it out :)

~~~
wbhart
I used to think Pari used recursive dense too, but I've recently been informed
it is more of a recursive sparse format. This is essentially the same thing,
except that they have special zero objects in every polynomial ring, so that
their recursive tree is effectively pruned.

------
jpfr
The Axiom [1] developers created a special-purpose dependently typed language
to capture mathematical abstractions: SPAD/Aldor [2]. Maxima uses Lisp. [3]
These packages contain many man-years of work.

Julia is JIT-compiled, a Lisp (under the hood) and (somewhat) dependently
typed.

Is that enough to port (or even transpile) modules from the big open source
CAS without an entire rewrite? Is there enough similarity between CAS to make
"foreign" modules even a remote possibility?

Otherwise, Nemo will likely not achieve a _great unification_ of math
packages, since the required effort goes way beyond the resources of a small
dedicated group.

[1] [http://www.axiom-developer.org/](http://www.axiom-developer.org/)

[2] [http://www.aldor.org/](http://www.aldor.org/)

[3] [http://maxima.sourceforge.net/](http://maxima.sourceforge.net/)

~~~
nickpsecurity
I mostly just follow articles on Julia for now. Aside from it having LISP-like
macros, where's a page that says it's a LISP underneath? And I didn't see
dependent types mentioned in the page below despite some things sounding
similar. You got a link that straight up shows dependent types in Julia or how
to emulate them?

[http://docs.julialang.org/en/release-0.3/manual/types/](http://docs.julialang.org/en/release-0.3/manual/types/)

~~~
jpfr
> Where's a page that says it's a LISP underneath?

Every expression is transformed internally into an AST representation that can
be seen as a Lisp-style s-expression. This then gets lowered, JIT-compiled,
and so on. [1]

Furthermore, the Julia internals contain an entire lisp implementation
(femtolisp [2]). That has influenced the metaprogramming capabilities.
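The s-expression claim is easy to check from the REPL. Here is a small illustration on a recent Julia (the function is spelled `Meta.parse` today; at the time of this thread it was plain `parse`):

```julia
# Parse a string into Julia's AST: an Expr tree, essentially an s-expression.
ex = Meta.parse("1 + 2 * x")

# `head` tags the node kind and `args` holds the operands,
# mirroring the Lisp form (call + 1 (call * 2 x)).
@show ex.head      # :call
@show ex.args[1]   # :+
@show ex.args[3]   # the nested Expr for 2 * x
```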

> And I didn't see dependent types mentioned in the page below despite some
> things sounding similar.

Integer matrices can be represented as Array{Int64,2}. So the type of the
matrix depends on the Int64 datatype and the dimensionality 2. But that's not
necessarily what type theorists understand to be a dependent type. There was
an epic discussion that led to the phrasing "dependent" being removed from the
Julia docs. [3]

[1]
[http://julia.readthedocs.org/en/latest/manual/metaprogramming/](http://julia.readthedocs.org/en/latest/manual/metaprogramming/)

[2]
[https://github.com/JuliaLang/julia/tree/master/src/flisp](https://github.com/JuliaLang/julia/tree/master/src/flisp)

[3]
[https://github.com/JuliaLang/julia/issues/6113](https://github.com/JuliaLang/julia/issues/6113)
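To make the `Array{Int64,2}` point concrete, here's a minimal sketch in plain Julia (nothing Nemo-specific): the dimensionality `2` lives in the type itself, and methods can dispatch on it, which is what tempted people to call the system "dependent":

```julia
A = [1 2; 3 4]   # typeof(A) is Array{Int64, 2}

# Methods can dispatch on both the element type and the dimensionality,
# which are type parameters rather than runtime fields.
describe(::Array{T,2}) where {T} = "matrix of $T"
describe(::Array{T,1}) where {T} = "vector of $T"

describe(A)            # "matrix of Int64"
describe([1.0, 2.0])   # "vector of Float64"
```

Whether parameters like the literal `2` make this "dependent typing" in the type theorist's sense is exactly what the linked issue argues about.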

~~~
wbhart
I think the Julia people were right to remove "dependent" from the docs.
Nowadays I say that Julia has a limited form of static dependent typing, which
is to say it doesn't have dependent typing.

When we first wrote Nemo in Julia, we used a form of emulated dependent
typing. But the way we did this led to performance problems that we just
hadn't thought about.

For example, if we did lots of operations over Z/pZ for many primes p, e.g. in
multimodular algorithms, it meant that the JIT compiler had to recompile
everything for every prime p, which was awfully slow.

Julia is absolutely not meant to be used that way, and it took us a long time
to see that. After redesigning Nemo to use Julia much more like the way it was
designed to be used, we actually ended up speeding everything up.
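The recompilation trap can be sketched in a few lines. This is a hypothetical illustration, not actual Nemo code: with the prime as a type parameter, every new p is a new type, so the JIT must specialize every method again, whereas a runtime field compiles once for all primes:

```julia
# Old approach: modulus baked into the type. ZmodT{7} and ZmodT{11} are
# distinct types, so all methods get recompiled for each new prime.
struct ZmodT{p}
    v::Int
end
Base.:+(a::ZmodT{p}, b::ZmodT{p}) where {p} = ZmodT{p}(mod(a.v + b.v, p))

# Fast approach: modulus as a runtime value. One compiled method serves
# all primes, so a multimodular loop never re-triggers the JIT.
struct ZmodV
    v::Int
    p::Int
end
Base.:+(a::ZmodV, b::ZmodV) = ZmodV(mod(a.v + b.v, a.p), a.p)

(ZmodT{7}(3) + ZmodT{7}(5)).v   # 1, since 3 + 5 ≡ 1 (mod 7)
(ZmodV(3, 7) + ZmodV(5, 7)).v   # 1 as well
```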

~~~
nickpsecurity
So, to be clear, you had to avoid using a dependent-type style in Julia and
use something very different? And that different thing wasn't another style of
dependent types? Am I interpreting that correctly?

Reason being that if the only way to emulate dependent types in Julia is far
too slow, that would be an extra argument against any claims of Julia having
dependent types, versus real dependently-typed languages that don't punish you
for using them. I'd rather be clear before going that far, though.

~~~
wbhart
Avoiding dependent types wasn't due to a problem in Julia. Proper dependent
types are essentially not something you want to mix with JIT compilation due
to undecidability, at least not if you want to use bignums in your dependent
values.

Of course you could emulate them some other way in Julia, but they are
deliberately not baked into the language.

In the end we used the model that some computer algebra systems, like Sage and
Magma use, of having parent objects for the rings in which the element objects
live. I'll leave it up to the experts to figure out if this is dependent
typing or not. It obviously works very well either way.
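For the curious, the parent-object model can be sketched in plain Julia; the names here (`ResidueRing`, `ResidueElem`) are hypothetical stand-ins, not the actual Nemo types:

```julia
# The ring Z/pZ is an ordinary runtime object...
struct ResidueRing
    p::Int
end

# ...and each element carries a reference to its parent ring.
struct ResidueElem
    v::Int
    parent::ResidueRing
end

# Calling the ring coerces an integer into it.
(R::ResidueRing)(x::Integer) = ResidueElem(mod(x, R.p), R)

Base.:*(a::ResidueElem, b::ResidueElem) =
    ResidueElem(mod(a.v * b.v, a.parent.p), a.parent)

R = ResidueRing(7)
(R(3) * R(5)).v   # 1, since 15 ≡ 1 (mod 7)
```

The element's type never mentions p, so one set of compiled methods serves every modulus; whether two elements are compatible is a runtime check on their parents, as in Sage and Magma.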

~~~
nickpsecurity
Makes sense. Appreciate the clarification.

------
IndianAstronaut
Since Julia is about speed, how does this compare to SymPy's speed?

~~~
srean
> Since Julia is about speed

Is it? At least that's not the part that excites me most, although it's a nice
by-product to have. It's a nice, flexible language with good syntax and a good
set of features. I still dislike 1-based indexing though.

~~~
IndianAstronaut
I honestly see little reason for its existence besides speed. R and Python are
very well established.

------
vortico
This is a fantastic collection of software, in what seems to be a well-
organized and well-documented package.

------
wbhart
There's a little bit more information in Fredrik J's blog post:
[http://fredrikj.net/blog/2015/09/finding-nemo/](http://fredrikj.net/blog/2015/09/finding-nemo/)

------
mrcactu5
If it is built on PARI, how does it improve upon it?

Personally, I want to know how to define algebraic number classes. I attempted
and failed to write my own algebraic number types in Python.

~~~
wbhart
Pari is not used for elements of algebraic number fields. We use Antic for
that.

I should point out we actually spent some time speeding up that particular
benchmark. The others are much more raw.

Having said that, we have an even faster way of doing it which will be in Nemo
0.4.

It's worth noting though that Pari doesn't have JIT compilation, so it would
still be possible to beat something written in Pari/GP, for example.

One of the big things with complex functionality like that in Pari is that
years of mathematical knowledge have gone into it. One can't come along with
some tricks like Jit compilation and expect to beat it with some simple
scripts. It actually took months of work on the Antic library to get faster
algebraic number field element arithmetic than what was already in Pari, and
that's trivial functionality compared to the rest of Pari.

The winning strategy isn't actually to reimplement the whole of Pari (or
Singular, or Gap, etc.) but to use them for complex functionality that they
are actually good at, and then either improve Pari itself, or implement
something even better over time, say by extending Antic.

