
Static, Ahead of Time Compiled Julia - Lofkin
http://juliacomputing.com/blog/2016/02/09/static-julia.html
======
kgabis
I think this [0] is worth reading before starting a project in julia (it's
quite shocking). Does anyone know if anything has changed in julia's
development process over the last year?

[0] [http://danluu.com/julialang/](http://danluu.com/julialang/)

~~~
ChrisFoster
I'm an enthusiastic julia user, observer and very minor contributor. IMO a lot
of the issues in this constructive rant have been addressed to some extent.
For context, here's the previous HN discussion
[https://news.ycombinator.com/item?id=8809422](https://news.ycombinator.com/item?id=8809422)

Going through the post in order:

The stable releases still have some bugs as you would expect in a young
language, but 0.4 is now well below my tolerance level. For a rough idea, I
now use julia daily and encounter a bug perhaps once every one or two weeks.
In 0.4 I haven't encountered any bug which was a real show stopper and
couldn't easily be worked around.

Testing has gotten a whole heap nicer with a decent test framework in Base
(accessible in 0.4 via BaseTestNext.jl). The test framework and package
manager integrate in a simple but effective way, which makes the friction of
writing a suite of tests for new packages very low, much lower than in other
languages I've used. I can't speak for actual coverage in Base, but I know it's now actually
being measured and work has gone into the coverage tools.

I'm going to skip over the complaints about error handling, because others
have already responded to this, for example StefanKarpinski's post to the
julia-users list [https://groups.google.com/d/msg/julia-users/GyH8nhExY9I/0BznqtD5VJgJ](https://groups.google.com/d/msg/julia-users/GyH8nhExY9I/0BznqtD5VJgJ)

Consistent benchmarking is currently being addressed, with great work going on
at BenchmarkTrackers.jl, and a proper setup with dedicated benchmarking
hardware for the language itself. I don't have the depth of knowledge to
comment on Dan's other complaints regarding skewed benchmarking.

Regarding contributing, my experience is that contributions to Base and the
runtime by unknowns (myself, say) are generally met with the fair skepticism
and good taste that all good maintainers should display. Sometimes I feel the
core devs could do more to encourage new contributors, and the environment can
feel slightly hostile when suggesting new features. I'm not sure how to
entirely avoid this, when a core job of a good maintainer is to say "no" to a
lot of poorly considered requests! Much of the code in Base is still commented
in a minimalistic fashion, if at all. In contrast my experience in
contributing to packages has been almost entirely positive, with a lot of
excitement and energy leading to some really great code and interactions.

With precompilation, slow package load times have really improved to the
extent that they're no longer a major hassle, but there's still room for
improvement here.

The real sting in the tail of this blog post is the paragraph about nastiness
in the community. There were a couple of unfortunately worded (though not
unambiguously malicious) mails on the julia-users list following Dan's post,
but the discussion was largely constructive and helpful. I've no idea about
the "private and semi-private communications" and I can only hope things were
patched up there.

Overall I've found the julia experience almost entirely positive. It's a joy
to work with for numerical and statistical problems, and we're moving forward
at work to get our first major pieces of julia code into production.

~~~
StefanKarpinski
> I've no idea about the "private and semi-private communications" and I can
> only hope things were patched up there.

I'm the co-creator that Dan was talking about. He wrote a bunch of less-than-
charitable comments on the aforementioned semi-private forum – not
specifically to me, but where he surely knew I would read them – to which I
responded with:

[https://gist.github.com/StefanKarpinski/c72219ff8ce261172b11](https://gist.github.com/StefanKarpinski/c72219ff8ce261172b11)

You can judge for yourself whether I was nasty or dishonest. Things were,
unfortunately, not patched up. Dan posted a number of responses, deleted all
of them before I could read them, then left the conversation permanently.

~~~
nickpsecurity
Luu's claims made me hold off on pushing people to contribute to Julia as I
waited for corroboration (or refutation) of them. I appreciate you linking to
that very fair post that replies to it. The contrast between how the two of
you present your claims adds credibility to yours. I also got a great laugh
out of the part about one guy who barely speaks English using the project as a
personal Stackoverflow. A problem I'd not have anticipated when starting a
language/compiler project, haha.

Curious: are you all still coding the internals of the compiler in femtolisp,
is most of it written in Julia while indirectly relying on that, or is there
no LISP now? A barrier-to-entry question, basically.

~~~
StefanKarpinski
The parser and some lowering passes are still written in femtolisp. There has
been some discussion of switching to the native JuliaParser package [1].
However, JuliaParser doesn't implement the fairly tricky lowering passes that
the femtolisp parser does. I personally would prefer to have the parser in
Julia, but at this point the most pressing issue with parsing and lowering is
speed – so it's possible that the parsing and lowering will be converted to C
instead.

[1]
[https://github.com/JuliaLang/JuliaParser.jl](https://github.com/JuliaLang/JuliaParser.jl)

~~~
vanderZwan
> I personally would prefer to have the parser in Julia, but at this point the
> most pressing issue with parsing and lowering is speed – so it's possible
> that the parsing and lowering will be converted to C instead.

But isn't Julia supposed to be fast? ;)

~~~
nickpsecurity
And this is just a parser. Nothing too fancy. Analysis should be able to
produce some efficient code for one.

------
jhoechtl
I personally find the syntax of the language and the quality of the current
implementation (speed!) excellent. However, it doesn't receive the marketing
that languages like Rust or Golang do.

What I also find worrisome is the perception (at least on my part) that Julia
is confined to scientific computing, whereas I think it should really be a
general-purpose language.

~~~
plinkplonk
There is no technical reason that Julia can't be used for general purpose
computing. (Well personally, I'd like nested namespaces, but) it is an amazing
language. Just a matter of someone putting in the time to build the required
libraries and glue code. Which will happen in time.

~~~
elcritch
The lack of AOT has been the main thing holding me back from adopting it for
more general purpose computing. Now with AOT I can "compile" the code and get
some reasonable expectation of performance and ease of deployment. I'm
probably going to start using Julia and (hopefully) extending the web server
packages. Basically it feels similar enough to JavaScript for web development
but with more expressiveness, so I'm pretty excited by the prospect.

Now this turns up a terrible dilemma: do I try to use Julia or Rust for
writing embedded controllers? Rust has macros and direct memory control, but
Julia now has AOT compilation, as the article mentions. :)

I also think the more of us that recommend Julia for general compute, the more
likely it'll get used that way.

------
vanderZwan
> _For example, the Julia community seems to have coined the term “type-
> stability” to describe a concept that static / compiled languages have
> historically enforced and dynamic / scripting languages have historically
> disregarded._

I was not aware that this was a Julia neologism. It seems like such an
appropriate term for discussing how to make code get the most out of
JIT compilation.

~~~
openasocket
The closest concept I can think of is the concept of "strong typing," that a
value does not change type based on its context, which is a fairly widely used
term. It seems like Julia has some form of weak typing, at least between the
different numeric primitives, and "type stability" refers to avoiding any
usage of weak typing. I don't know a lot about Julia, though, so I could be
way off here.

~~~
StefanKarpinski
"Strong typing" and "weak typing" don't really mean anything:

[http://blogs.perl.org/users/ovid/2010/08/what-to-know-before-debating-type-systems.html](http://blogs.perl.org/users/ovid/2010/08/what-to-know-before-debating-type-systems.html)

You probably mean "static typing" versus "dynamic typing" which I wrote a bit
about in the context of Julia here:

[http://stackoverflow.com/questions/28078089/is-julia-dynamically-typed](http://stackoverflow.com/questions/28078089/is-julia-dynamically-typed)

Basically, I think "type stability" hasn't really been a thing in the past
because in dynamic languages, people have traditionally not cared about
ensuring that return types are predictable based on argument types, and in
static languages, a program is _incorrect_ if that's not the case. As people
care more and more about being able to statically predict the behavior of
programs in dynamic languages, the concept of type-stability in dynamic
languages becomes increasingly important.
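
For readers new to the term, here is a minimal illustration of what
type-stability means. It is sketched in Python rather than Julia purely for
accessibility; the same idea applies to Julia functions, where the compiler
specializes code on argument types:

```python
def unstable_sqrt(x):
    """Type-unstable: the return type depends on the *value* of x,
    not just its type (int for negative input, float otherwise), so
    a compiler cannot pick one specialized code path."""
    if x < 0:
        return 0          # int
    return x ** 0.5       # float

def stable_sqrt(x):
    """Type-stable: the return type is always float for numeric input,
    so the result type is predictable from the argument type alone."""
    if x < 0:
        return 0.0
    return x ** 0.5

# Same argument type, different return types -> unstable:
print(type(unstable_sqrt(-4)), type(unstable_sqrt(4)))
# Same argument type, same return type -> stable:
print(type(stable_sqrt(-4)), type(stable_sqrt(4)))
```

In Julia the type-unstable version forces the compiler to box results and emit
dynamic dispatches, which is why the community cares about the distinction.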

~~~
openasocket
I disagree: strong and weak typing are fairly well defined terms. I'm going
off the definition given in "Programming Language Pragmatics," which states "a
language is strongly typed if it never allows an operation to be applied to an
object that does not support it; a language is said to be statically typed if
it enforces strong typing at compile time."
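
By that definition the distinction shows up at runtime in a dynamic language.
A quick Python illustration (Python, like Julia, rejects unsupported
operations rather than silently reinterpreting the value):

```python
# Applying an operation to an object that does not support it:
# a strongly typed dynamic language raises an error instead of
# coercing or reinterpreting the operands.
try:
    "abc" + 3          # str + int is not defined in Python
    outcome = "allowed"
except TypeError:
    outcome = "rejected"

print(outcome)  # rejected
```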

~~~
jedharris
Even if you are correct that they are well defined, by that definition Lisp,
Ruby, Python, JavaScript and Julia are strongly typed. C, Forth and assembly
language are not. So the distinction isn't useful in this discussion.

~~~
openasocket
JavaScript is probably one of the most weakly typed languages out there. {}+{},
[]+[], {}+[] are all completely valid operations in javascript that cause all
kinds of implicit coercions. And assembly language is very strongly typed: it
just has a very simple type system.

As for Julia, I'm not sure how strongly or weakly typed it is, but I would
probably put it at around the same level as Java.
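
The coercions in question are easy to demonstrate. Note that a bare `{}` at
the start of a statement parses as an empty block, so the object literals
below are parenthesized to force expression parsing:

```javascript
// Array + Array: both arrays coerce to strings, and "" + "" === ""
const a = [] + [];

// Array + Object: "" + "[object Object]"
const b = [] + {};

// Object + Object: both coerce via toString()
const c = ({}) + ({});      // "[object Object][object Object]"

console.log(JSON.stringify([a, b, c]));
```

None of these throw; each silently produces a string, which is the sense in
which the typing is "weak".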

------
blt
Julia seems like an almost perfect MATLAB replacement. (For my personal
preference, I would like a slightly more static/rigid language, but I
understand why that's not the right choice for Julia's target users.) There is
just one problem... I really, really wish they had dropped the 1-based
indexing and <= upper bound on index ranges. It is so annoying.

~~~
jfaucett
"I really, really wish they had dropped the 1-based indexing and <= upper
bound on index ranges."

really? There are quite a few things I don't like about julia, but using
1-indexing and "end" makes implementing algorithms much clearer IMHO.

The main pain points for me in julia are the module/pkg system and that the
runtime is not just batteries included but more like 10 generators included,
i.e. it could be way more minimal. But I get that the goal is to have a
powerful scientific computing language and not to build a multipurpose
language that emphasizes modular construction of code units and
production-ready package and build management utilities.

All in all, when judged by how well julia achieves its self-stated goals, I
think it is excellent.

~~~
halflings
This is kind of bike-shedding (since that's such a small part of the
language), but I also think there are arguments for using 0-indexing with open
upper bounds.

Guido explains his choice best:

[https://plus.google.com/115212051037621986145/posts/YTUxbXYZ...](https://plus.google.com/115212051037621986145/posts/YTUxbXYZyfi)

~~~
rbehrends
Well, Julia is a language designed for mathematics, where indices starting at
1 are common (vectors, matrices). You can argue that polynomials have exponents
starting at zero, but then you quickly get to Laurent polynomials, and what
you really should be arguing is that the lower bound should be configurable,
rather than being set at one specific value (which, of course, is still
possible with custom types).

Second, I don't find Guido's argument convincing. Yes, half-open ranges can be
mathematically more elegant (and that's actually Dijkstra's argument), but
that doesn't mean that the code necessarily becomes more readable. For
example, to construct an array without the element at index i, you'd do the
following with Python-style indexing:

    
    
      a[0:i] + a[i+1:n]
    

and the following with closed intervals and indexing starting at 1:

    
    
      a[1:i-1] + a[i+1:n]
    

While there is an element of subjectivity to it, I at least find the latter
option more readable (possibly because of habituation to mathematical
notation).

While the notation for the specific example of i:i+k-1 might be less elegant
with closed ranges, closed ranges are something that you find in every math
textbook, because sums, products, unions, intersections from a to b (and other
operators in that style) operate on closed ranges normally. Closed ranges are
the norm in conventional mathematical notation and it makes sense to pick the
option that minimizes the overhead when transcribing between mathematical
texts and code.
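
The two conventions can be compared side by side. Here is a sketch in Python
(whose slices are 0-based and half-open), with the 1-based closed-interval
version simulated through a small helper introduced purely for illustration:

```python
a = [10, 20, 30, 40, 50]
n = len(a)
i = 2  # 0-based index of the element to drop (the value 30)

# Python-style: 0-based, half-open ranges, a[0:i] + a[i+1:n]
without_i = a[0:i] + a[i+1:n]

def closed_slice(v, lo, hi):
    """v[lo:hi] with 1-based, inclusive bounds (Julia/MATLAB style),
    simulated by shifting the lower bound back into a Python slice."""
    return v[lo-1:hi]

# Closed-interval, 1-based style: a[1:i-1] + a[i+1:n]
i1 = i + 1  # the same element, as a 1-based index
without_i_closed = closed_slice(a, 1, i1-1) + closed_slice(a, i1+1, n)

print(without_i)         # [10, 20, 40, 50]
print(without_i_closed)  # [10, 20, 40, 50]
```

Both forms drop the same element; the disagreement is purely about which
formula reads more naturally.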

~~~
blt
Here are some tasks that are ugly with [1:n] indexing:

    - the 1D index of element (i,j) in a matrix is i+(j-1)*m instead of i+j*m

    - the i'th 3-element subvector of a vector is v[3*(i-1)+1:3*i]
      instead of v[3*i:3*(i+1)]

    - if you have a vector of indices that partitions a vector into chunks,
      the i'th chunk is v[ind[i]:ind[i+1]-1] instead of v[ind[i]:ind[i+1]]

Perhaps small issues, but these are all real examples from my most recent
Matlab project that were annoying.

But maybe, like the static typing issue, my opinion on this topic is distorted
because I spent a lot of time programming in C++ and comparatively little time
reading math papers.

Or maybe it would be equally easy to make a list of tasks that are ugly with
[0:n) indexing.

~~~
jasode
_> I really, really wish they had dropped the 1-based indexing

>, my opinion on this topic is distorted because I spent a lot of time
programming in C++_

Mathematics-related programming[1] in MATLAB, R Language, Mathematica, SAS,
etc all use 1-based indexing. Given that the originators of Julia are MATLAB
users, it makes sense that they made a deliberate choice to keep 1-based
indexing.

In other words, it was more important to grab mindshare from those previous
math tools rather than appeal to C/C++/Java/etc programmers.

One outlier in the landscape of numerical programming is Python+NumPy/SciPy in
the sense that it uses 0-based indices. While Julia also wants to be
attractive to Python programmers, it still seems like the bigger motivation
was programmers of MATLAB and other math software.

[1][https://www.youtube.com/watch?v=02U9AJMEWx0&feature=youtu.be...](https://www.youtube.com/watch?v=02U9AJMEWx0&feature=youtu.be&t=1m57s)

~~~
Fomite
This, pretty much. Not to mention that, beyond languages, _data_ is often
1-based indexed. I have never gotten a patient data set with ID=0 as the first
entry. In my mind, compatibility with what users are expecting, and trying not
to induce indexing errors, trumps most other concerns.

~~~
jasode
_> , beyond languages, data is often 1-based indexed._

That's a good point. Probably the most widespread _data_ example for non-
programmers is spreadsheets (MS Excel, Google Sheets). The first row[1] in the
spreadsheet is labeled as "1" instead of "0". The idiomatic Visual Basic
programming code to loop through the rows would look something like:

    For Each cell In Range("a1:a25")  ' not "a0:a24"
        ' do work
    Next cell

[1][https://www.google.com/search?q=microsoft+excel+spreadsheet+...](https://www.google.com/search?q=microsoft+excel+spreadsheet+a1&source=lnms&tbm=isch)

------
pjmlp
Julia is really following Lisp and Dylan's footsteps!

Congratulations to all involved in pushing forward the state of the art of
dynamic languages.

~~~
jakub_h
"following Lisp and Dylan's footsteps"

I hope not, especially regarding popularity.

~~~
pjmlp
I was thinking in terms of language design and tooling. :)

Agree with you.

------
vmorgulis
> Julia Computing carried out this work under contract from the Johns Hopkins
> Applied Physics Laboratory (JHU APL) for the Federal Aviation Administration
> (FAA) to support its TCAS (Traffic Collision Avoidance System) program.

It's great! Looks like a very interesting contract.

~~~
chappi42
In case you are interested, here are slides
([http://juliacon.org/2015/images/juliacon2015_moss_v3.pdf](http://juliacon.org/2015/images/juliacon2015_moss_v3.pdf))
and a presentation
([https://www.youtube.com/watch?v=19zm1Fn0S9M](https://www.youtube.com/watch?v=19zm1Fn0S9M))
about TCAS (and its successor, ACAS X).

------
illumen
In the literature this is called gradual typing, or optional typing.

Python has adopted it over its last couple of versions.

------
tempodox
I come from AOT languages and have lost my patience by now. Julia is
overrated.

