
Julia v1.3 - threatofrain
https://github.com/JuliaLang/julia/blob/v1.3.0/NEWS.md
======
falkaer
By far the most interesting part of this release is the new multi-threading
features, I recommend reading
[https://julialang.org/blog/2019/07/multithreading](https://julialang.org/blog/2019/07/multithreading)
for an overview, and watching
[https://www.youtube.com/watch?v=YdiZa0Y3F3c](https://www.youtube.com/watch?v=YdiZa0Y3F3c)
for a talk about some unique stuff that's being worked on with Julia's multi-
threading (though the depth-first scheduling is not in 1.3 afaik).
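
To give a flavor of the headline feature, 1.3 adds composable task parallelism via the new `Threads.@spawn`. A minimal sketch (mine, not from the release notes):

```julia
using Base.Threads

# Julia 1.3's Threads.@spawn schedules a task on any available thread
# and returns a Task whose result can be fetched later. This sums a
# vector in two halves, one half on another thread.
function twosum(xs)
    mid = length(xs) ÷ 2
    left = Threads.@spawn sum(view(xs, 1:mid))   # runs on some thread
    right = sum(view(xs, mid+1:length(xs)))      # runs on this one
    return fetch(left) + right
end
```

The result is the same regardless of how many threads Julia was started with (`JULIA_NUM_THREADS`); with one thread the spawned task simply runs cooperatively.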

------
ashton314
I find Julia to be a beautiful little language. Applause to the team for this
1.3 release!

In one of my CS classes we wrote a LISP using Julia. Our professor wrote the
parser for us, so we just had to focus on the actual interpreter bit. The
pattern matching/multi-dispatch mechanisms were really nice. We didn't get to
play _too_ much with the parallelism mechanisms, but I liked what I did see.

I feel like Julia is a much richer language than, say, Python, and it would be
a nice drop-in replacement for numerical computing tasks: it's faster, in some
ways more ergonomic, and afaik can call out to Python libraries if needed. Has
anyone here switched from Python to Julia for scientific/ML/AI purposes?
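
(For reference, the usual route for the "call out to Python" part is the third-party PyCall.jl package. A minimal sketch, assuming PyCall and NumPy are installed:)

```julia
using PyCall            # third-party package, assumed installed

np = pyimport("numpy")  # import a Python module from Julia
a = np.arange(6)        # NumPy arrays are converted to Julia arrays
println(sum(a))         # and can be used directly from Julia code
```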

~~~
dmos62
I played around with Julia a bit last year. One thing that stuck with me was
the non-negligible start-up speed. If you want tight feedback loops you have
to figure out a way to reuse a Julia instance (like Jupyter).

On that train of thought, how interoperable is preexisting numpy/pandas code
with Julia?

~~~
socialdemocrat
You can ;-) The secret sauce in Julia is the Revise.jl package. It makes
your life so much better: it watches your source files for changes while you
are at the REPL, so there's no need to restart everything.
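
A minimal sketch of that workflow (the file name is hypothetical, and Revise.jl must be installed):

```julia
using Revise             # load Revise before your own code

includet("analysis.jl")  # "include and track": watches the file for edits
run_model()              # a function defined in analysis.jl

# ...edit run_model() in your editor and save...

run_model()              # the new definition is picked up, no restart
```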

~~~
jjoonathan
How reliable is Revise.jl compared to ipython autoreload? My experience with
ipython autoreload is... very poor, even though some people swear by it, so I
have trust issues with this type of thing.

~~~
montalbano
I use it extensively. In my experience it is very reliable. Only once or
twice in two years of using it has something strange happened with JIT
compilation of functions, and all I had to do to fix it was restart the Julia
REPL and load things in again.

------
playing_colours
I keep my eye on Julia, but I still haven't had a chance to dig into it. I
would be interested to learn more about its place and capabilities:

\- I want to build a new (yet another) distributed data processing framework,
say a Hadoop or Spark Next, generic or for a particular industry, and I do not
want to use Java or C++.

Can Julia be a feasible choice for distributed computation like Spark, for a
distributed file system like HDFS, or for a resource allocator like YARN? Can
it be a good choice for a database engine, or is its place in the layers above
the engine, say a layer to support aggregation and computations?

Or is it not the right choice compared with Rust, C++, or Java for core
systems, and is it better to stick with it just for computations on top of
those core systems?

~~~
socialdemocrat
I don't know the domain you describe, but you have e.g. JuliaDB which has
quite good performance from what I read.

[https://juliadb.org](https://juliadb.org)

JuliaDB is about storing and retrieving Julia data end to end, although it
also works with CSV files etc. as far as I know.

The benefit of using Julia over something like Java or C++ is that user-
defined functions can easily be added, and they are JIT-compiled for maximum
performance.

The main consideration with respect to using Julia is about whether your
domain has problems with JIT compilation or not. Anything that is started and
shut down frequently and only runs for a short time will not work well with a
JIT based system like Julia.

Also, domains where you need fine control over latency, such as computer games
or real-time systems, may not be suitable for Julia. There I suspect Rust or
Swift would be better.

Other than that, Julia is good for almost anything. It has great support for
parallelism and concurrency, and it crunches numbers really fast.

While C++ and Rust may beat Julia in performance I think you should be able to
outperform Java, because Julia is designed much more with performance in mind
than Java. E.g. you have better control over memory layout, cache misses etc
in Julia than in Java.
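
As one concrete illustration of that memory-layout control (a sketch of mine, not from the comment above): an immutable struct of plain bits is stored inline, so an array of them is one contiguous block with no per-element pointers.

```julia
# An immutable struct whose fields are plain bits is an "isbits" type:
# arrays of it store the elements inline and contiguously, with no
# per-element heap allocation or pointer chasing.
struct Point
    x::Float64
    y::Float64
end

pts = [Point(i, 2i) for i in 1.0:3.0]   # one contiguous block of memory
```

Here `isbitstype(Point)` is true and `sizeof(Point)` is 16 bytes (two packed Float64s), which is what gives the cache behavior Java's boxed objects can't match.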

~~~
pjmlp
It was really a bad decision to design Java without value types, given the
plethora of GC-enabled systems languages that had already existed since the
mid-70s.

However that will be eventually fixed, and then we won't be able to bash Java
any longer for lack of value types.

As far as GC languages are concerned, it would be more interesting to compare
Julia's generated code against the .NET Core languages, D (especially LDC),
and Nim.

~~~
BubRoss
Java has been around for multiple decades, but value types are just around
the corner? That reminds me of static compilation, which was also "coming
soon" for 20 years.

~~~
pjmlp
AOT compilation has been an option for Java shops willing to buy third-party
commercial JVMs since around 2000.

You can play around with value types experimental releases already.

[https://jdk.java.net/valhalla/](https://jdk.java.net/valhalla/)

Adopting value types, while keeping 20+ year old jars working without any kind
of changes is an engineering feat.

I bet that Java will get value types before Go gets its generics, if ever.

------
short_sells_poo
I'm rooting for Julia. Currently we are using python and rust, but rust is not
an easy language to do exploratory analysis in, so it's relegated to handling
only the stable parts of our analytics libraries.

It looks like Julia integrates quite seamlessly with python, so I'm hoping
that we can start using it to easily speed up exploratory research code
without having to spend a lot of time.

------
cshenton
Julia feels like a scripting language but every function is effectively a
template. I prototype lots of high perf stuff in it because it gives you
control of memory layout and what gets evaluated at compile time, so it'll get
you within spitting distance of an optimised C++ implementation but with 10x
better iteration speed. Thoroughly recommend it to any Comp Sci / ML
researchers.
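
The "every function is a template" point shows up with even a tiny generic function, which Julia specializes per concrete argument type at first call:

```julia
# One generic definition; Julia compiles a specialized method body for
# each concrete argument type it is called with (Int, Float64, Matrix...),
# much like implicit instantiation of a C++ template.
square(x) = x * x
```

`square(3)`, `square(2.5)`, and `square([1 2; 3 4])` each trigger their own specialization; `@code_typed square(3)` shows the Int-specialized code.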

Only thing I'd wish for is better control of memory layout for mutable
structs, but I know that's unlikely since that's one thing that the Julia
folks want to keep abstracted from the user.

------
sgillen
>> Zero-dimensional arrays are now consistently preserved in the return values
of mathematical functions that operate on the array(s) as a whole (and are not
explicitly broadcasted across their elements). Previously, the functions +, -,
*, /, conj, real and imag returned the unwrapped element when operating over
zero-dimensional arrays.

That's great news, it drives me crazy when languages or libraries get that
wrong.
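
Concretely, the new behavior looks like this:

```julia
# Julia 1.3: mathematical functions on zero-dimensional arrays preserve
# the wrapper instead of returning the bare element.
x = fill(2.0)   # a zero-dimensional Array{Float64,0} holding 2.0
y = -x          # in 1.3 this is again a 0-d array, not the number -2.0
```

`y isa Array{Float64,0}` is now true, and `y[]` extracts the element -2.0; before 1.3, `-x` returned the unwrapped -2.0 directly.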

------
tempodox
I was still hoping that someday support for standalone binary executables
would be added. Maybe next century.

~~~
stilley2
Does this not do what you want?
[https://github.com/JuliaLang/PackageCompiler.jl/blob/master/...](https://github.com/JuliaLang/PackageCompiler.jl/blob/master/README.md)

~~~
eigenspace
Note that PackageCompiler.jl's author no longer wants to work on it and there
are still many rough edges. However, there are several exciting successor
projects making quick progress right now!

------
oconnor663
Does the Rust Rayon library have the issue the speaker was describing, where
parallel inner loops mostly get executed serially once an outer loop is
parallelized? (I assume it does, from its Cilk heritage?) Are there
workarounds for when that is a problem?

Edit: Dumb mistake on my part. I was looking at the video linked in the top
comment
([https://www.youtube.com/watch?v=YdiZa0Y3F3c](https://www.youtube.com/watch?v=YdiZa0Y3F3c))
and then I came back to this thread and thought it was the topic.

~~~
GolDDranks
Which speaker are you talking about? The linked post is the Julia 1.3 release
notes, which don't mention Rayon at all.

~~~
oconnor663
Oops, dumb mistake on my part. I was referring to the video linked in the top
comment:
[https://www.youtube.com/watch?v=YdiZa0Y3F3c](https://www.youtube.com/watch?v=YdiZa0Y3F3c)

------
hokkos
Does Julia plan to have a WASM backend ?

~~~
ChrisRackauckas
Yes, you can try it out:

[https://keno.github.io/julia-wasm/website/repl.htm](https://keno.github.io/julia-wasm/website/repl.htm)

Here's the repo:

[https://github.com/Keno/julia-wasm](https://github.com/Keno/julia-wasm)

It's still a work in progress to get the full package manager on there, but
there's funding from Mozilla to get it done. I'm excited to see how that turns
out.

------
igouy
???

Upcoming release: v1.3.0-rc5 (Nov 17, 2019)

We're currently testing release candidates for Julia v1.3.0

[https://julialang.org/downloads/](https://julialang.org/downloads/)

~~~
oxinabox
Hacker News is just super on the ball; it out-paced the actual updating of
the website.

Still, when I built Julia this morning from the 1.3 branch, I can confirm the
-rc5 label was gone.

------
The_rationalist
In other words, mostly catching up features present in mainstream languages.

~~~
StefanKarpinski
The only other dynamic language with this kind of multithreading capability
is, wait for it... Raku (aka Perl 6). And my impression is that Raku doesn’t
get very high performance with its multithreading, which is mostly designed to
improve I/O throughput (and because it’s a cool feature and Raku has never met
a feature it doesn’t like).

Even among static languages, it’s a relatively rare capability: Go has
excellent support (of course), C extensions like Cilk and TBB support it, Rust
has something similar with tokio, and I’ve heard people claim that Haskell can
do something similar with appropriate extensions.

~~~
lizmat
> And my impression is that Raku doesn’t get very high performance with its
> multithreading

Could you elaborate on how you got that impression? Do you have any benchmarks
that the Raku core developers could look at?

~~~
StefanKarpinski
I recall seeing various benchmarks of Raku that were on the slow side—slower
than Perl 5 / Python speed, which are already what I'd describe as "slow
languages". I can't currently find what specifically I read, but
[https://www.reddit.com/r/perl6/comments/btvkoz/is_perl6_stil...](https://www.reddit.com/r/perl6/comments/btvkoz/is_perl6_still_slow/)
seems to have some discussion (and I think you're on that thread, so I'm sure
you're aware). This doesn't specifically address threading performance, but if
the runtime is 100-500x slower than C, it's not like threading can make up
that kind of performance deficit.

~~~
lizmat
The Raku runtime is still improving, and several object creation benchmarks
now outrun Perl. A low bar, some might argue. But still.

Threading _can_ make a difference on non-IO-bound tasks if you're interested
in wallclock rather than CPU time: for instance, `say (1..Inf).grep( {
.is-prime } )[9999]` (showing the 10000th prime number) runs in 22 seconds on
my machine, but the threaded version, `say (1..Inf).hyper.grep( *.is-prime
)[9999]`, runs in 8 seconds. That's just by adding the `.hyper` method to the
chain!

~~~
StefanKarpinski
Here's a comparable thing in Julia:

    julia> using Primes, Lazy

    julia> @time drop(9999, filter(isprime, Lazy.range(1)))[1]
      0.119811 seconds (1.25 M allocations: 23.502 MiB)
    104729

The Lazy package doesn't support using threads yet since 1.3 was just
released, so doing the threaded comparison isn't simple at the moment.
However, this illustrates what I was getting at: the sequential Julia code is
185x faster than the sequential Raku code and 67x faster than the threaded
Raku code. It's great that Raku threading gets some scaling here (not sure how
many cores you have, so it's unclear if 2.75x scaling is good or not, but it's
not nothing). But it's considerably easier to scale if there's already a lot
of performance on the table. The faster each operation is, the harder it is to
make a threading implementation that has low enough overhead to make threads
worthwhile. From the performance perspective, any effort spent on threading in
Raku would be better spent on sequential speed until that has been maxed out.

I should also note that this is not how one would actually find the 10000th
prime efficiently. For that you'd use the `nextprime` function also provided
by the Primes package, like so:

    julia> @time nextprime(1, 10000)
      0.010456 seconds (17.00 k allocations: 265.562 KiB)
    104729

That's another 10x faster than the lazy sequence approach. Which mostly tells
me that the lazy code is impressively efficient—I would have expected a
dedicated function to have more of an edge. Lest anyone cry foul about using C
or whatever, this function is implemented fairly straightforwardly in Julia:

[https://github.com/JuliaMath/Primes.jl/blob/ce0c1e388e1fd375...](https://github.com/JuliaMath/Primes.jl/blob/ce0c1e388e1fd375ee3385596ae00a531cc916e7/src/Primes.jl#L589-L632).
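
For the threaded side, a hand-rolled sketch with 1.3's threading (using my own naive primality test, not Primes.jl's) would look something like:

```julia
using Base.Threads

# Count primes up to a bound in parallel. atomic_add! avoids a data race
# on the shared counter; with an O(sqrt(n)) trial-division test, the work
# per iteration is large enough for threading to pay off.
isprime_naive(n) = n >= 2 && all(n % d != 0 for d in 2:isqrt(n))

function count_primes(limit)
    total = Atomic{Int}(0)
    @threads for k in 2:limit
        isprime_naive(k) && atomic_add!(total, 1)
    end
    return total[]
end
```

`count_primes(104729)` returns 10000, consistent with 104729 being the 10000th prime as computed above.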

~~~
lizmat
Thank you for these goals :-)

Just for the record: yes, there are faster ways of finding the 10000th prime,
but I often use this as an example of a CPU-intensive task that can easily be
spread over multiple threads.

Also, please note that all the features in the example are built into Raku:
no external module loading needed.

