julia> [1 2 3]'                     # a 3×1 LinearAlgebra.Adjoint wrapper, not a Matrix
julia> [1 2 3]' isa Matrix          # false
julia> collect(transpose([1 2 3]))  # a 3×1 Array{Int64,2}, i.e. an actual Matrix
Also, near the end they talk about closures but don't actually show any closures, unless one assumes the code snippets they're showing are themselves inside a function body.
If you only ever work with real numbers then technically that'd be fine (though notationally awkward). But that's not a good habit to keep, since it would introduce pretty insidious bugs if you ever take the adjoint of a complex matrix intending to take the transpose.
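To make the failure mode concrete, here's a quick sketch (my own example, not from the post) of where ' and transpose diverge once complex entries show up:

A = [1 + 2im  3im]
A'            # adjoint: transpose plus complex conjugation, entries become 1-2im and 0-3im
transpose(A)  # transpose only, entries stay 1+2im and 0+3im

For real matrices the two agree, which is exactly why the bug is so insidious.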
There’s no math or physics equation here with objects transforming under an adjoint representation; it’s just a constructor.
The only guaranteed way to ensure that a free variable in Python function scope (it's not a closure) does not change is to pass it explicitly as a default value:
a = 1
def function(x, a=a):  # the default freezes the value of a at definition time
    return x + a

versus the unguarded version, where the body effectively means "return x + the_last_value_a_obtains_in_this_scope":

a = 1
def function(x):
    return x + a
a = 2
assert function(1) == 3  # passes: a is looked up at call time
What? It's not a big fat lie, it's the exact truth. The same truth you'd get in any language, even Scheme:
> (define a 1)
> (define (f) a)
> (define a 2)
> (f)
2
Your example is true at the top level in Scheme, but the semantics at the top level (REPL) are different (at least in Racket [0, 1]). In a source file you have to explicitly use set! to mutate a, so it is harder to shoot yourself in the foot. In theory the implementation of closures is the same, but in Python's case you are free to mutate yourself into unexpected situations. I'm actually not even sure it is possible in Scheme (outside the top level) to construct a situation similar to the one in Python.
In many cases, the "FP paradigm" is very sensible and makes closures safe to work with and easy to reason about, especially in contexts where concurrently running processes could alter the memory underneath you.
iex(1)> a = 1
iex(2)> f = fn x -> x + a end
#Function<7.91303403/1 in :erl_eval.expr/5>
iex(3)> f.(1)
2
iex(4)> a = 2
iex(5)> f.(1)
2
julia> a = 1
1

julia> function foo(x) x + a end
foo (generic function with 1 method)

julia> a = 2
2

julia> foo(1)
3
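So Julia behaves like Python here: the global is looked up at call time. If you want the Elixir-style value capture, a let block gives it to you (a small sketch, my own example):

a = 1
foo = let a = a      # bind a local a to the current value of the global
    x -> x + a
end
a = 2
foo(1)               # still 2: the closure captured the local binding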
If I could never write Matlab again and instead use Julia, I'd be delighted.
Before that gets you too excited or sceptical, let me say that end users will not really see many significant speed boosts. When I say end users, I mean people who are just loading packages and writing scripts where they plug variables into functions from the package. The reason these end users won't see much difference is that well made Python or R packages (including things like Numpy) are actually Python and R wrappers around C, C++, Fortran or Julia code. However, if you are trying to do something your packages weren't directly designed to do and are writing non-trivial code, then you're going to start seeing the speed advantages of Julia.
With all that in mind, I'd say the main reason for end users to use Julia is not the speed, it's the expressiveness and the top-notch libraries.
Expressiveness is kind of a vague concept, but what I'm basically saying is that through things like multiple dispatch and macros, Julia is able to provide ways of writing programs in almost any domain that feel incredibly natural, and programs will compose with each other in ways you won't see in any language except Common Lisp.
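As a small, made-up illustration of the multiple dispatch part (these function names are mine, not from any package): one generic function picks a method based on the types of all its arguments, and anyone can extend it for their own types:

combine(a::Number, b::Number) = a + b
combine(a::String, b::String) = a * b                       # * concatenates strings in Julia
combine(a::AbstractVector, b::AbstractVector) = vcat(a, b)

combine(1, 2)          # 3
combine("foo", "bar")  # "foobar"
combine([1], [2, 3])   # [1, 2, 3]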
Finally, because of Julia's native speed and expressiveness, it doesn't get in package developers' way like Python and R do, and so even though Julia has a much smaller community than Python or R, we have a package ecosystem that's quite comparable and, in certain specific areas, flat out superior. As adoption grows, we're going to see the Julia package ecosystem accelerate, and Julia will make sense for more end users.
For now, whether or not Julia makes sense for you really depends on what field you're working in (i.e. whether there are good packages already for the tasks you're interested in) and how advanced a programmer you are / want to be (i.e. whether you're going to be trying to do something that's not already covered by existing packages).
Or you use Numba. :-)
But yeah, the speed can be pretty bad compared to native code if you can't compile everything to native with Numba. To put some concrete numbers here: the one time I had some complicated but heavily-optimized Python and C++ scicomp codebases computing exactly the same thing, I saw running times like the following, depending on the algorithm:
1. 10.6s with NumPy, 7.5s with NumPy + Numba, and 0.6s in direct C++ (Numba was 1.4x faster than NumPy, and C++ a further 12.5x faster than Numba)
2. 68s with NumPy, 6.5s with NumPy + Numba, and 0.5s in direct C++ (10x and 13x speedups, respectively)
The first algorithm was simpler and more vectorizable, while the second algorithm was more complicated and less vectorizable but with a lower time complexity. In either case, the algorithm still had to do a fair bit of work in Python, so it wasn't running native code 100% of the time (which I think is fairly realistic).
In either case, you can see the difference between C++ and Python was around 13x when using Numba... not quite as bad as 100x, but I imagine Julia probably does better.
Actually, originally I saw only a 5x improvement with Python 2.7 on my older laptop, but I re-ran these on my current laptop under Python 3.7 and the difference is 13x now.
My gripe with Numba is that you're still giving up on a huge suite of Python's language features. One could say "oh well, eventually with enough work Numba will make all of Python as fast as Julia", but I don't think that's true. Python has specific semantics core to its design that make all sorts of optimizations impossible. Julia made a lot of core design decisions to make sure that these optimizations were possible. You could invent a new language that's Python-like but doesn't have the same limitations, but then you might as well just use Julia.
Now, if all you ever interact with is floating point numbers, then Numba may be enough for you, unless you're in an application like differential equation solving where the context switch of going back and forth between interpreted Python code and compiled Numba code will kill your performance.
But Julia's compiler works on custom types just as well as builtin types (in fact, Julia's builtin number types are written in pure Julia and could have been implemented in a package with the same performance).
> In either case, you can see the difference between C++ and Python was around 13x when using Numba... not quite as bad as 100x, but I imagine Julia probably does better.
When I said 100x, that's mostly a statement about Python's looping performance, discounting Numpy vectorization, i.e. comparing a Python for loop to a C or Julia for loop (arguably a strawman, I know). Numpy is just calling C routines after all, so for the most part it can be really fast, other than the fact that individual Numpy calls don't know about each other, which precludes some optimizations.
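For contrast, Julia's dot-broadcasting fuses whole chains of elementwise operations into a single loop, so the individual operations do "know about each other" (a minimal sketch):

x = rand(1000)
y = similar(x)
y .= 2 .* x.^2 .+ 3 .* x .+ 1   # fuses into one pass over x, with no temporary arrays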
But yeah, claims about performance differences between languages are a really technical and highly context-contingent debate, which is why I wish people wouldn't lean so much on Julia's speed as its main selling point. It will mostly just lead to disappointment or confusion when someone comes over to Julia and finds out that matrix multiplication is the same speed in Julia as in their old language (or slower, since we default to bundling OpenBLAS instead of MKL). Hell, for people just writing simple scripts and restarting Julia often, especially for plotting, Julia is probably significantly slower than Python, since a non-trivial amount of time will be spent compiling functions before the first time they're run.
Julia's incredibly expressive type system, macros, and the crazy composability of Julia code are much more impactful benefits than its runtime performance, but these are much more vague, squishy, qualitative concepts than quantitative ones like runtime performance.
A while back I was prototyping, let's just say, unusual binary representations for numbers. All I had to do was reimplement a handful of operations (+, -, *, /, one, zero) and I got everything from Fourier transforms to matrix solving for free. Comparing numerical performance with standard IEEE representations was then easy, and I had confidence that my comparisons were legit, since I was literally calling the same function against both numerical types.
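As a rough sketch of what that workflow looks like (MyNum and its fixed-point encoding here are stand-ins I made up, not the actual representation being prototyped):

struct MyNum <: Number
    bits::UInt16                      # stand-in for the unusual binary representation
end

value(a::MyNum) = Float64(a.bits) / 256        # hypothetical decode
MyNum(x::Real) = MyNum(round(UInt16, x * 256)) # hypothetical encode

Base.:+(a::MyNum, b::MyNum) = MyNum(value(a) + value(b))
Base.:*(a::MyNum, b::MyNum) = MyNum(value(a) * value(b))
Base.zero(::Type{MyNum}) = MyNum(0.0)
Base.one(::Type{MyNum}) = MyNum(1.0)
# ... likewise for - and /

sum(MyNum.(rand(10)))   # generic code now compiles specialized methods for MyNum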
More recently I wanted to play around with Galois fields, following Mary Wootters' impressive work, and was able to test some ideas very quickly (and trivially deploy on a supercomputer cluster) in a few lines of code using Julia.
That's not a typical use case, but it's a thing. I am thinking about playing around with complex numbers in deep learning (when I get some free time, which increasingly seems like 'never'), and for similar reasons Julia will be an obvious choice.
But the Juno IDE is just as good as Jupyter notebooks or RStudio, and there are probably high-quality libraries for whatever you’re doing, unless you work in one of those niche areas.
Here's how long generating a vector of 5 random numbers takes, after a cold start, on 1.1:
$ julia -e '@time rand(5)'
0.054343 seconds (121.03 k allocations: 6.219 MiB)
I don't think the latency you're experiencing is normal.
There is an obvious reason to choose Julia:
"it's faster than other scripting languages, allowing you to have the rapid development of Python/MATLAB/R while producing code that is as fast as C/Fortran"
However, it's a community effort and is somewhat non-trivial to get up and running with. Once Julia gets better precompilation/binary packaging support, common workflows like plotting will improve dramatically.
$ julia -e '@time (using GR; plot(rand(20)))'
3.931433 seconds (10.38 M allocations: 521.292 MiB, 6.44% gc time)
$ julia -e '@time (using Plots; plot(rand(20)))'
19.498644 seconds (57.26 M allocations: 2.844 GiB, 8.07% gc time)
$ julia --compile=min -e '@time (using GR; plot(rand(20)))'
0.375836 seconds (368.83 k allocations: 20.190 MiB, 1.65% gc time)
$ julia --compile=min -e '@time (using Plots; plot(rand(20)))'
4.302867 seconds (6.41 M allocations: 371.485 MiB, 5.07% gc time)
Simple plots take a fraction of a second in Python/R/Matlab. I feel like many people don't realize how crucial this is. Sub-second plotting makes working with data interactive. If it takes more than 5 seconds to produce simple plots, that's no longer interactive. Imagine if your debugger took half a minute to show you the value of a variable while trying to find a complex bug. You'd start pulling your hair out.
If in Julia it takes me at least half a minute (dumping to a text file, reading it in somewhere else and then plotting it), Julia is going to remain firmly in the "check this language again in 2 years to see if the plotting story has become sensible yet" bucket.
What I do is wait one second for the RCall.jl package to load and then use R's ggplot2 library to plot. Works really well, especially since I am really familiar with ggplot2.
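That workflow looks roughly like this (a sketch; it assumes R plus ggplot2 are installed somewhere RCall.jl can find them):

using RCall
x = rand(100)
R"library(ggplot2)"
R"print(ggplot(data.frame(x = $x), aes(x)) + geom_histogram())"  # $x interpolates the Julia vector into R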
Also, I'm pretty much into the unix philosophy whereby tools should do only one thing and do it well.
This is understandable, because Julia is not intended to be used that way. You are supposed to "live" inside the REPL. However, I prefer tools that are flexible enough to be used comfortably in non-intended ways.
Plus it has quite a few Lisp-inspired features, like multi-methods and powerful macros.
Not having a packaging system that is a stapled-on afterthought is so nice.
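For anyone who hasn't seen it, Pkg ships with the language and has first-class per-project environments (a quick sketch; "MyProject" is just a placeholder name):

julia> using Pkg

julia> Pkg.activate("MyProject")   # per-project environment with Project.toml + Manifest.toml

julia> Pkg.add("Plots")            # or press ] at the REPL for the interactive pkg> mode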
function f(out, x)
    out = x.^2;
end

x = rand(10);
y = zeros(length(x), 1);
f(y, x);

Here's what happens when f(y, x) runs:
1. The arrays x and y are passed to f (no memory copying done yet, since MATLAB uses copy on write)
2. When we have `out = x.^2`, this will allocate a new array in memory and store in it the result of `x.^2`, and will call this `out`. The original `out` which was passed into the function can now be garbage collected (although it won't be, because it's still in the parent scope as 'y')
3. When the function exits, the new `out` goes out of scope and can be garbage collected.
So all this example does is allocate a new array, assign x.^2 to it, and then throw that result away again. There's no in-place modification.
You can't really pass a matrix by reference in MATLAB like you can in Python; however, in some cases you can write your function in a specific way that ensures in-place operations are done and prevents a huge matrix copy. For example (the pauses are just there so that you can watch memory usage in the task manager):
function x = inPlace()
    x = rand(100000000, 1);
    pause(5);
    x = doSomething(x);
    pause(5);
end

function y = doSomething(y)
    y = y.^2;
end
For MATLAB to actually perform the operation in place, a few conditions must hold:

* Input and output variables must have the same name in the caller
* Input and output variables must have the same name in the callee
* The call must be made from inside a function, not a script
Look here for a more detailed description: https://blogs.mathworks.com/loren/2007/03/22/in-place-operat...
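For comparison, Julia passes arrays by reference, so in-place mutation is explicit rather than dependent on compiler heuristics (a small sketch of my own; the ! is just a naming convention for mutating functions):

function square!(out, x)
    out .= x .^ 2    # writes into the existing array, no new allocation
    return out
end

x = rand(10)
y = zeros(length(x))
square!(y, x)        # y now holds x.^2, mutated in place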
If it helps at all, this is exactly what you'll see when you create a matrix in Julia:
julia> mat = [1 2; 3 4]
2×2 Array{Int64,2}:
 1  2
 3  4

julia> mat = [1 2
              3 4]   # the multi-line form gives the same 2×2 matrix
I also feel the language really is built for people doing numerical computing. I smiled when I found out that randn(ComplexF64) gives you circularly-symmetric complex normal random variables. It feels like Julia understands what I want.
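You can check the circular symmetry numerically (a quick sketch): the real and imaginary parts each come out with variance 1/2, so the total variance is 1:

using Statistics
z = randn(ComplexF64, 10^6)
var(real.(z)), var(imag.(z))   # both come out ≈ 0.5
mean(z)                        # ≈ 0.0 + 0.0im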
Do all the major package managers do this or is it just apt? I have a relatively new version of Julia from pacman on Arch, I wonder if it has a patched LLVM or not...
They don't require you to use “Linux's LLVM” (which would not make any sense, given that Linux and LLVM are two independent projects); they make you use the version of LLVM they are currently packaging.
> Do all the major package managers do this or is it just apt?
All the package managers will conceptually do that; however, how old that LLVM is will vary strongly depending on the packaging policy of your distribution.
PS: oh, this was introduced in Python 3.5: https://stackoverflow.com/questions/27385633/what-is-the-sym...
The most well known is a 100% Julia neural network library called Flux.jl, which, through Zygote.jl, aims at the same goal as Swift for TensorFlow (making the entire language fully differentiable), and which even without that already has great integration with the ecosystem, for example with the differential equations library through DiffEqFlux.jl. Plus the source code is very high level (while being high performance, including easy GPU support), so you can easily see what each component does and implement any extension directly in your own code without worrying about performance.
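As a taste of what that looks like (a sketch with made-up sizes and data; exact names like Flux.mse and Descent vary a bit between Flux versions):

using Flux

# a tiny multilayer perceptron; the layer sizes are arbitrary
model = Chain(Dense(10, 32, relu), Dense(32, 1))

x = rand(Float32, 10, 64)   # 64 samples, one per column
y = rand(Float32, 1, 64)

loss(x, y) = Flux.mse(model(x), y)

# one gradient-descent step over this single batch (gradients come from Zygote)
Flux.train!(loss, Flux.params(model), [(x, y)], Descent(0.01))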
There is also Knet.jl, another feature-complete native library that allows some very concise code, as well as the TensorFlow bindings.