
Matlab–Python–Julia Cheatsheet - tomrod
https://cheatsheets.quantecon.org/
======
eigenspace
The second example in the first section is a bit misleading. They say that to
create a column matrix, i.e. an (n, 1) matrix, the syntax is

    
    
        [1 2 3]'
    

but that will give you the (lazy) Hermitian adjoint of the row vector, not
a column matrix.

    
    
        julia> [1 2 3]' isa Matrix
        false
    

Instead, if you really need a true (n, 1) matrix, you should 1) not use adjoint,
because that will also take complex conjugates (unless that's what you want),
and 2) use collect to turn the lazy transpose (or adjoint) into a true matrix:

    
    
        julia> collect(transpose([1 2 3]))
        3×1 Array{Int64,2}:
         1
         2
         3
    

However, you almost never need to do this sort of thing; you should be fine
using either the lazy transpose of a row vector or just a genuine column vector.
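
Incidentally, the Python column of the cheatsheet has a similar wrinkle: NumPy's 1-D arrays are neither rows nor columns, so a true (n, 1) matrix also takes an explicit step. A sketch (not from the cheatsheet itself):

```python
import numpy as np

v = np.array([1, 2, 3])   # shape (3,): a flat vector, neither row nor column
col = v.reshape(-1, 1)    # a true (3, 1) column matrix
row = v.reshape(1, -1)    # a true (1, 3) row matrix

# Transposing a 1-D array is a no-op, much like Julia's lazy-transpose surprise:
print(v.T.shape)   # (3,)
print(col.shape)   # (3, 1)
```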

Also, near the end they talk about closures but don't actually show any
closures unless one is to assume the code snippets they're showing are
actually inside a function body themselves.

~~~
hyperion2010
That python "closure" is a big fat lie too, due to python's late binding.
Depending on the context in which that code is defined, it may or may not
work as expected.

The only guaranteed way to ensure that a free variable in a python function
scope (it's not a closure) does not change is to pass it explicitly as a
default value:

    
    
        a = 1
        def function(x, a=a):
            return x + a
    

That will work in loops. Otherwise what you are really writing is

    
    
        a = 1
        def function(x):
            return x + the_last_value_a_obtains_in_this_scope
    
        a = 2
        assert function(1) == 3
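
The difference shows up most clearly when closures are created in a loop; a minimal sketch of both forms:

```python
# Late binding: every lambda closes over the variable i, not its value,
# so all three see i's final value once the loop is done.
fns = [lambda x: x + i for i in range(3)]
print([f(1) for f in fns])        # [3, 3, 3]

# The default-argument trick evaluates i once, at definition time.
fns_fixed = [lambda x, i=i: x + i for i in range(3)]
print([f(1) for f in fns_fixed])  # [1, 2, 3]
```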

~~~
mehrdadn
> That python "closure" is a big fat lie too due to python's late binding.

What? It's not a big fat lie, it's the exact truth. The same truth you'd get
in any language, even Scheme:

    
    
      > (define a 1)
      > (define (f) a)
      > (f)
      1
      > (define a 2)
      > (f)
      2
    

That's just how closures work...

~~~
hyperion2010
Fair enough. I think my complaint is more that closures in python are not
accompanied by a safety net the way they are in some other languages. That
doesn't make them not closures, it just makes them a less useful abstraction.
(I've basically given up on using anything other than full objects and list
comprehensions in python due to the countless subtle and often silent
irregularities in how a form behaves when used in different contexts.)

Your example is true at the top level in scheme, but the semantics at the top
level (REPL) are different (at least in Racket [0, 1]). In a source file you
have to explicitly use set! to mutate a, so it is harder to shoot yourself in
the foot. In theory the implementation of closures is the same, but in python's
case you are free to mutate yourself into unexpected situations. I'm actually
not even sure it is possible to construct a situation in scheme (outside the
top level) similar to the one in python.

0\.
[https://gist.github.com/samth/3083053](https://gist.github.com/samth/3083053)
1\.
[https://groups.google.com/d/msg/racket-users/0BLHm18YUkc/BwQdBlnmAgAJ](https://groups.google.com/d/msg/racket-users/0BLHm18YUkc/BwQdBlnmAgAJ)

~~~
mehrdadn
In all brutal honesty, I think you just need to realize this is you being
upset at having shot yourself in the foot at some point (don't worry, we've
all done it) and then trying to blame it on closures, in this case by saying
this makes them a "less useful abstraction". Honestly, no, it doesn't; it
makes them more useful. To make an analogy, it's a bit like saying bicycles
should only have fixed gears as a "safety net" because you've broken or
slipped your variable gears in the past, and now you've learned to take a car
or train whenever you want to go faster than first gear. That's definitely
_one_ way to live your life, but most people don't do it that way: they
realize their mistakes are a natural artifact of the learning process and,
instead of abolishing gears, keep practicing until they have the muscle memory
to use their vehicle properly and don't have to think about the problem every
time. That's the way to solve the problem for good -- you avoid the downsides
while reaping the benefits at the same time.

------
msaharia
So, looks like Julia would be an easier transition for a lot of academic
scientists. What am I missing? I mostly use R and Python. Can anyone tell me
briefly why I should use Julia over Python?

~~~
adamnemecek
The whole development experience is so much better than Python.

~~~
FridgeSeal
Right?!

Not having a packaging system that is a stapled-on afterthought is so nice.

------
bwindsor
The final MATLAB example of "Inplace modification" is not correct.

    
    
      function f(out, x)
           out = x.^2
      end
      x = rand(10)
      y = zeros(length(x), 1)
      f(y, x)
    
    

What happens here when you call f(y, x) is:

1\. The arrays x and y are passed to f (no memory copying is done yet, since
MATLAB uses copy-on-write).

2\. The line `out = x.^2` allocates a new array in memory, stores in it the
result of `x.^2`, and calls it `out`. The original `out` that was passed into
the function could now be garbage collected (although it won't be, because
it's still in the parent scope as `y`).

3\. When the function exits, the new `out` goes out of scope and can be
garbage collected.

So all this example does is allocate a new array, assign x.^2 to it, and then
throw that result away again. There's no in-place modification.

You can't really pass a matrix by reference in MATLAB like you can in Python;
however, in some cases you can write your function in a specific way that
ensures in-place operations are done and prevents a huge matrix copy. For
example (the pauses are just there so that you can watch task manager memory
usage):

    
    
      function x = inPlace()
          x = rand(100000000, 1);
          pause(3);
          x = doSomething(x);
          pause(3)
      end
    
      function y = doSomething(y)
          y = y.^2;
      end
    

Conditions required are:

* Input and output variables must have same name in caller

* Input and output variables must have same name in callee

* Callee must be inside a function, not a script

Look here for a more detailed description:
[https://blogs.mathworks.com/loren/2007/03/22/in-place-operations-on-data/](https://blogs.mathworks.com/loren/2007/03/22/in-place-operations-on-data/)
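
For contrast, here is how the Python/NumPy version actually behaves: rebinding the parameter name does nothing to the caller, while assigning through it mutates the caller's array. A sketch (the function names are mine):

```python
import numpy as np

def f_rebind(out, x):
    out = x ** 2        # rebinds the local name; caller's array is untouched

def f_inplace(out, x):
    out[:] = x ** 2     # writes through the reference; caller sees the change
                        # (np.square(x, out=out) would also avoid the temporary)

x = np.arange(3.0)      # [0., 1., 2.]
y = np.zeros(3)
f_rebind(y, x)          # y is still all zeros
f_inplace(y, x)         # y is now [0., 1., 4.]
```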

------
mbeex
I hate these 2x2 (or nxn) matrix examples. You never know what is considered
a column and what a row.

~~~
throwawaymath
Julia takes after Matlab, where matrices are written row by row, with rows
separated by semicolons. Personally I prefer Mathematica's syntax, but the
Julia/Matlab syntax is still way better than NumPy's (though to be fair, most
of that is due to the lack of native matrix support in Python).

If it helps at all, this is exactly what you'll see when you create a matrix
in Julia:

    
    
        julia> mat = [1 2; 3 4]
        2×2 Array{Int64,2}:
         1  2
         3  4
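
For comparison, NumPy has no literal matrix syntax; rows are written as nested lists (a sketch):

```python
import numpy as np

# One inner list per row, in the same row-major order as the Julia literal.
mat = np.array([[1, 2],
                [3, 4]])
print(mat.shape)  # (2, 2)
print(mat[0, 1])  # 2: row 0, column 1
```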

~~~
dnautics
FYI you can also do this

    
    
        julia> mat = [1 2
                      3 4]

~~~
throwawaymath
Oh that's neat. Thanks, I wasn't aware of that.

------
ForHackernews
Great to see Julia gaining more exposure. I worry that it will languish as an
obscure research language without strong corporate champions.

~~~
adamnemecek
Juno is developed by Uber.

~~~
longemen3000
any source for that? the github of the juno project have 4 people, and none of
them are working for Uber

~~~
eigenspace
I think that’s some misconception coming from the fact that the package in the
Atom repository is called Uber-Juno. I don’t think the Juno devs have anything
to do with Uber.

~~~
longemen3000
Ahh, yes, that is a plausible explanation. It adds to the confusion that not a
lot of people know that Uber is a German word, indicating high rank.
~~~
wallnuss
Über just means above, so "über dir" is "above you". Urban Dictionary has uber
in English meaning superior, but that is not the original German meaning.

------
rhymer
Last week I converted a simple side project from Python to Julia. It's a
sequential Bayesian estimation problem. A pleasant surprise: the Julia
version runs 30x faster than the Python counterpart. I am sure the Python
code could be improved using Numba and various tricks, but the Julia version
simply works with minimum effort.

I also feel the language really is built for people doing numerical computing.
I smiled when I found out that randn(ComplexF64) gives you circularly-symmetric
complex normal random variables. It feels like Julia understands what I want.
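
For reference, the NumPy equivalent has to be spelled out by hand; a sketch (the helper name crandn is mine), where the 1/sqrt(2) factor keeps the total variance at 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def crandn(n):
    # Circularly-symmetric complex normal CN(0, 1): independent real and
    # imaginary parts, each with variance 1/2.
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

z = crandn(100_000)
# Sample checks: mean ~ 0, E|z|^2 ~ 1, E[z^2] ~ 0 (circular symmetry)
```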

~~~
elcritch
Is the randn(ComplexF64) used in your Bayesian estimation project? I'd be
curious to hear how, if so!

~~~
rhymer
Yes but not directly. It is used in the Monte Carlo simulation part. In
communication systems everything is complex :D

------
aptroninnoida
The feature that sets Julia apart from any other language is that it is a
dynamically typed, optionally interactive language that is statically analyzed
to compile to efficient machine code.

------
holy_city
Are there analogs to Simulink or some of MATLAB's toolboxes in Julia? I know
of a few shops that use Octave because developer pricing for MATLAB isn't
enough when we have an alternative, but there's still a need for a roving
license or two because there are a few things lacking in the ecosystem.

~~~
dagw
For Simulink the closest you'll get is probably OMJulia [1], which is a set of
Julia bindings to OpenModelica as opposed to a standalone library.

[1]
[https://github.com/OpenModelica/OMJulia.jl](https://github.com/OpenModelica/OMJulia.jl)

~~~
ddragon
For simulation there is also a library that reimplements the Modelica language
in Julia using macros:

[https://github.com/ModiaSim/Modia.jl](https://github.com/ModiaSim/Modia.jl)

------
inamberclad
Julia is missing from the Ubuntu 18.04 repos for some reason, and 16.04 is
stuck at v0.4.5.

~~~
eigenspace
Yeah, for better or worse the Julia community seems to have an attitude of
“don’t get julia from a package manager; download a binary from our website or
build it from source.” No doubt this has to do with the fact that releases
happen quickly and it’s a giant pain in the ass to go and push binaries to all
these package managers.

~~~
ChrisRackauckas
This is because LLVM has bugs (like all software) and Julia carries around
patched versions of LLVM. Those fixes are being upstreamed over time, but it
takes time. Julia is a very hardcore test suite for numerical libraries, so it
happens to find things that only show up when numerical stability is pushed to
the edge, and Julia Base will have some tests fail if you don't build with the
patched LLVM. That said... a lot of users might not notice a difference. But
if you want what is known to be the most correct, then you should use the
patched one. However, Linux package managers require that you use their LLVM,
which in some cases is the wrong version, and in all cases is not the patched
one.

~~~
eigenspace
I didn’t know that the package managers require you to use the distro’s LLVM.
That’s a shame.

Do all the major package managers do this or is it just apt? I have a
relatively new version of Julia from pacman on Arch, I wonder if it has a
patched LLVM or not...

~~~
pjmlp
And rust does the same, as they also need their own LLVM version.

~~~
steveklabnik
We don't _need_ it, we just prefer it. You can build with stock LLVM if you
want. (And that's how distros treat Rust as well; they use their own version
instead of ours.)

~~~
pjmlp
What about the bugs that required a patched LLVM?

~~~
steveklabnik
You get the bugs. No way around that. We try to upstream as many of our
patches as possible, but we’ll always be a bit farther ahead. It’s just the
nature of things.

------
agumonkey
how did numpy manage to override `@` in python?

ps: ohh, this was introduced in python 3.5
[https://stackoverflow.com/questions/27385633/what-is-the-symbol-for-in-python](https://stackoverflow.com/questions/27385633/what-is-the-symbol-for-in-python)

pps: [https://pastebin.com/s751DRDi](https://pastebin.com/s751DRDi)
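
For reference, `@` was added in PEP 465 as a dedicated matrix-multiplication operator; it dispatches to the `__matmul__` method, so NumPy (or any class) simply implements that. A sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(a @ b)              # matrix product, same as a.dot(b)

# Any class can opt in by defining __matmul__:
class Scale:
    def __init__(self, k):
        self.k = k
    def __matmul__(self, vec):
        return [self.k * v for v in vec]

print(Scale(3) @ [1, 2])  # [3, 6]
```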

------
dzink
Are there good Tensorflow and Pytorch alternatives in the works, written in
Julia?

~~~
ddragon
Yes, there are already a couple.

The most well known is Flux.jl [1], a 100% Julia neural network library,
which through Zygote.jl [2] aims at the same goal as Swift for TensorFlow
(making the entire language fully differentiable). Even without that, it
already has great integration with the ecosystem, for example with the
differential equations library through DiffEqFlux.jl [3]. Plus the source
code is very high level (while being high performance, including easy GPU
support), so you can easily see what each component does and implement any
extension directly in your own code without worrying about performance.

There is also another feature-complete native library that allows some very
concise code, Knet.jl [4], and there are TensorFlow bindings [5].

[1] [https://github.com/FluxML/Flux.jl](https://github.com/FluxML/Flux.jl)

[2] [https://github.com/FluxML/Zygote.jl](https://github.com/FluxML/Zygote.jl)

[3]
[https://julialang.org/blog/2019/01/fluxdiffeq](https://julialang.org/blog/2019/01/fluxdiffeq)

[4]
[https://github.com/denizyuret/Knet.jl](https://github.com/denizyuret/Knet.jl)

[5]
[https://github.com/malmaud/TensorFlow.jl](https://github.com/malmaud/TensorFlow.jl)

------
timmit
I like it.

