Matlab–Python–Julia Cheatsheet (quantecon.org)
265 points by tomrod on June 9, 2019 | 93 comments

The second example in the first section is a bit misleading. They say that to create a column matrix, i.e. an (n, 1) matrix, the syntax is

    [1 2 3]'
but that will give you the (lazy) Hermitian adjoint of the row matrix, not a true column matrix.

    julia> [1 2 3]' isa Matrix
    false
Instead, if you really need a true (n, 1) matrix you should 1) not use adjoint because that will also take complex conjugates (unless that's what you wanted) and 2) use collect to turn the lazy transpose (or adjoint) into a true matrix

    julia> collect(transpose([1 2 3]))
    3×1 Array{Int64,2}:
     1
     2
     3
However, you almost never need to do this sort of thing and you should be fine using either transpose on a row vector or just using a real column vector.

Also, near the end they talk about closures but don't actually show any closures unless one is to assume the code snippets they're showing are actually inside a function body themselves.

Good catch, that's an odd mistake. I would assume the author knows the adjoint isn't equal to the transpose unless the matrix is real.

If you only ever work over the reals then technically that'd be fine (though notationally awkward). But that's not a good habit to keep, since it would introduce pretty insidious bugs if you ever take the adjoint of a complex matrix intending to take the transpose.

But in mathematical derivations, when does a transpose which is not an adjoint ever show up? In many derivations we write transpose knowing that it would be the adjoint if complex numbers are used. That's at least true for most of applied mathematics, statistics, physics, etc.

My point at least was just that it’s a bad idea to use adjoint to construct a column matrix since in that case you likely didn’t intend to take an adjoint, you just wanted a certain shape.

There’s no math or physics equation here with objects transforming under an adjoint representation, it’s just a constructor.

Yes, one may keep that in mind when working something out on scratch paper. But when it's possible for things to be complex, the standard notation is something like A^*. In my opinion, it is rather sloppy to write A^T and expect the reader to substitute for the complex case. I can't remember where at the moment, but I have definitely seen derivations where the authors explicitly wanted A^T on a complex matrix.

You have to escape your star A^* .
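
Since the cheatsheet covers Python alongside Julia, here's a small NumPy sketch of the same pitfall (my own illustrative example, not from the thread): `.T` is the plain transpose, while the adjoint also conjugates the entries.

```python
import numpy as np

# A 1x2 complex row matrix: transpose and adjoint are no longer the same.
A = np.array([[1 + 2j, 3j]])

T = A.T          # plain transpose: entries unchanged, shape (2, 1)
H = A.conj().T   # adjoint (conjugate transpose): entries conjugated

print(T[0, 0], H[0, 0])  # (1+2j) (1-2j)
```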

That python "closure" is a big fat lie too due to python's late binding. Depending on the context in which that code is being defined it may or may not work as expected.

The only guaranteed way to ensure that a free variable in python function scope (it's not a closure) does not change is to pass it explicitly as a default value.

    a = 1
    def function(x, a=a):
        return x + a
That will work in loops. Otherwise what you are really writing is

    a = 1
    def function(x):
        return x + the_last_value_a_obtains_in_this_scope

    a = 2
    assert function(1) == 3
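
A minimal sketch of the loop pitfall that the default-value idiom guards against (my own example, not from the cheatsheet):

```python
# Late binding in a loop: each lambda closes over the variable i itself,
# so all of them see i's final value once the loop finishes.
fs = [lambda: i for i in range(3)]
print([f() for f in fs])  # [2, 2, 2]

# The default-argument idiom evaluates i at definition time instead.
gs = [lambda i=i: i for i in range(3)]
print([g() for g in gs])  # [0, 1, 2]
```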

> That python "closure" is a big fat lie too due to python's late binding.

What? It's not a big fat lie, it's the exact truth. The same truth you'd get in any language, even Scheme:

  > (define a 1)
  > (define (f) a)
  > (f)
  1
  > (define a 2)
  > (f)
  2
That's just how closures work...

Fair enough. I think my complaint is more that closures in Python aren't accompanied by a safety net the way they are in some other languages. That doesn't make them not closures, it just makes them a less useful abstraction (I've basically given up on using anything other than full objects and list comprehensions in Python due to the countless subtle and often silent irregularities in how a form behaves when used in different contexts).

Your example is true at the top level in Scheme, but the semantics at the top level (REPL) are different (at least in Racket [0, 1]). In a source file you have to explicitly use set! to mutate a, so it is harder to shoot yourself in the foot. In theory the implementation of closures is the same, but in Python's case you are free to mutate yourself into unexpected situations. I'm actually not even sure it is possible to construct a situation in Scheme (outside the top level) similar to the one in Python.

0. https://gist.github.com/samth/3083053 1. https://groups.google.com/d/msg/racket-users/0BLHm18YUkc/BwQ...

In all brutal honesty, I think you just need to realize this is you being upset at having shot yourself in the foot at some point (don't worry, we've all done it) and then trying to blame it on closures, in this case by saying this makes them a "less useful abstraction". Honestly, no, it just doesn't. It makes them more useful.

If I can make an analogy, it's a bit like saying bicycles should only have fixed gears as a "safety net" because you've broken or slipped your variable gears in the past and have now learned to take a car or train if you want to go faster than first gear. That's definitely one way to live your life, but most people don't do it that way: they realize their mistakes are a natural artifact of the learning process, and instead of abolishing gears they keep practicing until they have the muscle memory to use their vehicle properly and don't have to think about this problem every time. That's the way to solve the problem for good -- so you can avoid the downsides while reaping the benefits at the same time.

It's not categorically how closures work, it's just that in python (and most other languages, including julia), you bind to effectively "a pointer to the value". In FP paradigm, you typically bind to "the value at function creation time".

In many cases, the "FP paradigm" is very sensible and makes closures safe to work with and easy to reason about, especially in contexts where concurrently running processes could alter the memory underneath you.

    iex(1)> a = 1
    1
    iex(2)> f = fn x -> x + a end
    #Function<7.91303403/1 in :erl_eval.expr/5>
    iex(3)> f.(1)
    2
    iex(4)> a = 2
    2
    iex(5)> f.(1)
    2

Elixir doesn't have mutable variables though. Is there any language that has mutable variables and closes over the value of a variable and not a "pointer to the value"?

The same is true with Julia:

    julia> a = 1
    1

    julia> function foo(x) x + a end
    foo (generic function with 1 method)

    julia> foo(2)
    3

    julia> a = 2
    2

    julia> foo(2)
    4
So, looks like Julia would be an easier transition for a lot of academic scientists. What am I missing? I mostly use R and Python. Can anyone tell me briefly why I should use Julia over Python?

I just wrote my first bit of Julia this weekend. I'm impressed - it's clearly been designed for technical/numeric computing with modern language features baked-in. The type system and macros are some significantly distinguishing characteristics as compared to Python. The Unitful.jl package illustrates how these can be used to powerful effect. Multiple dispatch is another such characteristic. Having benefited from static type systems in other software projects, I would never again want to write significant technical code without a first-class type system supporting declarative type constraints.

If I could never write Matlab again and instead use Julia, I'd be delighted.

Well written pure julia programs tend to be about as fast as C, or in other words about 100 times faster than Python or R.

Before that gets you too excited or sceptical, let me say that end users will not really see many significant speed boosts. When I say end users, I mean people who are just loading packages and writing scripts where they plug variables into functions from the package. The reason these end users won't see much difference is that well made Python or R packages (including things like Numpy) are actually Python and R wrappers around C, C++, Fortran or Julia code. However, if you are trying to do something that your packages weren't directly designed to do and writing non-trivial code, then you're going to start seeing the speed advantages of Julia.

With all that in mind, I'd say the main reason for end users to use Julia is not the speed, it's the expressiveness and the top notch libraries.

Expressiveness is kind of a vague concept, but what I'm basically saying is that, through things like multiple dispatch and macros, Julia is able to provide ways of writing programs in almost any domain that feel incredibly natural, and various programs will compose with each other in ways you won't see in any language except Common Lisp.

Finally, because of Julia's native speed and expressiveness, it doesn't get in package developers' way like Python and R do, and so even though Julia has a much smaller community than Python or R, we have a package ecosystem that's quite comparable and in certain specific areas flat out superior. As adoption grows, the Julia package ecosystem will accelerate and Julia will make sense for more end users.

For now, whether or not Julia makes sense for you really depends on what field you're working in (i.e. whether there are already good packages for the tasks you're interested in) and how advanced a programmer you are or want to be (i.e. whether you'll be trying to do something that's not already covered by existing packages).

> However, if you are trying to do something that your packages weren't directly designed to do and writing non-trivial code, then you're going to start seeing the speed advantages of Julia.

Or you use Numba. :-)

But yeah, the speed can be pretty bad compared to native code if you can't compile everything to native with Numba. To put some concrete numbers here: the one time I had some complicated but heavily-optimized Python and C++ scicomp codebases computing exactly the same thing, I saw [1] running times like the following depending on the algorithm:

1. 10.6s with NumPy, 7.5s with NumPy + Numba, and 0.6s in direct C++ (1.4x and 12.5x respectively)

2. 68s with NumPy, 6.5s with NumPy + Numba, 0.5s in direct C++ (10x and 13x respectively)

The first algorithm was simpler and more vectorizable, while the second algorithm was more complicated and less vectorizable but with a lower time complexity. In either case, the algorithm still had to do a fair bit of work in Python, so it wasn't running native code 100% of the time (which I think is fairly realistic).

In either case, you can see the difference between C++ and Python was around 13x when using Numba... not quite as bad as 100x, but I imagine Julia probably does better.

[1] Actually, originally I saw only a 5x improvement with Python 2.7 on my older laptop, but I re-ran these on my current laptop under Python 3.7 and the difference is 13x now.

> Or you use Numba. :-)

My gripe with Numba is that you're still giving up on a huge suite of Python's language features. One could say "oh well, eventually with enough work Numba will make all of Python as fast as Julia", but I don't think that's true. Python has specific semantics core to its design that make all sorts of optimizations impossible. Julia made a lot of core design decisions to make sure those optimizations were possible. You could invent a new language that's Python-like but doesn't have the same limitations, but then you might as well just use Julia.

Now, if all you ever interact with is floating point numbers then Numba may be enough for you, unless you're in an application like differential equation solving where the context switch of going back and forth between interpreted Python code and compiled Numba code will kill your performance.

But Julia's compiler will work on custom types just as well as builtin types (in fact, Julia's builtin number types are written in pure julia and could have been implemented in a package with the same performance).

> In either case, you can see the difference between C++ and Python was around 13x when using Numba... not quite as bad as 100x, but I imagine Julia probably does better.

When I said the 100x, that's mostly a statement about Python's looping performance, discounting Numpy vectorization, i.e. comparing a Python for loop to a C or Julia for loop (arguably a strawman, I know). Numpy is just calling C routines after all, so for the most part it can be really fast, other than the fact that individual Numpy calls don't know about each other, which precludes some optimizations.
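
To make that last point concrete, here's a hedged NumPy sketch (the arrays and sizes are made up): each vectorized call is an opaque C loop, so a compound expression allocates intermediates that a fusing compiler could eliminate.

```python
import numpy as np

a, b, c = (np.ones(1_000_000) for _ in range(3))

# `a * b + c` runs two separate C loops and allocates a temporary
# for a * b; NumPy can't fuse them, because each call is independent.
r1 = a * b + c

# Hand-fusing into a preallocated buffer avoids the extra temporary,
# roughly what a compiler that sees both ops together can do automatically.
r2 = np.multiply(a, b)
np.add(r2, c, out=r2)

print(np.allclose(r1, r2))  # True
```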

But yeah, claims about performance differences between languages are a really technical and highly context-contingent debate, which is why I wish people wouldn't lean so much on Julia's speed as its main selling point. It will mostly just lead to disappointment or confusion when someone comes over to Julia and finds out that matrix multiplication is the same speed as in their old language (or slower, since we default to bundling OpenBLAS instead of MKL). Hell, for people just writing simple scripts and restarting Julia often, especially for plotting, Julia is probably significantly slower than Python, since a non-trivial amount of time will be spent compiling functions before the first time they're run.

Julia's incredibly expressive type system, macros, and the crazy composability of Julia code are much more impactful benefits than its runtime performance, but these are much more vague, squishy, qualitative concepts than quantitative ones like runtime speed.

Julia's compiled code might be super fast, but developing (or, God forbid, doing any sort of analysis) is painful because of how brutally slow the REPL/interactive environment is. Pretty much every little snippet of code you'll want to test as you write Julia feels like it takes _forever_ to run. I don't know if there's a solution for this while retaining the compiled run-time performance. I'm new to Julia (from R and Python), but I find the slowness/sluggishness of REPL to be nearly a deal breaker for me. It feels like the web back in the 1990s when you'd click a button and wait, and then click another button (or link) and wait, etc.

Are you using the Revise.jl package? Without that package, it can be painful in many cases.

If you want the flexibility to do something truly bizarre with confidence, you really ought to use Julia, and don't look back.

A while back I was prototyping, let's just say, unusual binary datatype representations for numbers. All I had to do was reimplement a handful of operations (+, -, *, /, one, zero) and I got everything from Fourier transforms to matrix solving for free. Comparing numerical performance with standard IEEE representations was then easy, and I had confidence that my comparisons were legit, since I was literally calling the same function against both numerical types.

More recently I wanted to play around with Galois fields, following Mary Wootters' impressive work, and was able to test some ideas very quickly (and trivially deploy on a supercomputer cluster) in a few lines of code using Julia.

That's not a typical use case, but it's a thing. I am thinking about playing around with complex numbers in deep learning (when I get some free time, which increasingly seems like 'never'), and for similar reasons Julia will be an obvious choice.

I should probably post evidence; this was a live demo I did of the floating point stuff at Stanford (demo begins ~53 minutes in)


You should use Julia over Python because its JIT compilation makes it faster, as long as you aren't constantly writing code and running it a single time, and as long as you are able to keep your REPL open for days at a time. It takes several minutes to load some popular plotting packages in Julia (because of the JIT), so it will take you some time to adjust your workflow to leaving your REPL open all the time.

But the Juno IDE is just as good as Jupyter notebooks or RStudio, and there are probably high quality libraries for whatever you’re doing, unless you work in one of those niche areas.

Julia JITs to native LLVM code and is really fast for a lot of use cases. For things where the JIT warming up takes less time than the full job, it can be better than Python. It was also developed with numerical computation in mind, so it was designed for that performance wise and has so many brilliant people working on the language and library. You can use macros for awesome DSLs and run on the GPU and in parallel a lot easier than Python in some cases. It has a first class package manager, great REPL and doesn't need an installer.

I didn’t find the REPL that great, because it takes forever to jit anything. My laptop is not that old and creating an array with 5 elements takes over a second. If I type a syntax error, it takes tens of seconds to produce an error. This yields a very frustrating experience and doesn’t lend itself to an effective prototyping environment. For now at least I’ll be sticking with matlab.

There are indeed frustrating lags due to JIT, but they have got better lately & are being worked on -- if I understand right this is now one of the priorities, after focusing on getting the breaking changes done before 1.0.

Here's how long a vector of 5 random numbers takes, after a cold start, on 1.1:

    $ julia -e '@time rand(5)'
    0.054343 seconds (121.03 k allocations: 6.219 MiB)

Yeah, this has been my experience as well. I really _want_ to like Julia. But so far the JIT experience has been exceptionally miserable--and unfortunately for me, my typical approach to development is quite interactive. I'm on 1.1.1.

That sounds really bizarre. What version of which OS are you using? I've heard that older versions of Windows (I think 7) have problems with the REPL that cause extreme latency.

I don't think the latency you're experiencing is normal.

I was running Debian 9, with Julia from julialang.org (not the Debian repo). This was six months ago so maybe things have improved since then.

Hm, if you do give it another try and experience that again, please open an issue on GitHub so it can be fixed:


"Why Does Julia Work So Well?

There is an obvious reason to choose Julia:

    it's faster than other scripting languages, allowing 
    you to have the rapid development of Python/MATLAB/R 
    while producing code that is as fast as C/Fortran"

yet the startup time remains slower...

Also, if you have any need to generate plots and graphs: RIP Julia. I was excited to try Julia, since it seemed to integrate some of the best features of each of MATLAB, R, and Python. I was truly disappointed to discover that Julia would take minutes to render the exact same plots I was generating in Octave almost instantly.

There's PackageCompiler [0], which allows you to precompile the packages you need into a custom system image. This means avoiding recompilation in the REPL every time you start up Julia (provided you start with that system image).

However, it's a community effort and is somewhat non-trivial to get up and running with. Once Julia gets better precompilation/binary packaging support, common workflows like plotting will improve dramatically.

[0]: https://github.com/JuliaLang/PackageCompiler.jl

To be fair, that typically only happens the first time you make your graph; replots are typically lightning fast. But this was a major frustration for me. I haven't had occasion to use it more recently, but I understand things may have got a bit better for graphing in julialand in the last few months?

I thought it had got better, but also I've just adjusted to work around it. Here are some timings today, Julia 1.1, cold start to first plot:

    $ julia -e '@time (using GR; plot(rand(20)))'
      3.931433 seconds (10.38 M allocations: 521.292 MiB, 6.44% gc time)
    $ julia -e '@time (using Plots; plot(rand(20)))'
     19.498644 seconds (57.26 M allocations: 2.844 GiB, 8.07% gc time)
Running with less compilation:

    $ julia --compile=min -e '@time (using GR; plot(rand(20)))'
      0.375836 seconds (368.83 k allocations: 20.190 MiB, 1.65% gc time)
    $ julia --compile=min -e '@time (using Plots; plot(rand(20)))'
      4.302867 seconds (6.41 M allocations: 371.485 MiB, 5.07% gc time)
But, as you say, it's much much quicker once started, like 1-5ms per plot.

4.3 seconds seems great. I remember when it felt like a minute or two.

I had almost the opposite experience lately. I started doing some data analysis in Python/NumPy/Matplotlib because I figured it was mostly plotting and would be quicker in Python. It was excruciatingly slow, partly due to wanting hidpi plots on my Mac. After switching over to Julia, the new Plots package has been complete enough to handle most use cases and only takes <5s after warmup, compared to matplotlib's 30s+ in many cases. A nice benefit is that Julia types shortened up my data cleaning code significantly too. My favorite Julia plots combo has been GR in Jupyter notebooks.

This is not really an issue, as long as you ignore the Julia plotting capabilities, which should never have been there anyway. You can easily dump your numbers (and functions) to a text file and gnuplot them.

Uhh no. I do a lot of data analysis and if I have to dump stuff into text files every time I want to visualize something quickly then I'm going to go mental.

Simple plots take a fraction of a second in Python/R/Matlab. I feel like many people don't realize how crucial this is. Sub-second plotting makes working with data interactive. If it takes more than 5 seconds to produce simple plots, that's no longer interactive. Imagine if your debugger took half a minute to show you the value of a variable while trying to find a complex bug. You'd start pulling your hair out.

If in Julia it takes me half a minute at least (dumping to a text file, reading it in somewhere else, and then plotting it), Julia is going to remain firmly in the "check this language again in 2 years to see if the plotting story has become sensible yet" category.

It would be great if it were quicker. But right now, interactive use is pretty good, like 5ms for a simple plot. It's calling Julia from the command line, and thus starting cold, that takes more than 5s.

That is just a really terrible way to do things and I have no idea why you would excuse it like that. Typical workflow if you use the Plots.jl package is to wait half a minute for the package to load in the REPL and then stay in the same REPL for your workflow so that you won't have to reload the package.

What I do is wait one second for the RCall.jl package to load and then use R's ggplot2 library to plot. Works really well, especially since I am really familiar with ggplot2.

I do not use the REPL; I call Julia scripts from elsewhere, and then I want to recover the data out of Julia, and text files are perfectly appropriate (a few thousand numbers). Julia's plotting capabilities are suboptimal compared to specialized plotting software like gnuplot. I would really prefer if Julia did not have any plotting stuff.

That's not typical workflow for the majority of data scientists. And saying you prefer Julia not have any plotting stuff sounds really really dumb to be frank.

Agreed. As a data scientist myself, I can't imagine Julia getting much "mindshare" among us with the JIT experience it has. Perhaps we're not the real target audience for Julia? But if that's the case then adoption will likely be slow, and limited to only very niche applications and roles. For Julia to really become the next big thing (and solve the damn two language problem), it needs to be an effective solution for data scientists and machine learning engineers--and right now, it just isn't.

I do machine learning and computer vision in python, statistical analysis, plotting, and anything to do with dataframes in R, and computational stuff, network science, and almost everything else in julia. I would like to switch my data analysis stuff to julia but waiting for libraries and functions to load is just too frustrating when I'm doing things interactively. I'm hoping Julia will have a good machine learning, computer vision, and data science environment in the future and it is looking like it will. But for now, it is not an easy environment to work with in these applications and you'd need some fairly specific needs to justifiably use Julia here. But the thing is that when you do have relatively esoteric things to do in these applications, it is much easier to do them in Julia.

Not a data scientist, sorry, just a mathematician

Also pretty much into the unix philosophy whereby tools should do only one thing and do it well.

Startup feels instant to me. Perhaps you are talking about precompilation. In that case, it’s insignificant for most computationally intensive applications.

I don't know the difference between startup and precompilation, and I do not really care, but if I launch a julia script from the command line it is unbearably slow for no apparent reason. Octave, on the other hand, launches instantly and starts making computations.

This is understandable, because julia is not intended to be used that way. You are supposed to "live" inside the repl. However, I prefer tools that are flexible enough that can be used comfortably in non-intended ways.

That’s compilation time. Currently yeah, it’s not fantastic, but I believe that’s being actively worked on.

I've moved all of my development from Python to Julia - The syntax is similar, but I think Julia's is more expressive and powerful once you start looking into the details. Macros, broadcasting (automatic vectorization), lambda functions, string interpolation, 1-based indexing (which weirded me out at first but I found to be more natural for most things I work on). It's been an almost wholly positive transition.

No need for a second language, you can write everything in Julia, while enjoying a mix of productivity and code execution performance.

Plus it has quite a few Lisp inspired features, like multi-methods and powerful macros.

The whole development experience is so much better than Python.


Not having a packaging system that is a stapled-on afterthought is so nice.

Any video or article about it?

The final MATLAB example of "Inplace modification" is not correct.

  function f(out, x)
      out = x.^2;
  end

  x = rand(10);
  y = zeros(length(x), 1);
  f(y, x)

What happens here when you call f(y, x) is:

1. The arrays x and y are passed to f (no memory copying done yet, since MATLAB uses copy on write)

2. When we have `out = x.^2`, this will allocate a new array in memory and store in it the result of `x.^2`, and will call this `out`. The original `out` which was passed into the function can now be garbage collected (although it won't be, because it's still in the parent scope as 'y')

3. When the function exits, the new `out` goes out of scope and can be garbage collected.

So all this example does is allocate a new array, assign x.^2 to it, and then throw that result away again. There's no in-place modification.

You can't really pass a matrix by reference in MATLAB like you can in Python, however in some cases you can write your function in a specific way which will ensure in place operations are done and prevent a huge matrix copy. For example (pauses are just there so that you can watch task manager memory usage):

  function x = inPlace()
      x = rand(100000000, 1);
      pause;
      x = doSomething(x);
      pause;
  end

  function y = doSomething(y)
      y = y.^2;
  end
Conditions required are:

* Input and output variables must have same name in caller

* Input and output variables must have same name in callee

* Callee must be inside a function, not a script

Look here for a more detailed description: https://blogs.mathworks.com/loren/2007/03/22/in-place-operat...
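
For contrast with the MATLAB behaviour described above, here's a sketch of genuine pass-by-reference in-place modification on the Python/NumPy side (the function name `f` mirrors the cheatsheet example, but the code is my own):

```python
import numpy as np

def f(out, x):
    # Writing through the out= argument mutates the caller's array.
    # A plain `out = x ** 2` would only rebind the local name,
    # exactly like the MATLAB example above.
    np.square(x, out=out)

x = np.arange(4.0)
y = np.zeros_like(x)
f(y, x)
print(y)  # [0. 1. 4. 9.]
```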

I hate these 2x2 (or nxn) matrix examples. You never know what's considered a column and what's a row.

Julia takes after Matlab, where matrices are defined by enumerating the rows, separated by semicolons. Personally I prefer Mathematica's syntax, but the Julia/Matlab syntax is still way better than Numpy syntax (though to be fair most of that is due to the lack of native matrix support in Python).

If it helps at all, this is exactly what you'll see when you create a matrix in Julia:

    julia> mat = [1 2; 3 4]
    2×2 Array{Int64,2}:
     1  2
     3  4

FYI you can also do this

    julia> mat = [1 2
                  3 4]

Oh that's neat. Thanks, I wasn't aware of that.

I definitely have this problem with Numpy: it just sort of vomits the whole matrix at you and never really makes it clear which one's which. I haven't had the problem with Julia though: it will actually format the output, and the APIs make it clear when you're dealing with rows or columns, and which one.

Rows, then columns. Lowest index level is column. Follows Pandas, more or less.
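
The same rows-then-columns convention, sketched in NumPy for comparison (an illustrative example, not from the cheatsheet):

```python
import numpy as np

m = np.array([[1, 2],
              [3, 4]])   # outer lists are rows

print(m[0])      # first row:    [1 2]
print(m[:, 0])   # first column: [1 3]
print(m.shape)   # (2, 2)
```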

Great to see Julia gaining more exposure. I worry that it will languish as an obscure research language without strong corporate champions.

Juno is developed by Uber.

Any source for that? The GitHub repo of the Juno project has 4 people, and none of them work for Uber.

I think that’s some misconception coming from the fact that the package in the Atom repository is called Uber-Juno. I don’t think the Juno devs have anything to do with Uber.

Ahh, yes, that is a plausible explanation. It probably doesn't help that not a lot of people know that "über" is a German word indicating a higher position in a hierarchy.

Über just means "above", so "über dir" is "above you". Urban Dictionary has "uber" in English meaning "superior", but that is not the original German meaning.

Last week I converted a simple side project from Python to Julia. It's a sequential Bayesian estimation problem. A pleasant surprise: the Julia version runs 30x faster than the Python counterpart. I am sure the Python code can be improved using Numba and various tricks, but the Julia version simply works with minimum effort.

I also feel the language really is built for people doing numerical computing. I smiled when I found out that randn(ComplexF64) gives you circularly-symmetric complex normal random variables. It feels like Julia understands what I want.

Is the randn(ComplexF64) used in your Bayesian estimation project? I'd be curious to know how, if so!

Yes but not directly. It is used in the Monte Carlo simulation part. In communication systems everything is complex :D
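
For readers on the Python side, a hedged sketch of the equivalent construction (NumPy's randn has no complex-dtype shortcut, so you build it by hand; the scaling matches Julia's randn(ComplexF64) as I understand it, with total variance E|z|^2 = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Real and imaginary parts are independent N(0, 1/2), giving a
# circularly-symmetric complex normal with unit total variance.
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

print(z.var())  # close to 1.0
```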

The feature that sets Julia apart from any other language is that it is a dynamically typed, optionally interactive language that is statically analyzed to compile to efficient machine code.

Are there analogs to Simulink or some of MATLAB's toolboxes in Julia? I know of a few shops that use Octave because developer pricing for MATLAB isn't enough when we have an alternative, but there's still a need for a roving license or two because there are a few things lacking in the ecosystem.

TBH, Simulink is the gold standard; I don't know of any alternative. With that in mind, the other toolboxes are easily replaceable with Julia packages. For example, for the Optimization Toolbox there are various options, like NLsolve (solving non-linear systems), JuMP, Optim, and NLopt (general optimization). I know that the DifferentialEquations.jl package is the state of the art in ODE solvers, and Images.jl provides a good alternative to the Image Processing Toolbox. From what I understand, to put it in the simplest way, Simulink is a giant differential equation solver, so you can implement the logic in Julia, but the UX of Simulink is the best right now.

For Simulink, the closest you'll get is probably OMJulia [1], which is a set of Julia bindings to OpenModelica, as opposed to a standalone library.

[1] https://github.com/OpenModelica/OMJulia.jl

For simulation there is also a library that reimplements the Modelica language in Julia using macros:


Julia is missing from the Ubuntu 18.04 repos for some reason. 16.04 is stuck at v0.4.5.

Yeah, the Julia community for better or worse seems to have an attitude of “don’t get Julia from a package manager, download a binary from our website or build it from source.” No doubt this has to do with the fact that releases happen quickly and it’s a giant pain in the ass to go and push binaries to all these package managers.

This is because LLVM has bugs (like all software) and Julia carries around patched versions of LLVM. Those fixes are being upstreamed over time, but it takes time. Julia is a very hardcore test suite for numerical libraries, so it happens to find things that only show up when numerical stability is pushed to the edge, and Julia Base will have some tests fail if you don't build with the patched LLVM. That said... a lot of users might not notice a difference. But if you want what is known to be the most correct, then you should use the patched one. However, Linux package managers require that you use their LLVM, which in some cases is the wrong version, and in all cases is not the patched one.

I didn’t know that the package managers require you to use Linux’s LLVM. That’s a shame.

Do all the major package managers do this or is it just apt? I have a relatively new version of Julia from pacman on Arch, I wonder if it has a patched LLVM or not...

> I didn’t know that the package managers require you to use Linux’s LLVM. That’s a shame.

They don't require you to use “Linux's LLVM” (which would not make any sense given that Linux and LLVM are two independent projects), they make you use the version of LLVM they are currently packaging.

> Do all the major package managers do this or is it just apt?

All the package managers will conceptually do that; however, how old the LLVM in question is will vary strongly depending on the packaging policy of your distribution.

And Rust does the same, as they also need their own LLVM version.

We don't need it, we just prefer it. You can build with stock LLVM if you want. (And that's how distros treat Rust as well; they use their own version instead of ours.)

What about the bugs that required a patched LLVM?

You get the bugs. No way around that. We try to upstream as many of our patches as possible, but we’ll always be a bit farther ahead. It’s just the nature of things.

It is not only a Julia problem. It is a problem for many packages, mostly because of the slowness of Debian, from which Ubuntu takes its packages. Distributions like Arch, Void or Fedora are much better in that sense.

how did numpy manage to override `@` in python ?

ps: ohh this was introduced in python 3.5 https://stackoverflow.com/questions/27385633/what-is-the-sym...

pps: https://pastebin.com/s751DRDi
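To spell it out: numpy didn't have to do anything exotic. PEP 465 (Python 3.5) added `@` as a binary operator that dispatches to `__matmul__`, so any class can opt in. A minimal sketch with a toy wrapper class (names here are illustrative, not from numpy):

```python
import numpy as np

class Scaled:
    """Toy wrapper showing how `@` dispatches to __matmul__."""
    def __init__(self, a):
        self.a = np.asarray(a)

    def __matmul__(self, other):
        # Delegate to numpy's own matrix multiplication.
        return self.a @ np.asarray(other)

A = Scaled([[1, 2], [3, 4]])
result = A @ [[1], [1]]  # calls A.__matmul__, i.e. plain matmul
print(result)            # column vector [[3], [7]]
```

numpy's ndarray simply implements `__matmul__` (and `__rmatmul__` / `__imatmul__` for the right-hand and in-place variants), which is why `a @ b` works on arrays out of the box.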

Are there good Tensorflow and Pytorch alternatives in the works written in Julia?

Yes, there are already a couple.

The most well known is a 100% Julia neural network library called Flux.jl [1], which aims to achieve what Swift for Tensorflow wants as well (to make the entire language fully differentiable) through Zygote.jl [2], and even without it has great integration with the ecosystem, for example with the differential equations library through DiffEqFlux.jl [3]. Plus the source code is very high level (while being high performance, including easy GPU support), so you can easily see what each component does and implement any extension directly in your code without worrying about performance.

There is also another feature-complete native library that allows some very concise code, Knet.jl [4], and the Tensorflow bindings [5].

[1] https://github.com/FluxML/Flux.jl

[2] https://github.com/FluxML/Zygote.jl

[3] https://julialang.org/blog/2019/01/fluxdiffeq

[4] https://github.com/denizyuret/Knet.jl

[5] https://github.com/malmaud/TensorFlow.jl

Flux (https://fluxml.ai/) is one native Julia ML package. For a more detailed discussion of the use of Julia in the space see https://julialang.org/blog/2018/12/ml-language-compiler

Others have mentioned Flux.jl which aims to be idiomatic/native, but there are also MXNet bindings: https://github.com/apache/incubator-mxnet/tree/master/julia

Neural differential equations are something that's worked out in Julia. Neural ordinary differential equations (ODEs), stochastic differential equations (SDEs), and delay differential equations (DDEs) can be found in this blog post (https://julialang.org/blog/2019/01/fluxdiffeq). Neural jump diffusions (jump SDEs) and neural partial differential equations (PDEs) are described here: http://www.stochasticlifestyle.com/. All of this is built on the Flux.jl machine learning library which is extremely flexible.

I like it.
