
Writing Performant Code in Julia - wikunia
https://techytok.ml/code-optimisation-in-julia/
======
Bostonian
Julia resembles modern Fortran in some ways (no semicolons or curly
braces, array indices start at 1, built-in array operations), and several of
the tips in the article amount to writing Julia like Fortran. So maybe one
should just write the computational kernel in modern Fortran.

"the type of the returned value of a function must depend only on the type of
the input of the function and not on the peculiar value it is given." Fortran
mandates this.
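That constraint is what Julia people call type stability. A minimal sketch of the difference (hypothetical functions, not from the article):

```julia
# Type-unstable: the return type depends on the *value* of x
# (Int 0 for negatives, Float64 otherwise), so the compiler cannot
# infer a single concrete return type.
unstable(x) = x < 0 ? 0 : sqrt(x)

# Type-stable: the return type depends only on the *type* of x.
stable(x) = x < 0 ? zero(x) : sqrt(x)

unstable(4.0)   # 2.0 (Float64), but unstable(-1.0) returns 0 (Int)
stable(-1.0)    # 0.0 (Float64), same type as stable(4.0)
```

`@code_warntype unstable(1.0)` highlights the inference failure (a `Union{Int64, Float64}` return type).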

"Another common error is changing the type of a variable inside a function."
Static typing, as in Fortran, disallows this.
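In Julia that kind of type drift is legal but costs inference; a sketch (hypothetical functions) where the fix is just initializing the variable with the type the loop will produce:

```julia
# x starts as Int, then becomes Float64 after the first division:
# the compiler has to track both possibilities through the loop.
function drifting(n)
    x = 1           # x::Int
    for _ in 1:n
        x = x / 2   # x::Float64 from here on
    end
    return x
end

# One type throughout: straightforward to infer.
function steady(n)
    x = 1.0         # x::Float64 from the start
    for _ in 1:n
        x = x / 2
    end
    return x
end
```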

"When a function is in a critical inner loop, avoid keyword arguments and use
only positional arguments, this will lead to better optimisation and faster
execution." I doubt this will matter for a compiled language.
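For reference, the two signatures the article contrasts look like this (hypothetical example; whether there is still a measurable gap can be checked with BenchmarkTools' `@btime`):

```julia
# Keyword-argument version (the article says to avoid this in hot loops).
step_kw(x; dt=0.1) = x + dt

# Positional version (the article's recommendation for inner loops).
step_pos(x, dt) = x + dt

step_kw(2.0; dt=0.5)  # 2.5
step_pos(2.0, 0.5)    # 2.5
```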

"Avoid global scope variables." For a compiled language, it is unlikely to
matter.
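For what it's worth, Julia's problem is specifically with *untyped* globals; a sketch (names hypothetical):

```julia
# Untyped global: f cannot be specialized, because `coeff` may be
# rebound to a value of any type at any time.
coeff = 2.0
f(x) = coeff * x

# `const` global: the compiler may now assume the binding's type.
const COEFF = 2.0
g(x) = COEFF * x

# Best of all: pass the value in as an argument.
h(x, c) = c * x
```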

~~~
ddragon
It's not an accident, really: dynamic dispatch is slow but expressive, static
dispatch is fast but restrictive. Julia (as a language trying to solve the
two-language problem) is designed to be approachable to Fortran programmers,
so writing it like Fortran will work (and it will be just as fast, because
static is fast), and also to Python programmers (which will lead to slower but
highly dynamic code). Idiomatic Julia is actually a middle ground, though: you
always program at the highest level possible (you don't assume types, you
assume behaviors, since the language is fundamentally duck typed like Python).
The difference from CPython is that the Julia compiler will generate one
optimized static implementation for each combination of argument types
actually used, instead of one dynamic implementation (as long as the code is
not so dynamic that it can't be represented statically, which usually means
depending on runtime information).
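A concrete sketch of that specialization: one duck-typed definition, one compiled method body per concrete argument type that actually shows up (inspectable with `@code_typed` or `@code_llvm`):

```julia
# One generic definition that only assumes `+` behavior...
double(x) = x + x

# ...triggers a separate optimized compilation for each concrete
# argument type it is called with:
double(3)        # compiles and runs the Int64 specialization
double(3.0)      # compiles and runs the Float64 specialization
double([1, 2])   # Vector{Int64} specialization (elementwise +)
```

Comparing `@code_llvm double(3)` with `@code_llvm double(3.0)` shows two entirely different machine-level bodies behind the same source definition.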

About the third point: it really wouldn't matter in most compiled languages,
but here the Julia compiler is actually optimizing beyond that thanks to the
multiple dispatch paradigm (in some cases it will even replace the function
call directly with its result, if it can resolve it entirely from compile-time
information). Global variables, on the other hand, generally can't be
optimized by the compiler, since they can be modified at any point, so it
can't make any assumptions about them.

------
alpaca128
For me the main performance problem with Julia is how long compilation takes.
Yes, using it only in conjunction with Jupyter notebooks avoids the issue for
the most part, but writing large programs across multiple files that way is so
cumbersome (if even possible) that I won't even try.

A program that otherwise takes 4 seconds to compile and run takes 30 seconds
when I also let it export a diagram, because each time I run "julia file.jl"
it recompiles the plotting library. If there is a way to keep a Julia runtime
in the background and let it execute iterations of the program, I haven't
found it yet.

~~~
ddragon
The easiest way to keep Julia running in the background is Revise.jl.
Basically, you import your program in the REPL and it will automatically track
any change to the source code, recompiling and updating the environment as
soon as you save (and any function you run in the REPL, including
slow-to-compile ones such as plotting, will be fast from the second call
onward). You can combine it with other useful REPL packages such as
Rebugger.jl, OhMyREPL.jl and Infiltrator.jl, which you can import
automatically in your startup.jl file.
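As a sketch of that setup, the startup file lives at ~/.julia/config/startup.jl, and a common pattern is to load Revise defensively there so a broken environment doesn't block the REPL:

```julia
# ~/.julia/config/startup.jl -- a minimal sketch, adjust to taste.
try
    using Revise   # track source files and re-evaluate edits on save
catch e
    @warn "Revise failed to load" exception = e
end
```

Then in the REPL, Revise's `includet("file.jl")` loads a script in tracked mode and keeps its functions in sync as you edit, so only the changed methods are recompiled rather than the whole plotting stack.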

The compiler team is prioritizing compile-time latency and error messages for
the 1.4 release; if they succeed, it will make the language more approachable
for people who use the same workflow as other dynamic languages (running the
code from the shell).

~~~
alpaca128
Thanks for the suggestion. The initial overhead is even more extreme, but
afterwards it works pretty smoothly.

