Type Stability in Julia: Avoiding Performance Pathologies in JIT Compilation (arxiv.org)
123 points by leephillips 39 days ago | 39 comments



There is an interesting story here that I wish were told in more detail somewhere on the internet (currently I just have some partial guesses from listening to JuliaCon talks). It seems some of these (awesome) properties arose very organically through experimentation, not from a committee of theorists defining the properties the language should have. This has led to a couple of minor inconsistencies that needed ironing out; then computer scientists (like the authors of this paper) decided to study and formalize Julia and uncovered gems like this (gems particularly if you care about formal treatments, I guess).

I am certain I am somewhat wrong in this description, but I would love to see a writeup of the whole story and how both sides (the devs and the scientists) lived through it.


You're quite right about the organic nature. The story of how we got together could be better told by our prof, Jan Vitek, I guess. But in a nutshell: one of his older students (Ben, also an author of the paper) was once on a plane with another student, Jean, from MIT, where Julia was developed; Ben got interested and told Jan… Then there were occasional meetings with the Julia devs (Jeff Bezanson in particular), who turned out to be super nice and inviting people. I should add, we're all from Northeastern University in Boston, which is close to MIT... All in all, we decided PL academia should learn about this "organic gem".


I am not familiar with Julia, but it seems like the common feature of “automatic bignum conversion” would work against optimizations like this, correct? Because the output type then no longer depends only on the input types, but also on their values.


This is one of the reasons that Julia doesn't do automatic big num conversion, instead using native machine two's complement modular arithmetic for integers. It would be a good optimization to use an immediate representation for small bignums, however. See this FAQ entry https://docs.julialang.org/en/v1/manual/faq/#faq-integer-ari... on the subject.


If you're thinking of how in e.g. Python integer addition never overflows and always yields a big integer, then yes. If "small"/machine integers and big integers are separate, each with a distinct type, promotion on overflow would introduce a type instability on every addition (not only the overflowing ones!). This is precisely because the type would depend on the result of the addition, not only on the input types (which may both be machine integer types).

A "solution" to this conundrum is to have big integers by default, which sadly isn't feasible for performance reasons: you can't then take advantage of the hardware, because every operation necessarily has to do more work than would be required for machine integers.

--

To add on to this: when encountering overflow (which doesn't happen often in the first place), it's usually sufficient to go to 128-bit integers first instead of jumping directly to big integers. They're still quite a bit faster and offer a much larger space to work with.
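For instance, `widen` gives exactly that first step (it maps Int64 to Int128):

    x = typemax(Int64)      # 9223372036854775807
    x + x                   # -2: silently wraps around
    widen(x) + widen(x)     # 18446744073709551614, an Int128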


You are correct that “automatic bignum conversion” works strongly against optimization. The Int{8,16,32,64,128} types in Julia overflow silently. If you want different behaviour, you can define your own type and "overload" + and * for it; the SafeIntegers package, for example, throws an exception on overflow. Switching to BigInt on overflow would also be possible to implement, but I am not aware of any package that does that, or whether it would give a performance benefit over just using BigInt to begin with.
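As a sketch of the overloading approach (a toy checked type, made up for illustration):

    struct Checked64
        x::Int64
    end

    # Overload + to trap overflow instead of wrapping silently.
    function Base.:+(a::Checked64, b::Checked64)
        r, ovf = Base.Checked.add_with_overflow(a.x, b.x)
        ovf && throw(OverflowError("$(a.x) + $(b.x) overflowed"))
        return Checked64(r)
    end

    Checked64(typemax(Int64)) + Checked64(1)   # throws OverflowError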


Another alternative is to use the Base.Checked.* family of functions, though they have different names from + and friends (e.g. checked_add) and thus have to be called explicitly.
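E.g., in the REPL (the exact error text may vary by Julia version):

    julia> Base.Checked.checked_add(typemax(Int64), 1)
    ERROR: OverflowError: 9223372036854775807 + 1 overflowed for type Int64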


Which is what SafeIntegers does internally. Base.Checked is kind of obscure, but I added a docstring for it that was merged into the main branch today.


It would only depend on the value if the operation had actually overflowed in the past, in which case performance is already going to be poor.


That means it does depend on the value.


That’s not quite what I mean: the code to handle value-dependent cases only needs to be generated once such values have actually been observed. That’s different from the code always being there and the control flow depending on the value.


That's not how Julia's JIT works. It's "just ahead of time", a term we use to distinguish it from what other modern JITs tend to do, which is to generate code after some interpreted execution based on observed behavior. What Julia does is much more akin to static compilation, but done the first time you need the code, based on strictly ahead-of-time data-flow type analysis. One of the interesting things about the language, which this paper touches on, is that Julia and its libraries are designed so that you can analyze code well enough in advance to perfectly predict types much of the time, even though that is not a static requirement of the language.


When I was working with Julia, I wanted it to just throw an error whenever a top-level program was compiled with unstable types. I didn’t find a way to do it though :(


One problem is that Julia is just-ahead-of-time compiled, so you need something like JET to iteratively run stability analysis via abstract interpretation if you intend to apply it to a whole top-level program: https://aviatesk.github.io/JET.jl/dev/optanalysis/
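For the record, a minimal JET session might look like this (`sum_any` is a made-up example; `@report_opt` is the optimization-analysis entry point documented at the link above):

    using JET  # after `] add JET`

    function sum_any(xs::Vector{Any})
        s = 0
        for x in xs
            s += x   # runtime dispatch: the element type is Any
        end
        return s
    end

    @report_opt sum_any(Any[1, 2.0])  # reports the runtime dispatch on +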

JET recently got VS Code support.

What was available in Julia before (things like @code_typed) is only applicable locally, so the benefit is limited. Because linters have improved dramatically in recent years, it felt like Julia was lagging behind until recently.

Actually, thinking about it, Julia could use something like refinement types à la LiquidHaskell: http://ucsd-progsys.github.io/liquidhaskell-tutorial/Tutoria...


Julia has the `@inferred` macro in `Test` that throws an error if your code isn't type stable.
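A quick sketch of how that looks (`stable` and `unstable` are made-up toy functions):

    using Test

    stable(x) = 2x
    unstable(x) = x > 0 ? 1 : 1.0

    @inferred stable(1)     # passes, returns 2
    @inferred unstable(1)   # throws: return type Int64 does not match
                            # inferred return type Union{Float64, Int64}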


I absolutely agree that providing a tool to _assert_ stability (just like the usual @assert) would be great. We thought about it when writing the paper but never got around to devising anything like that. For smooth integration, it would probably require hacking on Julia itself.


Do you know about the code_warntype macro and function?
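In case anyone unfamiliar is reading along, a minimal REPL session (output abbreviated; the Union in the body is what gets highlighted):

    julia> f(x) = x > 0 ? 1 : 1.0;

    julia> @code_warntype f(1)
    ...
    Body::Union{Float64, Int64}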


The code_warntype macro is helpful, but to guard against a type instability being inadvertently introduced in the future, a failing test would be much better.

Test.@inferred is not quite what is needed:

    function foo(x)
      s = 0
      # better: use a zero of the same type as x
      # s = zero(x)
      for y in x
        s += y
      end
      return s
    end

    Test.@inferred Float64 foo(12.1)

This test succeeds even if the code is type-unstable.
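A stricter check is possible by inspecting inference directly (a sketch; Base.return_types is an internal API, so this is illustrative rather than recommended practice):

    rt = Base.return_types(foo, Tuple{Vector{Float64}})[1]
    Test.@test isconcretetype(rt)  # fails: inferred as Union{Float64, Int64}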


this has always been my top complaint about Julia -- programming without type stability is madness!


In the paper, we offer several thoughts, including one example, on when type _in_stability actually makes sense. In particular, the example concerns a typical `parse` function that converts a string into an AST node. Assuming you have one struct per kind of AST node, the parse function is unstable.

Generally, several well-known OOP design patterns naturally lead to instability. In turn, those patterns can be handy for solving certain typical problems. If you have such a problem at hand, you have a choice: either do the well-known thing and embrace instability (which is actually not always all that harmful for performance), or go out of your way and torture your code to make it stable…
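For a flavor of the `parse` example, a minimal sketch (the node types and `parsenode` are made up for illustration):

    abstract type Node end
    struct IntLit <: Node; value::Int; end
    struct Ident  <: Node; name::Symbol; end

    # Inherently unstable: the concrete node type depends on the
    # *contents* of the string, not on its type.
    parsenode(tok::String) =
        all(isdigit, tok) ? IntLit(parse(Int, tok)) : Ident(Symbol(tok))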


I think you can still give the return a type by using a Union. This might carry the performance penalty of instability, but I care more about the soundness issue of potentially returning garbage types than about e.g. vtables.

btw, rust-miniscript is a nice example of a type-stable parser :)


You can use Unions, and in fact that's what Julia does. The issue is performance; that's what the paper advocates: stable (grounded, more precisely) functions are a clear win for the optimizer, while unstable ones are a lottery (sometimes Julia does a decent job of optimizing them, sometimes not).

Rust is an interesting example; maybe I should look into that library. But its compilation model is very different from probably any JIT, including Julia's, and so are the optimizations. For one, unions (enums) in Rust have a much less profound impact on possible optimizations and therefore on performance.


I think the issue is that Julia lacks a native tagged enum, which is what Rust has. Unions are more akin to a local trait T that you box and implement for all the return types, which is strictly worse. If explicit unions were instead optimized into tagged enums, you could get the big performance gains of Rust.

Calling code would, of course, have to destructure the return value. But destructuring a return value can be very efficient using match (jump tables internally, where possible).


There isn't a massive difference between type instability and programming against interfaces in OOP languages or using a Box<dyn T> in Rust: both add an extra indirection, which has a performance impact, though depending on the situation that impact could be positive or negative.


It has a huge impact on static analysis to know that the type is a Box<dyn T> in Rust, and you can't (without something else) downcast a Box<dyn T> to a specific U.

I don't care about the performance as much as about the correctness :)


Author here. AMA!


I'm still working through the paper (reading your group's Julia papers is always a great treat; thank you for this one and all of them). Looking at the definition of "type groundedness", it seemed to me that perhaps the definition should require that "every expression's type depends only on the argument types" rather than only every variable's type. I haven't gotten to the proof parts yet, but this struck me early on.


Your intuition is right, but _every_ expression is bound to a variable in this representation.


Ah, ok. I hadn't gotten to that part yet. Would it make sense to state the general definition in terms of "expressions", since that would be the correct definition for Julia code as well as for the Jules IR format?


This may require more thought on our end, but Jules is designed in the usual SSA form, the idea being that it's easier to reason about (and to prove things about) when every expression has a name. It's a bit of a cheat, of course, but it makes our lives easier. We don't discuss how surface Julia (or a subset of it), which is more expression-oriented, can be translated to Jules: we have only so many pages, and it's also not all that interesting, I think.


(disclosure: not a PL person, so my question could be really dumb.)

> Informally, a function is type stable if the type of the output depends only on the types of the inputs, not their values.

This line reminds me of (the opposite of) dependent types. But I think that intuition may not be useful or relevant for understanding here?


It does have a similar formulation, but the setting is completely different. Types in Julia are not the same thing as types in statically typed languages with dependent types. In those languages you can elegantly declare the connection between inputs and outputs, and moreover the connection will be checked. In Julia, type stability is a property of code that the compiler discovers on its own at run time; the programmer can neither declare it nor have it checked.


Hi, thank you! The paper mentions that the dynamic analysis code is written in Julia and could be used by package developers to study instabilities in their own code, but I couldn't find a link to the code in the references. Did I just miss it, or is it contained in the zip with the visualization data?

--

EDIT: Nevermind, found it in the zip!


I think the link was supposed to be in the text, but the repo is here: https://github.com/prl-julia/julia-type-stability There's a PDF there explaining how to get started with it and reproduce the graphs; it shouldn't be too hard to use it for the purpose you mention. Unfortunately, we never got around to properly writing that kind of documentation (for package developers). Feel free to open issues against the repo if you have trouble using it!


Ah, that link doesn't seem to be in the paper (at least searching for `github` didn't turn up anything), thank you! I did, however, find the same code as part of the visualization data, so I got what I was looking for.


That's my bad then. I think I assumed the Zenodo link would be enough, but now I think that was a bad idea. I'll probably add the GitHub link in an update.


Hi, cool paper! Semi-related questions:

What do you think of JET.jl?

The tension between method ambiguities and the desire for traits or multiple inheritance is something discussed in the community. Has your group given any thought to this?

Thanks!


Hey! We're tracking JET.jl and think it's great already and may become even better; we hope to base some of our future stuff on it. Some of us (not me) are actually working on a trait design. Multiple inheritance, not so much: we think it would preclude some important optimizations and thereby defeat the whole purpose of Julia (performance + dynamism).


Cool! Looking forward to seeing some of the trait work



