Julia's multiple dispatch explained with Pokemons (moll.dev)
161 points by xiaodai 6 months ago | 108 comments

This post shows MD but it misses the forest for the trees. To riff on the blog post linked below, “what if you have an Onyx holding a hard stone executing Earthquake in a Sandstorm against a Flygon with the ability Hover?”

The answer is that rules should be encoded in data structures and not in the type system. The type system is for structuring how your program runs, not its logical correctness. If you have a coworker who, upon hearing that you have a new business requirement, says “oh goodie, I’ll reconfigure our whole type system!” please report them to the relevant local authorities.

If you aren’t writing code that handles types as first-class objects, like a parser, please reconsider using this pattern. If you are writing a parser, this isn’t even the fancy way to do it anymore, but at least you’re holding the book right side up.

The funny thing is that the Pokémon valences are already in a table at the start of the blog post; an ideal location to store such data!


EDIT: this is not shade on the OP, I just don’t want junior (or senior!) engineers reading this and implementing their business logic in MD because I don’t want to debug it or extend it!

>"The type system is for structuring how your program runs not its logical correctness."

Type-level programming can be a very powerful and useful way to coerce correctness in some scenarios, so I'm really confused by this statement.

I think the arguments on both sides are valid.

In dynamic programming languages, I generally prefer factoring business logic outside of procedures and into lookup tables. This is sometimes called "data-driven" or "table-oriented" programming. It eliminates control flow, which makes code more maintainable and less buggy.

Using Julia's dispatch mechanism accomplishes a similar goal. Here, multiple dispatch is akin to a table lookup. It may even be faster than the explicitly data-driven approach, since the lookup can be done at compile time.
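As a sketch of what "table-oriented" means here (all names and the tiny rule set are hypothetical, not the full Pokémon chart), the same rules written once as control flow and once as a lookup table:

```python
# Hypothetical sketch: the same effectiveness rules as control flow vs. a table.

def eff_branchy(atk, dfn):
    # control-flow version: every new rule is another branch to maintain
    if atk == "fire" and dfn == "grass":
        return 2.0
    if atk == "fire" and dfn == "water":
        return 0.5
    return 1.0

# table-driven version: the rules are plain data; the code never changes
EFF = {("fire", "grass"): 2.0, ("fire", "water"): 0.5}

def eff_table(atk, dfn):
    return EFF.get((atk, dfn), 1.0)
```

The second version keeps the business rules in one inspectable data structure, which is the maintainability win being described.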

I like data driven programming too. The code can become more declarative this way and the logic more malleable.

Part of his point seems to be that you can manipulate data with code, but you can't manipulate your app's typings with code.

Expressing your business domain in types makes it more likely that a given change of requirements demands a lot of by-hand, non-abstractable changes to the code.

I'd argue on the contrary, having the business domain in the type system allows the compiler to help you ensure the changes are made correctly instead of relying purely on tests (which are still valuable, but I'll take guarantees where I can get them).

The type system also makes it easy to reduce the state space of the program. For example, if a method takes an int as a parameter you have a pretty large state space. But if it takes a strongly typed enum which can only be of 3 values, now you have restricted the surface area for which you have to test. If you exploit the type system to this effect it is very powerful.
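A minimal sketch of that state-space restriction (enum name and values are hypothetical): with a three-value enum instead of an int, exhaustive handling is checkable at a glance.

```python
# Hypothetical sketch: a three-value enum shrinks the input space
# from "any int" down to exactly three cases.
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def escalate(s: Severity) -> Severity:
    # only three inputs are possible, so this mapping is trivially exhaustive
    return {Severity.LOW: Severity.MEDIUM,
            Severity.MEDIUM: Severity.HIGH,
            Severity.HIGH: Severity.HIGH}[s]
```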

I agree with both your points! I didn't make a value judgment, and in fact I think our comments aren't mutually exclusive, we're just describing the pros for each side of what ends up being a trade-off.

The problem I call out is less of an issue for the parts of the domain that are unlikely to change often, or that touch fewer parts of the system.

"Part of his point seems to be that you can manipulate data with code, but you can't manipulate your app's typings with code."

You can do both, you know.

Yeah if you’re writing a standard library or a parser or maybe a database query optimizer, but if you aren’t writing code about code, then it’s probably not the way to go.

I don't know. I'm of course biased, though that bias is built on several years of dealing with increasingly critical systems and the entirely avoidable issues those systems faced due to a preponderance of foot-guns that could have been otherwise mitigated with clever use of "ahead of time" constraint checking and enforcement. Not to mention the runtime quality and localizability of your diagnostics goes up a ton too when you know ahead of time that when you see certain kinds of errors/exceptions/failures that they must have happened in certain places and/or under very narrow conditions.

It also has a very nice quality of not mucking up your runtime performance with doing all the constraint checking on-demand and in very branchy ways.

As a simple example of how useful the approach can be, here's a thing we did awhile ago for making safe abstractions to low-level hardware resources: https://blog.auxon.io/2019/10/25/type-level-registers/ .

With that approach we could build a pipeline to consume hundreds of pages of SoC datasheet, generate the appropriate type specs from the register definitions, and end up with an API to the hardware that would fail to compile if a program was written to consume the hardware interface in a way that violated the spec in the datasheet.

take a look at https://blog.moertel.com/posts/2006-10-18-a-type-based-solut... for a good counterexample

Is it a good counterexample? It seems to have approximately the same misuse resistance as parameter binding or equivalent apis. (For example, the dom api.)

In particular, this property of the described system makes it trivial for the unknowing or uncaring to subvert it:

> It provides a string-management kernel that lets you create “safe strings” by certifying a regular string as representing either text or a fragment of a known language.

From what I understood, the certification is an unchecked assertion made by the program. One of the comments gives a great example of how this will go wrong in practice. The other aspect of this that is glossed over is that many data schemas represent all these things as "text". It is again an exercise for the program to get this right at the io boundaries.

i am perhaps biased, since my day job is working on static type inference for python[0], but i genuinely do believe that encoding properties like this into the type system gives you not just an extra level of safety, but an extra level of expressiveness when modelling your data in code. it's the equivalent of having units in physics.

i agree that it's not foolproof, but it's better than treating everything as an undifferentiated string.

[0] https://github.com/google/pytype

The idea is to make security reviews easier. You can easily find all the places "safe strings" are created to see if they are doing the right parsing or sanity checks. The code that only uses the safe api needs less scrutiny, at least for the class of bugs this design is supposed to help with.

Off topic: the link in your profile seems to no longer have its intended contents

thanks, removed it!

Hey, author here.

1) Including the "Onyx holding a hard stone executing Earthquake in a Sandstorm against a Flygon with the ability Hover?" state is just doing an N-constraint solver. Since multiple dispatch is a generalized system, we can dispatch on N different types.

Take the "Dual Type" in Pokémon for example: https://pokemondb.net/type/dual. You'll notice that instead of just an NxN grid, we're dealing with an NxNxN. Where a single attack needs to be related against 2 different defenses.

Simple enough `eff(atk::T1, def1::T2, def2::T3) = ...`; then we can just encapsulate this second type within a `Pokémon` structure and route to the correct function dynamically.

2) The "Super Effectiveness" of a MD system is that you don't _need_ to put everything into a singular table, something that's functionally impossible to extend. The idea is that we can build up the correct relationships between types completely independent of one another. The issue is, who owns that table? How do you merge more than one new type in? (see my section about Composition in the post)

If someone else wants to make a new `Foo` type Pokémon, and another person is doing `Baz`, they can work completely separately, only defining the `eff` functions _only_ concerning their type. And there's _zero_ integration work to use both, just import the new types and their functions. This is incredibly extensible!
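Julia's dispatch does much more than this (subtyping, ambiguity detection, compile-time specialization), but the extensibility property can be sketched with a bare type-keyed registry in Python (all names hypothetical): each "package" registers only the `eff` entries concerning its own types, and nothing central has to change.

```python
# Minimal multiple-dispatch registry (illustrative sketch only; real Julia
# dispatch also handles subtyping, specificity, and ambiguities).
_REGISTRY = {}

def defeff(atk_t, def_t, result):
    # register an effectiveness rule for a pair of types
    _REGISTRY[(atk_t, def_t)] = result

def eff(atk, dfn):
    # dispatch on the runtime types of both arguments
    return _REGISTRY.get((type(atk), type(dfn)), "normal")

# "package one" defines its own types and only the rules involving them
class Fire: pass
class Grass: pass
defeff(Fire, Grass, "super effective")

# "package two", written independently, composes with zero integration work
class Foo: pass
defeff(Fire, Foo, "not very effective")
```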

> Take the "Dual Type" in Pokémon for example: https://pokemondb.net/type/dual. You'll notice that instead of just an NxN grid, we're dealing with an NxNxN. Where a single attack needs to be related against 2 different defenses.

> Simple enough `eff(atk::T1, def1::T2, def2::T3) = ...`, the we can just encapsulate this second type within a `Pokémon` structure and route to the correct function dynamically.

This doesn't look like a good approach to me. One thing that bothers me is that it draws a distinction between def1 and def2 that doesn't, in reality, exist. You should not be handling the cases of "fire attack deals damage to grass/ice" and "fire attack deals damage to ice/grass" separately, because those are not separate cases. No type has a different effect when listed first than it does when listed second. No pair of types has any effect other than the independent effects of each type considered individually.

The same issue reoccurs at a higher level: fundamentally, you aren't dealing with an NxNxN grid. You're free to represent the data that way, but it's redundant -- the NxNxN grid contains no information that isn't already present in the NxN grid. You could reapply the same logic and produce an NxNxNxN grid detailing what would happen if a single-typed attack hit a triple-typed defender, or if a dual-typed attack hit a dual-typed defender, but... why would you do that?
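The redundancy argument can be sketched directly (tiny illustrative chart, not the full game data): dual-type effectiveness is just the product of two NxN lookups, so it is symmetric in the defender's types by construction and no NxNxN table is needed.

```python
# Sketch: dual-type effectiveness derived from the single-type chart.
CHART = {("fire", "grass"): 2.0, ("fire", "ice"): 2.0, ("fire", "water"): 0.5}

def eff(atk, def_type):
    return CHART.get((atk, def_type), 1.0)

def eff_dual(atk, d1, d2):
    # the two defensive types contribute independently, so this is
    # symmetric in (d1, d2) by construction -- no NxNxN table needed
    return eff(atk, d1) * eff(atk, d2)
```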

So, it's an Nx(NxN/2) half grid. This is easily solved on the implementation side by making sure, for example, that the enum values for the second 2 arguments are always in ascending order.

> So, it's an Nx(NxN/2) half grid.

No, it's an NxN grid. Look at the second half of my comment.

> This is easily solved on the implementation side by making sure, for example, that the enum values for the second 2 arguments are always in ascending order.

So that when somebody invokes your function and passes the defender's types in the order listed for the Pokemon rather than sorting them beforehand, you crash?

Well the real issue is that we're using N instead of A and D. It is A X (D X D / 2).

And why the sudden helplessness? Just sort the 2 arguments before passing them to the internal Impl.

> Well the real issue is that we're using N instead of A and D. It is A X (D X D / 2).

No, it isn't. It's AxD, where A and D are always equal. There is no reason to add another dimension to the result table when the defense or offense might pick up another type. The expanded table will never contain any more information than the two-dimensional table already does.

(Dividing by 2 isn't correct either, even from your perspective; you're forgetting about the table's diagonal. In the "space is no object" approach you're advocating, the diagonals need to be filled by special-casing, since they represent a phenomenon that doesn't exist (a Pokemon which bears multiple instances of the same type) and obscure a phenomenon that does exist (a Pokemon which bears fewer types than the maximum possible number).)

You can just define a generic fallback method like `eff(p, d1, d2) = eff(p, d2, d1)`

I disagree - the question is how much fit is there between the type system and the logic that is being implemented using it.

Reusing a solver rather than writing a solver is a powerful approach, because the type system as a solver is common across all Julia projects - there is one Julia type system.

If you or I write a solver and use it in our project then everyone who comes to that project has to learn the solver.

However, this logic breaks if the use of the type system is so stretched and arcane that almost no engineer has seen it before.

Rules can and absolutely should be encoded in the type system if your language is powerful enough to allow it. Look at Type Driven Development in say Idris or F#.

> “what if you have an Onyx holding a hard stone executing Earthquake in a Sandstorm against a Flygon with the ability Hover?”

This is kind of a strange example -- the only two parts of it that interact are the Onyx, which is ground type, and the Earthquake, which is also ground type and will deal increased damage because the Onyx shares its type. So we could replace the question with "what if you have an Onyx using Earthquake?"

There is no ability Hover, but if the Flygon had Levitate (as all of them do), that would interact too, causing the Earthquake to have no effect.

This specific example hurts its own cause because it is a huge amount of code and explanation for what would be a screenful in Python.

It reminds me of the folks who have written their fifth rambling blog post about ‘Now I finally understand Monads in Haskell’ and don’t get that that will scare sensible people away and leave just the Don Quixote types.

Thanks for linking to that post, I always enjoy his articles and hadn't read that one for a long time :)

Although the last paragraph (of episode 5) claims not to give up on oop, the whole article says otherwise.

Episode 5 of what?

> I’ll admit, multiple dispatch is a super abstract concept, one that you really only appreciate when you need it.

After reading this post and this stackexchange link[0], I'm not sure it is that complicated. The behaviour in the post can be explained as Julia picking the most specific function that fits based on the runtime type (i.e. uncovering the underlying concrete type at runtime and checking for an implementation). In fact the stackexchange answer mentions that dynamic languages don't differentiate between overloading and multiple dispatch at all, since everything is resolved at runtime. So overloading and multiple dispatch seem like the same concept, except that one is eagerly resolved while the other is lazily resolved.

[0] https://softwareengineering.stackexchange.com/a/125088

Overloading tends to be different because the first argument gets massively higher priority in determining the effective method.
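Python's stdlib makes that first-argument bias explicit: `functools.singledispatch` picks the implementation from the runtime type of the first argument alone (a real API, shown here as a minimal sketch with hypothetical function names).

```python
# functools.singledispatch: dispatch on the runtime type of the
# FIRST argument only -- the degenerate case of multiple dispatch.
from functools import singledispatch

@singledispatch
def describe(x):
    return "something"        # generic fallback

@describe.register
def _(x: int):
    return "an int"

@describe.register
def _(x: list):
    return "a list"
```

Resolution uses the value's runtime type, not any declared type, which is the "dispatch" half of the overloading/dispatch distinction.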

The main promise of Julia’s multiple dispatch is that it allows the obvious code to do the right thing and dispatch to fast specialised methods for a large variety of types, which tends to make programs more composable.

Another way to look at it is that the language is designed to be able to express a full numeric tower, i.e. different int and float types, complex and arbitrary precision numbers, suitable upcasting when types don’t match. One desirable property is that a + b should generally end up calling the same function as b + a. Try to think about how you would do this with Java-style overloading, even if the overloading was more dynamic and looked at the runtime type and all methods up the class’ ancestors. And how would you extend this to add a new numeric type (say dual numbers.) IIRC, the Julia strategy mostly uses some type-level programming to figure out a suitable shared type for a and b (which basically only runs at JIT-compilation-time), then coerces them both to that type, then adds them.
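The promote-then-operate strategy described above can be sketched in Python (names and the toy tower are hypothetical; Julia's actual promotion machinery is richer and resolved during JIT compilation): find a common type for the pair, coerce both, then add, which makes `a + b` and `b + a` land in the same concrete method.

```python
# Hedged sketch of a promote-then-add numeric tower.
from fractions import Fraction

# a tiny symmetric promotion lattice: int < Fraction < float
TOWER = [int, Fraction, float]

def promote_type(a, b):
    # the "join" of the two types in the tower
    return TOWER[max(TOWER.index(type(a)), TOWER.index(type(b)))]

def add(a, b):
    t = promote_type(a, b)
    return t(a) + t(b)   # coerce both, then use one concrete addition
```

Because the promotion rule is symmetric, `add(a, b)` and `add(b, a)` always coerce to the same type, which is the commutativity property described above.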

Scheme and Common Lisp manage to provide quite big, complicated numeric towers, but they are basically entirely impossible to extend with custom types.

The promise of multiple dispatch is that it allows every library to be generic. One author writes a library that calculates $COMPLICATEDTHING; another author writes a library that implements a new type of number, such as ranges; provided the second library defines all the methods that the first uses, then, magically, they work together and you can calculate $COMPLICATEDTHING on ranges! That is the thing that Julia does that nothing else does.

"Entirely" is hyperbolic here. Building new numeric types with CL is certainly possible if you write your own versions of numeric functions like `plus` in lieu of the builtin `+`, and it's easy to make your new functions backward-compatible so `plus` will work on e.g. integers. This is not as easy as if the numeric functions were generic but it's still doable. By not being generic the builtin CL numeric functions run very fast on the most common number types; that was the tradeoff benefit of not designing them be generic.

"By not being generic the builtin CL numeric functions run very fast on the most common number types; that was the tradeoff benefit of not designing them be generic."

Julia shows this tradeoff isn't necessary.

Depending on the language target and method of execution dynamic resolution of runtime types can be quite hard. IIRC Java's JVM is super optimized for static codepaths, whereas Julia uses a JIT system that is "slow" on first run but speeds up considerably after all methods in a hot code path are resolved.

Yep, it's just late binding.

If you want that in the JVM, you can use Groovy, which does just that.


    class Fire{}
    class Water {}
    class Grass {}
    class Ice {}
    def eff(Fire f, Grass g) { 'burn' }
    def eff(Fire f, Ice ice) { 'melt' }
    def eff(Water w, Grass g) { 'grow' }
    def eff(Water w, Ice ice) { 'freeze' }
    def eff(a, b) { 'undefined' }
    println eff(new Fire(), new Grass())
    println eff(new Fire(), new Ice())
    println eff(new Water(), new Grass())
    println eff(new Water(), new Ice())
    println eff(new Ice(), new Water())

    burn
    melt
    grow
    freeze
    undefined

That is method overloading, which isn't the same as multi-dispatch.

Groovy supports multi-dispatch; if you needed to encapsulate a generic type, the runtime would dynamically resolve that type correctly. See my example in Java (which has method overloading, but not MD)

What would multi-dispatch do differently???

The difference is that overloading works on compile-time types but dispatch works on runtime types. This matters because when you write a generic algorithm, you don't know the type, so you don't get the right behavior with overloading (you get the generic fallback).

In Groovy, dynamic dispatch is used. It's the runtime types that matter, as my example makes patently clear.

EDIT: to make it even more obvious there's no difference:

    class Fire{}
    class Water {}
    class Grass {}
    class Ice {}
    def eff(Fire f, Grass g) { 'burn' }
    def eff(Fire f, Ice ice) { 'melt' }
    def eff(Water w, Grass g) { 'grow' }
    def eff(Water w, Ice ice) { 'freeze' }
    def eff(a, b) { 'undefined' }
    // hide compile-time types
    Object fire = new Fire()
    Object grass = new Grass()
    Object ice = new Ice()
    Object water = new Water()
    println eff(fire, grass)
    println eff(fire, ice)
    println eff(water, grass)
    println eff(water, ice)
    println eff(ice, water)


I'll admit, I'm a bit disappointed that my original title "Julia used Multiple Dispatch! It's Super Effective!" didn't make the HN frontpage.

Super nitpicky, but @dang can we please update the title? "Pokemon" is both singular and plural, nobody ever says "Pokemons". It's just the Pokemon nerd in me... the title feels so wrong.

I guess I am old, but I feel I understand Julia better than I understand Pokemon...

I mean, the only part you need to understand for this blog post is "rock-paper-scissors, but extended to N types and with more than just win/lose as the outcome from pairing them up"

Yes, it's not a bad post. Nor are Pokemons, I'm sure. Just an observation: I felt I learned more about Pokemon than Julia reading the post, thus concluding probably I was not the target audience.

I would try to implement PokeTypes by using an enum to define each PokeType. I'd have a dictionary that stored attacker, defender pairs as a key and the value would be the effectiveness of the attack. The "eff" function (which I'd call something like attack_effectiveness) would just be a get from the dictionary that defaulted to NORMAL_DAMAGE if the key wasn't contained in the dictionary. Later, I'd probably build up the dictionary from something that was easier to read and update to make maintaining it easier.

My point here is that I don't really see the point of using types to implement this functionality. Is my approach weak in some way that is solved by this?
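For concreteness, here is a direct sketch of the approach described above (enum members and the multipliers are illustrative, not the full chart):

```python
# Dictionary-based effectiveness lookup with a default, as described above.
from enum import Enum, auto

class PokeType(Enum):
    FIRE = auto()
    WATER = auto()
    GRASS = auto()

NORMAL_DAMAGE = 1.0

# (attacker, defender) -> effectiveness multiplier
EFFECTIVENESS = {
    (PokeType.FIRE, PokeType.GRASS): 2.0,
    (PokeType.WATER, PokeType.FIRE): 2.0,
    (PokeType.FIRE, PokeType.WATER): 0.5,
}

def attack_effectiveness(attacker: PokeType, defender: PokeType) -> float:
    # default to normal damage when the pair isn't listed
    return EFFECTIVENESS.get((attacker, defender), NORMAL_DAMAGE)
```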

I think the advantage here is that instead of centralizing the relationships between all types in one place, these relationships can be defined anywhere, including external packages. This makes composition of additional types extremely easy, even when multiple people are working on the project.

This solution works until you have 1 type that has a different fallback than everything else.

I think it would still work. You would just need to define every attacker-defender pair for that type. If types with different fallbacks were common then you'd just define a dictionary of attack effectiveness fallbacks and the attack_effectiveness function would get from the attacker-defender dictionary and default to fallback for the attacker type.


This used to make me very frustrated as a child.

> "Pokémon" is identical in both the singular and plural, as is each individual species name; it is grammatically correct to say "one Pokémon" and "many Pokémon", as well as "one Pikachu" and "many Pikachu".

Interesting, did not know this

It is a general phenomenon of Japanese, not something specific to Pokemon.

Pokémon, you left out the accent aigu.

Yes, it's Pokemi


the correct term is "pokemans"


(post title has been edited since I commented, nothing to see here)

Thank you, I thought I was going crazy over here! What the fuck does it mean!?

If I understand this correctly, there is a known pattern for resolving this in languages like Java. It's called double dispatch; it only takes a bit more boilerplate than this Haskell-like pattern matching: https://en.m.wikipedia.org/wiki/Double_dispatch
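The double-dispatch shape from the linked article, sketched in Python (class and method names are hypothetical, in the spirit of the article's collision example): the first dynamic dispatch picks `collide_with`, and a call back into the argument performs the second dispatch.

```python
# Classic double-dispatch sketch: two chained single dispatches stand in
# for dispatching on both runtime types at once.
class Asteroid:
    def collide_with(self, other):
        # first dispatch resolved on self; second on other
        return other.collide_with_asteroid(self)
    def collide_with_asteroid(self, other):
        return "asteroid hits asteroid"
    def collide_with_ship(self, other):
        return "ship hits asteroid"

class Ship:
    def collide_with(self, other):
        return other.collide_with_ship(self)
    def collide_with_asteroid(self, other):
        return "asteroid hits ship"
    def collide_with_ship(self, other):
        return "ship hits ship"
```

Note the boilerplate cost: every class needs one callback method per peer class, which is exactly why this doesn't scale to triple or quadruple dispatch.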

It doesn't scale well to triple dispatch, quadruple dispatch...

Death by swipe... *If I understand this ...

One point I have read repeatedly is that Julia is aimed at High Performance Computing (HPC). Another argument I have read repeatedly is how object oriented programming (with its single dispatch) is bad for HPC because of pointer indirections leading to cache misses. This leads me to conclude Julia will be much worse (than well written C/C++) due to its multiple dispatch.

In practice, Julia's multiple dispatch is almost always devirtualized. That is, dispatch is resolved at compile time.

Generic code relies on specialization instead of dynamic dispatches to be generic with respect to input types. That is, for each new input type, a new method gets compiled (allowing the dispatch to be resolved statically).

Do you have a citation for this? My google searches have led me to believe this to not be true; see [1]

Here is the relevant quote from [1]: "In the context of julia though, compile time type simply do not exist ...."

[1]: https://discourse.julialang.org/t/claim-false-julia-isnt-mul...

This is one of the cases where there's sometimes confusion between the semantics of Julia as a language and the practical implementation of Julia -- just like how Julia is dynamically typed as a matter of language semantics, but compiles to a statically-typed intermediate representation as an implementation detail [1].

While it is not part of the language semantics, there certainly is, in the current implementation, a time at which any given method in Julia is (JAOT) compiled (via SSA-form IR, LLVM IR, and finally to native machine code) -- and whether or not types are able to be inferred at this time is sufficiently important that it has its own name: type stability [e.g., 2], with type-stable code being generally a couple orders of magnitude faster than equivalent type-unstable code.

[1] https://stackoverflow.com/questions/28078089/is-julia-dynami...

[2] https://www.juliabloggers.com/writing-type-stable-julia-code...

For those who don't know, JAOT stands for Just-Ahead-Of-Time and refers to the fact that Julia's current implementation doesn't do many things associated with JITs today but simply calls an AOT compiler, under the assumption that any function gets called often enough in this session for compilation to be amortized.

There are some weak mechanisms to prevent useless overspecialization such as @nospecialize and there are attempts to add smarter recompilation strategies by some packages.

Julia semantically doesn't have compile time type, but is free to (and almost always does) figure out the compile time type and use that information aggressively as long as doing so doesn't change behavior.

By "compile time", I meant the first time you call the function with a given type signature.

Also, that comment is saying "compile time type" does not exist. I don't know C++, so I cannot comment on it, but from the sound of Yu Yichao's comment, C++ has separate concepts for runtime vs compile time types. Julia does not (as already said by adgjlsfhk1).

Is it resolved at compile time? it was my understanding that Julia only attempts it at runtime, specialization being the job of the JIT

Depends what you mean by compile time.

If function f(...) calls g(...) then the first time you call f(...), g(...) would normally also get compiled. The compilation is specialized on the type of input arguments (the method is selected by multiple dispatch and code is specialized further by concrete type information).

If the concrete types of ... in the g(...) call are inferrable, then g(...) is specialized and compiled to native code immediately, and possibly even inlined. If those types can't be inferred, it will repeat this process when it is called (there is also a cache of specialized code so it only compiles once, but you need to "look up" the function in cache dynamically in that case).

In many situations the types of an entire program can be inferred and be "compiled" ahead of time, but the semantics are always that of an interpreter.

TLDR here is that Julia is better described as Just Ahead of Time (JAOT) instead of JIT. Julia isn't using a normal tracing JIT where things start running interpreted and then get replaced. When Julia first runs into a new function (or function being called with new argument types), it will statically compile (at runtime) code for that function being called with those argument types. When it does this, it will use the type information of the types the function has been called with, to do all sorts of optimization (de-virtualization, inlining, etc).

Once this method is compiled, it will be used whenever the same function is called with the same argument types.

Julia compiles at run time.

Julia is fairly fast; since its type system _only_ does dynamic/runtime typing, the JIT is optimized towards that. You'll experience some minor startup lag, typically due to initial JIT'ing of any new used functions. However, this has largely been remedied with a compiler backend that completely precomputes this behavior. https://julialang.github.io/PackageCompiler.jl/dev/

The key thing that resolves this is that for type stable code, Julia doesn't do dispatch at runtime. For most real world problems, Julia will know all the methods at compile-time, and therefore doesn't chase pointers at all.

This is an excellent intro to Pokemon, explained using Julia's multiple dispatch mechanics.

OP said that we would have to use reflection to make this work in Java, but I think we can also make use of Generic classes:

  class Pokemon<T extends PokeType> {
    public T type;

    public Pokemon(T givenType) {
      type = givenType;
    }
  }
and in main():

  Normal n1 = new Normal();
  Pokemon<Normal> pokemon1 = new Pokemon<>(n1);

Bug: eff(atk::Electric, def::Grass) = NO_EFFECT

Grass should be Ground

Always nice to see people explaining/exploring technical concepts - but in this case I think the official documentation is both clear and useful:


Perhaps a dumb question, but why couldn't I just use pattern matching for this? You're essentially writing the pattern matching rules out anyway.

    isEffective :: Type -> Type -> Effectiveness
    isEffective Water Fire = SuperEffective
    isEffective Fire Water = NotEffective


In Common Lisp, multiple dispatch lets you provide a new specialization of a function at a different time/place. I imagine it's similar in Julia.

Is this like Common Lisp's dispatch as defined by the default CLOS implementation?

The main difference is that Julia uses more complicated subtyping and specificity to let you express some useful more complicated relationships (e.g. diagonal dispatch).

Also, there is a major difference that in Julia all methods use multiple dispatch (and there isn't a performance cost to doing so). Multiple dispatch in Lisp was severely limited by people not using it for performance reasons.

> Multiple dispatch in Lisp was severely limited by people not using it for performance reasons.

What do you mean by this? In what ways do you find MD in Lisp "limited"?

I haven't used lisp a ton, so I might be missing something here, but as I understand it, multiple dispatch in lisp is opt in. The downside of this is that since it has a performance penalty, people didn't use multimethods for core functions like +, so you lose the benefits multiple dispatch offers.

Still, a Lisp system may have thousands of Generic Functions with tens of thousands of methods. One Lisp implementation I use has that many out of the box, incl. various I/O related code: streams, networking, graphics, user interface, development environment, ...

Even though low-level numeric code might not be written with them, large parts of applications often are written using them.

Yes! In fact I think Julia borrows a lot of concepts from CL and the like for their type system implementation.

Julia is basically a trojan horse for Lisp. Syntactically, it looks kind-of like Matlab. But semantically it is very much in the Lisp family. Since 1959, Lisp remains the best idea in computer programming. And Julia is bringing it to the masses.

Yeah, although Java did it first.

"We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp." - Guy Steele

I assume Guy Steele knows his stuff when talking about Lisp like languages.

Java mostly only got typical runtime ideas (managed memory, virtual machine, code loading, calling conventions, runtime safety) of the JVM from Lisp (often via Smalltalk, etc.), but not ideas like executable memory heaps, code as data, etc.

Higher-level language features from Lisp (CLOS, macros, conditions, closures, interactive development, ...) were not brought to mainstream Java. Closures, some interactivity, ... eventually were added many years later.

From a Lisp user perspective Java was more than 'halfway' away, and probably still is.

Guy Steele did not say 100%, after all.

So we could argue bullet points about how someone highly relevant in the Lisp and Scheme communities was wrong in his assertion, if you like.

There are lots of highly relevant people in the Lisp and Scheme communities.


Sure there are, yet none of them were responsible for designing parts of Java architecture, and made the statement we are discussing about.

Had not been for Guy Steele's background, and the context of the talk where he made that statement, I would agree with you.

There were many (ex-)Lispers working on comparable languages: incl. Java, C#, Dylan, ... The discussion took place on the Little-languages list, where also a bunch of people with actual Lisp experience and general language design & implementation experience were participating. I'm pretty sure many of them had a good idea where Java was technically positioned in the language landscape between C, C++, Ada, ... ML, Scheme, Lisp, Prolog, Smalltalk, Self, Perl, TCL, ...

Also keep in mind that SUN at that time was aggressively marketing Java as THE new language for system and application development, especially for the enterprise (a main target market for SUN), though Java originated as a programming language for set-top boxes, when it was still called Oak.

The quote from Guy was kind of an excuse there, for the modest design goals: at least we (-> SUN) dragged C++ developers towards Lisp, even though Smalltalk, Lisp, etc. people themselves were not a target and were not that impressed. Things like garbage collection in a language designed to replace C++ in many scenarios were still revolutionary.

> Pokémon are a great way to explain Julia's Multiple Dispatch

Is Julia faster than Python because it is a Lisp and thus easy to write an optimizing compiler for? Also, is multiple dispatch something that is intertwined with SSA?

Julia has a bunch of semantic differences that are also responsible. For a quick list:

1. You can't add fields to structs (i.e. all types have `__slots__`). This is really important since it means you can store structs inline, which avoids a ton of pointer chasing.

2. `eval` always happens in the global scope (and world age means that `eval` doesn't take effect until you return to the global scope). This means that you can inline code, which is one of the most important optimizations.
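A small sketch of point 1 (the `Point` type here is illustrative, not from the post): because a struct's fields are fixed at definition time, the compiler knows its exact layout, and an array of such structs is one flat block of memory.

```julia
# Fields are declared once and can never be added or removed later.
struct Point
    x::Float64
    y::Float64
end

p = Point(1.0, 2.0)
# p.z = 3.0  # error: Point has no field z, and fields can't be added

# Because the layout is known, a Vector{Point} stores the Float64s
# inline, with no per-element pointer chasing:
pts = [Point(i, 2i) for i in 1.0:3.0]
sum(q.x for q in pts)  # 1.0 + 2.0 + 3.0 = 6.0
```

`isbitstype(Point)` returns `true`, which is Julia's way of saying the type can be stored inline like this.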

There are a few other careful compromises like this, where Julia gives up the small amount of dynamism that would make optimization impossible, while keeping enough dynamic behavior to be highly usable.

Apparently, function overloading in a language like C# is done at compile time, while in Julia it is done at runtime, based on the types of the arguments given.
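A hedged Julia sketch of that difference (the `eff` name and the types loosely follow the blog post; this is not its exact code): the method is selected from the runtime types of all arguments, not from their declared types.

```julia
abstract type PokeType end
struct Fire  <: PokeType end
struct Water <: PokeType end

eff(::PokeType, ::PokeType) = 1.0  # default effectiveness
eff(::Water, ::Fire) = 2.0         # water is super effective against fire

# Even when the static element type is the abstract PokeType,
# dispatch uses the runtime (concrete) type of each argument:
team = PokeType[Water(), Fire()]
eff(team[1], team[2])  # Water vs Fire -> 2.0
eff(team[2], team[1])  # Fire vs Water -> falls back to the default, 1.0
```

In C# terms, every call behaves as if both arguments were `dynamic`.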

You can achieve the same using dynamic in C#:

eff((dynamic)pokemon1.type, (dynamic)pokemon2.type);

Comes at a bit of a performance cost of course.

It'd make for some interesting bugs as well

so, it seems to me, the relevant question is:

Is it the Julia JIT that makes the type inference fast, or is it the type inference that makes the JIT fast? (See other comments, e.g. by adgflsfhk1, asking whether the types are resolved at runtime or compile time, or whether the Julia compiler is neither AOT nor JIT but JAOT.) Can anyone answer LI5?

When folks talk about the performance of Julia's JIT, they're generally talking about the compile-time overhead (and not the runtime speed). Type inference is one part of the compile-time overhead that enables improved runtime speed... but I think you're actually curious about runtime speed.

Julia aggressively compiles multiple specialized versions of almost all the methods it encounters — one specialization for each unique combination of the argument types you pass. Even if you only define a single method `f(x,y) = 3x+2y`, Julia will compile a specialized floating point implementation when you call `f(2.5, 3.5)`, an integer implementation when you call `f(1, 2)`, and so on. It's this very aggressive compilation of everything many times over that makes Julia infamously slow to start and fast to run.
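To make the above concrete (reusing the `f(x,y) = 3x+2y` definition from this comment):

```julia
# One generic method...
f(x, y) = 3x + 2y

# ...but each new combination of argument types triggers compilation of
# a separate native specialization on first call:
f(2.5, 3.5)  # Float64 specialization; 3*2.5 + 2*3.5 = 14.5
f(1, 2)      # Int specialization;     3*1   + 2*2   = 7

# There is still only one *method*; the specializations are a
# compiler-level detail:
length(methods(f))  # 1
```

You can inspect the specialized native code with `@code_native f(1, 2)` versus `@code_native f(2.5, 3.5)` and see entirely different instructions.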

Inside each of these specializations, Julia concretely knows the type of `x` and `y`. And so it can — while compiling it — look up exactly which multiplication method it should use to compute `3x` and `2y`, inline them, infer the types of the results, lookup which `+` method to call, and inline it.

Even if you end up calling bigger functions that don't inline, Julia can hard-code the pointer to the exact specialization of every function you call because it's in a context where it knows the exact types of everything.

So it's a bit of a chicken-and-egg question. Julia's JIT (typically) compiles specializations of methods with precisely known argument types. This does make inference easier: every function call is a fresh start! If at any point inference loses the trail, that's ok: Julia just compiles pessimistic code that handles any type and looks up the exact method/specialization needed for each function call on demand (potentially compiling it and getting a new fresh start), and then you're back on the happy, well-inferred, super-specialized path.

It's a bit of both. Julia's JIT makes code without annotated types fast since it can figure out the types just before runtime and de-virtualize on that basis.

The reason Julia is able to make such aggressive compile-time optimizations is that it only compiles the code that you actually use. In many ways, Julia (performance-wise) is similar to heavily templated C++, but the key difference is that in C++ templated code, you have to compile for every combination of possible types, which can easily be thousands of times more methods than are actually needed.

What? No, of course not. You only have to compile templates for the types that are actually used (i.e. instantiated). In fact it would be impossible to compile them for every possible (combination of) types.

seems super abstract and hard to maintain.
