Hacker News
Concurrency in Julia (lwn.net)
181 points by leephillips 19 days ago | hide | past | favorite | 42 comments



The Folds.jl package [1] mentioned in the article is very nicely written, IMHO.

For another alternative to Julia's built-in `Threads.@threads` macro, folks may also be interested in checking out `@batch` from Polyester.jl [2] (formerly CheapThreads.jl), which features particularly low-overhead threading.

[1] https://github.com/JuliaFolds/Folds.jl

[2] https://github.com/JuliaSIMD/Polyester.jl
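For a sense of what "low-overhead" means here: `@batch` is designed as a near drop-in replacement for `Threads.@threads` on simple loops. A minimal Base-only sketch (the `axpy!` name and loop body are just illustrative; the `@batch` variant in the comment assumes Polyester.jl is installed):

```julia
# A simple threaded loop using the built-in macro:
function axpy!(y, a, x)
    Threads.@threads for i in eachindex(y, x)
        y[i] = a * x[i] + y[i]
    end
    return y
end

# With Polyester.jl loaded (`using Polyester`), the same loop could
# instead use the lower-overhead static task pool (hypothetical drop-in):
#     @batch for i in eachindex(y, x)
#         y[i] = a * x[i] + y[i]
#     end
```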


I really like how simply Julia unifies many computational concepts. For example, MPI is a classic solution to some of the same problems, but it is only for coarse parallelism. OS-specific threading libraries are available for fine-grained parallelism, but the ligature between the two is typically messy.

I have heard about Go's parallelism being awesome, does anyone know of a comparison between Go's and Julia's models?


According to this[1], Julia's threading/task model is in part inspired by Go. I don't know of any detailed analysis of the similarities and differences, though.

[1] https://julialang.org/blog/2019/07/multithreading/
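For a flavor of the resemblance: Julia's tasks and `Channel`s can be used much like goroutines and Go channels. A minimal Base-only sketch (names invented for illustration, not taken from the linked post):

```julia
# Producer/consumer in a Go-ish style: spawn a task that feeds a
# buffered channel, then iterate the channel until it is closed.
function producer_consumer(n)
    ch = Channel{Int}(8)        # buffered, like `make(chan int, 8)` in Go
    Threads.@spawn begin        # roughly `go func() { ... }()`
        for i in 1:n
            put!(ch, i)
        end
        close(ch)
    end
    total = 0
    for v in ch                 # blocks like `for v := range ch` in Go
        total += v
    end
    return total
end
```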


I’ve been playing with GPGPU on Julia lately also, and it really seems like things have come a long way in the last few years. Check out the JuliaCon 2021 talk on GPU compute if you’re interested.



Probably more this talk about CUDA 3.0 https://live.juliacon.org/talk/UGX8YR or the workshop https://www.youtube.com/watch?v=Hz9IMJuW5hU


Yeah, I was thinking of the workshop.


Are AMD and Nvidia GPUs on par nowadays? Or is it still an Nvidia-first world when it comes to compute support?


There was actually a question about this earlier today: https://discourse.julialang.org/t/amdgpu-jl-status/71191

TL;DR: The entire GPUArrays.jl test suite now passes with AMDGPU.jl. There are still some missing features and it is not as mature as CUDA.jl, but this space is progressing rapidly, and it has benefited from the generic GPU compilation pipeline that was initially built for CUDA.jl.


Keep in mind that AMDGPU.jl requires ROCm, which is basically dead (no recent GPUs support it and none of those that do are consumer-grade).

The problem with AMD GPGPU is not software, it is that AMD literally does not care.


Definitely not dead; Vega is well supported, and with some tweaks, Polaris probably works too (although it definitely was broken in HIP around ROCm 4.0.0 or so).

I think AMD has some work to do on non-C++/Python ecosystem engagement for sure, but they've built a foundation that's quite easy to build upon and get excellent performance and functionality; AMDGPU.jl is a testament to that.


The gfx10 line (6800XT et al) probably works out of the box on a recent release. I think some are even officially supported. I test on a 5700XT, which I don't think is officially supported. The change to 32-wide wavefronts took a while to resolve.

ROCm gets releases every few months or so. The LLVM part of the project is mirrored to GitHub in real time.


It's probably worth mentioning that Julia has MPI.jl which creates an array type that works over MPI transparently to the user.


MPI.jl does not have an array wrapper type; it just exposes the bare MPI calls. Some other packages do use it to build array types.


> I have heard about Go's parallelism being awesome, does anyone know of a comparison between Go's and Julia's models?

There's nothing fundamentally special to Go's "parallelism".

It uses userland threads with small stacks, near-preempted (and now actually preempted, I think, since a few versions back; before that goroutines had implicit yield points at function calls, so code without function calls could lock out the scheduler).

The parallelism comes from the M:N scheduling. Go has no built-in constructs for massive parallelisation, where you hand it a loop and it efficiently runs it on a hundred cores.


Having lightweight userland threads fundamentally changes the way that you write concurrent programs. There is no need to worry about creating too many threads or of thread creation being too heavy. Old school patterns like process/thread/goroutine per connection are very natural to write in Go and result in dead simple code that performs well under load.

Reasoning about threads & basic blocking calls is a simpler mental model compared to async/await style concurrency in my opinion.

Go is fundamentally designed for developing backend applications, not high perf mathematical/scientific computing. Features like unrolling for-loops across hundreds of cores or offloading to GPU's would be out of place in the language.


> Having lightweight userland threads fundamentally changes the way that you write concurrent programs. There is no need to worry about creating too many threads or of thread creation being too heavy. Old school patterns like process/thread/goroutine per connection are very natural to write in Go and result in dead simple code that performs well under load.

Well... there's still a problem at tens of millions of goroutines.

But the issue is that OS threads generally had problems at around 100,000 threads. Making something "more lightweight" than pthreads makes sense, because 10,000,000 coroutines is a fundamentally different program design than 100,000 pthreads.


Async/await gets you the same thing as goroutines, sans the implicit yield points, no?


Sure, you can rewrite a program using lightweight threads to an async/await model and vice versa. I find the difference is in how easy it is to reason about concurrency in the program.

With threads I find I can more easily mentally organise which code in my application is executing in parallel.

In async/await land, you end up with two classes of functions, async and non-async; it's up to the callee to declare whether it is a blocking call or not. With threads you just block by default, and let the caller determine whether it's appropriate to block or to execute the function on another thread. See https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... for someone more eloquent than myself.


There is sometimes a problem, when you run out of file descriptors for example.


I don't think there's anything too special with Go's constructs, it's just that they are very well optimized and widely used.


All day on the front page and not a single complaint about 1-based indexing nor Unicode identifiers. It’s a new day.


Those both are fine. I do wish the dependency management system was friendlier to either airgapped or nexus/solarcube development environments.


Have you asked about this on the discussion board? (https://discourse.julialang.org/)

I don’t know anything about it myself, but I’ve heard that some people have had success setting up private package servers. I see there’s some discussion at https://discourse.julialang.org/t/pkg-private-registries-and...


Every time Julia pops up I can't help but get disappointed all over again. I was so intrigued by multiple aspects of the language, and even the type system seemed promising at first. However the inability to subtype concrete types is a total dealbreaker for me.

Everything else - even much of the type system - is extremely interesting. However the inability to subtype concrete types means you are essentially stuck with the unfortunately common inability to trivially specify how the heck your program is modeling your problem.

I just want the ability to specify an Apple_Count is not an Orange_Count without having to define my own damn operators. Why do so many languages make this so hard?

And Julia was so close, until one little sentence moved it so far away.


Speaking as someone who started in Java and moved to Julia, my experience has been that a lack of concrete subtyping has generally not made an impact on difficulty with respect to problem modeling (at least, not relative to Java or other languages with concrete subtyping). Moreover, some of the emergent properties of Julia’s type system (the “Tim Holy Trait Trick,” etc) enable problem modeling in different — and perhaps more “Julian” — ways, modeling by behavior rather than structure.

That said, if you absolutely need concrete subtyping in Julia, you can emulate it for structure with the tricks described in this (old but gold) Chris Rackauckas blog post: https://www.stochasticlifestyle.com/type-dispatch-design-pos...
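For readers who haven't seen it, a minimal sketch of the "Tim Holy Trait Trick" mentioned above, i.e. modeling by behavior rather than structure (all names here are invented for illustration):

```julia
# Trait types: small marker structs standing in for a property.
abstract type Countability end
struct IsCountable  <: Countability end
struct NotCountable <: Countability end

# Types opt in by behavior, not by supertype:
countability(::Type) = NotCountable()
countability(::Type{<:Integer}) = IsCountable()

# Dispatch on the trait value rather than the concrete type:
describe(x) = describe(countability(typeof(x)), x)
describe(::IsCountable, x)  = "countable: $x"
describe(::NotCountable, x) = "not countable"
```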


I spent far too many hours of my life writing a far too long response to another comment. Some... portion of it touches on some of this. I can't in good faith suggest reading it, but you could.

But the more specific and quicker version - I don't find Java is typed the way I want either. It seems to somehow find a way to be too much and not enough. I also don't do a lot of Java, so YMMV. Specifically I'm thinking of Ada when I start comparing type systems, if that helps.

I'd need more brainpower and probably Julia chops to give those a real shot. However I started to get a 'build your own type system' vibe from some of what I was seeing while nearing the end of that gargantuan post. This feels like some initial level of confirmation, so yay?

I'll give it a look later. I'm curious how far you can integrate it with external code, what sort of guarantees you get in practice, and how much it requires doing 'the right thing.'

Maybe in practice the way dispatch works ends up taking care of more problems than I'd expect, but also seeing the fairly widespread use of implicit conversions makes me a bit concerned. Again, maybe a thing sidestepped by other factors.


Do not be fooled: Julia is not an OO language, despite some similarities. If you try to replicate OO patterns in Julia it's not going to work well, but in idiomatic Julia I've never found the need for concrete subtyping: generally Julia under-emphasizes class hierarchies in favor of types as dispatch vessels and data holders. It's very different from OOP, but when you get used to it it's quite nice. The things I miss from OOP are the consistency (when there's just one way to model the world you don't have to think quite as much about design) and things like tab completion for method discovery.


I don't actually do much OO. I typically stick with Ada. It does have OO, but I rarely reach for it if I don't need it. Most of my typing interests are actually from the non-OO side.

Too much detail is around in a far too large response to another commenter, though I can't actually suggest it... But at least a bit of the start should provide a little more detail on what I'm looking for with types.


I haven't seen it done yet, but in principle someone could probably write an editor plugin that gives some sort of autocomplete for method discovery in Julia based on `methodswith` or similar, which would be a nice thing to have!


I don’t get it. Aren’t apple counts and orange counts both integers? Can you give an example of why you would want to subtype concrete types, and how this deficiency would prevent you from writing the kind of program that you want to write?


I guess he does not want to be able to add Apples to Oranges.


That's interesting.

But Julia really does that already. Check out Unitful.jl


The approach might work but I don't think in general it actually solves the same problem. While I used integer-like things in the example, it wasn't actually important.

It could be strings (think Apple_Description and Orange_Description) or even more complex type from a library (maybe something like Apple_Throw_Plot and Orange_Throw_Plot).


Yes, I intended both to be integers - but in general they could be anything and the overall idea applies.

The overall idea is to be able to tell the language that an Apple_Count type represents an integer value and some extra meaning that is unique to the Apple_Count type. Additionally we specify that Apple_Counts interact with other Apple_Counts in the same way integers interact with integers, and that the 'unique meaning' of the type always passes through the operation unchanged (I'm sure there's some cool math term for this?).

Meanwhile we don't say how to mix together the 'extra unique meanings' that come with the _Count types, so any attempts to do so lets the compiler know to yell at us, because that's just nonsense according to the rules we provided.

This kind of perspective applies to every single variable in our programs - they all mean something* beyond their basest value to us, even if it's a string for a joke you haven't cleaned up yet. So my desire is to specify all the general kinds of meanings in my programs, and then be able to describe how all of those types of things can and cannot meaningfully interact when transforming the input into the output.

I've been awake and writing this for far too long and am just going off on wild and grandiose tangents, so I'll try to actually be brief. I've been totally convinced by this sort of approach. The problems uncovered are all very 'real' problems. All of them result from some basic incompatibility between the fundamental 'what' and 'how' of your program.

If you want more detail at how this all can (not must by any means, but can) work, I encourage you to look at how Ada handles type derivations (aka derived types). Do be aware Ada also uses the term 'subtype' but in a very different way. An Ada subtype, more or less, specifies a subset of values from a base type.
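For what it's worth, a rough Julia approximation of the Apple_Count/Orange_Count idea can be sketched with a parametric wrapper. This is only a sketch (all names invented), and it illustrates rather than removes the per-operator boilerplate being complained about:

```julia
# A tagged integer: the tag is a type parameter, so it is checked
# at dispatch time and passes through arithmetic unchanged.
struct Count{Tag}
    n::Int
end

# Same-tag counts add like integers; mixing tags has no matching
# method, so the compiler's dispatch machinery "yells at us".
Base.:+(a::Count{T}, b::Count{T}) where {T} = Count{T}(a.n + b.n)

const Apple_Count  = Count{:apple}
const Orange_Count = Count{:orange}
```

`Apple_Count(2) + Apple_Count(3)` works, while `Apple_Count(2) + Orange_Count(3)` is a `MethodError`; but every other operator would need the same treatment, which is exactly the "define my own damn operators" cost the parent comment objects to.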

---

As far as Julia goes, it's been a while since I looked into it all that much, aside from a bit of a refresher this evening. I believe the overall point is roughly correct, but I don't stand by the details, and neither should you.

Julia has at least a few issues that mean the kind of type safety I really desire is not trivially easy as the language currently exists - at least as far as I know. Unfortunately the ease of use of this kind of thing seems to be rather binary - it's either easy or it's hard. Partly I think that's because of how pervasive the concept is. Even a small amount of boilerplate balloons quite severely, even for very simple programs.

However some of the problematic aspects of Julia seem like they might have existing solutions. For example the multiple dispatch mechanism is happy to mix and match any and all types, but type parameters provide a high degree of control over what methods are available to match in the first place. These features seem - to me anyway - to be powerful enough to have some serious potential* for making Julia one of the nicest languages to try to bolt all this typing onto after the fact. (Or maybe there's a little too much complexity to allow very nice solutions, and then the sheer number of type interactions drags Julia down to be one of the worst. Who knows!)

But subtypes of concrete types - or rather their lack - seems different. However I'm increasingly reluctant to say much more about this in any kind of useful detail. (Yes, the irony burns.) What was supposed to be a short look to confirm a few things has led me down a massive hole. The size of this hole seems to be increasing the deeper I go, and I think the expansion is accelerating. I've also passed by a few other, possibly equally sized holes along the way... I don't think I'm even that deep, and it's already pretty scary. Although I'm well past the point of firing on all cylinders, I don't think it matters down here, in the dark.

But anyway. My basic thoughts, some or all of which are probably irrelevant to varying degrees depending on how far down the hole/s you've gone.

The very general issue with the lack of concrete subtyping - at least assuming you can get at least something out of it - is that the main alternative appears to probably largely be struct wrapping. That's the closest thing to a concrete statement I have for this section. It's a big, big hole.

Struct wrapping sucks. It really, really sucks. Julia seems to do things that make it both better and worse than it could be. In terms of making it worse: the lack of abstract composite types (though they seem to have gotten some recent attention!). This seems like it could rule out at least _simple_ methods of making the wrapping process that much nicer. Even if you did make an abstract type to grab all the behavior from the composite type of interest, you'll still need to copy the whole damn struct every single time you want to make a subtype. It might save you from all the forwarding, though? Maybe? Maybe not?

Even for non-composite types, any time behavior isn't implemented on an abstract type you may be looking at doing more struct wrapping. A brief glance around the standard library does not show a tendency to restrict the implementation of behavior to abstract types. I don't know if that's a common trend, but I didn't see anything in the main documentation even suggesting you might want to adopt such a practice either. So, probably means quite a fair bit of struct wrapping maybe, I think, unless you want to venture down one of the bigger holes.
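To make the forwarding tedium concrete, a bare-bones wrapping sketch (hypothetical `MyVec` type; each piece of behavior must be forwarded by hand):

```julia
# Wrap a Vector to get a distinct type...
struct MyVec
    data::Vector{Float64}
end

# ...then forward every method you need, one by one:
Base.length(v::MyVec)      = length(v.data)
Base.getindex(v::MyVec, i) = v.data[i]
Base.sum(v::MyVec)         = sum(v.data)
# ...and so on for every other method the wrapped type supports.
```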

Open questions. What the heck does a Union over one type actually do, and is it useful for wrapping things like this? Seems like maybe? But I can't find any mention of anything about it, and after venturing this deep I'm no longer willing to hope I can even begin to actually judge it just by poking at it. Can you shoehorn in parametric types somehow? At some point it feels like you're just going to have to write your own type system though. Primitive parametric types seems promising in principle, but I have a feeling going that route means you might just end up giving up on any code you didn't write yourself. Does "solve" the wrapping problem though...

And then macros. I.. yeah. Macros. Good, evil, both, I don't know anymore. It seems like some aspects of struct wrapping have been made easier by them to various degrees. In some cases it appears to be better but still probably a bit of work; in others... I don't even know. And how wise is it to even dive into this particular hole? I don't know. It's a thing. It might help, some. But something makes me feel only wizards are going to end up saving time this way.

I dunno. I give.

[The below was written earlier, before I had first glimpsed the Holes. Now, well. The pure naivety may be worth a laugh.] *Wild and rampant speculation warning. I can imagine it may be possible to create a stricter version of the existing type hierarchy that adds subtype equality checks to their methods. Maybe it would even be trivial to convert existing code simply by defining those methods without the subtype checks. Nothing that sounds so good is ever so easy, but hey. This does sound similar to how convert (which itself might be a whole other thing you'd need to tame...) is handled though, so maybe?


This is an interesting point. Julia might well benefit from

    Base.not_understood(f, x::Apple_Count, y::Apple_Count) =
        f(Int(x), Int(y)) |> Apple_Count
with semantics borrowed from Smalltalk. I'm a bit rusty with Julia, please forgive any syntax errors.

I wonder how hard it would be to hack the compiler, catch method not found exceptions, and do that? It wouldn't be fast, but it would work as a proof of concept.

Edit: the semantics I have in mind go like this. If you call f(x, y), where x and y are Apple_Count, but there is no method defined for f(::Apple_Count, ::Apple_Count), then the compiler tries not_understood(f, x, y).


One thing you might want to check out is https://juliapackages.com/p/unitful which is a package that allows you to add a unit system to numbers that will error if the units don't match in a way that makes sense.


Maybe you will find this helpful: https://github.com/gcalderone/ReusePatterns.jl

It includes an implementation of concrete subtyping.


For a job interview I did the same parallelization of an `isPrime` function in Rust, and it was basically as simple as the Julia one: replacing a call to `into_iter()` with `into_par_iter()`. I also show the strong-scaling relationship, which I wish all things talking about concurrency did. Link for the curious: https://github.com/pcrumley/parallel_primes_rs


This article uses very inefficient mathematical computations to illustrate concurrency. That’s fine, they are just examples. But you might also be interested in learning how it’s really done, for example by reading the Primes package. [1]

[1] https://github.com/JuliaMath/Primes.jl/blob/master/src/Prime...


Yeah, as I wrote there, “As with all of the examples in the article, isprime() leaves many potential optimizations on the table in the interests of simplicity.”

The source you link to is longer than my entire article. But it is good to see an example of a “real” program.


The care you took in showing the usefulness of the task migration was appreciated, thanks.



