Hacker News new | past | comments | ask | show | jobs | submit login
Julia: A Post-Mortem (chrisvoncsefalvay.com)
107 points by LittlePeter on March 8, 2021 | hide | past | favorite | 252 comments



This feels like a very strongly worded title for a fairly lacklustre article.

There’s Python the language and “Python the DSL for tensor frameworks”: the former has arguably lost some ground/mindshare to newer, less “hobbled” languages, and the latter exists not by virtue of its own strengths but as the convenient vehicle for frameworks written by two enormous corporations.

Julia hasn’t overtaken Python’s ecosystem yet, but how far it’s come in relatively little time is the real testament to its community, the very community the author decries as lacking.

Using weird, outdated rankings and citing “that one package that kind of works in very specific circumstances” (Numba) as proof that all of Python’s issues are solved and Julia is over is a weak argument at best, and deceptive at worst.

What Julia could really, really benefit from, though, is some vocal commercial backing. Had Google chosen to rewrite TensorFlow in Julia, I think two things would have happened: 1. the project would have actually succeeded and been finished (in a “this is the TensorFlow engine now” sense); 2. it would have done absolute wonders for the Julia ecosystem and mindshare, and we wouldn’t be having this tawdry discussion. That didn’t happen (unless some TensorFlow product managers want a promotion by re-launching a TensorFlow rewrite? ;) ), so they need another source of major support IMO.


> What Julia could really, really benefit from, though, is some vocal commercial backing. Had Google chosen to rewrite TensorFlow in Julia, I think two things would have happened

We have the example of "Swift for Tensorflow", which was pitched by Google as the "next-generation platform for machine learning" (that's from their own website) and didn't result in anything meaningful and is now dead as far as I understand. I'm not that convinced that would work better with Julia.


At the time they were “picking” candidates for the TensorFlow rewrite, Julia already had Flux with an early version of Zygote and was doing automatic differentiation at the language level. This was literally the killer feature they wanted, and they were prepared to rework how significant portions of the Swift language and compiler functioned to get it.

Had they chosen the language based on actual suitability, and not because one of them was Chris Lattner's pet project, they could have been significantly further ahead with far less time and energy.


Yeah, Google really messed this up; they had a golden opportunity and they went the wrong direction. Julia would have been better, but IMO they should have just created a new language from scratch.


> they should have just created a new language from scratch

God, please not another one with new tools, new libraries, etc.


Right because there are so many great existing options for ML


... yes?


Python — no threads, or built-in types

Julia — odd language

R — not a general purpose programming language

Most of the data scientists and ML engineers I work with want a new language. So yeah, it’s not a bad idea.


> Julia — odd language

What a profound criticism. If data scientists can't use Julia then they're not great data scientists - you have RCall and PyCall, Flux, Knet, MLJ, plotting libraries, etc. It has built-in and user-defined types, threads, multiple dispatch, ...

"Please don't think that Julia is only useful for 1. custom units 2. custom GPU kernels 3. Custom array types 4. custom bayesian priors. 5. AD through custom types 6. Task based parallelism 7. symbolic gradients with modeling toolkit 8. Agent based modeling 9. physics informed neural networks 10. abstract tables types ..."


What a great statement. It's not that they can't, it's that they don't want to use it.

I really tried to like Julia, wrote it for months, but it misses the mark. I've been working in analytics for 10 years, and of everyone I know who's tried it, only a couple of people like Julia.

That's a problem, and you can see it in the adoption numbers. The Julia community keeps wanting to bury its head in the sand and pretend it's not, which is fun.


The project was pushed by Swift's creator, and it failed supposedly because he left Google, AFAIK. So for Julia it would most likely have worked out much better.


Are you sure you have cause and effect in the right order?


I'm not certain of what happened, I'm just recalling what I had read some time ago.


Python's strength is its simplified coding style. People who use these frameworks are not software engineers who want to deal with object-passing mechanics in excruciating detail; there are software engineers to do just that. So until something beats Python's simplicity of writing style, or Python becomes incredibly slow, it won't be easy to supplant.


"Python's strength is its simplified coding style"

Anyone familiar with Python is unlikely to struggle with the syntax of Julia or with reading straightforward Julia code. In fact, I'd argue that Julia is easier to learn than Python in many aspects.

Python is now a large language, not the small, simple language of years past. Of course, in both languages (Julia and Python) you can ignore complex features.

But what Python also gives you is abundant examples and solutions to problems that are easily found on the Web.

In contrast, new(ish) programming languages suffer from a lack of libraries, examples and tutorials that slows down momentum.


Yes. First mover advantage and network effects. It is difficult to topple the leader even if your own product is somehow way better.


This 100%. A lot of people working with code are folks who code as a means to an end rather than as a craft unto itself, and these folks are important because they often have expertise in a domain other than software engineering. Python hits a sweet spot in terms of power and ease-of-learning/ease-of-use that makes it incredibly well-suited to these types of users.


I see this reflected in the dBASE vs. *gres impedance.

At the one end, Ashton-Tate delivering a shallow-learning-curve declarative language for semi-referential data that SOHO novices could create applications with; at the other, Stonebraker wanting to put the neatness that pure client computing enabled (dedicated CPU time and architecture, leading to inexpensive multimedia) together with large-system semantics [0], long before capabilities. A curse I believe was academic grant-seeking behaviour in origin.

I seriously keep seeing the results of driving forces, individually uniquely capable, in project conditions which are very difficult to reconcile with the ultimate purpose software is used for. Like, before Jim Keller quit Intel, he gave a talk explaining how Tesla was founded on a unique case where reductions to silicon were optimal for CAFE, and that's a rare case, and how he couldn't see the 22 people in the audience he knew could tune the full performance out of modern silicon software instructions. (Probably that should have signalled his imminent departure, considering the proliferating instruction set at Intel and his unusual call-out to a success he thought worthy of mention.) It used to be academia thinking about small-systems applications and development, e.g. the Edinburgh AI group up to the 80s, and Carl Hewitt thinking about how to make "next use unknown" a good thing in PLANNER. But certainly lately, academic prowess has adhered to scale and the apparent needs of the largest computing users. If there's an impedance mismatch here, I think it has evolved from putting the solo developer in the employ of hyperscalers without any thought about the effects of that situation.

[0] Personally, I remember the talk preceded the walk by some 20 years or so, and simultaneously took on the big-iron boys at IBM. Oracle drove its ugly polluting truck through the middle and created an empire that we're still far from superannuating (not replacing, in the view of this guy seriously targeting OpenVMS 9.3 with the ability to have workstations join the cluster and run transactions across the system and DB against the same native transaction manager, define time-series objects using FORTRAN, and make the cursor over the resulting blobs addressable from DEC BASIC inside user applications, including Excel, soon to be patched to the VMS TM via SRV-IO via a switch on the same NIC). All very much old functionality becoming the new. If I'm allowed to write about it, I will probably emphasise just how much innovative design has been enabled by expensive licensing, and recount my memories of how nothing ever cropped up to address comparable needs in the FOSS world, despite the foundations resting upon some of the most well-documented CS and being wonderfully unencumbered by IP inhibitors due to the passing of time.


There are a lot of people who are somewhere in between these groups and providing a language that goes up and down the stack well would be super useful.

I agree it needs to be as simple as python on the high level though, or the data science end of the spectrum will just never adopt it.


That is currently a non-issue, since software is lagging hardware. So you can join two systems together fairly inefficiently and still make a go of it. Sure, a good language for people down the stack could topple Python from the top, but it won't be easy. It'd be far easier if we were approaching software limits on current hardware and needed to pull everything into one language and system to make it work together.


It's useful for ML in that ML requires so much optimization that being able to drop down in the same language is incredibly handy. For ML, software isn't lagging hardware; it's the other way around.


Foreign function interop, ideally at the system (or intermediate) level, does so much.

Before getting distracted by .NET and Mono, I feel I have to lay the blame (and my nineties) at the feet of CORBA and the Object Management Group. Seriously though, what was the OMG but a UNIX consortium trying to break the hold of DEC and VMS over the most complex critical systems?


"Pitched by Google" == pitched by one dude (Lattner), basically.


If you want to be precise, it would be: "Swift for TensorFlow", a project led by Lattner, while working at Google, as part of the TensorFlow project (published on tensorflow.org and part of the tensorflow GitHub organization, which are two official channels). Presented at Google I/O 2019 [0] as "Swift for TensorFlow is a platform for the next generation of machine learning that leverages blablabla...".

I stand by my "pitched by Google" :)

[0]: https://www.youtube.com/watch?v=3fJsqGHhlVA


With zero presence at the TensorFlow 2020 conference, and not even an honourable mention in the overview blog post about the state of the nation regarding TensorFlow.


I think even since 2014 there have been many niche technical computing companies supporting development quite successfully. Bigger players definitely would help, but I think most of that at this time is more public/private research money for scientific computing. Hence Julia has many academic contributors (grad students and post docs a likely majority), many of which are now becoming professors and industry leaders.

The post 1.0 world in Julia has been spectacular for development stability. In the early days it was somewhat tiring trying to develop basic foundational libraries, and keep pace with language changes. 1.0 has stabilized things quite a bit, and the forthcoming LTS (sometime this year maybe) I think will really start to button up some of the major issues people have with package load times and installation.


Yes, writing pre-1.0 Julia code was like balancing on a log in the water - the platform kept moving under you :D I'm glad it's gotten a lot stabler now.

And I agree about academics as a key user base. I think Julia's growth is far from over, there's a lot of organic spread yet to come. It may never replace Python in terms of global popularity, but then, why should it have to?


I'm sorry, but deep learning is only a very small part of why Python is preferred by data scientists. The fact that Python was the preferred language is why the enormous corporations wrote bindings to it. Both of these frameworks exist in the Julia ecosystem.


As someone who does data science, I roll my eyes every time I have to touch Python. It’s ubiquitous, but it actually sucks once you get used to better languages.

It is somewhat circular: it was preferred because your earlier alternatives were Java or C(++), both of which had their shortcomings. SKLearn is still one of the most feature-complete and powerful libraries, and it was Python-only and thus drew a crowd. For a lot of the people who write data science code, I would be confident to bet that if you taught them Julia first, they'd prefer that.


I have a few questions since you have a broader view.

What other languages do you prefer?

What Python traits do you think are weak? (I'm no Python fanatic, but I like it as a tiny Swiss Army knife.)


There are lots of little things that add up. Things like expressing multiline closures in Python are clunky compared to almost any other dynamic language, whether Ruby, Lua or Julia.
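To illustrate the closure point: Python's `lambda` is limited to a single expression, so any multi-statement transformation has to be hoisted out into a named function. A minimal, hypothetical example:

```python
# A lambda can only hold one expression, so this multi-step key
# function cannot be written inline at the call site:
def normalize(s):
    s = s.strip()   # statement 1
    s = s.lower()   # statement 2
    return s

names = ["carol", "  Alice", "BOB  "]
print(sorted(names, key=normalize))  # ['  Alice', 'BOB  ', 'carol']
```

In Ruby, Lua or Julia the equivalent multi-line block could simply be written inline where it is used.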

AsyncIO is quite complex in how it works. You use the same patterns for concurrency in Julia, but it is so much easier to grasp and work with.
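For reference, this is roughly the minimum ceremony asyncio asks for, even for a trivial concurrent task (a sketch, not tied to any particular codebase): every function in the chain must be declared `async`, every suspension point must be awaited explicitly, and an event loop has to be started at the top level.

```python
import asyncio

async def work(name, delay):
    # every function in the call chain needs the async/await "coloring"
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather runs the coroutines concurrently; results keep argument order
    return await asyncio.gather(work("a", 0.01), work("b", 0.005))

# an event loop must be entered explicitly at the top level
results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```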

Multiple dispatch as used in Julia makes API design so much cleaner. You can see that almost anywhere. I make some comparison of making REST calls in Python and Julia here; Julia is much cleaner IMHO: https://erik-engheim.medium.com/explore-rest-apis-with-curl-...

The Python problem is that you cannot easily reuse the same function name for different types. Hence, instead of creating one abstraction across many different types, you need to invent all these different names, which can be hard to guess. In Julia there are often far fewer core concepts to learn, and they can be re-applied in far more ways.

Python's insistence on an object-oriented approach often creates problems. I have some observations on that in machine learning, looking at PyTorch compared to Julia's Flux: https://python.plainenglish.io/python-experience-in-machine-...

The out-of-the-box experience is not all that great. The Python REPL is very bare-bones. https://erik-engheim.medium.com/python-vs-julia-observations...

Things like calling shell programs are done more elegantly in Julia. Same with calling C functions. String interpolation is more obvious; there are not, like, 4 different ways of doing it.
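For the curious, the "4 different ways" of string formatting being alluded to are roughly these, all still in common use (a quick illustration):

```python
from string import Template

name, n = "world", 3

a = "hello %s x%d" % (name, n)         # printf-style (the oldest)
b = "hello {} x{}".format(name, n)     # str.format (Python 2.6+)
c = f"hello {name} x{n}"               # f-strings (Python 3.6+)
d = Template("hello $name x$n").substitute(name=name, n=n)  # string.Template

# all four produce the same string
assert a == b == c == d == "hello world x3"
```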

Package management and environment management is much simpler and elegantly done.

I agree some of this may seem unfair, as Python has baggage from being an older language. But that also counts in its favor, with a wider selection of libraries. Both should be taken into account when evaluating your choices.


> The Python problem is that you cannot easily reuse the same function name for different types. Hence, instead of creating one abstraction across many different types, you need to invent all these different names, which can be hard to guess. In Julia there are often far fewer core concepts to learn, and they can be re-applied in far more ways.

At least there's `functools.singledispatch` in the standard library. There are apparently also multiple-dispatch libraries. I've never used either; duck typing with some try/except (and some isinstance) has served me well so far, but I agree it's not as clean.
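A quick sketch of what `functools.singledispatch` looks like in practice, since it came up. Note that it dispatches on the type of the first argument only, a restricted form of what Julia does across all arguments (function names here are hypothetical):

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # fallback for unregistered types
    return f"something: {x!r}"

@describe.register
def _(x: int):
    return f"an integer: {x}"

@describe.register
def _(x: list):
    return f"a list of {len(x)} items"

print(describe(42))      # an integer: 42
print(describe([1, 2]))  # a list of 2 items
print(describe("hi"))    # something: 'hi'
```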

> There are not like 4 different ways of doing it.

Yet.

> Package management and environment management is much simpler and elegantly done.

Yeah, it's a nightmare in Python. After setting up projects dozens of times now, I still don't grok it.


Not the OP, but: the community approach to performance, which rewrites code into C while still calling it Python (?), instead of being more supportive of ongoing JIT endeavours.

Yes, Python is very dynamic, but not more so than Smalltalk, SELF or Common Lisp, all of which have quite good JIT engines.


That's my opinion too, and coming from you (from what I gather, you're seasoned in many areas of the computing field) it's not surprising. But the current mainstream is not really aware of all this. There's a tiny Python cult due to SciPy et al.


Yep same wavelength here.

There is another thing: most of the major libraries where Python is used as a DSL, written in a mix of C, C++ and Fortran, can be used in other languages just as well. There's nothing special about Python there, other than a lack of awareness of what everyone else is doing.


In the ML/DL world, or in physical simulators, people just compose a task, throw it at a CPU/GPU/TPU or a cluster of these, and let it run for a long time. I don't see how Julia will be different for this kind of task. I understand that Julia solves the two-language problem, with all the goodies that multiple dispatch brings, but the Python ecosystem has progressed a lot in the past 2 years: now you have the Numba JIT, JAX JIT, PyTorch JIT, XLA JIT and many other proprietary JITs that are not open-sourced. Since JAX (as an example) is mostly NumPy and Python, you can leverage your existing knowledge instead of having to learn a fundamentally new paradigm. I would say that Python has many "specialised" JIT engines, and it seems to work great for the community.

Don't get me wrong, Julia is interesting, I can't deny that, but I expect a huge adoption period for it. It can find its niche, as C++ did for extremely high-performance computing, or Scala for Big Data (though Java is starting to replace many use cases). If you ask me now, I would say that the world converges around Java, C++ and Python when it comes to data, the old trio, and it will remain this way for at least another decade.


> but the Python ecosystem progressed a lot in the past 2 years, now you have Numba JIT, Jax JIT, PyTorch JIT, XLA JIT and many other proprietary JITs that are not open-sourced.

Python has a bunch of use-case-specific, non-interoperable, limited JITs that you have to learn separately.

Not the case with Julia: you write some arbitrary code and it gets optimised; so much simpler. The Julia community did some cool things with Flux where complex, field-specific equations were dropped wholesale into neural-network definitions without having to rewrite anything. That sort of power is invaluable.


As far as my use cases are concerned, Java and .NET languages can make use of the same GPGPU libraries just as well.

I follow Python and Julia more as a language geek than anything else.

Especially since ecosystems like CUDA have been polyglot since the early days.


Not OP, but I absolutely prefer R; the Scheme inspiration is obvious and allows for flexibility completely impossible in Python (here's to PEP 638, but there's a ton of hostility to it from what I can tell).

I also really really really (really) like Julia, but don't quite think it's there yet. I'm optimistic though, these things take time.


> What python traits do you think are weak

It's growing in a sprawling, disorganized fashion. It's developing generics (with horrid, ambiguous syntax) for semantic typing that's not generally used outside some applications. The walrus operator is obscene.
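For readers who haven't met it, the walrus operator (PEP 572, Python 3.8) binds a name inside an expression; a small illustration of the two styles it duplicates:

```python
data = [3, 1, 4, 1, 5]

# without the walrus: assign, then test
n = len(data)
if n > 3:
    print(f"{n} elements")

# with the walrus: the assignment is embedded in the condition
if (m := len(data)) > 3:
    print(f"{m} elements")
```

Both branches print `5 elements`; the second form saves a line at the cost of a second assignment syntax.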


This is a very recent trend. I think it's a global wave; most languages accelerated their pace in the last decade. ES6, PHP, heck... even Java went Chrome-speed now. Can be a bit messy, yeah.


Rust is hands down my favourite language, and it’s what I write my personal projects in these days. Julia is second. Rust’s philosophy of “the complexity doesn’t go away, so just be aware and pay it up front”, together with the type system and compiler, feels aaaamazing once you grasp it: writing some code, knowing exactly how and where the failure points are, and having it compile knowing that it will be correct, is hard to go back from.

I used to be a huge Python fan, and I’d dug through docs and guides for pretty much everything I could, so my frustrations aren’t “outsider criticisms” as such.

My biggest issue is the amount of “magic” that goes on and is actively encouraged; it happily lets you get away with anything, and I’ve read and had to fix too much horrible Python code that technically does what’s required, but in the most torturous and difficult-to-untangle manner possible. I’ve come to resonate with this idea of “what does the language/tooling encourage you to do”, and in my experience Python doesn’t encourage a lot of good things. It does end up encouraging you to “hack around problems” rather than fixing them at their root, to lean on magic wherever possible (which is never backed up by any kind of correctness guarantee), and to let the programmer do whatever they want, regardless of how bad or non-idiomatic it is.

The “type system” leaves a lot to be desired. There are optional type hints now, but the larger community seems ambivalent at best; uptake is glacial in my observations, and mypy is just sort of OK. The performance is pathetic, there’s no getting around that, and the response of “just write it in C if you need speed” is a poor answer. The Python core dev team seems insistent on continually stacking pointless new features in (walrus operator, why?) whilst simultaneously not really doing anything about real issues (like the packaging situation).

I’ve also come to really dislike exception-based error handling: having no compiler or type-checking, and knowing that anything could explode anywhere, isn’t a reassuring feeling once your codebase gets big enough. Yeah, you can put in try-catch and code defensively to head off issues, but it doesn’t take much before you’ve spent as much time and energy doing that as it would have taken to write it in a more suitable language, and with maybe 1/10th of the guarantees and none of the performance. If you want to write a web API, you’d be better off writing something in Golang, .NET, TypeScript on NodeJS, possibly even Swift.
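On the type-hints point specifically: annotations are not enforced at runtime by CPython; only an external checker like mypy ever looks at them. A minimal illustration (function name is hypothetical):

```python
def double(x: int) -> int:
    return x * 2

# Called with a str despite the `int` annotation: no error is raised,
# and `*` silently means repetition instead of arithmetic.
result = double("ab")
print(result)  # abab

# The hints are stored as metadata but never checked by the interpreter:
print(double.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}
```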

General-purpose stuff you could replace with any of those languages plus Rust. Admittedly, Python does still have prime position for ML frameworks, but I’d be using Julia at work if it were up to me.

Edit: a sibling comment mentioned Async in Python - an experience that was so frustrating I’d excised it from memory.


Yeah, I've used Python in university and before, for ML and DL, for scripting, at work... it's very inconsistent and annoying. The package management situation is horrible. Yet it's seen as this magical beginner friendly and clean or even beautiful and elegant language while the reality is quite different.


I use Rust for ML nowadays and I love it!


Oh awesome!

How are you finding it? I’d like to think that with some suitable package evolution/development, doing production ML stuff in it would actually be pretty reasonable.

What packages are you using? Linfa looks like it’s developing strong legs, and SmartCore seems to be ticking away quietly in the background...


Yeah, I keep up with the Linfa group; they are making steady progress. I hadn't seen SmartCore yet, but that looks promising.

I mainly use tch-rs, which is just bindings around libtorch. There are a couple of rough edges wrapping C++ (function overloading), but overall it works great. I've also used ndarray a fair amount, which is nice.


Do you see Rust coming up a lot in your field, or is it mostly a personal favorite right now?


It’s mostly a personal favourite, but once Ballista [1] gets a bit more developed, I expect we’ll tear out our Java/Spark pipelines and replace them with that.

The ML ecosystem in Rust is a bit underdeveloped at the moment, but work is ticking along on packages like Linfa and SmartCore, so maybe it’ll get there? In my field I’m mostly excited about its potential for correct, high-performance data pipelines that are straightforward to write in reasonable time, and hopefully a model-serving framework: I hate that so many of the current tools require annotating and shipping Python when model-serving shouldn’t really need any Python code.

[1] https://github.com/ballista-compute/ballista


At some companies I did consulting for, the data scientists would rather use a mix of Excel, Tableau, VBA and VB.NET.

So neither Python nor Julia.


Excel has proved itself shockingly resilient. We all make fun of it, but man, with smaller data sets it gets the job done.

A lot of people would be horrified to see how many business critical processes depend on a bunch of hacked together excel workbooks.


I get the first 3; they are surprisingly powerful for data. But why VB.NET? It's many times slower than VBA.


Since when is a compiled language with JIT/AOT toolchain slower than an interpreter?

They reach for VB.NET when there is nothing left for VBA.


Julia only hit v1.0 in 2018. It's doing pretty well for being so young in my opinion.

All the arguments against Julia are basically that Python has a lot of momentum, and that it takes time and effort to switch to a new language. I think Julia should really seek to displace MATLAB as a near-term goal.


> I think Julia should really seek to displace MATLAB as a near-term goal.

Have you studied MATLAB and its ecosystem? It ranges from real-time control to image recognition to sophisticated engineering-specific toolboxes (RF, 5G, LTE, etc.). They also have proprietary algorithms that do things no Julia package can do. Sure, there are PDE solvers and the Julia ecosystem is growing, but it is at least an order of magnitude smaller than MATLAB's, if not more. I urge you to explore the documentation, APIs and toolbox details on MATLAB: https://www.mathworks.com/products.html

Take a look at the LIDAR toolbox for example: https://www.mathworks.com/products/lidar.html

or LTE: https://www.mathworks.com/products/lte.html

and I've used DSP toolbox the most: https://www.mathworks.com/products/dsp-system.html

I am not condoning the use of MATLAB, just stating the facts, having used Julia and MATLAB extensively. I personally like the Julia language, FWIW. At work we use MATLAB and happily pay for it. Their support is absolutely top-notch, and for us that alone is the reason to use MATLAB. Julia has support, but nothing close to MATLAB's direct line to compiler engineers (yes, I've had them fix a bug and do a release in an afternoon).


You're absolutely right. I'll agree with the person you replied to as well though: displacing (at least some portion of) MATLAB use cases really would be a good goal for Julia.

Personally, I managed to switch over 100% and DifferentialEquations.jl is the reason that made sense for my work.


You've used "former" twice in your second paragraph, and I believe you might've intended to use "latter" the second (latter ;)) time.


Aaahh good catch, thank you! :)


I like the idea of Julia, but I had a poor time making it work (circa October-November 2020) on my Ubuntu laptop. It also refuses to work in environments like repl.it. I feel like it's quite immature still, and that I'll try to pick it up later when it's better.


I'd be very curious to see a source for your claim that Python is slowing down for uses outside of deep learning. I'm pretty sure you just made that up and it's false, not to be blunt.


It’s pretty ubiquitous among new developers, but for working software engineers I think its uses are becoming limited.

If you need a web server, there are much better choices. Data analysis and scripting are about the only places I see it used successfully nowadays.


There have been Julia bindings to TensorFlow for a long time, and you can just use the Python ones in Julia with PyCall anyway. People haven't been doing that.


I hope Julia succeeds and replaces the clunky R and numpy+Python "ecosystems". Every few months I try to decide to do all my computing in Julia and quit fooling around with lesser environments. And every time I get stuck at the same point: the slow startup time. I want to draw 100 plots per second by calling the same Julia script in a bash loop 100 times, but this is utterly impossible. Of course, the Julia community has a standard answer to this concern: this is not how you are supposed to use the language. But I don't listen to them, for the best tools are those that perform well doing tasks they were not designed to do. I feel the same dismay when I get stuck at slow loops in Python and the Python people tell me that I'm not supposed to use loops. Well, this is the main reason I want to move away from Python and into Julia.

I'm not interested in the implementation details of the language interpreter. The language is already very good, and the libraries are excellent. My main point of friction against using Julia is the slow startup time, which forbids its use in a wide variety of contexts. I feel like the best use of resources for the Julia community would be to spend all available money on hiring a Mike Pall-esque figure to advise them on JIT, even if it was a part-time or one-shot hire.


This just seems like such an artificially constructed problem. You just move that loop into Julia and the problem is solved. It is not hard to write shell code in Julia.

I have rewritten complex build scripts in bash into Julia code. It was not very hard and it made everything run way faster despite mostly shelling out to external programs.

I don't know how secret/protected this shell script of yours is, but I'd be willing to have a go at rewriting the loop part in Julia if you showed it to me.

Anyway I do think the startup speed problem can be solved in Julia and they don't need a rockstar advisor to do it. I think the solutions are already pretty well known. It is more of an issue of manpower. Somebody has to put in the hours to do it.

And that is already being done. First time to plot will be twice as fast as it used to be in the next Julia release. Further improvements can be done, but again that requires somebody to put in the hours. Julia does not have access to Google level resources.


> the startup speed problem can be solved in Julia and they don't need a rockstar advisor to do it. I think the solutions are already pretty well known.

> And that is already being done. First time to plot will be twice as fast as it used to be in the next Julia release. Further improvements can be done

I'm really, really happy to hear that! I hope the Julia runtime becomes more and more streamlined in the (near) future.

See, I was just stating my use case, without pretense that it is representative at all. Yet I received 20 upvotes in a few minutes, and Julia is maybe the only language where "time to first plot" is a thing. So I'm not completely alone in my (admittedly minority) concern.

You say that rewriting everything in Julia would solve my problem. I'm sure that this is the case, but that is not at all my point. Some of us do not want a shell replacement; we want a bc replacement, and Julia is a nearly perfect one, if it weren't for the outrageously slow startup time. I have zero interest in the Julia REPL; I just write Julia scripts (among scripts in other languages) and I'm not willing to change that.


Julia is a compiled language, like C. Using it in this way is basically like writing a shell script in C and calling GCC on every invocation. You can't expect good performance from that, and it's really a testament to Julia that it feels so dynamic that you feel as if it should do that.

If you AOT-compiled your Julia "script" (program) to a binary and invoked that, your startup time problems would go away. Julia's "application deployment" stack is underdeveloped compared to its REPL experience, but it's still possible to do this today with PackageCompiler.jl and will only get easier with time. I think that will prove to be the "right" way to solve this problem in the long run.

(Or, if you don't care about performance, you can just turn off the compiler and interpret everything, and you basically have Python-but-in-Julia. Fast startup, slow running. Just run it with julia --compile=no)


> writing a shell script in C and calling GCC on every invocation. You can't expect good performance from that

Of course you can. Have you ever used the "-run" option of the TCC compiler? It's blazingly fast. With gcc it's a bit slower, but still orders of magnitude faster than Julia. You can use pre-compiled libraries, and linking them to your freshly compiled code is extremely fast. The fact is that compiling and linking C code is much faster than just launching the Julia environment with some packages. There's no fundamental reason for that enormous disparity in running time. I agree that it is a completely irrelevant nuisance for most people; but still, for some workflows not blessed by the Julia developers, it is the main point of friction.

EDIT: if you want to try it yourself, write the following text into a .c file, chmod +x it, and you can run it like a C script on most unix systems:

    //usr/bin/gcc -O0 "$0" -lpng && exec ./a.out "$@"
    #include <png.h>
    int main(int c, char *v[])
    {
            // do stuff with png images
            return 0;
    }


This seems too obvious to even comment, but timing the compilation of a no-op program doesn’t show much. The meaningful comparison would be compiling a C program that does the same thing as some Julia code with `gcc -O2`. Btw, you can also run Julia in `-O0` or even better `-O1` mode — Julia even uses these same flags at the command line. These low optimization modes are extremely snappy — time to first plot is no issue. Of course, if you want to run some compute intensive code, it’s much slower, which is why `-O1` isn’t the default.

This is not to dismiss the TTFP issue, just pointing out that your argument seems to be that gcc is faster than Julia, which is definitely not the case. Indeed, gcc is about the same speed as clang, which, like Julia, uses LLVM. The way Julia uses LLVM is a bit different, but something would be very wrong if it took Julia much longer to compile code with the same functionality than it takes gcc or clang. Julia spreads the compilation out over time, but when you do something complex, a lot of compilation happens all at once. However, a static compilation would not do this work any faster; static compilers just do the work in a separate phase rather than interleaved with execution.


Point taken, but I think you overreach a bit with:

>There's no fundamental reason for that enormous disparity in running time

I mean... Julia does full type inference, which it uses to present a dynamic type interface. It's not necessarily possible to statically compile a module ahead of time, because the module is designed to be generic and will generate different code if fed different types, which is how Julia attains its extraordinary composability. In other words, it's a much nicer language than C, and correspondingly much harder to compile. I'd call that a pretty fundamental reason.

Perhaps C was a poor example for me to pick. C++, maybe?

p.s. your "script" example clobbers any file named "a.out" in the current directory.


> In other words, it's a much nicer language than C, and correspondingly much harder to compile.

Sounds like an anti-feature to me. What I want is "matlab with fast loops". I couldn't care less about "extraordinary composability", "full type inference" or "genericity". Well, I do care because these unneeded features make my stuff much slower. It's a hefty price to pay for uncalled-for features!

Yes, my script was just a silly joke, to show that compiling and linking C code is really fast.


Do you want a dynamic language with the speed of C? Then you want type inference.


This is not necessary. There could be a single type, e.g. the multidimensional array of floats.


And how would you do multiple dispatch on that?

But if you just want a free MATLAB, use Octave.


Yep, Octave is what I currently use when I can choose. I loved the concept of Julia as an "octave with fast loops". But it seems that there are some compromises in the Julia implementation that go against my interests. Maybe there's still space for a modern language for numerical computation whose efficiency is not encumbered by the need to support strings, dictionaries, multiple dispatch and the like?


> Maybe there's still space for a modern language for numerical computation whose efficiency is not encumbered by the need to support strings, dictionaries, multiple dispatch and the like?

I don't think so: I don't know of any scientific code that doesn't have to interact with its environment, if only to import/export data, and for that, strings at the very least are necessary.

Multiple dispatch is the same thing for me; either it or another form of polymorphism will be very quickly asked for by the users of a scientific language, as no one wants to write the same functions dozens of times for different types, and the HPC community loves its programs to go fast, so they need/want to be able to choose their types.


Okay, so go code in Fortran and be happy. Why are you complaining?


Often the reason I have a bash script is because I'm running some AI tool to solve lots of problems in parallel, then gathering and filtering results, then finally doing plotting and things.

Personally, I never really like solutions which are "you can't do 5% of stuff in X, you have to do everything in X". I like trying new things out, but I'm not willing to 100% invest everything in Julia up front, rather than just trying out some small bits.

Also, I think my collaborators would get annoyed if I rewrote all our bash scripts in Julia -- I can't expect them all to learn Julia.


A tool with slow startup time that forces me to structure the workflow around its idiosyncrasies is less convenient than a tool with fast startup time that can be seamlessly integrated into pre-existing workflows (eg driving pipelines via Makefiles).


Start up time for the interpreter is 0.13s for me: ~ time julia -E "1+1"

What takes time is precompilation of packages and functions - with Julia 1.6 the precompilation is much faster now than before.

Your bash script that calls Julia 100 times is indeed not something that Julia was made for. It excels in many other areas and that's quite fine. I'm okay with plotting in a bash script in Python if it means that I can use Julia for everything else.


Your last sentence seems to ignore the reality that plotting is only one step in a larger pipeline. It sounds miserable to need to write analytics code twice, once in Julia “for everything else” and again in Python just for the plotting. I’ll just write the whole thing in Python and save myself the headache.


So why not write the whole thing in Julia? That is the whole problem here, that the whole thing was NOT written in Julia.

Why manage two different languages? Julia is a better shell programming language than bash anyway.


I mean, the comment I was replying to answers your question:

> not something Julia was made for


What I meant with this is that Julia isn't intended as a bash replacement. You can write your code in Julia and circumvent the overhead of having to start up the interpreter every time. But if you try to execute it 100 times per second then of course the overhead will add up.


Ah, I understand what you mean now. And you may be right that there's a Better Way to do it natively in Julia. But there's lots of friction to adopting entirely new dev practices, and I'm inclined to just stick with the tried-and-true methods I'm already familiar with; old habits die hard! And that's a big source of friction against Julia adoption (IMO).


There's nothing wrong with using the tools you know. But IMO it's quite interesting to use languages that might just be a big improvement over how things have been done so far. I think Julia is such a language when compared to Python (excluding the ecosystem, of course).

Also, if you come back to Julia sometime there's this:

https://github.com/JuliaPy/PyCall.jl

https://github.com/JuliaInterop/RCall.jl


My main use case for that would be: I'm a generic user. I just want to run a Julia script for its output, by running 'julia somescript.jl'. I don't want to modify it because I don't know the language.


Are you fitting, evaluating, and plotting complex models 100 times per second?

I would be quite okay with logging the results into a file and only plotting them with Python if this were the cost of using Julia, yes. This might not be for everybody, but your scenario sounds strange to me to begin with.


No, but I am applying a serialized fitted model to 100 separate out-of-sample datasets and generating diagnostic plots for every output of predictions / scorings.


While moving the loop into Julia (as others suggested) is probably the better option, an alternative you could consider is DaemonMode: https://github.com/dmolina/DaemonMode.jl

I.e., have a background Julia process so that you only have to pay the precompile cost once.


If you are going to use Julia in one of the absolute worst workflows for how it is designed, you shouldn’t be surprised it doesn’t work well... That said, have you tried using PackageCompiler to add your needed libraries to the system image? This seems to show a factor of 100 speed up for the time to first plot: https://julialang.github.io/PackageCompiler.jl/dev/examples/...


Why do you find R’s ecosystem “clunky”? The Tidyverse is unequaled for its elegance. I come from the CS world, so I’m supposed to like languages like Python, but I really, really like R, mainly for its elegance.


The Tidyverse is great, but vanilla R is a monstrosity. After five years of heavy use, I still don't really understand the random idiosyncrasies of its various types. Arrays and lists and dataframes and tibbles are confusingly named, and operations that work on one type often balk at the others, without telling you what's wrong. It has frayed my nerves many, many times.


I agree that for statistics and data exploration R is certainly not clunky. My use case is more discrete PDEs, where R's capabilities for sparse matrices and advanced linear algebra are a bit limited (but this may just be because I'm more used to the annoyances of numpy).


If this is your big setback, did you try using a sysimage? The VSCode extension even has a build task for it. To make the sysimage, just be in the base environment, press Ctrl+Shift+B, and select the Julia build sysimage task. The terminal will tell you where it's saving the sysimage. It reduced my startup time to unnoticeable (at least to a person used to MATLAB/Python). I am not a bash guru, so I don't know how you do it on the command line, but it's a parameter to the julia interpreter.


One thing to be aware of: make sure you're not running a bash script which simply causes Julia to "include" stuff, effectively recompiling everything each time the interpreter is run.

As long as you make sure that all the custom code you want to run is in the form of a precompiled module, I think the time required for the interpreter to launch per se shouldn't be that much of a problem.


Yes, but if the road to cached builds or running in interpreted mode is not low friction or no friction, that matters and it's the fault of the language / ecosystem.


Calling R clunky is definitely a reach.

Parameterized reporting in R/knitr is unmatched in the industry


> I get stuck at the same point: the slow startup time…

The language maintainers steadfastly refuse to include ahead-of-time compilation. They seem super focused on their narrow use case scenario and ignore everything outside that.


It is not nice to lie about people like that. They have never refused to do that and in fact you can already do ahead-of-time compilation in Julia. Many of us have already done it.

It is not great yet, but it is an ongoing problem, which they constantly work on and improve.

Claiming they refuse to do it is either ignorant or a flat out lie.


> It is not great yet, but it is an ongoing problem, which they constantly work on and improve.

That's just it. This effort has been dragging on for years now, slow as molasses in winter. If it were a regular goal it would have been a solved problem long ago. That hacks and workarounds for this have been deemed acceptable for so long just goes to show that it's not on the list.


Please show me a source where they outright refuse AOT compilation.


They don't have to outright say, "we refuse". Not doing this obvious step for years and years and years clearly shows their priorities lie elsewhere.


I'm sorry if I'm blunt, but the last year of compiler improvements has been targeted at exactly this purpose. That's half of all time spent since 1.0! The issue tracker is filled to the brim with PRs and issues about making _everything_ faster. How does this constitute "refusal of an obvious step" for you?


Lots of truths there, but also colored by someone who has evidently not followed a technology's rise before and seen how adoption works.

I am not as pessimistic, and I say that as a Julia fan. Why? Because I have been where this guy is now ever since I was an Amiga user. I was like that with Python, Ruby, Go and a whole host of other languages.

And what I learned from that is that it takes time. Neither Python, Ruby or Go had success over night. Yet almost all these languages I homed in on in the past have seen what I would call success later.

I was certain Perl would go downhill when it was at its top. I kept arguing with Perl fans that Python was a cleaner language that would see more success. Ditto with Ruby. That turned out to be quite accurate.

I said that while D looked like a nice replacement for C++, it simply wasn't a big enough improvement to go anywhere. That also seems to have been an accurate prediction.

Go on the other hand had a very obvious appeal from the get go. It did something important different and sufficiently better than before while keeping things simple.

I am confident about Julia, because I have not in 20 years seen a language with so much potential and so many advantages. You just have to give it time. This reminds me of myself following solar and wind energy in the early 90s and deciding it was not going anywhere.

Ironically, I see people like Michael Shellenberger having become cynics about wind and solar now that they have actually achieved some measure of success. Why? Because it didn't happen fast enough. He poured all his enthusiasm and hope into this industry long before there were reasons to do so.

People have to be aware of the same about Julia. It is not taking over the world any time soon. There is a long road ahead. And I would not define success as getting to Python scale. But if Julia can get to a similar position as Go is today, then I would call that success.


I rewrote some stochastic simulation R code in Julia recently, and it sure feels like the future.


Julia's already making (fairly substantial) inroads. It's just not as visible as Python, since the niche it occupies is much smaller and out of the way.


When Julia finally comes of age, there will be a new, emerging language claiming it will eat Julia for lunch.


Absolutely, and there is nothing wrong with that. But over the decades I have followed computing, I have seen languages with a much smaller advantage over the competition than Julia's rise to prominence.

But I am also pretty sure that whatever replaces Julia in the future, it will have borrowed heavily from Julia in terms of semantics, syntax, libraries etc. Multiple-dispatch that Julia pioneered is almost certainly going to dominate future languages.

This is similar to how functional programming has entered almost every mainstream language, and OOP entered almost every language before that.


AFAIK, it was Lisp that pioneered multiple dispatch. It hasn't “dominated” a substantial number of languages since then. Maybe the problems it solves haven't been seen as important enough? I really don't know, it seems such a logical extension of the single dispatch known from many OOP languages.


Arguably it may also be because without JAOT compilation as in Julia, multiple dispatch normally comes with a significant runtime performance cost, as far as I understand.


Languages have addressed that in a variety of ways before Julia.


I'm not aware of any that had the same sort of complementary design between the compilation model and multiple dispatch. E.g. in Common Lisp, two of the most important functions for multiple dispatch, addition and multiplication, aren't generic functions and need to be shadowed and replaced by generic functions if you want multiple dispatch. My understanding is that they did this for performance reasons, but julia doesn't have any performance degradation from making literally everything other than like 5 builtins generic functions, because of the way its compilation model dovetails with multiple dispatch.

I just checked in my repl and in my current session with a few packages loaded, + has 195 methods defined on it and * has 347 methods. If you look at https://en.wikipedia.org/wiki/Multiple_dispatch#Use_in_pract... they present some evidence that multiple dispatch sees far more use in julia than other languages.


Dylan was/is generic functions everywhere.

> need to be shadowed and replaced by generic functions if you want multiple dispatch.

yeah, but there are extensions for some Common Lisp implementations which would allow that with reasonable speed. I've seen a bunch of CLOS extensions in recent years in that direction.

> julia doesn't have any performance degradation

Common Lisp generally has a different generic function model from Julia. It's extremely dynamic with an optional meta-object system. Thus its use case is different. But: for example a CLOS-based CAD system might use generic functions everywhere and needs the respective performance for that.

> I just checked in my repl and in my current session with a few packages loaded, + has 195 methods defined on it and * has 347 methods.

In CLOS one would not like that design. Though there also might be generic functions with many methods. My default CL has for example 84 print-object methods, some might have hundreds. Though generally it is not seen as desirable to group semantically very different operations under one name - especially given that CLOS provides different types of method combinations and a CLOS generic function might create a more complex interface.

CLOS has before, after, around and primary methods for a default method combination. One can write arbitrary new combinations and have different ways to dispatch, different inheritance strategies, etc. Thus its optimization problems to provide faster dispatch is different from what Julia wants to address.

Dylan then for example had to address the 'generic function everywhere' problem. There binary+ is a generic function.


> In CLOS one would not like that design. Though there also might be generic functions with many methods. My default CL has for example 84 print-object methods, some might have hundreds. Though generally it is not seen as desirable to group semantically very different operations under one name - especially given that CLOS provides different types of method combinations and a CLOS generic function might create a more complex interface.

Totally agreed here. However, addition, multiplication and so on have a very uniform and well defined set of semantics that apply to many types from all the various representations of real and complex numbers to the many many different types of matrices in Julia's LinearAlgebra library (lots of wrappers for things like Symmetric, or Tridiagonal matrices, lazy matrix factorization objects, adjoint matrices, etc.)

That's why there's so many methods on addition and multiplication in julia, and why I found it so surprising that CL doesn't do this on purpose. But I get that scientific computing isn't as big a part of the demographics in the CL community so I guess it makes sense.


> AFAIK, it was Lisp that pioneered multiple dispatch

Cf. CommonLoops, https://en.wikipedia.org/wiki/CommonLoops


A bunch of languages (incl. Julia) have been influenced by CLOS' multiple dispatch (Fortress, Perl 6 (aka Raku), R, C#, ...). Many languages have extensions for multiple dispatch (incl. C, Factor, Java, Python, Scheme, Ruby, ...).


This post overlooks that language adoption is something that happens over decades. It's much too early to conclude anything about Julia adoption.

I also think there's way too much focus in the post about Julia taking share away from Python or R. The real market is Matlab programmers, for one and only one reason - Matlab is extremely expensive.

A number of years ago I was talking with a vice president of one of the Federal Reserve banks that told me one of their priorities was to move away from Matlab due to the high and growing licensing costs. The New York Fed has in fact moved much of its code from Matlab to Julia: http://frbny-dsge.github.io/DSGE.jl/latest/ Julia has momentum in parts of economics that have traditionally been heavy users of Matlab: https://quantecon.org/


> This post overlooks that language adoption is something that happens over decades. It's much too early to conclude anything about Julia adoption.

Spot on. I was doing R back in 2005, when it was slowly becoming more famous. R's first official release was in 1995, which means it took R 26 years to get to the point where it is now. If Julia was launched in 2012, its tipping point may come 20 years later, around 2032.


R is one of the languages I was thinking about, but even 1995 understates how long it took. R is an open source implementation of the S language that dates back to 1976.


This is interesting. Glad to see matlab being phased out.


Julia is a language that looks very simple, but the more you use it, the more you realize how complex and unpredictable it is.

I think that Union types do more harm than good (why would you want a function to return a Union of Int and Float instead of getting a compile error? It totally slows down the program).

Array{Number} is totally different from Array{<:Number}, and shouldn’t be allowed, as it is inefficient.

1-based indexing was a mistake, and I have seen it emitting inefficient code in the PTX compiler.

But the worst and hardest part is the rule/heuristic for multiple dispatch: it’s so complex, that it isn’t even documented. It should probably throw more errors and be predictable instead of trying to be so smart.


> 1-based indexing was a mistake

The more software I write, the more I am convinced that 1-based indexing is what most languages should be using, the only exception being the very specific use case of needing to track offsets by hand for some reason. 1-based indexing is so much easier to reason about, but zero-based indexing is seen as “the one true way” due to its ubiquity, and not because it’s actually better.

> and I have seen it emitting inefficient code in the PTX compiler.

Have you raised this with the devs? This seems like something worth raising as an issue.


Both approaches have their advantages, IMO. Inclusive ranges are usually easier to understand, but e.g. when picking a (uniformly distributed) random element from an array, nothing beats a[floor(rnd() * len(a))].


How is that very readable?

In Julia, this will pick a random value from an array: rand([3, 4, 5, 1])

While this picks a value from a range: rand(1:10)

Much more straightforward to read IMHO. Where I prefer 0-based index is when dealing with coordinate systems and memory locations.


> nothing beats a[floor(rnd() * len(a))]

What's wrong with rand(a)?


Already mentioned, in julia just do rand(a). But if you didn't have that, couldn't you just switch out floor for ceil and have the exact equivalent for 1 based indexing?


No - rnd() can not return one, but can return zero, at which point you'd access index 0 even with ceil().
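The edge case above is easy to demonstrate. A small Python sketch (Python as a neutral stand-in for the pseudocode; `pick_zero_based`/`pick_one_based` are illustrative names):

```python
import math

def pick_zero_based(n, r):
    """0-based: floor(r * n) gives a valid index 0..n-1 for any r in [0, 1)."""
    return math.floor(r * n)

def pick_one_based(n, r):
    """Naive 1-based variant with ceil: breaks when r == 0.0."""
    return math.ceil(r * n)

# r == 0.0 is a legal return value of a [0, 1) RNG:
print(pick_zero_based(10, 0.0))    # 0  -> valid 0-based index
print(pick_one_based(10, 0.0))     # 0  -> INVALID 1-based index
print(pick_zero_based(10, 0.999))  # 9
print(pick_one_based(10, 0.999))   # 10
```

The correct 1-based formula would be `floor(r * n) + 1`, which is exactly why `rand(a)` as a built-in is nicer than either.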


Ah, thanks! I think I'll stick with rand(indexable_collection) in Julia since anything else just doesn't feel Julian, but I'll remember that for other 1-based languages if I come across it.


To tell you the truth, I don’t really care about 0 or 1 based indexing, there’s not much difference when I’m coding.

At the same time if you look at TIOBE, 0 based indexing won.

The biggest problem for me was porting numerical code from Ruby: the syntax is so similar to Julia that there wasn’t much to change, but converting the 0-based code to 1-based was actually most of the work.

I’m sure it’s painful to port code from R or Matlab if it’s 0-based, but that will need to happen anyways.


Returning a Union of Int and Float isn't that useful, but the point is that Julia is a dynamic language, and if there were no implicit union type it would have to box the return value into an "Any" box, which actually slows down the program (the union here causes functions to have two optimized versions, one for Int and one for Float, instead of a generic dynamic one for Any). If it threw a compile error for every function that can return more than one type, it would make the language even more restrictive than some static languages.

Though for intentional uses of unions, most of the time I use a union of a success type or an error type, or a union of a type and null, or a union of type and missing. They could all be special cases, but I don't see the point of not being just one mechanism.


My C-like language has union types and it's amazing. Basically it makes it really simple to add an upgrade path to a new type in an interface - you can just switch from taking Class1 to taking Class1|Class2, and the compiler guarantees you are handling both cases everywhere it comes up. Then you can gradually switch all your calling code to the new type, and then remove the old type at your leisure.

In a language without union types, you'd just have two fields for both cases, and you'd have to check everywhere that you're handling both cases and that you can't get into broken situations where both or neither fields are set. Union types give you peace of mind there.
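A rough Python sketch of that upgrade path (the type names are made up for illustration; Python only checks at runtime what the commenter's compiler would verify statically):

```python
from dataclasses import dataclass

# Hypothetical types: OldRequest is the existing interface type,
# NewRequest is the one we are migrating to.
@dataclass
class OldRequest:
    path: str

@dataclass
class NewRequest:
    path: str
    version: int

# During the migration the handler accepts the union of both types.
# A static checker would prove exhaustiveness; here the final
# `raise` plays that role at runtime.
def handle(req: "OldRequest | NewRequest") -> str:
    if isinstance(req, OldRequest):
        return f"old:{req.path}"
    if isinstance(req, NewRequest):
        return f"new:{req.path}:v{req.version}"
    raise TypeError(f"unhandled variant: {type(req).__name__}")

print(handle(OldRequest("/users")))    # old:/users
print(handle(NewRequest("/users", 2)))  # new:/users:v2
```

Once all callers pass `NewRequest`, the `OldRequest` branch (and type) can be deleted without touching anything else.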


That's what sum types are for.


That's what "union types" are, sum types.


For many people (myself included), there is a distinction: A sum type is a tagged union. Using ‘|’ for union types and ‘+’ for sum types, the difference is that for a type T the type T | T is simply T again, whereas T + T is two copies of T. That is, given an element x of T + T, I can tell whether x comes from the first or the second copy of T. This means, for example, that T + null always adds a new null element to your type, whereas T | null only adds a null element if none existed before.

If your type system has a different way of turning one type into two different (but isomorphic) types (for example by wrapping them in records with a single field), you can simulate sum types with union types. Hence union types are more general than sum types.

However, in my experience you (almost?) always want sum types anyways. In this case union types being the basic construction tends to be a disadvantage because the basic construct it usually more convenient and therefore gets used. Union types tend to work fine for a while and once you realize that sum types would be better, you have quite a bit of refactoring to do.
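A small Python illustration of the distinction (using `typing.Union` for the union side, and two wrapper records to simulate a tagged sum; the class names are made up):

```python
from dataclasses import dataclass
from typing import Union

# Union types deduplicate: T | T collapses back to T.
assert Union[int, int] == int

# A sum type is a *tagged* union: wrapping each side in its own
# record keeps the two copies of int distinguishable (T + T != T).
@dataclass
class First:
    value: int

@dataclass
class Second:
    value: int

def origin(x: "First | Second") -> str:
    """We can always tell which 'copy' a value came from."""
    return "first" if isinstance(x, First) else "second"

print(origin(First(42)))   # first
print(origin(Second(42)))  # second
```

This is the record-wrapping trick from the comment above: the single-field dataclasses are exactly the "different (but isomorphic) types" used to simulate a sum with a union.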


There are also advantages to union types. For example, a Union{T1, T2} can be narrowed to just T1 (by the compiler or the user) without being a breaking change - indeed, the compiler is free to do exactly that in Julia.

In contrast, in Rust, the compiler can't look at a function that produces an Option<T>, decide that in this case, it will always return a Some(T), and then have it return a T instead. That happens all the time in Julia.

And also, sum types take more boilerplate code to use than union types.

In my opinion, sum types are more useful for static analysis, union types are more useful for ease-of-use.


I was not aware of this distinction.


I think GP was saying they are overused and harm performance.


I actually have had very similar experiences. I want to like Julia and I look at it from time to time. The hype is always that it allows one to write C-speed code with the simplicity of Python, without the idiosyncrasies of numpy. But every time I look, I find just as many little things to watch out for if one cares about performance; "don't use abstract types" is another one.

I understand why this is the case and I don't have an issue generally with it, but considering that I know Python well, and know how to speed up Python using numba, pythran or cython, I don't see what I would gain by investing in Julia.

Instead I've now decided to learn Rust, as it fills a different niche and I can use it for writing python modules for some of my bottleneck code.


This chimes so much with me. I started coding in ~2011. I know how to use Cython, Numba, etc. and the options for accelerating Python are getting better all the time. I'm a confident C/C++/Fortran programmer by virtue of having worked in those languages for HPC codes anyway. Whenever I've tried to use Julia for things, it just feels like an awkward hybrid of Python and C++.

Plus, I think people forget that Python now works pretty well on Windows, and I don't think the same can necessarily be said for Julia. In my university, we support a lot of people on Windows with Python packages, particularly outside of traditionally HPC-type areas.

As a final thing - on one occasion last year, I tried to integrate a Julia interpreter into an application - something Python does really well on all platforms. I found that copying the example code from the website verbatim did not work on Windows, didn't appear to have worked for the best part of a year, and wasn't in the testing framework. I've just checked and it's still not resolved.


Examples being out of date is definitely an issue since a lot of things are still changing quickly in the ecosystem as a whole. I think it’s getting better as more packages go 1.0 and internals stabilize a bit, but nonetheless.

As far as your comment about “awkwardness”, the key point for me was when I actually started to understand/embrace multiple dispatch as a programming paradigm. If you try to write Julia in an imperative or OO style, it may work, but it will be clunky and quite possibly full of type instabilities. But if you write Julia in a “dispatch-oriented” paradigm, then it really is just as performant and elegant as advertised, IMO.


I think one-based makes sense if you consider it is targeting users who want their Matlab to run faster but not look like C, or people who want their Fortran "codes" to look more like Matlab but not run any slower. I always considered it a language for physicists and meteorologists. That isn't going to top the TIOBE index, even if that actually were a good metric.


Yeah, this is similar to my experience. I used to like the language a lot but there are many aspects of it that are a hindrance to productivity.

I think Swift with the best bits of Julia would be the perfect general purpose language.

Some people think multiple dispatch is a killer feature, but in languages like Swift, extensions are a much neater approach.


My biggest issue is the lack of coherence in the ecosystem, from using macros in whatever cases, to the inability to do otherwise trivial stuff in python, or the huge bottleneck of unpacking (star operator in python). I tried to use it for a class and there was just too much friction between the expected behavior and the observed behavior that I gave up.

My issue with python is speed, but I will probably end up implementing what I want/need in rust and call that from python.


> to the inability to do otherwise trivial stuff in python

Like what?

> or the huge bottleneck of unpacking (star operator in python)

In what sense is unpacking a bottleneck?


Unpacking (or splatting) can be excruciatingly slow in Julia if you have more than a few dozen elements in the iterable.

That happens because it gets splatted into a tuple with N elements, which in Julia has a type parameter for every element, i.e. a type Tuple{T1, T2, ..., TN} with as many type parameters as there are elements in the iterable. And the compiler has a hard time working with an instance of a type with e.g. 10,000 parameters.
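The effect is easy to see in the REPL: the length of the splatted tuple is baked into its type (a small sketch):

```julia
v = collect(1:5)   # a Vector{Int} with 5 elements
t = (v...,)        # splat it into a tuple
# The tuple's length is part of its type, so a 10,000-element splat
# forces the compiler to reason about a type with 10,000 parameters,
# which is where the slowdown historically came from.
@assert typeof(t) == NTuple{5, Int}
```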


Was this improved in recent versions? I tried the following in Julia 1.6-rc1 and they run instantly:

    @time v = [(1:10_000)...];  -> 0.000639 seconds

    f(args...) = sum(args)
    @time f(v...);              -> 0.001039 seconds


That seems like a niche case. Surely flattening arrays is faster.


> In what sense is unpacking a bottleneck?

Performance wise, I can't recall the exact scenario though.


> Array{Number} is totally different from Array{<:Number}, and shouldn’t be allowed, as it is inefficient.

Rest of this I disagree with, but this one I just don't understand. Why is it wrong that it's totally different?

Edit: missed a word which changed the meaning of my question


These are two different types in Julia. The first is an array of numbers. The second is an infinite set of types, namely "all arrays which have Number or a subtype of Number as its element type".

There is a distinction, because e.g. `Array{Int32}` is a subtype of the latter, but not the former (because the former is a concrete type that does not have subtypes).
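Spelled out in code (using `Vector` so the types are fully specified; Julia's parametric types are invariant):

```julia
# Invariance: an array of Int32 is NOT a subtype of an array of Number...
@assert !(Vector{Int32} <: Vector{Number})
# ...but it does match the union of "arrays of some Number subtype".
@assert Vector{Int32} <: Vector{<:Number}
# Vector{Number} is itself a concrete type (its elements are boxed
# Numbers), while Vector{<:Number} is an abstract UnionAll type with
# no direct instances of its own.
@assert isconcretetype(Vector{Number})
@assert !isconcretetype(Vector{<:Number})
```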


But that seems very fine and specific to me. I don't understand why it shouldn't be allowed to be like this.


Array{Number} is an array of objects with unknown type and size, basically an array of pointers with possibly totally different method implementations.

Array{<:Number} in a method signature means LLVM gets the exact element type at compile time and can optimize the code using SSE instructions.

You can see a factor of 10 between executions, and if you are not experienced, you don't understand why your code is slow and why the garbage collector kicks in so hard.
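A quick way to see the representational difference (a sketch; the exact speed gap depends on the workload):

```julia
boxed   = Number[1.0, 2.0, 3.0]   # Vector{Number}: heap-boxed elements
unboxed = [1.0, 2.0, 3.0]         # Vector{Float64}: contiguous machine floats
@assert eltype(boxed) == Number
@assert eltype(unboxed) == Float64
# isbits element types can be stored inline and vectorized; abstract
# element types force a pointer chase and dynamic dispatch per element.
@assert isbitstype(eltype(unboxed))
@assert !isbitstype(eltype(boxed))
```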


Is the post only based on TIOBE? The same index that currently ranks JavaScript below Visual Basic and SQL below Assembly? That ranking is off by so much that anyone who takes it seriously loses a lot of credibility from the start.

I'm not entirely on board with the Julia hype train but they certainly got some things right and both the ecosystem and the community are healthy and growing. Julia doesn't need to replace a bunch of other languages immediately to be successful. Implying that the language is dead by using the word post-mortem is just demonstrably false.


I'm also skeptical of TIOBE but the post is not based only on that... The author seems to have been very active with the language, and even drafted a book on the subject. It seems to me that he may have a good feel or intuition for the scene; while this is not very scientific, it is miles ahead of basing an analysis on TIOBE alone. I'm going to be lurking this thread for potential counter-arguments though.


TIOBE is known to change drastically when Google decides to change its search engine algorithm.

Also, TIOBE is heavily skewed by the sheer number of publications containing a keyword, and it does not really evaluate the content or the quality of those publications.


I've checked Google Trends, and for the Julia programming language it shows a slow increase; at least that's my guess, since a generic search for Julia (choosing the programming language suggestion) shows a decline while searching for "Julia programming language" shows an increase.

Stack Overflow trends show some increase (https://insights.stackoverflow.com/trends?tags=julia), and it was also high on the "most loved" list, in 6th place just after Kotlin and Go, but it is very low in the "wanted" category. It seems those who use it like the language, but outside that circle there is not much interest.


By calling it a post-mortem the author already showed his level of stupidity (or absence of morality). Julia is a new language that has to compete with languages that have accumulated a huge ecosystem of developments and continue to grow. Of course Julia is losing in that comparison right now.

I don't see any value in this article. It's wasted time.


Maybe you should read to the end instead.


I did. I still don't get how the author's decision to not use the language is enough to declare it dead.


That’s not the only factor. Their publisher also decided there wasn’t a market for a book and declined to publish. The author also didn’t say Julia was a dead language, it just failed to realize the dream of becoming the language, and is instead just a language in a sea of many.


"Post-mortem" doesn't imply that it's dead? And also, of course there isn't a mass market for a book on a programming language that's not python and due to most learning material being available online. This is a stupid metric.


I got the impression that he refers more to the fate of his own hopes.


The title doesn't and can't contain all the nuance of the article. Considering that books were published about data analysis in Clojure, the bar to clear for market size isn't that high. A lot of people buy books for reference even when you could learn everything online.


They might have been published, but how successful are they? I have no idea, but it's very realistic that one publisher is more risk averse while another isn't. This doesn't say anything about the language, really.


Honestly, I learned Python in 1995, and used it where I could for CGI script programming and scripts into the early 2000s, and tried to get jobs programming it throughout that time with no success at all. Throughout that time, Perl was still in use all over but declining, PHP was growing like an unpleasant weed, and it seemed Ruby was ascendant for the "cool" stuff... but Python wasn't cool, most hiring managers had never heard of it, and while Google used it, that was pretty much it for large company support.

Then while I wasn't looking (I lost interest in dynamically typed languages and focused on modern static typed languages) it became insanely popular -- and I actually got pushed out of a lead role at a job in part because folks wanted to rewrite into Python something we did in Java/Scala. Python popularity skyrocketed over the next 10 years. Numpy had something to do with it but also many other factors, not a lot of them initiated by the Python community itself, or by Guido.

All this to say, language popularity is fickle and not terribly predictable. Prognosticating like this is silly.

Julia is nice. Kinda wish I had a job where I got to play with it. Which is about how I was with Python, 20 years ago.


Julia has an advantage Python does not: high-quality packages written in pure Julia that compose well with each other. They aren't just C wrappers.

Some of their AD and differential equation libs are state of the art.

Being the best at a certain niche will attract users who will eventually stay and contribute.

I use Python a lot, but it can't beat Julia for simulating an ODE. So I use Julia too.


> They aren't just C wrappers

The big advantage with the C wrappers approach is that I know I'll get the same features and results as everybody else using that C library.


Nothing is stopping you from using the C library.


> They aren't just C wrappers.

I'd be worried about that. To some people, C wrappers around battle-tested libs are better than a fully coherent single-language ecosystem.


You can just use the wrappers. Also, many libraries are using wrappers where available. Julia allows you to not exclusively rely on C libraries because it is efficient enough to implement them in itself.
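For example, calling straight into libc needs no wrapper module at all (assuming a standard C library is available on the system):

```julia
# ccall jumps directly into the shared library; the tuple gives the
# C argument types and Csize_t is the C return type.
n = ccall(:strlen, Csize_t, (Cstring,), "hello")
@assert n == 5
```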


One problem with this is that, as mentioned, some julia packages actually are state of the art. There isn't a C equivalent for everything DifferentialEquations.jl can do. Ditto for Zygote.jl and Enzyme.jl.


Julia v1 was released in 2018. It's currently about as popular as Elixir. Is the author really complaining that it hasn't taken over as the lingua franca in two years? Python is over 20 years old and has only hit its stride in popularity over the last 5.


Actually, Python came into the TIOBE top 10 around 2004. So it took more than a decade for it to become decently popular; your point still stands.


Exactly!! Much too early to draw conclusions. Look at Go: it's having a decent measure of success today. A couple more years in the market makes a big difference.


Unconvincing.

1. There's plenty of niches. I don't think anyone is pushing Julia as the new general purpose programming language.

2. Ecosystems and network effects are real, but if you conquer a niche, you can still win. R's doing fine.

3. Specialists seem to like Julia and are writing good stuff in it. I don't see why that attraction shouldn't bring more users in. Indeed, it's still rising in PYPL, as linked by another comment.


> We were all going to heaven, and that right soon. The language wars would end, we would finally get a lingua franca for anywhere code performance mattered, Julia would take over the TIOBE Index, and we’d all be home for tea and medals.

This type of thinking is so alien to me. A mainstream language like python has its appeals (like a large quantity of docs and many libraries), but TIOBE ranking? Language wars? Who cares about that? At the end of the day a language is a tool, nothing else (Matlab is a good example).


While I don't think this piece was well argued, I must agree with the sentiment that Julia is somewhat of a disappointment, at least for me personally. Last year I started learning Rust, wanting to use it instead of C++ in the future. I have since programmed quite a lot in it, contributed to open source software, and absolutely fallen in love with it.

So this year I wanted to replace my default scripting language (Python) with Julia and potentially use it for my simulations or ML in the future. But every time I take a stab at Julia (last time around the release of version 1), it never really stops feeling foreign. Everything is slightly unintuitive and weird. Also, learning Julia made me realize how amazing Rust's learning resources are. Besides the official book and the API docs (with an amazing template), there are a lot of great third-party, actively maintained resources like cheats.rs. I never felt Rust was unintuitive or hard to learn. Even if some concepts are unique, the resources do a great job of introducing them to you. Julia's learning website, meanwhile, lists an Exercism course, its manual, and a bunch of video lectures.

I think selling Julia as a Python competitor raises wrong expectations. Julia may look easy or Python-like but is far more difficult to use IMHO. They may be similar in terms of applications but at least in how they feel to use, they are very different.

I still haven't given up but whereas I speak very highly and enthusiastically of Rust and recommend it everywhere (yes I am one of those people), I cannot see myself doing the same for Julia in the future.

Ps.: I wish there was a book club-like way of learning programming languages and discussing progress with friends. That would make learning even more fun.


I share the exact same sentiment. I really tried hard to like Julia, spent months with it because of all the shortcomings of python. I submerged myself in its community, but at the end of the day it just never stopped feeling odd.

I’ve also recently learned Rust to replace C++ and it’s an absolute pleasure.

I do think there’s a lot of room for a new data analysis language. It’s just not Julia, I’ve been writing all my ML stuff in Rust and that’s actually been a lot of fun although I don’t see that as getting picked up by the mainstream.


Yeah, it is really apparent how much excitement there is for Rust. This is also quite noticeable in the community, which is super friendly and helpful. And the fact that tools get rewritten in Rust left and right, instead of using mature frameworks in other languages, speaks to the incredible momentum. The hype train is certainly deserved.


> Ps.: I wish there was a book club-like way of learning programming languages and discussing progress with friends. That would make learning even more fun.

That might be an awesome idea for a Clubhouse club! A weekly programming language meetup with a theme decided in advance.


Or an Element chat? I would not sign up with that service because of their sketchy privacy setup.


This article starts off with the following premise: “one of Julia’s original promises was to take over other languages in terms of popularity” (paraphrased).

Honestly, I cannot find such promise in the original announcement: https://julialang.org/blog/2012/02/why-we-created-julia/

Ex falso quodlibet


I think this post is a bit immature. These things take time. Julia should just keep on striving for excellence and I'm sure it doesn't need me saying so. I've spent the last decade writing python professionally for web backend software engineering: it is a fantastic language for getting out of your way and allowing you to think and build what you want. And things like pattern matching show that it's still taking significant steps forward. But, languages should be statically typed: modern compilers offer too much benefit to development workflows to turn this down. So you can try to patch typing on to a dynamic language but my impression is that mypy is a hack compared to what was delivered to javascript in the form of typescript.

I suspect that people will eventually swing back towards statically typed languages for the development tooling. Especially as the future needs more programmers and more help for programmers to build things correctly vs the early 21st century era of hackers writing untyped python in vim.

Above comments are mainly aimed at web backend. I'm aware of Python's dominance in data science and machine learning but perhaps ultimately the same will hold true there -- that humanity will ultimately go for solutions that keep developers on the rails more. (Jupyter notebooks are a disaster when it comes to building correct solutions: no encouragement to use version control; out of control invisible global state etc.)


Julia has its own notebook that solves the issues you mention with Jupyter: https://lwn.net/Articles/835930/.

With Pluto there is no hidden global state, everything is deterministic, and the notebooks are Julia files, so version control is as natural as when writing straight jl programs.


I think claims of Julia's death are premature. I used it all throughout grad school shortly after it was released, and there are things the language does so much better than any other language I know of that I find it hard to imagine it just disappearing into the ether. The only way I can think of it really dying off is if more popular languages absorb the best features of Julia, making it redundant.

It's hard to make inroads into Python's domain because of the strong network effects, but I think if one or two major companies onboard the language, then we might see a snowball effect.

Personally, I'd like to see a version of Julia with all static typing and Rust-like memory management. That would be pretty close to the perfect language for me.


> Personally, I'd like to see a version of Julia with all static typing and Rust-like memory management. That would be pretty close to the perfect language for me.

Can you elaborate on two things:

1. What would this language have that Rust doesn't? I don't know Julia.

2. Why?? Rust's memory management is the main reason it isn't being adopted even faster than it is, IMO. The VAST majority of domains don't WANT Rust's memory management. Garbage collection is great and not even "slow" (see OCaml). Rust's memory management has to be the way it is because it needs to be useful in domains where C and C++ are used.


> What would this language have that Rust doesn't? I don't know Julia.

Multiple dispatch, homoiconicity (well... more specifically, the ability to generate dependently typed functions at runtime, which is useful for custom matrix routines)

> Garbage collection is great and not even "slow" (see OCaml)

I did HPC work in Julia, and garbage collection was quite a pain. Most of my debugging time was spent figuring out where tiny memory allocations were occurring in for loops. IMO, Julia is as fast or faster than C/C++ provided you don't unintentionally allocate memory in performance critical parts of your code. I ended up eventually just passing preallocated chunks of memory into functions, but then I had to keep track of all that.
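The preallocation pattern described above looks roughly like this (a hypothetical `add!`; the trailing `!` is the Julia convention for mutating functions):

```julia
# Write into a caller-supplied buffer instead of allocating a new
# array on every call, so the GC has nothing to do in the hot loop.
function add!(out, a, b)
    @inbounds for i in eachindex(out, a, b)
        out[i] = a[i] + b[i]
    end
    return out
end

buf = Vector{Float64}(undef, 3)   # preallocate once, outside the loop
add!(buf, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The downside, as noted, is that now you are the one keeping track of all those buffers.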


> Multiple dispatch, homoiconicity (well... more specifically, the ability to generate dependently typed functions at runtime, which is useful for custom matrix routines)

Ooh. Fair enough. Sounds very cool!

> I did HPC work in Julia, and garbage collection was quite a pain. Most of my debugging time was spent figuring out where tiny memory allocations were occurring in for loops. IMO, Julia is as fast or faster than C/C++ provided you don't unintentionally allocate memory in performance critical parts of your code. I ended up eventually just passing preallocated chunks of memory into functions, but then I had to keep track of all that.

Okay, fair enough. I was just thinking in terms of what I'm familiar using Python for. For "data stuff" it seems that the community doesn't mind how slow Python is because Pandas and NumPy, etc, all call through to Fortran/C libraries. Since that seems to be "good enough" for the industry at the moment, I was kind of assuming that garbage collection wouldn't be a problem (especially because you can get WAY faster than Python before giving up garbage collection).

Also, is it the garbage collection that was a pain, or the allocation? Allocation certainly takes time. But one of the nice things about garbage collection is that the cost of reclaiming memory can be deferred. People in Rust have even started noticing that being not-garbage-collected doesn't mean "fast"; it's just a different performance profile (https://abramov.io/rust-dropping-things-in-another-thread).

As an aside, Swift has structs and classes, where classes are automatically reference-counted and structs are value types that have scope-based lifetimes, like C++ non-new'd objects.


>I used it all throughout grad school shortly after it was released, and there are things the language does so much better than any other language I know of that I find it hard to imagine it just disappearing into the ether

That is also true of Common Lisp and Smalltalk, though, and both are as good as "into the ether".


God, there have been so many times I've been working on something, fighting with mainstream languages to produce what I want, when I find a language that allows me to solve my problem expressively and simply. Some of them see industrial usage but they're still not very common.


Yeah. There's "dead" and there's "DEAD". Many languages are "dead": COBOL, probably Perl 5, Groovy, Common Lisp, etc.


I agree that Julia can't stack up to Python in the data science / statistics domain. But for everything people use MATLAB for, Julia actually has the upper hand in many of the qualities discussed in the article: community, package ecosystem, annoying licensing bureaucracy, etc.


The best bet for Julia might be to be a Trojan horse in the Python ecosystem. By making the best of breed packages in specific domains it can become the first pick in the Python community rather than C/C++ based packages.

If Julia is the engine that drives all the critical parts in Python rather than C/C++, then it has a way to get the foot in the door. People will stop and ask: Why am I using Python if I could just use Julia directly?


The same with R. There's already the excellent JuliaCall[1] package, for embedding Julia code in R. Listed in the README are some R packages, using Julia code through JuliaCall.

[1] https://github.com/Non-Contradiction/JuliaCall


To make Julia compelling as library code, it would have to be statically compiled. Having a massive, laggy Julia runtime embedded in a library seems like a complete non-starter to me.

I'm convinced Julia will eventually be able to be compiled statically, but doing that will likely make it feel decidedly un-Julia-like (e.g. no dynamic dispatch, no tolerance for type-inference failures), to the point where a library maker would probably just want to use an actual static language instead.


Because you want a snappy repl?


Every major release has improved performance in the language and its use, and it doesn't really take that much before most of the stuff you do is already compiled and you get a REPL experience far superior to Python's.


That may be true for elaborate analyses, but doesn’t really address exploratory analysis that changes dramatically from command to command. The use case for REPL-driven data science is experimentation, not performance


Julia has advantages over Stata and SPSS as well.

The main thing holding it back is its ecosystem. R has better libraries and a much better IDE (RStudio), while Python is still the best option for machine/deep learning (and general programming).

However, Julia has much better performance. R is arguably easier for someone with little coding experience to learn, but Julia isn't that much more difficult, and it's much more intuitive to code in than Python.

Julia + RStudio + CRAN/BioConductor would take the cake.


> Julia has advantages over Stata and SPSS as well.

I can't speak for Stata, but literally everybody I know using SPSS uses it because of the easy-to-use GUI that allows quite complex analysis with basically zero programming.


I don't intend to downplay either Stata or SPSS for exactly that reason - they're very good at what they do. However, they have a limited scope (which isn't a bad thing), and if you want to go beyond it, you have to turn towards other options (e.g., R for extensibility, or Julia (or C, or whatever) for performance).


> R is arguably easier for someone with little coding experience to learn, but Julia isn't that much more difficult, and it's much more intuitive to code in than Python.

Having some experience with all three of these languages, I find Julia much harder than R and less intuitive than Python. The relatively clean syntax isn't enough to make Julia an easy language.


Except plotting things


Here's the funny thing: there are more full-time developers working on Julia than on Python, and the rate of progress in the Julia ecosystem is truly amazing.


> What’s the unique selling point?

It’s multiple dispatch. How did the almost-author-of-a-book-on-the-language not get that?


Maybe that's where the "almost" comes from.


I've always viewed Julia as a language for scientific computing professionals [1].

The article pronounces Julia's death only based on popularity relative to other languages. Yet, it's not clear what the author is comparing it to.

The comparisons I see are MATLAB and Fortran, among which Julia stands third in the TIOBE Index [2] that the author is using. The author doesn't seem to focus on this.

The author mentions

> Julia’s target user is harder to define. I have struggled with this while writing Learn Julia.

I wonder if it may not be the case that the author has developed his own notion of what Julia ought to be. And I'll agree that Julia may have failed his grand vision to displace large parts of Python, but I do not think that that vision is based in reality. Python users that want to use frameworks written in other, faster languages (like C++) will forever continue to use Python and enjoy the vast libraries that it offers which aren't centred around scientific computing.

[1]: There seems to be a list on https://juliacomputing.com/. Arguably their needs might be very different than the author's. But I can't say because the article's arguments are not based on technical shortcomings.

[2]: In the TIOBE Index (as a proxy for popularity) MATLAB gets 1.04%, Fortran 0.83%, and Julia 0.41% (GNU Octave, the main FOSS MATLAB competitor, is nowhere to be seen). I do not know what these percentages mean, though: https://www.tiobe.com/tiobe-index/


I found Julia's scoping rules to be weird and a bit off-putting.

While this clearly doesn't seem to impact its adoption, or stop its community from creating great things,

for a newcomer to the language, the scoping rules just give a bad feeling for the language. I wish they'd fix them in future versions.

Check my GitHub issue if you want to know more about what I mean by off-putting scoping rules: https://github.com/JuliaLang/julia/issues/37187
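For context, the usual gotcha is Julia's "soft scope" rule for assignments to globals inside top-level loops (a sketch; the behavior changed between 1.0 and 1.5):

```julia
# Inside a function there is no ambiguity: `s` is a plain local.
function triangle()
    s = 0
    for i in 1:3
        s += i
    end
    return s
end
@assert triangle() == 6

# Paste the same loop at top level in a script, though, and `s += i`
# becomes ambiguous: Julia 1.0-1.4 treated `s` inside the loop as a
# new local and threw UndefVarError, while 1.5+ warns about the
# ambiguous soft-scope assignment (the REPL silently uses the global).
```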


Good stuff, hugged to death [1]:

> In the end, code doesn’t make software – people and communities do... It’s hard to beat an incumbent, and even harder to do so without having a large target user community you can capture with a compelling use case.

We need a better name for the category of Machine Learning Workbench tools that include Julia, R, and NumPy+ (the ML ecosystem that uses Python as an Internal DSL). The compelling use case for Julia is that it is a modern language built on top of the LLVM platform (like Rust and Swift). There is no shame in not winning when you were late out of the gate; see the story of ARM. The value proposition hasn't disappeared, it just didn't outpace the evolving bricolage of NumPy+ and its new complementors (Jupyter, PyTorch, TensorFlow).

[1] http://webcache.googleusercontent.com/search?q=cache:Awfh5p8...


cached because the blog was hugged to death: http://webcache.googleusercontent.com/search?q=cache:Awfh5p8...


Interesting..

I thought Julia was on the rise?


It is. But if your aim is to displace (never mind replace!) an old, established niche, it's a very steep uphill climb.

Like the OP, I still use a ton of Python. And I consider Python to be a newcomer in the sci community, still gaining momentum, with perhaps a bigger uptick in recent years.


The goal is to create a great programming language to solve real problems.


The vast majority of people look first for existing tools to solve the problem, then for ready-made libraries to solve the problem, then for libraries/code which is easily adaptable with the least effort.

Don't delude yourself into thinking the actual language matters in most cases. Historically it never did.

(I love julia, btw)


I already solve real problems in Python—why bother learning yet another language, and one our data engineers/backend folks don’t know? That larger context of implementation / development / deployment is a huge deciding factor for real-world problem solving, not how “great” a language is from a design perspective.


Julia didn't come about because a bunch of CS people wanted to make a pretty language. It came about because people experienced with scientific computing were frustrated with the current state of affairs and wanted to improve the situation.

If Python works fine for your use case, then indeed, Julia probably seems like a waste of time. But if your situation - like mine was, three years ago - is that you program mostly in Python, but then constantly hit the performance wall and begin doing questionable gymnastics with Numpy, Cython or Numba to try to scale it, before eventually giving up and writing C code that is 15% business logic and 85% pointless boilerplate that segfaults if you look at it wrong... well, then the point of Julia is obvious.

Right now, Julia mostly attracts the users for whom Python is obviously not good enough. Eventually, I hope the ecosystem matures enough that even a user who might otherwise use Python would use Julia simply because it's nicer.


It depends on the problem you have, "do excel-type business analysis" or "write a CRUD app" are things any language can do. Julia solves hard problems.

Neural differential equations, energy grid optimization (Invenia), database implementations (RelationalAI; see Jamie Brandon's talk on why Julia specifically is suitable for this), DSL implementations (Stan, etc.). Writing high-performance simulations (no, slow langs won't cut it; if you want tight loops you need low-level control such as cache-coherent memory layouts, vectorized instructions, &c).


>(no, slow langs won't cut it, if you want tight loops you need low-level control such as cache-coherent memory layouts, vectorized instructions, &c).

Yawn. You can have all of those wrapped under a high-level API, with much nicer implementations, more support, more documentation, and more eyeballs (due to more adoption), in a "slow" language.

You can even have them run faster than Julia's, and in more stable and mature implementations (like LAPACK).


Right up until you have to implement, debug, or enhance these lower-level packages yourself. Then suddenly:

* 95% of your users can't contribute or even understand what's going on, because they don't understand the language it's written in

* even if you do understand, you have to write in a much less expressive language (you came using Python for a reason, but end up doing all the hard work in C)

* performance gets lost at the boundary between the languages (which is why Julia nearly always outperforms Numpy)

* error messages and bugs often fall through the cracks between the two languages

The fundamental argument "you can just use Python, you just have to write all the hard stuff in not-Python" is very strange to me.


>Right up until you have to implement, debug, or enchance these lower-level packages yourself

That's the very benefit of a larger ecosystem: you don't. Whereas with a smaller ecosystem, you often do.

>95% of your users can't contribute or even understand what's going on, because they don't understand the language it's written in

That's even more of a problem with Julia, since code written in Julia will be unfamiliar both to first-level users (far fewer use Julia than Python) and to library developers (fewer know Julia than C/C++, which such Python libs are written in).


> You can have all those wrapped under a high level API, with much nicer implementations, with more support, more documentation, more eyeballs (due to more adoption) in a "slow" language.

You 'can' have those things, but 'should' you? Julia is well designed, so 'nicer' is not a given. Julia is also quite a high level language with excellent metaprogramming facilities. Otherwise, everything you listed is based on adoption level.

Why use a "slow" language if you don't have to? Especially when the faster language is actually better designed...


>You 'can' have those things, but 'should' you?

As opposed to what? Wait for Julia to catch up in 10 years?

If so, yes, you absolutely should get those things from where they're already available.

>Why use a "slow" language if you don't have to?

Because that's where the action is, and where those niceties arrive first.

Julia hasn't even fixed its slow startup/load situation all these years...


Progress is sadly not instant.

Julia seems to be gathering momentum rather than losing it. The future looks bright if trends continue!

(As to the startup time problem, significant progress is being made... Have you looked at standalone binaries?)


What's the point of the wrapper?

from simulation_code_written_in_c import simulation

simulation.run()

What value does the above code have?


That particular strawman code adds little.

Code that can, e.g., be configured to load data in a particular way, then run in a particular way, and then executes hella fast in C, does. Numpy is not just:

    import numpy
    numpy.run()

Neither is pandas and others...

You might as well ask "what is the point of glue languages". Just ask anybody who ever embedded e.g. Python/Lua/Guile/etc in a C/C++ program (which has similar mechanics)...


"What is the point of glue languages" indeed. Loading parameters into a program makes sense for a scripting language, especially for interacting with code you didn't write. But when 99% of your problem is solved with custom-made C/C++/whatever (as is the case with simulations), the value-add of a glue language is very little. Professionally, I've written trading simulations in C++ where Python was used for glue code. It added very little value and added more dependencies/headaches. Julia could've have replaced both.


I’m personally of the opinion that one should learn new languages, so that even if you don’t put them into production, you’re exposed to the ideas and approaches they bring (which might be foreign to your current language of choice), and this can give you new ideas about how to best tackle problems in your day-to-day.

I’ll probably never convince my .Net-wielding team members to write a web API in Haskell, and that’s ok, and I’m not about to rewrite my data pipelines in it (even though that might be fun), but the ideas about purity/mutation, cleanly separating IO, composition, etc. have benefited my teammates and me immensely in the code at work.


This is a good point! I like your way of approaching this space :) Though, what you are describing is very different from what GP was claiming, which is using Julia for production.


>> But if your aim is to displace (never mind replace!) an old, established niche, it's a very steep uphill.

If your goal is to beat someone else then you're playing the wrong game.


This doesn't say much.

First, as an analogy it fails, since in most games the goal is precisely to beat someone else, and there's nothing wrong with that (it's the very meaning of sports competition).

Second, because for a programming language, community, adoption, and ecosystem matter, and you don't get those without beating "someone else" or at least coming close to it.

Either you magically double the number of people doing data science, or you attract data scientists from Python and R (either existing users of Python, or new users who would otherwise have gone to Python).


It probably still is, but by all accounts it is rising a lot slower than many people had hoped for.


Hopes are free. If you want it to rise quicker you have to use it and contribute. "Why would I use it if few people do?" is quite paradoxical.


Definitely not paradoxical. The network effect of out-of-the-box tooling/features (a robust, extensive set of libraries for basic and moderately complex statistical/ML tasks) is the table stakes here, and it depends strongly on whether more than a "few people do" already.


Yes. For 10 years and probably 10 more before it gets anywhere near significant adoption...


Julia will never become a mainstream language because it is not general enough. The syntax is inspired by Matlab, which is good for linear algebra. Also, it is not object-oriented, which is probably the most popular paradigm. Python will not be overtaken soon by such an exotic language.


Of course Julia is general enough. Whether you end your for-loop with `end` or `}` doesn't make any difference in what you can use the language for.

That Matlab is not great at general programming has absolutely nothing to do with the syntax and everything to do with semantics. In Matlab pretty much everything is a matrix. That is not the case in Julia. Julia works with scalar values just as well as Python. Actually, it does it much better.

Object-oriented programming has been on the way out from all major new languages for years now. Go, Rust, Kotlin, Clojure, Julia and many others have downplayed object-oriented programming significantly.


'Also it is not object oriented which is probably the most popular paradigm.'

Popularity isn't a suitable metric for 'good'. Look at McDonald's food.

'Python will not be overtaken soon by such an exotic language.'

Python and Julia aren't competitors. Python is suitable for scripts and gluing high performance code together. Julia is a general purpose language which is suitable for writing high performance code. Object orientation was likely a mistake. The industry is slowly moving away from it. There is no abstraction you can express with object orientation that you can't express with multiple dispatch. Structs are very much like the data part of objects.
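For a rough flavour of this in Python terms, here is a sketch using `functools.singledispatch`. Note the caveat: this dispatches only on the first argument's type, whereas Julia's multiple dispatch generalises it to all arguments.

```python
from functools import singledispatch
import math

class Circle:
    def __init__(self, r):
        self.r = r

class Square:
    def __init__(self, s):
        self.s = s

# Dispatch on argument type instead of burying methods in a class
# hierarchy; behaviour can be extended without touching the classes.
@singledispatch
def area(shape):
    raise TypeError(f"no area method for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return math.pi * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.s ** 2

print(area(Circle(1.0)))  # ≈ 3.14159
print(area(Square(2.0)))  # 4.0
```

Structs plus generic functions like `area` cover what class hierarchies do, without welding behaviour to data.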


I'd actually say the opposite. In my experience, n-dimensional arrays for example are very useful for problems that don't have anything to do with linear algebra. And in Julia it's just so convenient that all the high level functions manipulating such arrays are fast, and therefore much simpler to write.
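As a sketch of what I mean (in NumPy for familiarity, with synthetic data; the Julia version reads much the same with broadcasting): a stack of "images" is just a 3-D array, and the whole operation is a one-liner running in compiled code.

```python
import numpy as np

# A stack of 10 synthetic "grayscale images": nothing here is
# linear algebra, yet n-d arrays make the analysis a one-liner.
rng = np.random.default_rng(1)
stack = rng.random((10, 64, 64))            # shape: (image, row, col)

# Per-image count of "bright" pixels, computed in compiled code.
bright = (stack > 0.9).sum(axis=(1, 2))
print(bright.shape)  # (10,)
```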


“...not general enough [...] not object oriented...”

The object oriented model is a subset of the more general multiple dispatch model offered by Julia. So these two notions contradict each other.


Object orientation is no “must have” for a general purpose language to become popular. OO is a paradigm like many others, and it is one that in my opinion is much more abused than, say, functional features.

In a few years we will look back at all those class hierarchies and wonder: man, what the hell were we thinking.


Y'all broke my server. :)

Lots of great comments here. I don't do much Hacker News-based polemics, so I have extended the original post on my website with some summary responses to a bunch of really thoughtful comments here. Thanks all!


I have an irrational aversion to LLVM languages and I don't know why, because I have close to zero personal experience with them. But when I find out that some new language uses LLVM my interest in it drops.


Archive link as the article seems to be down: https://web.archive.org/web/20210308114008/https://chrisvonc...


my takeaway: basically Python ate the toast, and there is fierce competition for OSS devs. Ecosystem gravity becomes stronger the more mass it gets.

I also think that feature creep is a proven strategy for mainstream languages (see Java, C++, Python) but deadly for rising ones.


There is another way to add value over Python: transpile Python to Julia.

One place where Julia has an opportunity is in making easily distributable small binaries, which would work better than par files.


tl;dr -- Julia is not that popular (yet!).

A pretty good indictment of the language is that a "post-mortem" didn't mention any of Julia's technical limitations, but rather its popularity.

With regards to popularity, Julia has seen a lot of adoption over the last year. And once it can produce small-ish static binaries (.so/.dll), I think there will be a surge in popularity.


> And once it can produce small-ish static binaries

That would be a real killer feature. If Julia could easily create small stand alone .exe files that I could just give to colleagues to run, that would be a seriously tempting feature.


This is a promise that Viral Shah made recently. Julia already provides user-facing compiler hooks (see JET.jl for static analysis of Julia code). So it should theoretically be possible for someone to create an AOT compiler today. Not sure what the current limitations are.


Check out StaticCompiler.jl. It's very beta, but it works (at least for some small problems).


Python is terrible for Agent Based Modelling because of the Python Tax when dealing with so many entities.

The question is would Julia be any better?



Agents.jl benchmarks really well. See https://arxiv.org/abs/2101.10072 . Or if you use the libraries you will immediately feel the difference.

