
Julia 1.4 - pella
https://github.com/JuliaLang/julia/blob/v1.4.0/NEWS.md
======
komuher
Something about Julia from me. I've been a big Julia fan, using it for the last
1.5 years, and at my company we use Julia for data preprocessing and parsing
hundreds of GBs of data. (We also use Python for ML and have started moving
Julia code to Nim for data prepro.) If you're considering Julia as your next
language, don't use it for any medium-to-big project outside pure scientific
purposes; it just isn't ready yet. We have about 3.5k LOC in Julia at the
moment for our preprocessing service, and I've never seen in any other language
as many problems as we've had with the GC in Julia: random crashes, memory
leaks, big slowdowns with long-running processes, and there are a lot of topics
on the Julia forum about this that are still unsolved. Julia is beautiful and
easy to use, but it is nowhere near as stable and/or fast as the other "new"
languages (Rust, Nim, Go, even Swift). And if you're considering Julia for deep
learning, then the best performance and stability, in this order, are: PyTorch,
Tensorflow/MXNet/(JAX), nothing, Flux. Flux has a long way to go before it'll
be even close in usability to the other flagship frameworks.

~~~
ced
We're using it in a soft-realtime setting to monitor industrial chlorine
production, and for us it has been a very pleasant experience overall. Yes,
we've had some issues, but similar to other ecosystems IMO, and our support
contract with Julia Computing helped us in the one case we really couldn't
solve ourselves.

Julia works really well for power users. There are no huge libraries full of C
code like pandas or scipy. Instead there are dozens of small, well-tested
packages that fill the same role, all hosted on github. That makes fixing
issues so much easier.

Granted: outside of numerical/technical computing, the libraries can be
lacking (e.g. web development) compared to other languages. We're doing it
anyway, but it's a more difficult decision.

~~~
komuher
How many LOC do you guys have at the moment, and how much data are you
monitoring? For smaller problems Julia worked perfectly for us.

~~~
ced
For data, we're monitoring ~1000 time series per plant, at about 2
points/minute. Julia's speed is not necessary there, but it is critical for
historical simulations of algorithms.

------
eigenspace
Surprised by some of the negativity here! I've been extensively using Julia
for my graduate physics research and a lot of hobby programming for almost 3
years now and absolutely love it.

It's a beautifully designed language with incredibly responsive and wise
developers, and 1.4.0 is a great release. I've been on the 1.4 release
candidates for over a month, haven't had a single issue, and love the new
features and improvements.

~~~
systemvoltage
The negativity is warranted if you've ever run Julia or had to maintain it in
production. Use it for your own Jupyter notebooks and personal analysis?
Great! Need to debug some weird, obscure error (which Julia does a poor job of
reporting to the user, let alone pointing to the right line in the stack
trace) while under production pressure to get it up and running again? Julia
is unquestionably, unarguably, and utterly unsuitable. Do not use it in any
production workflow; heed my advice as a maintainer of Julia repos and of the
tech debt that graduate students have created in our company (I am sorry, but
I'm just stating the facts). You're still unsure? Let's switch sides: you
maintain our company's production Julia repositories and I will take your spot
writing beautiful code in a couple of hobby projects. I guarantee you, with
absolute certainty, you will be brought to your knees. You'll lose sleep.
You'll hate management for letting this happen.

~~~
dnautics
I've thought about this problem quite a bit, and I think that productionizing
Julia would require at least a few additional things beyond your criticisms
about error reporting.

1\. Get rid of global dependencies. Store all of your dependencies in a
project-local deps directory with a project lock file.

2\. Opinionated file system structure. Maybe you don't need this in one-off
scripts, but you definitely do for some sort of "project" layout.

3\. Force all packages in the package manager to obey these constraints.

The ship may have already sailed on 3), sadly.

~~~
oxinabox
You are describing every Julia package. These are the rules of the package
manager.

1\. Every package must declare its dependencies (and to register, must declare
compat bounds on them, and good devs do that always anyway).

2\. Every package must have a Project.toml in the base directory; source code
goes in `src`, test code goes in `test`, documentation goes in `docs` (with
more structure there if using Documenter.jl). What more could one want? If a
package needs more folder structure within `src` then it should be multiple
packages.

3\. These are the rules, done.
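
For illustration, a minimal skeleton following those rules might look like
this (the package name, UUID placeholders, and version bounds below are made
up; `pkg> generate` or PkgTemplates creates the real thing):

    MyPackage/
        Project.toml         # name, uuid, [deps] and [compat]
        src/MyPackage.jl
        test/runtests.jl
        docs/                # Documenter.jl structure goes here

    # Project.toml
    name = "MyPackage"
    uuid = "00000000-0000-0000-0000-000000000000"   # placeholder; Pkg generates a real UUID
    version = "0.1.0"

    [deps]
    CSV = "00000000-0000-0000-0000-000000000001"    # placeholder; `pkg> add CSV` fills this in

    [compat]
    CSV = "0.6"
    julia = "1"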

Further, you are describing every sensibly done Julia application.

Basically no serious Julia developer uses the global environment for anything
but dev tools, like BenchmarkTools or ProfileView. Certainly one does not
depend on their contents for any reused code -- that is what Project.toml and
Manifest.toml are for.
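
As a minimal sketch of that workflow (the package here is just an example), a
per-project environment looks like:

    (@v1.4) pkg> activate .        # use ./Project.toml instead of the global env
    (MyProject) pkg> add CSV       # recorded in Project.toml, pinned in Manifest.toml
    julia> using CSV               # now resolves against the local manifest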

~~~
dnautics
> every sensibly done Julia application.

Can you point me to some documentation on those best practices? My girlfriend
- an architectural acoustics consultant - is working on a project in Julia and
is having a hell of a time managing dependency drift underneath her program
(she also barely knows how to use git).

~~~
oxinabox
Sadly I cannot point you to a single piece of documentation that covers
everything. That's definitely an area Julia can improve: writing down the
things "everyone" does, so newcomers don't have to learn them all over again.

The Pkg manual doesn't have a tutorial on standard practice, but it's worth
reading the compat section and making sure to always set your compat bounds:
[https://julialang.github.io/Pkg.jl/v1/compatibility/](https://julialang.github.io/Pkg.jl/v1/compatibility/)

Also read the section on creating one's own project, so as not to need to use
the global environment. An alternative to using `activate` after starting
Julia (and my preferred way) is to start Julia with `--project=.`:

[https://julialang.github.io/Pkg.jl/v1/environments/#Creating-your-own-projects-1](https://julialang.github.io/Pkg.jl/v1/environments/#Creating-your-own-projects-1)
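
That flag just makes the current directory's environment active from the
start, e.g. (a hypothetical session):

    $ julia --project=.
    julia> import Pkg; Pkg.status()   # shows ./Project.toml, not the global env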

And one can go further and create projects via PkgTemplates, which is what
"everyone" does, because a good project looks just like a package (one might
as well consider them synonyms when looking for this kind of advice):
[https://github.com/invenia/PkgTemplates.jl/tree/v0.6.3](https://github.com/invenia/PkgTemplates.jl/tree/v0.6.3)

~~~
dnautics
It's got anaconda-like project spaces? While I can see why they did that,
because it's what people are used to... that's a terrible, terrible choice (I
joke; there's a reason why Docker got invented, and it's that conda is awful).
Why not just put the deps in the project directory instead of in ~/.julia/...?

------
KenoFischer
To answer the usual complaint whenever the release announcement gets linked
here: the NEWS file is intended to give current users of Julia an overview of
all the things that changed that they may want to adjust to. It is, however,
not designed to give people who are only casually following the project an
overview of all the work that's going on and why it's happening. We've been
talking about writing a more casual document like that for the releases, since
they do tend to get a fair bit of attention, but that's just one more thing on
an already-extensive list of things that need to be done for each release.

~~~
Someone
_"It is, however, not designed to give people who are only casually following
the project an overview of all the work that's going on and why it's
happening."_

The NEWS file may not be designed for that, but IMO it is still way better
than this text, which, for casual followers, doesn't say much more than "1.4
replaces 1.3, no breaking changes, a few new features", without even
mentioning those features or why they were added.

Maybe combining the two into a single document and copy-pasting NEWS.md into
the announcement would decrease the amount of work and improve things?

~~~
disgruntledphd2
I dunno, the comments were pretty informative.

Looks like Julia still hasn't managed to handle updates well.

I remember being so amazed that they'd blown their 1.0 announcement (going
from 0.7 to 1.0 over a JuliaCon).

And because there were _so many_ changes (most of which I thought were good),
it was pretty broken for new users at that time, which I firmly believe
limited their adoption.

And to be clear, I love the _idea_ of Julia, and that first document made me
fall in love. And I think that the design for statistical computing is really,
really good.

It's just a shame that this ops/packaging stuff is holding them back.

~~~
ViralBShah
The 0.7 to 1.0 move was planned and communicated for over a year. If anything,
the 1.0 transition went very smoothly, and the Julia community grew
significantly soon after its release. All the download and community stats
broadly demonstrate this.

IMO, 1.0 was rough not for new users but for people who had invested a bunch
of time in Julia code pre-1.0. We anticipated this and therefore had a very
carefully planned release strategy to ease the transition, with deprecation
warnings and 0.7 prepared as a migration-aiding release for 0.6 users moving
to 1.0. In fact, our release announcement discussed all of this at great
length.

[https://julialang.org/blog/2018/08/one-point-zero/](https://julialang.org/blog/2018/08/one-point-zero/)

~~~
kgwgk
> The 0.7 to 1.0 move was planned and communicated for over a year.

That sounds as if the communication was of the "1.0 will be released on this
date one year in the future" kind. But it was more like saying for a year "it
will be finished and released someday" and then, according to Wikipedia, "the
release candidate for Julia 1.0 was released on 7 August 2018, and the final
version a day later".

~~~
KenoFischer
Releasing at JuliaCon was always the communicated goal, though admittedly with
a bit of a hedge that we might not manage to get it done. As for the 1.0 RC
business, 0.7 and 1.0 are the same release, except that 0.7 includes
additional deprecation warnings that are not in 1.0. This decision was made to
keep with our communicated policy of having at least one version where
deprecations would produce a warning. There were a number of 0.7 release
candidates in the months leading up to the release. Since the changes in 1.0
were minimal over 0.7, it didn't need extensive validation, and the one day
was enough to make sure it worked. Would a bit more time have been better?
Sure, but in retrospect it was totally fine. 1.0 was a fairly solid release
and has gotten more solid with the LTS patch releases that many people still
use. The big problem was packages in the ecosystem needing 2-3 months to catch
up, but I'm not sure that could have been avoided. One of the lessons we
learned is that most people won't upgrade their software until the new version
is released and upgrading is absolutely necessary.

~~~
disgruntledphd2
So, for me at least, the issue was that I downloaded 1.0 (having played with
Julia in the past).

When I tried to install packages, I got error after error as a result of
deprecation warnings from 0.7 becoming errors in 1.0.

Again, I really like Julia, and want it to succeed. But the 1.0 situation put
me massively off, and killed my plans to start evangelising Julia at my
company.

It's just a shame, that's all.

------
pella
Julia v1.4 Release Notes:

[https://github.com/JuliaLang/julia/blob/v1.4.0/NEWS.md](https://github.com/JuliaLang/julia/blob/v1.4.0/NEWS.md)

~~~
dang
That seems more informative, so we switched the URL from
[https://discourse.julialang.org/t/julia-v1-4-0-has-been-released/36324](https://discourse.julialang.org/t/julia-v1-4-0-has-been-released/36324)
above.

------
greendave
The language has a lot of really nice stuff, and the introduction of threads
and support for them in the standard libraries (beginning in 1.3) has helped
us a lot.

That said, the tooling is still frustrating. Generating compiled binaries is a
slow, painful process.
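
For anyone who hasn't tried it: the usual route is PackageCompiler.jl. A
minimal sysimage build looks like the sketch below (Plots is just an example
package), though the build itself can take many minutes, which is part of the
pain:

    using PackageCompiler   # assumes PackageCompiler.jl v1.x is installed

    # Bake the package into a custom sysimage to cut load/compile time;
    # afterwards start Julia with `julia --sysimage sys_plots.so`.
    create_sysimage([:Plots]; sysimage_path="sys_plots.so")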

------
xor0110
Julia is an amazing language and anyone who loves computers and/or science
should be trying it right now.

------
dklend122
Julia is an amazing, elegant, and beautiful language. It's almost perfectly
suited for scientific computing and ML. However, I'm not very bullish on its
future, given S4TF (Swift for TensorFlow).

Swift can get you 90% of the way there, and that extra 10% can be more than
made up for by the efforts of Apple, Google, and other companies (including
money, network/clout, Kaggle, which is owned by Google, etc.). Despite
predictions to the contrary, Chris Lattner's departure doesn't seem to have
slowed down the project, and more team members from Google have been added
since.

Swift is rapidly approaching usability on Windows with investment from Google.

Further, at some point Google will facilitate Swift's use for Android apps,
and then Swift's popularity will skyrocket, and all those developers will be
naturally inclined to check out the ML stuff. Even Facebook is getting in on
the party:
[https://twitter.com/nadavrot/status/1241150682104606720](https://twitter.com/nadavrot/status/1241150682104606720)

In addition, Swift has its own benefits over Julia for production and large
codebases, such as compilation to small binaries and static typing. Julia
doesn't have a good story for either of these (yet?), and chasing down type
instabilities in larger code isn't fun.
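
(For anyone who hasn't chased one: a type instability is when the compiler
can't infer a concrete type, and `@code_warntype` is the standard tool for
finding it. A toy sketch:)

    using InteractiveUtils   # provides @code_warntype (auto-loaded in the REPL)

    # The return type depends on a runtime value, so inference can only
    # conclude Union{Float64, Int64} -- a classic instability.
    unstable(flag) = flag ? 1 : 1.0

    @code_warntype unstable(true)   # the Union shows up highlighted in the output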

~~~
ddragon
You mean the future of Julia or the future of Flux? While an amazing
accomplishment (that is still on the way to becoming truly mature, just like
S4TF), Flux is just one of Julia's current ML libraries, and it definitely
doesn't feel like a Rails (or maybe Flutter) situation in which the library is
bigger than the language. ML isn't even Julia's core target (it just happens
to fit extremely well with numerical and scientific processing).

Julia will be just fine even if S4TF somehow steals all the mindshare
(especially since a mature differentiable-programming library will inevitably
serve as inspiration for Flux itself), as the language and target audience are
not very similar to Swift's, and as such Flux and S4TF will also find
different niches (for example, one may be used more for high-performance
scientific research, thanks to Julia's ecosystem and focus, while the other
can focus on mobile deployment of ML models).

~~~
dklend122
If my scenario holds, at some point Swift's scientific computing ecosystem
will rival and overtake Julia's.

I don't see the ML ecosystem developing in isolation, because there's going to
be overlap, especially as more and more code can be differentiated.

~~~
eigenspace
Who is going to make the scientific ecosystem? Julia's and Python's scientific
ecosystems are so strong precisely because they get domain experts in those
ecosystems to write the software they need for their niche.

Machine learning programmers aren't about to remake DifferentialEquations.jl
or scipy in Swift. I've yet to meet a single scientist from a field outside of
machine learning who was seriously excited about Swift. This sort of machinery
is hard to make and takes deep expertise; I really doubt it'll be made in
Swift any time soon. Does Swift even have plotting libraries yet?

Swift has a good automatic differentiation story, mostly because it is very
focused on machine learning use cases, has corporate backing, and all efforts
are on one implementation. However, having only one automatic differentiation
implementation has drawbacks. It won't be suitable for everyone.

Julia, on the other hand, has a gigantic basket of different automatic
differentiation tools, all of which have strengths and weaknesses. This allows
people to choose the right tool for the job and explore a very wide design
space, allowing us to find which approaches work best for different
circumstances. Our AD machinery is still evolving and definitely has problems,
but progress has been fast and really encouraging.

Even if Swift becomes the next Python and eats scientific computing, I
strongly doubt this will seriously hamper Julia's community. We've been doing
great living in Python's shadow. Julia doesn't need to be the most popular
language in the world to be useful or successful.

~~~
dklend122
> Who is going to make the scientific ecosystem?

Google and apple. Apple already is working on a swift-numerics package.

Look at TF python and jax. They've re-implemented chunks of scipy and numpy
twice, hired people to work on plotting (altair) etc

And that's with python. Their engineering time will go much further with
swift, obviously.

~~~
eigenspace
Google and Apple are _not_ going to make a full-on scientific ecosystem,
because they don't have the domain experts or the motive.

Numpy is not the same thing as scipy.

DifferentialEquations.jl in Julia is a great example of what it actually takes
to make a real, competitive differential equation library. The sort of stuff
that was built there requires a deep connection to the scientific and
mathematics literature. Cash won't cut it.

Another great example that'll resonate with physicists at least is something
like ITensors.jl
[https://github.com/ITensor/ITensors.jl](https://github.com/ITensor/ITensors.jl).
Apple and Google are _not_ going to make something like that.

~~~
dklend122
It's not JUST Google and Apple; I'm sure they have enough cash and expertise
and will to create enough momentum to attract more domain experts in other
areas, especially once Google Brain and DeepMind start working on things more
complex than stacking layers, which is happening now.

In particular, do you have another example aside from DifferentialEquations.jl?

Neural ODEs are hot enough that something like that could easily pop up in
Swift.

ITensors is interesting, but that's only one.

~~~
eigenspace
> It's not JUST Google and Apple; I'm sure they have enough cash and expertise
> and will to create enough momentum to attract more domain experts in other
> areas.

Maybe, but I'm doubtful. Scientific domain experts flock to languages like
Python, Julia, Matlab, R, etc. because they're interactive and allow them to
quickly iterate on ideas, query data, produce plots, etc. Swift is not much of
an interactive language and is not built around that kind of REPL-driven
experience.

> In particular, do you have another example aside from
> DifferentialEquations.jl?

Sure, here's a smattering of high-quality packages made by and for research
scientists:

    https://github.com/JuliaApproximation/ApproxFun.jl
    https://github.com/BioJulia
    https://github.com/JuliaDiffEq/ModelingToolkit.jl
    https://github.com/crstnbr/MonteCarlo.jl
    https://github.com/chriselrod/LoopVectorization.jl
    https://github.com/JuliaNLSolvers/Optim.jl
    https://github.com/PainterQubits/Unitful.jl
    https://github.com/mcabbott/TensorCast.jl
    https://github.com/JuliaPhysics/Measurements.jl
    https://github.com/Jutho/TensorOperations.jl

There are many, many more; these are just the first that came to mind.

~~~
newen
JuMP
([https://github.com/JuliaOpt/JuMP.jl](https://github.com/JuliaOpt/JuMP.jl))
for optimization is really nice and I don't think there is an equivalent this
good in another language.

~~~
eigenspace
Absolutely, that's a nice point.

------
signaru
I wish it had a more dedicated IDE besides the Atom/Electron-based ones (like
GNU Octave has its own). I think Julia's performance deserves an equally
performant IDE.

~~~
newswasboring
This. I have been trying to learn Julia so I can promote it to replace Matlab
in my company. I think everyone will like the performance and the more modern
and extensive library (also, the fact that it's free will save the company a
lot of money). But the Atom-based Juno IDE looks so unprofessional that I
think we will have a really hard time convincing scientists to use something
which can't even undock the editor properly (I know we can open the file in a
new window, but it's really not the same).

~~~
mbeex
Seems to be an Electron thing; VS Code cannot do this either (nor handle
multiple monitors properly in general).

~~~
newswasboring
Yeah, that is my guess too. But at least VS Code can save your workspace; in
Atom you have to install a package to do that. To programmers it might look
like flexibility, but to non-programmers it just looks like a chore.

------
sk0g
I've been meaning to check Julia out for a while now. Are there any deep
learning libraries that are as feature-complete and user-friendly as
PyTorch/TensorFlow?

~~~
komuher
Flux, but it's not even close to PyTorch or TF in terms of features and
performance.

~~~
darsnack
I have been using Flux for a year (or more?) and I have never found it to be
slower than PyTorch or TF. Granted, I am training at most ResNet-20 and mostly
smaller models, so maybe there are larger training routines where people have
issues. Every single one of these deep learning libraries is mapping to
CUDA/BLAS calls under the hood. If you wrote the framework correctly, the
performance should not be drastically different. And Flux doesn't have much in
terms of overhead. My lab mate uses PyTorch to train the same models as me,
and his performance is consistently the same or worse.

As for features, I think this is because people coming from TF or PyTorch are
used to one monolithic package that does everything. That's intentionally not
how Flux or the Julia ecosystem is designed. I'll admit that there are a lot
of preprocessing utility functions that could be better in the larger Julia ML
community. But for the most part, the preprocessing required for ML research
is available. This is mostly the fault of the community for not having a
single document explaining to new users how all the packages work together.

Where the difference between Flux and other ML frameworks is apparent is when
you try to do anything other than a vanilla deep learning model. Flux is
extensible in a way that the other frameworks are just not. A simple example:
the same lab mate and I were trying to recreate a baseline from a paper that
involved drawing from a distribution at inference time based on a layer's
output, then applying a function to that layer based on the samples drawn. I
literally implemented the pseudocode from the paper, because in Flux
everything is just a function, and chains of models can be looped over in a
for loop like an array. Dumb pseudocode-like statements where you just write
for loops are just as fast in Julia. And it was! Meanwhile, my lab mate's code
came to a grinding halt. He had to resort to numerical approximations for
drawing from the distribution, because he was forced to only use samplers that
"worked well" in PyTorch. This is the disadvantage of a monolithic ML library.
I didn't use "Flux distributions"; I just used the standard distributions
package in Julia.
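
As a rough illustration of that pattern (this is not the paper's actual code;
the model shape, the Normal noise, and the scaling rule here are all made up),
the layer-by-layer loop looks like:

    using Flux, Distributions   # Distributions.jl is the general stats package

    model = Chain(Dense(10, 32, relu), Dense(32, 32, relu), Dense(32, 2))

    # Walk the chain layer by layer; after each layer, draw per-unit samples
    # from a plain Distributions.jl sampler scaled by the layer's output,
    # then fold the samples back into the activations.
    function noisy_forward(model, x)
        for layer in model.layers
            x = layer(x)
            x = x .+ rand.(Normal.(0f0, abs.(x) .+ 1f-6))
        end
        return x
    end

    noisy_forward(model, randn(Float32, 10))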

This disadvantage to TF and PyTorch will become even more apparent when you do
model-based RL. Flux was designed to be simple and extensible from the start.
TF and PyTorch were not.

~~~
komuher
OK, my fault, I was writing from the perspective of an ML engineer, not a
researcher. (I've been using Julia for 1.5 years now, and our researchers
prefer pure Julia solutions because they're easier to write: you can use
symbols, avoid OOP, etc.)

But for production-ready models, PyTorch and TF are miles ahead. First of all:
NLP, audio, and vision model-building packages (attention layers, vocoders,
etc.). Then you have the option to compile models using XLA and use TPUs
(about 2-3x cheaper than GPUs for most of our models [audio and NLP]).

Next, inference performance (I don't know about now, maybe this has changed,
but about ~8 months ago Flux was about 15-20% slower [tested on VGG and
ResNets] than PyTorch 1.0 without XLA).

Time to get to production: sure, maybe writing a model from scratch can take a
bit longer in PyTorch than in Flux (if you're not using the built-in torch
layers), but getting it into production is a lot faster. First of all, you can
compile the model (something not possible in Flux), and you can just use it
anywhere from Azure and AWS to GCP and Alibaba Cloud, make a REST API using
Flask/FastAPI, etc., or just use ONNX.

Don't get me wrong, I love Julia and Flux, but there is still a LONG way to go
before most people can even consider using Flux in a production environment
rather than for research or some MVP stuff.

~~~
FabHK
I have no special insight into ML or Julia (though I love it), but one thing I
can confirm from experience is that there is a huge difference between getting
a model to work once in an academic or research setting and having something
work reliably and scalably in production day after day. Mind-boggling, totally
different challenges.

------
vasili111
What is the state of Julia's dataframes? Are they ready for production use?
How is Julia's dataframe performance in comparison with R's and Python's
dataframes?

~~~
oxinabox
My employer uses DataFrames.jl in production. It's fine, nothing wrong with
it. It used to be a bit unsafe for unwary users; now it's safe by default and
you need to do a bit extra to get all the performance.

It's worth knowing that it is more like R's data frames than like Pandas.

It is getting pretty close to a 1.0 release. Probably a few months out (one
more minor release, then if all goes well 1.0 a month or so later).

Further, one should know that there are many tabular data packages in Julia;
they all use the interface defined by Tables.jl, and they all interop very
well.

Query.jl (which is something like LINQ or tidyr) works with all of them, and
so do packages for loading and saving (CSV.jl, LibPQ.jl, etc.).
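
A small sketch of that interop (the file name and column are made up; assumes
CSV.jl, DataFrames.jl, and Query.jl are installed):

    using CSV, DataFrames, Query

    # CSV.File is itself a Tables.jl table, so any sink can consume it
    df = CSV.File("readings.csv") |> DataFrame

    # Query.jl works against any Tables.jl source, DataFrame included
    high = df |> @filter(_.value > 10) |> DataFrame

    # ...and back out through another Tables.jl-aware writer
    CSV.write("high_readings.csv", high)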

------
metreo
I love Julia, and the hard work of the team behind this is very exciting!

