
Julia 1.0 - montalbano
https://julialang.org/blog/2018/08/one-point-zero
======
athenot
As an outsider, I'd like to see somewhere near the home page a few short
snippets of code to get a feel for Julia and hopefully show the kind of uses
for which it is a natural choice.

Nim's home page¹ shows a piece of sample code right at the top. Perl6's page²
has a few tabs quickly showing some patterns it's good at. Golang³ has a
dynamic interpreter prepopulated with a Hello World.

Julia's home page shows a nice feature list and links to docs to deep dive but
it doesn't do a good job of selling it.

¹ [https://nim-lang.org](https://nim-lang.org)

² [https://perl6.org](https://perl6.org)

³ [https://golang.org](https://golang.org)

~~~
ChrisRackauckas
For scientific computing, showing the package ecosystem is the most important
thing. When you look at this thread, people are asking about dataframes and
differential equations. Julia's site should reflect this: yes, there are
analogues of Pandas, packages for plotting, and so on.

~~~
Svenskunganka
The blog mentions that Julia is supposed to be a general purpose language, and
not a language built specifically for scientific computing. Is that wrong?

The first impression does leave me thinking that using Julia for different
programming domains like distributed internet-facing servers or web services
is not something it was built for.

~~~
Accipitriform
> The blog mentions that Julia is supposed to be a general purpose language,
> and not a language built specifically for scientific computing. Is that
> wrong?

No. Julia is a general purpose language that has so far been mainly focused on
scientific and mathematical programming.

Its design is probably least friendly to the real-time programming domain
(being GC based), but it can apparently be used there as well:

[http://www.juliarobotics.org/](http://www.juliarobotics.org/)

I see an extremely bright future ahead for Julia!

~~~
ChrisRackauckas
Fun fact: the GC really isn't an issue; instead, the opposite problem was
found. Callbacks had to be built to slow down the computations for the
robotics simulations in order to get them to run at real-time, because they
were too fast.

[https://github.com/JuliaRobotics/RigidBodySim.jl/blob/34ac43...](https://github.com/JuliaRobotics/RigidBodySim.jl/blob/34ac4352f882b5b3beb62ba2b4b90c1e57817b39/src/core.jl#L154)

Notice that this function is purposefully sleeping the differential equation
solver in order to slow it down to the exact amount to get the simulation back
to real-time.

~~~
CyberDildonics
That's a very amateurish way to solve this problem. Games typically have a
main loop which will check the amount of time that has passed on every
iteration of the loop.

You can see how much extra time is left for that frame at the targeted frame
rate, then sleep for that amount of time.

Then you never have to slow down anything else: you can set a maximum number
of cycles per second and sleep once per cycle. The faster the CPU, the less
power it should use.
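In Julia, that main-loop pattern might look something like this (a minimal
sketch; `step!` and all names here are illustrative):

```julia
# Fixed-rate main loop: measure elapsed time each iteration and
# sleep away whatever is left of the frame budget.
function run_loop(step!; fps = 60, frames = 600)
    dt = 1 / fps                    # frame budget in seconds
    for _ in 1:frames
        t0 = time()
        step!(dt)                   # advance the simulation one frame
        elapsed = time() - t0
        # Idle out the remainder of the frame instead of spinning.
        elapsed < dt && sleep(dt - elapsed)
    end
end
```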

~~~
ChrisRackauckas
Interesting to hear. That just isn't something I think is ever encountered in
this kind of work. The author's previous struggle was to get up to real-time;
getting below it wasn't something they had really considered. Here's a talk
they gave:

[https://www.youtube.com/watch?v=dmWQtI3DFFo](https://www.youtube.com/watch?v=dmWQtI3DFFo)

------
bsdubernerd
I'm a quite happy Julia user, however I feel there are still some warts in the
language that should have warranted a bit more time before banging 1.0 on the
badge.

Exception handling in Julia is poor, which reminds me of how exceptions are
(not/poorly) handled in R. Code can trap exceptions, but not directly by type
as you _would_ expect. Instead, the user is left to check the type of the
exception in the catch block. Aside from creating verbose blocks of
boilerplate at every catch, it's very error prone.
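To make the complaint concrete, trapping an exception by type today looks
roughly like this (a sketch; the error type is just an example):

```julia
try
    parse(Int, "not a number")
catch err
    # There is no `catch ArgumentError` syntax: you get a single catch
    # block and must inspect the exception's type yourself.
    if err isa ArgumentError
        @warn "bad input" err
    else
        rethrow()  # easy to forget, which is how errors get swallowed
    end
end
```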

Very few packages do it right, and like in R, exceptions either blow up in
your face or they simply fail silently as the exception is handled incorrectly
upstream by being too broad.

Errors, warnings and notices are also often written as if the only use-case
scenario is a user watching the output interactively. As with R, it's
possible but quite cumbersome to consistently fetch the output of a Julia
program and be certain that "stdout" contains only what >you< printed. As I
also use Julia as a general-purpose language to replace Python, I feel that
Julia is a bit too biased toward interactive usage at times.

That being said, I do love multiple dispatch, and julia overall has one of the
most pragmatic implementations I've come across over time, which also makes me
forget that I don't really like 1-based array indexes.

~~~
Q6T46nT668w6i3m
Yeah, I agree with your comments about error handling. It’s far from ideal in
non-interactive contexts. It’s especially disappointing since you could easily
imagine something like Julia replicating Python’s success at transitioning
code from interaction (e.g. Jupyter notebook) to production.

I initially defended the choice, but I now agree that 1-based indexing seems
like a poor one, since Julia has become something more than its original
mission of a better MATLAB or Octave. It's an, admittedly, minor tragedy of
Julia's success.

~~~
peatmoss
> 1-based indexing now seems like a poor choice since Julia has become
> something more than the original mission of a better MATLAB or Octave. It’s
> a, admittedly, minor tragedy of Julia’s success.

I’m curious as to why this is a problem outside numerical computing. From my
perspective, this is consistent with a long history of mathematics dealing
with matrices that predates electronic computers.

0-based arrays are popular because C decided to deviate from what had
previously been standard in math and in Fortran.

Is there a reason other than aesthetic preference and habit that makes 0-based
indexes better for computing in non-numerical contexts?

I realize both indexing standards are arbitrary and boil down to “that’s the
way grandpa did it,” however 1-based indexing grandpa is way older and more
entrenched outside computing circles.

EDIT: I suppose with Julia it's not that important since, as other commenters
have pointed out, you can choose arbitrary indexes.
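For reference, the arbitrary-index support lives in the OffsetArrays.jl
package; a small sketch, assuming it is installed:

```julia
using OffsetArrays

# Wrap a plain vector so its indices run 0 through 2 instead of 1 through 3.
v = OffsetArray([10, 20, 30], 0:2)
v[0]        # first element, 10
axes(v, 1)  # the index range 0:2
```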

~~~
kgwgk
> the technical reason we started counting arrays at zero is that in the
> mid-1960’s, you could shave a few cycles off of a program’s compilation time
> on an IBM 7094. The social reason is that we had to save every cycle we
> could, because if the job didn’t finish fast it might not finish at all and
> you never know when you’re getting bumped off the hardware because the
> President of IBM just called and fuck your thesis, it’s yacht-racing time.

[http://exple.tive.org/blarg/2013/10/22/citation-needed/](http://exple.tive.org/blarg/2013/10/22/citation-needed/)

~~~
peatmoss
Wow, I hadn’t seen this before. The history of this feud goes back before C.
Thank you for a fascinating read!

~~~
jandrese
Even Dijkstra weighed in on this:

[http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF](http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF)

~~~
e12e
I've always found this a rather weak argument - he even implies that 2...12 is
at least as clear (it being the starting point of the text).

I also thought I'd seen a longer text focusing more on the counting/indexing.

I still don't see the appeal of "element at zero offset" vs simply "first
element".

I do agree that < vs <= etc. can get messy. But outside of now fairly archaic
programming languages I don't see the need. Just use a construct that handles
ranges, like "for each element in collection/range/etc" (or, for math,
pattern matching, or "for n in 1..m").

------
taliesinb
I have high hopes for Julia becoming the de facto open-source scientific
language. Despite Python and R both having a massive head start, I'm willing
to bet that talented engineers and scientists will be drawn to Julia to
implement their next-generation frameworks, owing to the powerful features it
offers.

For example, the fact that an array of unions such as Array{Union{Missing,T}}
is represented in memory as a much more efficient union of arrays is a perfect
example of where a clever compiler can make the logical thing to do also the
efficient thing to do!
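Concretely, the optimization being described applies to arrays whose element
type is a small union; a sketch:

```julia
# The element type is a union of two "plain data" types, so the array is
# stored as one dense value buffer plus one type-tag byte per element,
# rather than as an array of pointers to boxed values.
a = Vector{Union{Missing, Float64}}(missing, 100)
a[1:50] .= rand(50)

# Generic code works on it at close to plain-Float64-array speed:
sum(skipmissing(a))
```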

~~~
mlthoughts2018
I am a machine learning library developer and I don't share your feelings.
The specific example you cite, for instance, is something I feel scientists
or engineers should never actively think about; only language implementers
should.

Once you make that distinction, then whether you write it as a Cython module
exposed in Python or you can use native language features to do it in Julia,
nobody cares. It’s encapsulated away from people who use numerical libraries,
as it should be.

I also spend time developing these “runtime overhead avoidant” backend
numerical libraries, and I would say I’ve seen no significant reason to prefer
Julia over Cython.

Don’t get me wrong, Julia is great, just not offering anything fundamentally
different. And since there’s already a critical mass of people with
engineering and optimization experience in the Cython & Python extension
module stack, I’d expect that community to continue dominating Julia just by
attrition alone.

~~~
marmaduke
I see your comment grayed out, and I just want to chime in: as someone who
does a lot of numerical stuff (more than a decade, published stuff,
supporting multiple lab research projects, etc.), I want to second this point
of view. When it’s time to get real work done, Python is more than good
enough, and there are plenty of strategies for acceleration where required.

And when I want Julia’s promise of fast loops, I use Numba. If all the effort
that has gone into Julia had instead been spent on fixing the remaining warts
in the Python workflow for science, we wouldn’t even be having this
conversation.

~~~
celrod
I think it is easier to write fast, complicated code in Julia than in C, C++,
or Fortran. Not just because of the syntax, but because of great support for
generic code by default, and because metaprogramming makes it relatively
simple to write code that generates the code you actually want for any given
scenario. Interactive benchmarking and profiling are a boon too.

An example of the value of generic code is that forward mode AD is extremely
easy, and almost always just works on whatever code you run it on.

Then, once that's done, multiple dispatch (and possibly macros for a DSL)
allows for a much cleaner user interface than Python offers for numeric code.

I have a lot more experience with R than Python, but seeing more of the
scientist/mathematician/researcher side of things, I have to strongly
disagree with the view that they should write slow code and contact a CS guy
to write fast code in another language when they need it. Do you honestly
think that's practical for grad students' projects? Recently, one of my
friends wrote a simulation in R. Most of the work was done by hcubature -- a
package written in C -- integrating a function written in R. It could just as
easily have been written in Python. That function was slow, and the
simulation ran for days before an error caused it to crash, losing days of
compute time. I -- a statistics grad student -- helped him rewrite it in
Julia, and it finished in 2 hours.

That the C/C++ code will still run slowly if it has to call your R/Python
code is a problem. They also can't easily apply things like AD. A common solution,
used by Stan for example, is to create a whole new modeling language and have
users interface through that. Learning a new language -- albeit relatively
simple/domain specific -- which they then cannot debug interactively, is
another pain point. All this can be avoided by simply using Julia.

~~~
marmaduke
What happens for that scientist when they have to dive into Julia's stack to
debug something weird? In Python and C, you have established debuggers,
semantics etc, which means that, yes, there are two languages instead of one,
but neither is a moving target compared to a language which just had a 1.0
release.

I get the issue with scientists writing poor code, but Numba has largely
solved this problem by packing an LLVM JIT into a decorator which can be
applied to any numerical code to get the same speedups as Julia, except no
language switch is required.

Citing slow code in the wild with a fast rewrite is a hilariously poor
performance anecdote. I’ve rewritten Fortran code in Python and gotten
speedups. Regardless of the language: garbage in, garbage out.

Stan is an example where the modeling is “just” a DSL implemented as C++
templates. Does that make that a good choice?

~~~
ChrisFoster
Sometimes low-level debugging is a surprisingly pleasant experience, as the
Julia JIT generates proper DWARF debug info. So, for instance, you can break
in gdb and see the Julia source code for any Julia-generated stack frames,
neatly interleaved with the frames of the C runtime.

To be clear, I don't remember needing to do this as a regular user. As an
occasional compiler hacker it's been quite nice though.

~~~
ScottPJones
As someone who's spent decades programming in C/C++, and diving into assembly
code (and writing a fair share when the compiler just couldn't do what I
needed), I love being able to directly inspect the output code at many levels,
including all the way down to "bare metal". Yes, there's a lot of work to be
done in the area of debuggers for Julia, but there are already useful
debugging tools (like Rebugger) that I haven't seen for any other language.

------
ViralBShah
We merged the 1.0 PR at JuliaCon live, with streaming on YouTube. Some people
remarked that this might be a first for a major programming language release.

[https://youtu.be/1jN5wKvN-Uk?t=1h3m](https://youtu.be/1jN5wKvN-Uk?t=1h3m)

It was fun to do this with everyone at JuliaCon and online, and I thought it
was worthwhile to share here.

------
LittlePeter
Every release, I download Julia and try to wrestle through some tutorials.
Every time (0.4.0, 0.6.0, 1.0.0) I get stuck at some error, usually during
the pre-compiling of some dependency.

For example, I have downloaded julia-1.0.0. I try to follow this tutorial
here, linked in this post by someone:
[http://juliadb.org/latest/manual/tutorial.html](http://juliadb.org/latest/manual/tutorial.html)

Then I do this and get an error:

    julia> using JuliaDB
    [ Info: Precompiling JuliaDB [a93385a2-3734-596a-9a66-3cfbb77141e6]
    ERROR: LoadError: UndefVarError: start not defined
    Stacktrace:

Every time. Even the screenshot of Julia code that julialang.org used to have
was not runnable, by the core devs' own admission.

What am I doing wrong? How are you able to run large Julia programs
successfully?

Edit:

Let's try the tutorial at
[https://www.analyticsvidhya.com/blog/2017/10/comprehensive-t...](https://www.analyticsvidhya.com/blog/2017/10/comprehensive-tutorial-learn-data-science-julia-from-scratch/). First command:
Pkg.add("IJulia"): command fails to install dependency Conda.

Same for the tutorial at
[http://ucidatascienceinitiative.github.io/IntroToJulia/Html/...](http://ucidatascienceinitiative.github.io/IntroToJulia/Html/ToolingDocumentationCommunity)

Sigh. Give up

~~~
ChrisRackauckas
It was literally released today. Of course the packages don't work yet...

~~~
LittlePeter
That makes sense. Maybe I was used to R (CRAN) where uploaded packages are
actually tested against the version they declare to support.

~~~
candhee
JuliaDB declares support for 0.6 not 1.0 -
[http://juliadb.org/latest/](http://juliadb.org/latest/)

Which package declared support for 1.0.0 and didn’t compile?

~~~
kgwgk
A binary package will be available on CRAN for a given platform/version only
if it works (it passes all the tests). It seems that Julia lets you install a
non-working package without any warning (you will probably get errors when
you run it, but I guess it may also fail silently, which is worse).

~~~
et2o
I was excited to try using Julia 1.0.0 today after a couple years since my
last try and... couldn’t.

None of the packages work with it yet. I guess I could go find an older
version, but it seems like a problem that Julia will happily allow you to
install a package it isn’t compatible with. What’s the point of the Pkg system
then? CRAN’s model makes a lot more sense.

~~~
StefanKarpinski
Many packages have declared themselves compatible with any future version of
Julia... which is clearly a lie. These need upper bounds on their version
compatibility, but that will take some time to propagate through the system.

------
AriaMinaei
Never had a proper look at the documentation until now. This language is very
interesting!

One thing I found is that types in Julia are first-class values [0]. You can
put them in variables, pass them around, inspect them, even produce new ones
at runtime. This opens up all kinds of metaprogramming opportunities. Very
Lispy! (Well, Julia _is_ Lispy.) It's also interesting that types are optional, yet
they're significant for the optimising compiler. Like, if you do type the
arguments of a function, the optimising compiler will have less work to do.

It also looks like much of the compiler pipeline is exposed to the user [1].
You have access to the parser and can tweak the AST. Macros are also there,
of course.

[0]
[https://docs.julialang.org/en/stable/manual/types/](https://docs.julialang.org/en/stable/manual/types/)
[1]
[https://docs.julialang.org/en/stable/manual/metaprogramming/](https://docs.julialang.org/en/stable/manual/metaprogramming/)

~~~
StefanKarpinski
> Like, if you do type the arguments of a function, the optimising compiler
> will have less work to do.

Giving types for function arguments doesn't actually have any effect on
performance: the compiler specializes on concrete runtime argument types
anyway, so completely untyped code is just as fast as fully type annotated
code—since the types are known when the code is compiled. On the other hand,
giving type information is essential for performance involving memory
locations, i.e. field types and the element types of collections.
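A small illustration of the distinction (types and names are just for the
example):

```julia
# Argument annotations: no performance difference. Both specialize
# on the concrete runtime type of `x` when first called.
f(x)      = x + 1
g(x::Int) = x + 1   # compiles to the same machine code as f for Int arguments

# Field annotations: this is where types matter. `Loose.val` can hold
# anything, so every access is a dynamic lookup; `Tight.val` is a
# concrete Float64 stored inline.
struct Loose; val; end
struct Tight; val::Float64; end

total(xs) = sum(x.val for x in xs)  # fast for Tight, slow for Loose
```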

~~~
AriaMinaei
Thanks for the correction!

~~~
ScottPJones
I've actually seen many cases where overuse of concrete types (on function
parameters) in Julia can lead to poor performance. For example, functions are
written declaring an argument as `Vector{Int64}`, and then people using the
function, who had a value that was an iterator, are forced to call `collect`
(causing a lot of memory allocations) just to convert it into a vector and
call the function. Simply leaving off the `::Vector{Int64}` and getting rid
of the `collect` at the call site speeds things up nicely.
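A sketch of the over-typing pattern being described (the functions are
hypothetical):

```julia
# Overly concrete signature: a caller holding a lazy iterator must
# materialize it with `collect`, paying for a whole vector allocation.
total_strict(v::Vector{Int64}) = sum(v)
total_strict(collect(x^2 for x in 1:1000))

# Duck-typed version: sums the generator directly, no allocation.
total(v) = sum(v)
total(x^2 for x in 1:1000)
```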

------
wbhart
It may be a tiny thing, but I'm still amazed at how fast the new REPL starts
up, and how quick the package manager is, compared to what was there before.
Congrats!

------
jordigh
> with a liberal license

I hope Julia doesn't rely on inferior libraries just out of copyleft phobia. I
would much rather use FFTW than FFTPACK or whatever other alternative they
have in mind. FFTW is really best in class.

I'm okay with them making FFTW optional, but please make it opt-out, not opt-
in. People should be getting the best software by default. Copyleft isn't
going to hurt anyone but people who are trying to hide source code, and
scientific computing needs all of the visible source code we can get.

~~~
stabbles
Many things in Base have moved to separate packages to make it light-weight.

FFTW is available in a package under the MIT license [1]. Also its author is a
top contributor to Julia ;).

[1]
[https://github.com/JuliaMath/FFTW.jl](https://github.com/JuliaMath/FFTW.jl)

~~~
tavert
That MIT license only applies to the Julia wrapper code. The package
downloads and dynamically links against an FFTW shared library, which means
any code that uses it needs to be GPL if distributed as a whole.

~~~
staticfloat
The README for that package [0] states:

> Note that FFTW is licensed under GPLv2 or higher (see its license file), but
> the bindings to the library in this package, FFTW.jl, are licensed under
> MIT. This means that code using the FFTW library via the FFTW.jl bindings is
> subject to FFTW's licensing terms.

If you have an idea on how to make that clearer, we would be happy to review a
PR to the FFTW.jl repository.

[0]
[https://github.com/JuliaMath/FFTW.jl](https://github.com/JuliaMath/FFTW.jl)

~~~
tavert
My mistake, the docs there are fine. A few other BinaryBuilder-using packages
had neglected to mention this issue, last I checked. And by the way,
BinaryBuilder is violating even MIT licenses if you don't package and include
the license file along with the shared-library download.

------
dnbgfher
I'm sure this is too late to get much visibility, but I recently looked into
using Julia (for my MS thesis) and found it sorely lacking in one major way
that I consider unforgivable.

Their type system is pretty interesting and allows for some really cool ways
to parameterize things using types. I'd like to have seen more work done on,
effectively, strong typedefs (or whatever $lang wants to call them). However,
that sort of thing is fairly uncommon, so it's hard to hold it against them
too much.

The biggest issue, and one they seem unwilling to really address, is that
actually using the type system to do anything cool requires you to rely
entirely on documentation which may or may not exist (or be up-to-date).

Each type has an entirely implicit interface which must be implemented. There
is no syntax to mark which methods must be present for a type. No marker for
the method definitions, no block where they must be named, or anything like
that. You can't even assume you'll find all the interface methods defined in a
single file because they can appear literally anywhere.

Whoever wrote the type has in mind some interface, a minimal set of methods,
that _must_ be present for any derived type. There are only two possible ways
to determine this. The first is to look to the documentation. Even for the
basic types defined by Julia this documentation doesn't seem to exist for all
types. I don't have high hopes that most libraries will provide, and keep up
to date, this documentation either. This concern gets even greater
considering that the focus is largely on scientific computing.

Without up-to-date documentation, the only option is to manually review every
file in a library and keep track of the methods defined for the type you're
interested in. With multiple dispatch, you can't even get away with just
checking the first parameter either. Then you need to look at the definitions
for those methods to narrow your list down to the minimal set required. This
is not an easy task.
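To illustrate what "entirely implicit" means: in Julia 1.0 a type becomes
iterable simply because certain methods exist for it somewhere; nothing in
the type declaration says so. A sketch with a made-up type:

```julia
struct Countdown
    from::Int
end

# The whole "interface" is just these method definitions, which could
# live in any file of any package:
Base.iterate(c::Countdown, n = c.from) = n < 1 ? nothing : (n, n - 1)
Base.length(c::Countdown) = c.from  # needed by collect, but nothing enforces it

collect(Countdown(3))  # [3, 2, 1]
```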

This issue has been brought up before and discussed, but nobody seemed very
interested in it. This is a fairly major issue in my view, as it cripples the
otherwise very interesting type system. As it stands, it seems to be a fairly
complex solution to the issue of getting good compiled code out of a dynamic
language. It could be so much more.

~~~
iamed2
This is truly an important issue. Right now, every interface represents
something that needs to be documented by the author. The AbstractArray and
Iteration interfaces are well-documented, but the AbstractDict interface
isn't. I believe that documentation for an interface is enough, but I also
don't think enough people will take the time to write it. So I agree there
should be a technical solution.

The main reason this has not been implemented as a language feature is that
people are worried about settling on a design that would be impossible to make
fast and concise. It is certainly on the designers' radar, and was discussed
specifically at JuliaCon 2017 in Jeff Bezanson's talk.

There are some people who plan to attempt a trait system as a package on Julia
1.0. Perhaps this will be successful and we won't need language changes! Stay
tuned.

As an aside, I wouldn't take the lack of action as lack of interest. People
are interested; it just hasn't been prioritized yet. It will get effort and
attention!

~~~
dnbgfher
I think you need language changes no matter what. I've seen some of the
previous trait packages, and while extremely cool, they're insufficient for
tackling this problem, for a couple of reasons.

First, this extends deep into the core of Julia, and I don't see how a traits
package would be involved with that.

Second, and this dovetails with the first issue, this needs to be something
that people actually use as a default action. Part of that means ensuring the
built-in types make use of this.

Related to all of this, I worry it's too late to really make a meaningful
change here. The culture and existing packages are already set without it.
Adding the feature as a requirement isn't going to happen anytime soon unless
people are willing to continue breaking things post-1.0. And it needs to be a
requirement, or it won't get used except by the people who would already
provide documentation anyway.

The lack of interest I mentioned mainly came from some GitHub issues which
either received little attention, had creators with a very "maybe it would be
nice if" attitude, or got responses questioning the benefit compared to the
cost of implementing the syntax changes.

~~~
ChrisRackauckas
> First, this extends deep into the core of Julia, and I don't see how a
> traits package would be involved with that.

No, Tim Holy described over dinner how all that was necessary to complete the
existing traits packages was method deletion, and that's in v1.0.
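For context, the pattern implemented by those packages is usually the
so-called "Holy traits" trick (named after Tim Holy); a sketch with made-up
trait names:

```julia
# Traits as types: a trait function maps a type to a trait instance,
# and behavior dispatches on that instance, with no language support needed.
struct HasFastSum end
struct NoFastSum end

fastsum_trait(::Type) = NoFastSum()                       # default
fastsum_trait(::Type{<:Array{<:Number}}) = HasFastSum()   # illustrative opt-in

mysum(x) = mysum(fastsum_trait(typeof(x)), x)
mysum(::HasFastSum, x) = sum(x)       # fast path
mysum(::NoFastSum, x)  = foldl(+, x)  # generic fallback
```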

~~~
dnbgfher
Interesting.

But as an external package it still has the issue I described of not being
part of the defaults of the culture. Without that, it just becomes a
nice-to-have that only people serious about writing good-quality, usable code
are going to use. And those are the people most likely to have good
documentation in any case.

~~~
ChrisRackauckas
Yes indeed, but the cultural issue can be addressed by adding it into Base in
a 1.x release, since it's not breaking. I don't think it will make it into a
1.x, though it could.

------
multiply
Wonderful! The one untold story is that the more I use it, the better a
programmer I become. It is so easy to benchmark and profile code. It has a
great community that will help you learn how to write high-performance code.
Congrats!

~~~
ai_ia
I really hope for Julia to become mainstream and maybe replace Python as the
de facto language for data science. Julia is an incredible language. Kudos to
the team developing it.

~~~
fpoling
This is my thinking as well. Python is nice for gluing things together, but
high-performance math is not its strength. Things like the GIL should have
been addressed a long time ago, but the GIL seems so fundamental to making
things work in Python that I have big doubts it will ever be addressed.

~~~
Q6T46nT668w6i3m
I agree that the GIL has become a problem for a variety of high-performance
tasks, but, I’m curious, what kind of problems have you encountered with
numerical computation? I contribute to both NumPy and TensorFlow, two
libraries with different processing models, and I don’t see any obvious area
where removing the GIL would provide substantial benefits. However, I’ll
readily admit that I don’t think about this too often and it’s entirely
possible I’m missing something obvious! Maybe Julia could provide some
guidance around this.

I would also bet (but not too much) that we eventually see major progress in
removing the GIL. I really don’t think it’ll be around forever!

~~~
fpoling
The NumPy wiki summarizes that well [1]. Too many things, especially with
complex math, cannot run in parallel unless one spends a lot of time on
workarounds.

One starts with a quick and dirty solution, makes it work on a small dataset,
and then struggles to make it utilize at least 4 cores to cut the running
time with more realistic datasets. Surely I can code numerical calculations
in C++, but then the code cannot be maintained by a Python-only guy. So I
hope that Julia, or anything else with better parallel support where scaling
quick and dirty solutions is straightforward, replaces Python for scientific
calculations.

[1] [http://scipy-cookbook.readthedocs.io/items/ParallelProgrammi...](http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html)

------
stabbles
Julia is such a delightful language! It allows rapid prototyping and at the
same time it runs fast. It has a great REPL, package manager and workflow with
Revise.jl.

------
DarkWiiPlayer
Julia is one of the few languages that doesn't make me think "I'd rather be
doing this in Lua instead" every 5 minutes.

------
rrock
Congrats to everyone behind this effort! I’m looking forward to helping out
with getting the image processing packages in Images.jl updated for 0.7/1.0.

~~~
ViralBShah
Tim Holy did say he is looking forward to catching up and upgrading
Images.jl to 1.0.

------
samuell
Does it support light-weight threads (co-routines) and channels yet? (And if
it does, does it multiplex them on multiple CPUs?) I had a look a few years
ago, and back then it did not.

I think this is badly needed to get an easy route to pipeline parallelism,
which is simply everywhere in today's data analysis "pipelines".

~~~
ChrisRackauckas
Yes it does:

[https://docs.julialang.org/en/latest/base/multi-threading/](https://docs.julialang.org/en/latest/base/multi-threading/)
[https://docs.julialang.org/en/latest/manual/parallel-computi...](https://docs.julialang.org/en/latest/manual/parallel-computing/#Multi-Threading-\(Experimental\)-1)

It's listed as experimental since in a 1.x release it's planned to be changed
to work on top of the task interface.

~~~
samuell
Thanks. Yea, I also found this statement:

> [...] it may change for future Julia versions, as it is intended to make it
> possible to run up to N Tasks on M Process, aka M:N Threading

M:N threading is (I think) the same as the "multiplexing" I mentioned. Have
seen it called "M:N multiplexing" before.

(At the very end of [https://docs.julialang.org/en/latest/manual/parallel-computi...](https://docs.julialang.org/en/latest/manual/parallel-computing/#Multi-Core-or-Distributed-Processing-1))

~~~
StefanKarpinski
Just to clarify:

* Julia has had tasks/co-routines basically forever and uses them for all blocking operations so that no explicit non-blocking I/O or callbacks are required.

* It also supports multithreading using the @threads macro.

* However, it does not yet map tasks to threads, but that is very close to ready: [https://github.com/JuliaLang/julia/pull/22631](https://github.com/JuliaLang/julia/pull/22631).

We expect this work to be finished in a near-future 1.x release and then Julia
will support M:N multiplexing.
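A quick sketch of both pieces as they exist in 1.0 (set JULIA_NUM_THREADS
before starting Julia to get more than one thread):

```julia
# Tasks and channels: cooperative concurrency on one thread.
ch = Channel{Int}(32)
@async begin
    for i in 1:10
        put!(ch, i)
    end
    close(ch)
end
collected = collect(ch)  # consume the channel as an iterator

# Shared-memory parallelism via the @threads macro:
acc = zeros(Threads.nthreads())
Threads.@threads for i in 1:1_000
    acc[Threads.threadid()] += i
end
sum(acc)  # 500500
```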

~~~
samuell
Many thanks for the clarification!

------
bagsvaerd70
It's a simple and extremely powerful language.

Very few pieces of software combine simplicity and power. I'm very excited.

It'd be great if the Julia devs could help with deploying Julia packages on
Nix, now that Julia is getting ready for prime time and Nix is steadily
gaining more and more users:
[https://github.com/NixOS/nixpkgs/issues/20649](https://github.com/NixOS/nixpkgs/issues/20649)

------
swdunlop
Julia 1.0 will be rough until major packages have caught up with the
deprecations introduced in 0.7 and removed in 1.0. 1.0 does not tolerate
deprecated usage at all, while 0.7 merely warns about it.

This makes for a poor first-day impression for new users who expect Plots.jl,
IPython.jl, Juno.jl or other prominent packages to work on 1.0. The package
maintainers are scrambling to catch up. I recommend using 0.7 for a couple of
weeks until the community catches up, unless you are proficient enough to
contribute PRs and help out.

(edit) To be clear, 0.7 was released at the same time as 1.0.

~~~
3JPLW
The core devs are in on the scrambling, too! This is now one of my favorite
interactions on GitHub:
[https://github.com/JuliaStats/StatsBase.jl/pull/404](https://github.com/JuliaStats/StatsBase.jl/pull/404)

------
taliesinb
I'd love to know from any resident Julia experts: what are your favorite
examples of active, high-quality Julia packages?

And perhaps more importantly, what is missing?

~~~
ChrisRackauckas
This is my summary: [https://discourse.julialang.org/t/what-package-s-are-
state-o...](https://discourse.julialang.org/t/what-package-s-are-state-of-the-
art-or-attract-you-to-julia-and-make-you-stay-there-not-easily-replicateable-
in-e-g-python-r-matlab/11294/4)

------
tomkwong
Congrats to the Julia dev team for making it happen!

Julia v0.6 was already a very functional/usable release, but having 1.0
allows me to write code with confidence that it will keep working on the
same solid foundation in the years ahead.

I started my coding career over 25 years ago and have done much work in C,
Java, JavaScript, and PHP. Julia is the best language that I have worked with
so far. Fast, dynamic, and highly productive!

I'll be moving to 1.0 release when my dependent packages are upgraded to
support 1.0.

------
amrrs
I use both R and Python in my work but when we move our models to production
it's not real time, just a batch execution like once in a day. I'd like to
hear from anyone who uses Julia in their actual job/work. Is it worth learning
Julia, hoping to use it in work some day?

~~~
rogerluo
I think people are sometimes distracted by Julia's performance. According to
[https://github.com/JuliaLang/Microbenchmarks](https://github.com/JuliaLang/Microbenchmarks),
Julia is not the fastest language/compiler (maybe LuaJIT is).

Julia is not only good for its performance, but also for its multiple
dispatch, its type system, and more. Because of those features we have
[https://github.com/JuliaGPU/CUDAnative.jl](https://github.com/JuliaGPU/CUDAnative.jl),
more elegant package interfaces like
[https://github.com/JuliaOpt/JuMP.jl](https://github.com/JuliaOpt/JuMP.jl),
and more.

I still believe there is no silver bullet: if you want something like Python,
you just get another Python. But think about this: is Python's full dynamism
really what we need for work like simulation, HPC, etc.? Probably not. I
think Julia strikes a balance for those fields (like scientific computing).
Just as in the old days people started using FORTRAN, MATLAB, and Lisp as the
most advanced tools of their time, we are starting to use Julia now.

~~~
celrod
Those benchmarks draw a lot of hate, because anyone coming from language X
will get offended at how unoptimized the code in language X is. That's also
true when X == Julia. For example, they never turn off bounds checks, which
prevents vectorization.

The point is mostly to (a) show the difference between languages that
compile to efficient assembly and those that don't, and (b) represent the
code someone new to a language might bang out to get something done (while
avoiding performance pitfalls), and thereby avoid the benchmark game.

That said, I agree. I try (poorly) not to advertise speed, because people
coming from languages like R will almost inevitably write type-unstable code
that is slow, observe JIT compilations that make Julia a little laggy, and
then come away disappointed.

Things like the type system and meta-programming shown off in your examples
are amazing, and also not something you can reproduce in other languages by
adding binary dependencies.

~~~
igouy
> … and avoid the benchmark game.

?

~~~
celrod
Escalating benchmarks that pile on optimizations to continuously one-up one
another. Taken to its "logical conclusion", left unchecked, the Fibonacci
benchmark for example (d)evolves into a lookup table, which misses the
original point. Hence the desire to just leave everything unoptimized.

~~~
igouy
Unfortunately, the arbitrary "leave everything un-optimized" rule also misses
the point, because in practice we don't leave code unoptimized.

> (d)evolves into a lookup table

We can make the arbitrary decision not to accept that, and instead try to use
our best judgement on what optimizations to accept.

"One can, with sufficient effort, essentially write C code in Haskell using
various unsafe primitives. We would argue that this is not true to the spirit
and goals of Haskell, and we have attempted in this paper to remain within the
space of "reasonably idiomatic" Haskell. However, we have made abundant use of
strictness annotations, explicit strictness, and unboxed vectors. We have,
more controversially perhaps, used unsafe array subscripting in places. Are
our choices reasonable?"

[http://www.leafpetersen.com/leaf/publications/ifl2013/haskel...](http://www.leafpetersen.com/leaf/publications/ifl2013/haskell-
gap.pdf)

------
typon
Reading through the docs, looks like they have 1-indexed arrays..?

~~~
ur-whale
Yeah, same here. Once I hit on that, it was really hard to convince myself to
read further.

~~~
goatlover
You do realize that 0-based indexing is mostly because of C's huge influence
and not because it's natural to higher level languages, particularly ones that
are designed with scientific computing in mind. Fortran existed before C.

It's also not hard to get used to. No more OB1 errors.

~~~
ur-whale
>No more OB1 errors

That makes no sense. Neither 1-based nor 0-based indexing will save you from
OB1 errors.

As a matter of fact, if you make more OB1 errors in a 0-based-indexing
language, it's probably because your brain is wired to think 1-based.

The problem: there are legions of programmers whose brains are wired to
think 0-based and who are guaranteed to suffer through a lot more OB1 errors
if they try to adopt Julia.

0-based indexing is not because of C's influence. It's because that's how
computers work. Assembly is 0-based. The first memory cell on a computer
doesn't start at address 1.

~~~
goatlover
Programming languages are an abstraction from the hardware. There's a reason
we stopped using assembly for most programming, and C is considered by many to
not be a good high-level general purpose language. Anyway, 1-based is not
exactly a new thing in the domain Julia is targeting.

------
Tarrosion
Since this is currently on top of HN and not everyone knows of Julia, a few
thoughts on Julia from a very-contented-but-hopefully-rational user:

\- Julia is by far my favorite language. (I've also written significant code
in Java, C++, and Matlab and small projects in Python, Mathematica, R.)

\- Julia is my favorite because it is super expressive but also fast. You
don't have to make (big) compromises. There's a great blog post called "Why We
Created Julia" with the punchline "we are greedy." [1] Six and a half years
later, it holds up well.

\- In Julia, nothing hurts. There are so many little quality of life
improvements that add up to more than just quality of life. Some are small,
like multiple assignment (x, y = lst[1], lst[2]). Others are more conceptual,
like well-supported first-class functions (that are also fast). Another
example: you're not forced to write code in an arcane style or with special
libraries to get speed. Your normal for-loop or vectorized code or functional
code will all compile to something efficient.

\- Because Julia is fast and expressive and extensible, in Julia everything
can be Julia and not a mashup of other languages. I've been doing some work in
Python recently, and it's painful to have Python lists, numpy arrays, Pandas
series, and so forth. Converting between types isn't that hard, but it's real
mental (and textual) overhead which just doesn't have to be dealt with in
Julia.

\- Yes, Julia has 1-based indexing by default. There are packages for custom
array indices (including 0-based, symmetric around 0, pick your favorite)
which are, surprise, super performant and easy to use. It seems
uncontroversial to me that for some cases 1-based indexing is a more natural
mental model and for some cases 0-based is more natural. When it matters a
lot, you can pick your indexing. When it doesn't matter much, which is most of
the time, it doesn't matter. Julia catches a shocking amount of flak for
this...if the worst thing about a language is that it sometimes makes you add
or subtract 1, you must really like that language :)

\- The Julia package ecosystem is young and evolving. It has some standouts
such as DiffEq (differential equations) and JuMP (optimization modeling
language) which are, to my knowledge, best-in-class in any language. I'd say
the modal experience is more like DataFrames: already super functional and
productive, not yet as full-featured as the <popular language>-equivalent, and
slowly evolving towards something better than the popular language equivalent.
E.g. DataFrames is just a wrapper around Julia vectors, which makes it much
lighter weight / easier to understand / easier to interop with than Pandas.

\- There are some growing pains around a young-ish language which, until
today, hadn't reached its first stable version. Presumably those will taper
off now that we're at 1.0, but it'd be a lie to say there aren't any.

\- My first open source contributions, modest as they are, are all in Julia.
Pre-Julia I never knew how to get started, but Julia makes it easy to
transition between user and developer.

[1] [https://julialang.org/blog/2012/02/why-we-created-
julia](https://julialang.org/blog/2012/02/why-we-created-julia)
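The quality-of-life points above (multiple assignment, fast first-class functions, plain loops that compile efficiently) can be sketched in a toy snippet; nothing here is specific to any package:

```julia
lst = [10, 20, 30]

x, y = lst[1], lst[2]          # multiple assignment
println(x + y)                 # prints 30

square = z -> z^2              # first-class (and fast) anonymous function
println(sum(square, lst))      # prints 1400

# A plain for-loop needs no special style or library to be fast:
function sumsq(xs)
    total = 0
    for v in xs
        total += v^2
    end
    return total
end
println(sumsq(lst))            # prints 1400
```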

~~~
lewis500
> Because Julia is fast and expressive and extensible, in Julia everything can
> be Julia and not a mashup of other languages. I've been doing some work in
> Python recently, and it's painful to have Python lists, numpy arrays, Pandas
> series, and so forth.

This is exactly what I like about Julia. Even though getting started in Julia
was tough (esp the older versions), once you get off the ground Python starts
to seem like a very hard language, in that Python requires you to use these
special libraries to run fast code. In Julia, the most obvious way to do
something (e.g., for loops) is perfectly fine.

The main thing Julia lacks, for me, is an equivalent to Pandas. The DataFrames
library lacks many very useful features of Pandas. But I am sure someone will
tackle that.

------
conjectures
Julia is a great language and was really useful in my PhD.

The #1 requirement I have is the ability to make binaries for some program.
You can compile a C program and get a binary. There's no practical equivalent
for Julia at the moment and I think this limits its production potential.

~~~
ChrisRackauckas
Well...
[https://www.youtube.com/watch?v=kSp6d3qSb3I](https://www.youtube.com/watch?v=kSp6d3qSb3I)

~~~
conjectures
Haha, great! The aim the speaker announces is indeed what I wanted.

Haven't had a chance yet to see how nicely PackageCompiler.jl works, but am
hopeful :)

------
vectorEQ
Say you have an array of items in memory: [][][][][][]. Your variable name
is effectively a marker for the base offset into memory. Logical; that's how
the computer works.

Then in the computer (assembly), you request an element by computing, for
example, base_offset + (n * elem_len). That makes it logical to use 0 as the
first offset, because then 'n * elem_len' is a uniform expression for
stepping through the array elements.

Thus, for a computer and how it functions, 0-based arrays make more sense,
and anything else is just an overlay on how the computer works, there to
make things more human-readable.

The computer doesn't care what the first index is; it will always compute
base + n * elem_len to find the actual location in memory.

If people have been debating for decades which is better, it's just another
example of people not distinguishing opinion from objective truth: neither
is better. Your compiler is taking care of business, really it is, and it's
no longer like it was in 1999; compilers have been patched many times!
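The addressing arithmetic described above can actually be seen from within Julia itself. This is a sketch using the unsafe pointer API on a densely packed Int64 array (illustrative only; normal code should never do this):

```julia
A = [10, 20, 30, 40]
elem_len = sizeof(eltype(A))          # 8 bytes per Int64
p = pointer(A)                        # base address of the array data

# The element at 0-based offset n lives at base + n * elem_len.
# GC.@preserve keeps A alive while we poke at its raw memory.
x = GC.@preserve A unsafe_load(p + 2 * elem_len)
println(x)                            # prints 30 (A[3] in 1-based terms)
```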

~~~
mikec3010
> for a computer and how it functions

And what about for a human and how it functions? Are humans here to make
computers' lives easier or vice versa?

Humans think from 1...N inclusive and this is the source of a litany of bugs
when users first learn a language.

And what you described in asm is just one implementation. In fact, the array
documentation says Julia doesn't guarantee tight packing, so it doesn't even
apply: the next element could be anywhere, and in fact won't be adjacent in
the case of heterogeneous arrays. Asm isn't straightforward either; for
instance, zeroing a register isn't done with the mov instruction but with
xor.

~~~
ur-whale
>Humans think from 1...N

I think you're assuming everyone thinks the way you do. I assure you that's
not the case. There are legions of people (and not just programmers, math
folks do it too) who don't "think from 1 to N"

~~~
mikec3010
True, after they learn a programming language. I meant when we're children and
taught to count, we don't start at 0.

~~~
ur-whale
Again assuming: that may be true in the place where _you_ were educated.

~~~
orhmeh09
If you look at ranks for worldwide sporting events, you'll note that
approximately none of them indicates first (zeroth?) place with a 0. Can you
give an example of a place where education at the early-childhood level is
carried out as you claim?

~~~
ur-whale
>ranks for worldwide sporting events

You're right ... sporting events ... how could I have not included that very
scholarly pursuit of engineers, scientists and programmers in my analysis.

I'm not trying to claim that 0-based is better than 1-based. I'm just trying
to point out that outside of the fairly limited crowd who spend their workday
in things like Matlab and R, the _vast_ majority of coders in the world in
2018 are working in 0-based indices languages.

If Julia is a worthy language which aims to attract a crowd beyond the niche
R/Matlab folks, then choosing 1-based indices is poor tactics.

~~~
orhmeh09
Since you mentioned children learning to count, I tried to find a general,
widely known case that would be as applicable around the world, and I provided
a study examining why 0 can be a difficult concept (which humankind developed
very recently). The number of toddlers who are engineers in the sense I think
you’re talking about is approximately zero, too.

Furthermore, the widespread mathematical/scientific computing languages have
used 1-based indexing from FORTRAN through Matlab and Mathematica.
Statistical papers are published with accompanying R code, very rarely with
Python. If 1-based indexing is too hard to get used to, you may not be in
the target audience. Anecdotally, I used C and Python well before I started
R, and I'm not really the smartest bulb in the box. I was annoyed for about
a week. If you know what you will use Julia for, this hurdle seems very
minor in my opinion.

------
montalbano
Some discussion from yesterday's (slightly premature) HN post:

[https://news.ycombinator.com/item?id=17719489](https://news.ycombinator.com/item?id=17719489)

------
boardgoat
For R users who have started using Julia: how does the data science stack
compare to the tidyverse? The advertised speed of Julia seems great, so I
want to ask someone who's made the switch how life is on the other side.

~~~
andrestan
I think if you compare R's tidyverse to Julia, you aren't at all in Julia's
wheelhouse. Julia's value comes primarily from algorithm development, not
from interacting with data.frame-type objects. If you really want to speed
that up, use data.table in R. It'll provide you with the _fastest_
data.frame implementation around.

------
fiatjaf
How to do stuff with Julia:
[https://rosetta.alhur.es/compare/Python/Julia/#](https://rosetta.alhur.es/compare/Python/Julia/#)

------
gjm11
I wanted to give this a try with Jupyter (having played a bit with earlier
versions of Julia that way) but haven't had much success.

It seems that the first step in getting Jupyter to know about a new version of
Julia is to do Pkg.add("IJulia") in Julia. Except that that doesn't work; it
seems that now you're supposed to use some special pkg mode in the Julia REPL.

So, I hit ] to enter pkg mode and type "add IJulia", which seems to be the
appropriate thing. It churns a bit, tries to build something called "Conda"
(which is apparently the dependency-management bit of Anaconda, the Python
distribution thing), and gives me an error message that starts like this:
"ERROR: LoadError: ArgumentError: isdefined: too few arguments (expected 2)"
followed by a stack trace whose _first and last_ entries are "top-level scope
at none:0", which doesn't exactly help to nail down where the problem is.

Related operations like "build Conda" and "build IJulia" give similarly
unhelpful error messages (some of them enjoining me to do things like
Pkg.build("Conda") that so far as I can tell don't actually work at all).

Do I just need to wait for release 1.0.1, or is it likely that I've done (or
left undone) some unfortunate thing, that I could fix and make everything
work?

~~~
simonbyrne
No, just need to wait for IJulia to be updated. I imagine it will be a day or
two.

~~~
gjm11
Fair enough (though it looked as if at least some of the trouble was not
inside IJulia).

I thought it might be interesting to try the Juno IDE, but met with a similar
lack of success: first of all it told me I needed to do Pkg.add("Atom"); when
I had done (not that but) the approximately equivalent ]add Atom, starting
Juno yielded only a cascade of error messages (no method matching
eval(::Module, ::Expr); failed to precompile Media; failed to precompile Juno;
failed to precompile Atom).

Presumably, again, the answer is to wait a little for things to settle down.
It feels as if it might have been better to get all the ducks in a row
_before_ declaring version 1.0, though...

~~~
simonbyrne
Actually, you can try 0.7: it is more or less equivalent, but is backwards
compatible with 0.6 (it warns when incompatible constructs are used).

Of course, more time would have been nice, but you can always find a reason
for a delay. At some point you have to rip off the bandaid.

------
JorgeGT
Ask HN: as a researcher using MATLAB daily, is there a Julia IDE that offers a
similar experience?

~~~
ChrisRackauckas
I like Juno for that: [http://junolab.org/](http://junolab.org/)

It will take a while for it to fully utilize v1.0. Remember, it just came out
so packages will need to catch up. The debugger for example needs to be
updated, along with the plotting packages and such. But once everyone updates
it should give a very similar experience.

------
cmroanirgo
Please forgive my ignorance, but does it compile to a binary and can it be
targeted to a different platform than the current one? eg. Can I make for OSX,
Windows, Android, etc?

The best documentation I can find is this article, but it seems a bit spartan
on how-to: [https://medium.com/@sdanisch/compiling-julia-binaries-
ddd6d4...](https://medium.com/@sdanisch/compiling-julia-binaries-ddd6d4e0caf4)

------
askaboutit
I don’t do any scientific or numerical programming. So, as someone
interested in web development, backend programming, serverless, and
high-performing code that's no more strenuous than Ruby or Python: is Julia
a good fit? I tried Crystal. It’s nice, but the type system definitely makes
it much harder to achieve things than in Ruby.

~~~
jabl
In principle Julia is a general purpose language, although so far the
ecosystem is heavily biased towards scientific computing.

So while you might find the language itself pleasant, you'll probably find a
lack of libraries.

(personally I'm a fan of strong static type systems (e.g. haskell), and I
think it is a shame the designers didn't go down that path. But hey, they did
all the work and not me, so who am I to complain. Kudos to the 1.0 milestone!)

~~~
dnautics
there's a reason why julia doesn't have a strong static type system.

In julia, you can write a custom type, and immediately have access to all of
the builtin libraries.

For example, I wrote a drop-in replacement for floating point, and
immediately had complex numbers, matrix math, Gaussian elimination, Fourier
transforms, etc., and could rapidly compare its numerical performance with
fp.

For a more exotic example, I wrote a Galois field GF(256) type and was
immediately able to do Reed-Solomon encoding and decoding using the matrix
multiplication and matrix-solving libraries.

For an even more exotic example, I wrote a "lexical GF(256)" type (basically
passing strings) and had the system generate optimized C code for
Reed-Solomon decoding, using the builtin Julia matrix solver, without having
to manually do matrix solving for every possible (n,k) Reed-Solomon system.

It was also relatively easy to write a Verilog generator (~3 days of work):
you could pass bit-arrays representing wires, run your unit and property
tests on the binary representation, then redispatch the function passing
lexical types and get Verilog code out the other end, then transpile the
Verilog to C using Verilator, dynamically load it into the Julia runtime,
and run the unit and property tests again on the transpiled Verilog.

I'm sure it's possible to do this in haskell, but I imagine it would be
harder.
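To make the pattern concrete, here is a toy illustration (not the commenter's actual code): a custom `Number` type that generic library code accepts immediately, thanks to multiple dispatch. The `Dual` type and its methods are invented for this sketch:

```julia
import Base: +, *, zero

struct Dual <: Number        # minimal forward-mode "dual number"
    val::Float64             # function value
    der::Float64             # derivative value
end

+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.der + a.der * b.val)
zero(::Type{Dual}) = Dual(0.0, 0.0)

# Generic code written for plain numbers now "just works":
f(x) = x * x + x                       # an ordinary Julia function
d = f(Dual(3.0, 1.0))                  # evaluates f and f' at 3.0
println((d.val, d.der))                # prints (12.0, 7.0)
```

The same mechanism is what lets a replacement float type flow through matrix math and FFT code unchanged.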

~~~
jabl
I don't see why what you've done couldn't in principle be done in a language
with a static type system (except maybe with some difficulty the thing with
verilog, which apparently needs access to the compiler at runtime?)

To get good performance with a static type system, you need generics. Which
means longer compile times, as the libraries you use must be compiled as part
of your project so that the code specialized for the types you're calling the
library with is generated. You can't have piles of native code lying around
just waiting to be used (like in the C world, for example).

But this is similar to how Julia does it at runtime, with the multi-methods
being JIT-compiled for the types you're calling them with. So either way you
pay the cost somehow (and yes, there are ways to reduce that overhead, of
course).

~~~
dnautics
the primary use case for julia is where you expect to be doing
computationally-intensive work over and over again (e.g. in a supercomputer
deploy where I was using 20 HPC nodes over the course of 4 days), so the
amortization of the (short) recompile time is worth it, and the ease of
writing software using the julia generics is totally worth it. Also see the
Celeste.jl project where they were able to very easily optimize for a super
inconvenient platform (xeon phi).

If you're expecting a high-performance jit that needs to be called all the
time because your program gets unloaded from memory and needs to be rebuilt by
a kubernetes server every fifth web request, don't use julia.

------
eghad
Glad they've finally gotten to 1.0, been using it on and off since .3 with the
same optimism I had when Rails jumped onto the scene. I really hope the
community can start to mature beyond the bikeshedding that is typical of early
language days, and start cracking more significant issues necessary for wider
acceptance. Speed and multiple dispatch are great, but the ability to roll
standalone binaries (just don't mention it on Discourse/Gitter or risk
Karpinski's wrath), streamlining documentation/onboarding, and more
native/active package development are pretty big todos to tackle.

I've been working on updating a wide variety of engineering software
apis/wrappers and a large informatics library for .7/1.0, but I'd love to hear
any suggestions of must-need technical packages that users want from MATLAB,
Python, or R.

------
continuational
Apart from the library ecosystem, what attracts you to Julia?

~~~
enitihas
Raw speed. Although Julia has several nice features (multiple dynamic
dispatch, macros), the raw speed obtained by annotating code with types is
mind-blowing. You can for the most part write like Python, then annotate the
slowest parts with types. It works really well.

~~~
stabbles
You don't even need to annotate things with types!

    
    
        > square(x) = x * x
    
        > @code_typed square(2.0)
        CodeInfo(
        1 1 ─ %1 = Base.mul_float(%%x, %%x)::Float64
        └──        return %1
        ) => Float64
    
        > @code_typed square(2)
        CodeInfo(
        1 1 ─ %1 = Base.mul_int(%%x, %%x)::Int64
        └──        return %1
        ) => Int64
    

Functions specialize automatically to the arguments you pass in.

~~~
ChrisFoster
Or, to see what's _really_ going on at a low level you have immediate access
to the native assembly from the REPL:

    
    
        julia> square(x) = x^2 
        square (generic function with 1 method) 
     
        julia> @code_native square(1) 
            .text 
        Filename: REPL[3] 
            pushq       %rbp 
            movq        %rsp, %rbp 
        Source line: 1 
            imulq       %rdi, %rdi 
            movq        %rdi, %rax 
            popq        %rbp 
            retq 
     
        julia> @code_native square(1.0) 
            .text 
        Filename: REPL[3] 
            pushq       %rbp 
            movq        %rsp, %rbp 
        Source line: 1 
            mulsd       %xmm0, %xmm0 
            popq        %rbp 
            retq

------
bla2
"Hm I should check it out, are there docs?"
[https://docs.julialang.org/en/stable/](https://docs.julialang.org/en/stable/)
Julia 0.7 Documentation

Sounds like 1.0 hasn't propagated everywhere yet :-)

~~~
KenoFischer
The documentation should be equivalent, modulo a couple of small additions
made in the past few days.

------
ontouchstart
I made a docker image compiled from the github master. Feel free to explore

[https://gist.github.com/ontouchstart/5222903bf4a2d040e6eee6a...](https://gist.github.com/ontouchstart/5222903bf4a2d040e6eee6a18da08479)

------
ksaitor
> We want the speed of C with the dynamism of Ruby. We want a language that’s
> homoiconic, with true macros like Lisp, but with obvious, familiar
> mathematical notation like Matlab. We want something as usable for general
> programming as Python, as easy for statistics as R, as natural for string
> processing as Perl, as powerful for linear algebra as Matlab, as good at
> gluing programs together as the shell.

… they forgot to compare Julia to JavaScript — the most popular programming
language in the world.

Or is that because authors don't want Julia to be used by anyone?

¯\\_(ツ)_/¯

[I'm joking, ofc. But I still find it weird how almost every modern language
is mentioned, but JavaScript…]

~~~
FridgeSeal
JavaScript, whilst widely used, is not a language I would want to compare my
own against when it comes to talking about quality and features, unless I want
to talk about how actual thought went into my language.

------
wuschel
Does Julia have _Tail Call Optimization_ for recursion?

I know it is not necessary; I was just curious, as the language seems to
have a lot of metaprogramming options, with femtolisp being part of the
compiler.

~~~
KenoFischer
No, but it'd be fairly easy to activate, since LLVM supports it in code
generation. The primary reason we don't is that it messes with stack traces
too much.

~~~
wuschel
Thanks for your answer!

Is TCO on the roadmap, or is the "_mess it makes with stack traces_" too
much?

------
meanmrmustard92
Are there any updates to Juno forthcoming that don't butcher the existing
install of Atom on the system? I use Atom with Hydrogen for Python and R, and
was curious about julia and made the mistake of opting for the 'batteries
included install', which promptly overwrote all my atom settings. Seems a
bizarre oversight.

------
fiatjaf
Module Linker, my browser extension for browsing code on GitHub, has support
for Julia: [https://module-linker.alhur.es/#/julia](https://module-
linker.alhur.es/#/julia)

------
fernly
Possibly of interest:

[https://github.com/chrisvoncsefalvay/learn-julia-the-hard-
wa...](https://github.com/chrisvoncsefalvay/learn-julia-the-hard-way)

------
xvilka
Jupyter and the abundance of ML frameworks are an important part of Python's
success. Hopefully Julia will grow something similar without relying on
Python code.

------
happy-go-lucky
I want to like the language, but once I land on their home page, I don't seem
to know where I can play with the language.

~~~
ranjanan
You can try Julia out in the cloud via a Jupyter notebook here:
[https://juliabox.com](https://juliabox.com)

------
sgt101
juno doesn't seem to work on windows with 1.0

------
ivarru
As someone who should be in the target audience for Julia, I am sad to see
that they have not switched to 0-based arrays before launching version 1.0. I
consider this an indication of a broken design process and thus a reason to
stay away from the language. (The fact that ranges include both endpoints,
rather than being half-open as in Python, is another such indication.)

~~~
marvy
> the fact that ranges include both endpoints - rather than being half-open as
> in Python - is another such indication

No, it's the same indication, twice. Once you've decided on 1-based, you
almost definitely want to include both endpoints. Else you end up saying
things like a[1:n+1] to indicate "the whole array please", which is annoying.
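The point above can be checked in a couple of lines; this is just an illustrative sketch of Julia's inclusive 1-based ranges:

```julia
n = 5
a = collect(1:n)             # [1, 2, 3, 4, 5]

# With inclusive endpoints, "the whole array" is simply a[1:n]:
println(a[1:n] == a)         # prints true
println(length(1:n))         # prints 5; both endpoints are included

# Under half-open semantics with 1-based indexing, the same slice
# would have to be written a[1:n+1], the awkwardness described above.
```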

