
Why we created Julia - new language for fresh approach to technical computing - jemeshsu
http://julialang.org/blog/2012/02/why-we-created-julia/
======
mjw
I wonder what they think about or have learned from
<http://en.wikipedia.org/wiki/Fortress_(programming_language)> , another
recent-ish attempt to deliver a modern and powerful scientific programming
language.

Personally I'm a little wary of being ghettoised into something overly domain-
specific for scientific/numerical computing. Really good interop may mitigate
that -- something which can navigate the unholy mix of C, C++, Fortran,
MATLAB, Octave, R and Python routines one comes across trying to reproduce
others' research work would indeed be awesome.

I do wonder if some of the noble demands of this project might be better
delegated to library developers though, after adding a bare minimum of syntax
and feature support to a powerful general-purpose language. For now
Python+numpy+scipy seems a great 90% solution here.

~~~
fijal
Python+numpy+scipy only works if you don't actually write much Python, or at
least don't run much of it. Otherwise it sucks.

~~~
ViralBShah
I wouldn't say that it sucks - that's way too strong. I am one of the authors
of Circuitscape (<http://www.circuitscape.org>), which uses
python+numpy+scipy. The community loves it: the fact that it's Python, open
source, embeddable in other tools, etc.

However, much of the code is written in a vectorized style for performance
reasons, as is the case with many high level scientific computing languages.
This leads to unnatural code in some cases, and also uses too much memory.
At first I thought IronPython was the way, but I have been eagerly looking
forward to pypy+numpy+scipy.

If I were to use julia, the code would be a lot more natural, because type
inference and all the compiler goodies make it possible to simply write loops
over arrays when I need them. This was one of the reasons we started working
on julia: everything else seemed to fall just a little bit short.

~~~
andreasvc
What do you think of cython+python+numpy? I've used it successfully to do NLP
work.

------
wbhart
Much praise!! These guys have incredibly good taste. Almost every single thing
I can think of that I want in a programming language, they have it. All in the
one language!

The fact that it has parametric types, parametric polymorphism, macros,
performance almost as good as C, good C/Fortran interop, 64 bit integers and
an interactive REPL all in the one language just blows my mind.

I wasn't able to tell if it is possible to overload operators, which is
another thing essential to mathematical code.

I was also unsure why the keyword end was needed at the end of code blocks. It
seems that indentation could take care of that.

I also didn't see bignums as a default type (though you can use an external
library to get them).

However, all in all, I think this is the first 21st Century language and find
it very exciting!

~~~
ludwigvan
I think they are aiming for a smooth transition for MATLAB users. They also
have 1-based array indexing as opposed to 0-based.

~~~
wbhart
It would certainly be nice if there was an option to use 0 based indices in
blocks of code. It's understandable in that they are pitching at the technical
community, and many mathematical papers and books are written with 1 based
indices. But I am a mathematician who prefers 0 based indices.

~~~
ViralBShah
Actually, it is quite easy to implement 0 based indices or any other indexing
scheme in julia, since all the array indexing code is implemented in julia
itself.

[https://github.com/JuliaLang/julia/blob/master/j/array.j#L15...](https://github.com/JuliaLang/julia/blob/master/j/array.j#L157)

I personally would find multiple indexing schemes confusing both for usage as
well as to develop and maintain. Given that 1 based indexing seems to be a
popular choice among many similar languages, we just went ahead with that.

~~~
chancho
Have you looked at the Haskell Ix class?
<http://www.haskell.org/onlinereport/ix.html>

It generalizes the choice of 0 or 1 to an arbitrary starting index. So when
you create an array you specify not just where it ends but also where it
begins. This lets you do neat things (consider a filter kernel with range
[-s,+s]^n instead of [1,2s]^n) and the extra complexity it adds can be hidden
when not needed using for-statements or higher order functions.

Nobody uses it because the implementation is not very efficient and Haskellers
have a chip on their shoulder about performance. It subtracts the origin and
computes strides on every index, but you could easily avoid this by storing
the subtracted base pointer and strides with the array. Of course when you go
to implement it you'll see the light on 0-based indexing :)
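The trick of folding the origin into the stored representation can be sketched
in Python; `OffsetArray` is a hypothetical name, and in an interpreted sketch
the point is the abstraction rather than the saved subtraction:

```python
class OffsetArray:
    """1-D array indexed over an arbitrary range [lo, hi], e.g. [-s, +s]."""

    def __init__(self, lo, hi, fill=0):
        self.lo, self.hi = lo, hi
        self._data = [fill] * (hi - lo + 1)
        self._offset = -lo  # origin correction, computed once at construction

    def _check(self, i):
        if not (self.lo <= i <= self.hi):
            raise IndexError(i)

    def __getitem__(self, i):
        self._check(i)
        return self._data[i + self._offset]

    def __setitem__(self, i, v):
        self._check(i)
        self._data[i + self._offset] = v

# A filter kernel indexed over [-s, +s] instead of [1, 2s+1]:
s = 2
kernel = OffsetArray(-s, s)
for i in range(-s, s + 1):
    kernel[i] = abs(i)
```

The same wrapper shape works for any base index, which is why exposing the
choice to library code (as the Haskell Ix class does) is cheap in principle.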

------
jfager
Just spent the last hour reading through the docs and playing around with it.
This is some damn sexy stuff. It's been a long time since I've been this
excited by a language.

------
mkl
The language looks interesting, but I am a bit concerned about the license
situation, as in my understanding they have misinterpreted the GPL with regard
to shared libraries. Quoting:

 _Various libraries used by the Julia environment include their own licenses
such as the GPL, LGPL, and BSD (therefore the environment, which consists of
the language, user interfaces, and libraries, is under the GPL). Core
functionality is included in a shared library, so users can easily and legally
combine Julia with their own C/Fortran code or proprietary third-party
libraries._

The FSF disagrees (<http://www.gnu.org/licenses/gpl-
faq.html#NFUseGPLPlugins>):

 _If the program dynamically links plug-ins, and they make function calls to
each other and share data structures, we believe they form a single program,
which must be treated as an extension of both the main program and the plug-
ins. In order to use the GPL-covered plug-ins, the main program must be
released under the GPL or a GPL-compatible free software license, and that the
terms of the GPL must be followed when the main program is distributed for use
with these plug-ins._

~~~
_delirium
I believe what they're saying is that there are two things:

1\. The Julia core, which consists of the language runtime and core
functionality, is MIT licensed, and builds into an MIT-licensed shared
library.

2\. The Julia "environment", which includes a user interface, third-party
libraries, etc., some of which are GPL, and which is therefore GPL as a whole.

I believe they're saying that you can link #1 with proprietary code, not
meaning to imply that you can link #2 with proprietary code (because as you
point out that wouldn't work). How useful that is probably depends on how many
of the libraries the average application needs are in bucket #1.

~~~
StefanKarpinski
That is exactly right. At the moment I think the only GPL library Julia uses
is readline — but obviously that's pretty important for the repl, so we chose
to do it this way.

~~~
sparky
The last REPL I wrote used linenoise ( <https://github.com/antirez/linenoise>
) instead of readline, and it worked great for us. You may need more of
readline's features than we did, but it's worth a look.

------
almost
Awesome stuff: <http://julialang.org/manual/metaprogramming/>

~~~
samth
It's great that metaprogramming support seems to be a requirement for new
languages these days.

Unfortunately, this one combines the inconvenience of Template Haskell
(explicit invocation of macros with @) and the bugs of Common Lisp (the
section on hygiene says, basically, "we have gensym"). Fortunately, the
language is young, and I hope they can improve this story.

~~~
tomp
What else would you suggest?

I find that the @ syntax is amazing, since the reader (programmer) knows
exactly which form is a macro (and can look it up), as opposed to introducing
arbitrary, often very confusing syntax.

~~~
StefanKarpinski
Thanks. We've considered doing other things — like function call syntax for
macros (which is essentially what Lisp/Scheme have), but decided against it.
It just causes confusion. Can you pass a macro around like you can a function?
Can functions shadow macros and vice versa? What happens if a macro introduces
a local variable that shadows the macro itself? Basically it comes down to the
fact that macros are syntactic and as such behave very differently from
functions, which are not syntactic. With the @ syntax, there's no confusion.

~~~
tomp
Just a comment/suggestion: could someone with GitHub write access go over your
wiki and fix code samples where multiple lines have been concatenated into a
single line? E.g. [https://github.com/JuliaLang/julia/wiki/Types-and-their-
repr...](https://github.com/JuliaLang/julia/wiki/Types-and-their-
representations), heading `Built-in types`.

~~~
StefanKarpinski
We've actually switched the documentation over to the main website:
<http://julialang.org/manual/>. The website is hosted on GitHub using Jekyll:
<https://github.com/JuliaLang/julialang.github.com>. As a side-effect, you can
edit the docs and create pull requests, which you can't do on GitHub wikis.

------
spitfire
It's not yet another language that we need. It's more high-level functions.

C or Fortran can be as fast as they want, but in Mathematica I can do
MorphologicalComponents[] and get the components. Having these functions
available to me speeds up my time to discovery by 1000x or more.
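For readers unfamiliar with it, MorphologicalComponents labels the connected
components of an image in a single call. A rough pure-Python sketch of the
same operation (not Mathematica's algorithm, just the idea) shows how much
machinery one such high-level function hides:

```python
from collections import deque

def morphological_components(grid):
    """Label 4-connected components of nonzero cells in a 2-D grid."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1                 # found a new component
                q = deque([(r, c)])
                labels[r][c] = current
                while q:                     # flood-fill it via BFS
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels

img = [[1, 1, 0],
       [0, 0, 0],
       [0, 1, 1]]
labels = morphological_components(img)  # two separate components
```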

------
drats
Looks very much like Python without the legacy and with a touch of Ruby:
IPython as the default shell, better multiprocessing, better OS/system access,
decorators, and mathematics. Nice clear documentation too.

Given how similar to Python they are, their main competitor is PyPy, which I
suspect isn't too far behind them in performance (absent from the benchmarks,
though). I was fully expecting someone to make a non-compatible
fork of PyPy to create a faster Python-like lang/something like Julia. They
should quickly get some elegant small libraries like sinatra/bottle and
requests (access to dbs with http interfaces as well). A robust requests+lxml
library built-in would be simply amazing.

~~~
DasIch
PyPy provides an entire framework for people to write interpreters in. If you
were to write one you wouldn't fork PyPy, you would use it.

~~~
drats
Well, you are being a bit nitpicky, as PyPy is both the Python implementation
and the JIT/RPython/framework bit. What I meant was someone holding very close
to the Python implementation in almost everything (syntax etc.) but
streamlining it/breaking compatibility/giving it a new name ("python3-as-it-
should-have-been" perhaps?).

Further, if we are going to be nitpicky, when you use something like that for
its intended purpose it's still a fork - I forked Twitter bootstrap for my new
project/I used Twitter bootstrap for my new project - so "you wouldn't fork
PyPy" is a false statement.

~~~
apl

      >  if we are going to be nitpicky, when you use something
      > like that for its intended purpose it's still a fork
    

No. Fork is definitely not semantically congruent with use. You're only
forking Bootstrap if you create a path that diverges from its mainline; don't
get confused by GitHub parlance.

~~~
drats
PyPy most commonly refers to the Python implementation anyway, as can easily
be seen by going to the PyPy website[1]. DasIch abused the fact that
"pypy" points to two things to make a meaningless nitpick. Now you are
claiming that I have claimed that fork and use are semantically congruent,
where do I claim that? I think you are nitpicking and parsing "it" wrong. In
either case a question asking for clarification would be more polite than
going on attack.

There seems to be a disease, let's call it Eric Raymond Disease (ERD), on HN
in particular[2] of people busting in rudely to declare with supreme
confidence on the usage of quite generic words and/or slang. Let's start by
tabling the fact that a particular "software definition" of "fork" has not yet
reached the OED, or online dictionaries like Merriam-Webster for that matter.
Going from that I can't see how you can declare it to only have one definition
when _you already admit there is a competing one_ ("GitHub parlance"). The
base concept of the word fork is a divergence, a branch, which seems to me to
cover any copy+modify move for software so someone copy+modifying bootstrap is
forking it, someone copy+modifying the _implementation of python called pypy_
is forking _it_.

Are you and DasIch really contributing to HN, or are you taking a rather banal
comment and turning it into a 100% useless thread by bickering over semantics?

[1]<http://pypy.org/> [2]<http://news.ycombinator.com/item?id=3467035>

------
blacksqr
Pardon the possible naïveté, but I'm so old I remember Ada. Seems that
language was designed to address a very similar problem space. What does Julia
offer that Ada doesn't? Or perhaps: why is Ada so deficient that spending 2+
years inventing a new language was a better proposition than taking the time
to improve Ada?

~~~
SomeCallMeTim
While there's a resurgence of interest in Ada recently, the general sense is
that it's WAY overdesigned and heavy.

I came across this bit of humor a while back that communicates the general
feeling well:

<http://bit.csc.lsu.edu/~gb/csc4101/Reading/gigo-1997-04.html>

If the problem IS that it's huge and inelegant, then the solution isn't to try
to improve it, but to start from a clean slate. Julia looks like a reasonable
attempt.

~~~
blacksqr
Funny, I remember reading similar conspiracy claims about C.

------
MrMan
I have made the same rants about numeric computing language issues as the
Julia creators. I would avidly adopt Julia based on the documentation on the
website. It seems to me that this is a beautifully conceived language given
the concerns it attempts to address. Amazing even if the performance never
improves.

The R thread on the dev list gives an accurate representation of the obstacles
to Julia being widely adopted, however. One might become very unhappy while
using R, but sometimes you have to use it, because something you want is only
in R, due to the ubiquity of that tool for stats.

If Python, which does not have that many disadvantages besides not having been
explicitly designed to appeal to users like me, cannot unseat Matlab and R,
Julia will have a difficult time.

But I will give it a try, and will implement some simple but core algorithms
that I use a lot.

Another idea to test Julia's usefulness would be to port a tool like Waffles
to Julia. In my opinion implementing such a sensible tool like Waffles in C++
is a heartbreak.

------
disgruntledphd2
I'm a little worried about some of those benchmarks. They appear to have
benchmarked R from a few years back against 2011 Matlab, for instance. In
addition, they provide no code used for the benchmarks. That being said, this
looks really interesting, and I'm gonna bookmark it for when I have time to
examine it properly (after the damn phd is finally submitted).

~~~
EricBurnett
They do link to the code:
<https://github.com/JuliaLang/julia/tree/master/test/perf> . It doesn't look
like a very reliable microbenchmark (each test is run 5 times), but it should
provide a useful starting point if you want to run your own.

~~~
disgruntledphd2
Thanks guys, I'm not sure how I missed that. The benchmarks look interesting,
but it is strange how old an R version they used. That being said, it's a
really interesting language, and I look forward to playing with it.

------
vilya
This looks like it would be a great language to tackle the Project Euler
problems with - which is usually a good sign, in my book. A big-int type would
be nice though.

------
kghose
So, the usual caveats about writing code to match the language's strengths
apply here. For example the fib function plays to a Python weakness: function
call overhead. Rewriting the code as a simple loop removes the inefficiency:

    
    
      import time
      
      def fib(n):
        if n < 1:
          return None
        if n == 1:
          return 0
        if n == 2:
          return 1
        x0 = 0
        x1 = 1
        for m in range(2,n):
          x = x0 + x1
          x0 = x1
          x1 = x
        return x
      
      if __name__=="__main__":
        assert fib(21) == 6765
        tmin = float('inf')
        for i in xrange(5):
          t = time.time()
          f = fib(20)
          t = time.time()-t
          if t < tmin: tmin = t
        print str(tmin*1000)

~~~
StefanKarpinski
That's quite true, but this micro-benchmark wasn't chosen to make Python look
bad — it was chosen to test how good each language was at function calls. So
using a loop defeats the point of the benchmark. Using double recursion to
compute Fibonacci numbers is also stupid and could obviously be avoided in all
languages.
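For reference, the doubly recursive version that the micro-benchmark measures
looks something like this in Python (the exact benchmark code is in the linked
repo; this is a sketch of the same shape):

```python
def fib(n):
    # Deliberately naive double recursion: it makes an exponential number
    # of function calls, so the benchmark measures call overhead rather
    # than arithmetic. Any real program would use the loop version instead.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

fib(20) makes over 20,000 calls to compute a single number, which is exactly
the stress on function-call machinery that the benchmark wants.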

The story the micro-benchmark tells is that Julia's pretty good at function
calls, but JavaScript is even better (and, of course, C/C++ is the gold
standard). In general the V8 engine is really amazing. We had the advantage of
being able to design the language to make the execution fast (with the
constraints of all that we wanted to be able to do with it), but V8 takes a
language that was in no way designed for performance and makes it blazingly
fast.

After I wrote the benchmark code for JavaScript and saw just how fast it was I
had a moment of "should we be doing scientific computing in JavaScript?" Now
wouldn't that be nuts?

~~~
ViralBShah
Yes, I pretty much had the same thought, but then I started thinking about
multiple dispatch, calling C/Fortran libraries, polyhedral optimizations, and
I realized that V8 developers may not have the same design targets in mind.

~~~
pron
... and the JVM (HotSpot) is even better than V8.

~~~
StefanKarpinski
I feel like V8 faces a harder problem and has solved it ingeniously. Not to
knock HotSpot, which is excellent and keeps getting better — it just seems
like making the JVM go fast is not as hard as making something like JavaScript
go fast. It's a tough comparison; kind of apples to oranges.

------
karpathy
Nice! I do a fair amount of scientific computing in MATLAB. This looks to have
a lot of the powerful array syntax / functions of MATLAB, but with the neat
feel of Python, and all kinds of extras.

------
warmfuzzykitten
When you try to install julia on MacOS X 10.7.3, you may see the make fail
because wget is not installed. Easy to fix with:

brew install wget

[Edit] The GitHub page says gfortran (and wget) are downloaded and compiled,
but if they're not already installed, make fails. So...

brew install gfortran

The need to do this separately may have to do with licensing?

[Edit] And if you're not root, install to /usr/share/julia will fail. So
you'll need to do:

sudo make install

I'm sure all this is perfectly obvious to Unix-heads who are inured to this
sort of abuse, but I'm a Mac-head, used to things that Just Work, and I hate
this shit.

~~~
StefanKarpinski
Sorry! The only thing I can say in our defense is that this is pretty trivial
compared to installing a lot of scientific computing packages. But seriously,
we'd like a drag-and-drop Mac installer. Anyone want to do that? (Only half
kidding.)

Stepping back a bit, this is one of the reasons why having an entirely web-
based experience is appealing — then you can let people use a known-good setup
without needing to mess around with installing a fairly extensive amount of
software just to get basic things to work. Then there's also the general
appeal of doing big data work a la Google docs or Gmail. The trick is getting
the user experience to be good enough on the web.

~~~
warmfuzzykitten
Please don't let perfection tomorrow keep you from meaningful improvements
today. Just better instructions for Mac installs would be helpful!

------
fuzzythinker
Slightly off topic, but V8's benchmark results really surprised me.

------
plessthanpt05
I'm pretty tied into the Python/NumPy ecosystem & 3rd-party libs; however,
after reading that (somewhat over-the-top, albeit very persuasive) post, I'm
definitely going to dig a bit deeper. Really nice first impression; I
especially like the clean syntax and the C calling - looks promising, thanks!

------
andrewcooke
this is surprisingly complete for a relatively new(?) project.

one notable restriction is that inheritance is only for interface, not
implementation.

also, can anyone find a sequence abstraction (like lists)? arrays seem to be
fixed size and i don't see anything else apart from coroutines. am i missing
something?!

[perhaps not, if it's intended for numerical work. on reflection i am moving
more and more towards generators (in python) and iterables (in java, using the
guava iterables library to construct maps, filters etc) rather than variable
length collections, so maybe this is not such a big deal. it's effectively how
clojure operates, too...]

~~~
StefanKarpinski
pron is right: you can inherit behavior from abstract types, and abstract
types, unlike interfaces, can have code written to them. (In single-dispatch
OO languages, you are in the strange situation that you can write code to
interfaces if they are arguments but not if they are the receiver of a method;
thus, you're in a situation where you can either dispatch on an interface _or_
you can write code for it, but never both at the same time.)

Arrays are not fixed size: there are push, pop, shift and unshift operations
on them just like Perl, Ruby, etc. This uses the usual allocation-doubling
approach so that the entire array doesn't need to be copied every time, but
it's still usually much better to pre-allocate the correct size. Of course for
small arrays that are typical in Perl, Ruby, etc., it hardly matters. If
you're building a vector of 1 billion floats, however, you don't want to grow
it incrementally.
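The allocation-doubling approach mentioned above can be sketched in Python
(Python's own lists grow this way internally, though with a smaller growth
factor); the `GrowableVector` class here is a toy, not Julia's implementation:

```python
class GrowableVector:
    """Toy growable vector using capacity doubling (amortized O(1) push)."""

    def __init__(self, capacity=4):
        self._buf = [None] * capacity
        self._len = 0

    def push(self, x):
        if self._len == len(self._buf):
            # Full: double the capacity. Each element is copied O(1) times
            # on average, even though a single reallocation copies everything.
            self._buf = self._buf + [None] * len(self._buf)
        self._buf[self._len] = x
        self._len += 1

    def __len__(self):
        return self._len

v = GrowableVector()
for i in range(100):
    v.push(i)
# capacity grew 4 -> 8 -> 16 -> 32 -> 64 -> 128: only 5 reallocations
```

Pre-allocating the right size up front skips all of those reallocations, which
is why it still wins for huge arrays.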

The sequence/iterable abstraction is duck-typed: an object has to implement
methods for three generic functions:

    
    
      i = start(x)
      done(x,i)
      next(x,i)
    

The state i can be anything.
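The protocol can be mimicked in Python to show how the state is threaded
through; the function names mirror Julia's here, but the `CountUpTo` type and
`collect` helper are made up for illustration:

```python
class CountUpTo:
    """A half-open integer range [0, n), iterated via the three-function
    protocol. Note: the module-level next() below shadows Python's builtin."""

    def __init__(self, n):
        self.n = n

def start(x):
    return 0            # initial iteration state

def done(x, i):
    return i >= x.n     # is the state past the end?

def next(x, i):
    return i, i + 1     # (current value, next state)

def collect(x):
    out, i = [], start(x)
    while not done(x, i):
        v, i = next(x, i)
        out.append(v)
    return out
```

Because the state is an opaque value returned by start, any object that
implements the three methods iterates, regardless of its representation.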

Lack of implementation inheritance is one of those things that intro to OO
books make a big deal of, but when you don't have it, you don't miss it at all
— or at least I don't. I've never found tacking a few fields onto the end of
an object to be very useful. I don't want to inherit memory layout — I want to
inherit behavior. Julia's type system lets you write behavior to abstract
types and inherit that for various potentially completely different underlying
implementations.
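As a rough analogue in Python (not Julia's actual mechanism), an abstract base
class with no fields can still carry behavior that every concrete
implementation inherits; the `Shape` example here is hypothetical:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract type: no fields, no memory layout, only behavior."""

    @abstractmethod
    def area(self):
        ...

    # Behavior written against the abstraction, inherited by every
    # concrete implementation regardless of its representation.
    def describe(self):
        return f"{type(self).__name__} with area {self.area():.2f}"

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.141592653589793 * self.r ** 2
```

Square and Circle share no memory layout at all, yet both inherit describe:
behavior is inherited, not fields.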

~~~
pron
I don't know if I'd call it duck-typing when methods are not grouped into
interfaces. I think you could say that in Julia, each method is an interface.
I find it much cleaner than actual "duck-typing" (the way it's done in, say,
Scala with structural types) because then you have both an interface, which
specifies a contract, as well as the method name, which also specifies a
contract when duck-typing is used, even when it's found in different
interfaces.

Must compound types in Julia be concrete? I don't think I saw it in the
documentation.

~~~
StefanKarpinski
It's been discussed, and it would be satisfying to have an `Iterable` abstract
type and ensure that every object implementing `Iterable` satisfies the
contract of having appropriate `start`, `done` and `next` methods, but that
requires two
features we don't have yet: multiple inheritance of abstract types, and some
way of specifying an interface. There's been some discussion of this, and I
believe we have the way multiple inheritance _could_ work mostly worked out;
enforcement of interface implementation, not so much. However, it's a pretty
massive undertaking to add it to the language. Quick teaser: generic functions
and final concrete types actually make multiple dispatch work a lot better
than it does with single dispatch and the "bag of named methods inside an
object" model of traditional OO.

So far we haven't actually felt a "pressing need" for multiple inheritance or
interfaces, and we tend to take a pressing-need approach to language features.
If you can live without a feature for a while, then maybe you really didn't
need it in the first place. But we'll have to see what happens when other
people start trying to use it for things.

Aren't compound types inherently concrete? The compoundness describes the
implementation of the type, implying that it must have an implementation,
hence must be concrete.

------
mjcohenw
I am currently building it - OS X Lion. First, I had to install wget. :) Then, I
got a certificate error on the https, so I pasted the command into Firefox and
got the tarball. After copying it to the julia dir, I edited the tar command
and ran it. Finally, doing make right now.

Here is the certificate error:

    Connecting to github.com (github.com)|207.97.227.239|:443... connected.
    ERROR: The certificate of `github.com' is not trusted.
    ERROR: The certificate of `github.com' hasn't got a known issuer.

~~~
ViralBShah
We used to use curl until recently so that it would build just fine on OS X.
We changed it to wget, because it just seems so much easier to use than curl,
and we had to do some nasty stuff at one point to download a few libraries
that I couldn't figure out how to do with curl.

BTW, do try out the mac binaries if the build is an issue. We are still trying
to make it all build seamlessly!

~~~
StefanKarpinski
For the record "ease of use" was not why we switched to wget — there was (for
a while) a dependency that could only be downloaded using wget's recursive
downloading. That's not the case anymore, however, so we should really switch
back to curl.

------
ludwigvan
Very exciting. The real power of MATLAB lies in its toolboxes, though; I
wonder if there will be an easy (and possibly automatic) way to convert
toolboxes.

One more thing: it would be awesome if the `manipulate` from Mathematica could
be incorporated somehow in that web interface. See:

<http://www.wolfram.com/broadcast/videos/manipulate/>

~~~
ViralBShah
MATLAB toolboxes are simply awesome for users, even though they are expensive.
We hope that if enough people find julia useful, many such libraries will be
built. Having written a ton of mex and matlab stuff, and a lot of julia
libraries, I find that julia is nicer to use personally - but of course I am
biased. :-)

I do believe that open source + a good compiler + a simple C/Fortran calling
interface will lead to others being able to write toolboxes in julia itself
and plug in libraries when needed.

------
sparky
Would be interesting to see how Julia perf compares to C++ compiled or JITed
with LLVM via G++ 4.6+Dragonegg or Clang 3.0/svn. Would be more apples-to-
apples, since both would use the same middle- and back-end. G++ 4.2.1 is a bit
obsolete at this point, but as it probably came by default on the MBP they
tested on, it's understandable.

~~~
StefanKarpinski
That would certainly be doable. If the performance is better, we can certainly
switch to using that for our benchmarks. The idea for the benchmarks is to
compare to a "gold standard" — hence the fact that the best results are taken
across all optimization levels. We could even take the best results across
multiple C compilers to give ourselves the absolute hardest comparison :-)

------
prtamil
By analogy: there are only a few original liquors (vodka, rum, whiskey,
brandy, wine, etc.), but god damn, so many cocktails, so many cocktails.

Same in programming language design. Few original concepts - Lisp, C,
Smalltalk - but so many cocktails; even my grandma is creating one. All we
want is more libraries.

------
kylebrown
I'm more concerned with visualizing the data than with compiler optimizations
or types/operators.

I see that d3.js is included in the package, but a cursory glance at the docs
and I didn't see any examples of how to generate a chart.

------
meemo
I hope this really takes off. Though, for it to really take off it needs a
large ecosystem. The fact that the source is hosted on Github is a good start.

------
ya3r
If you look at the GitHub repo, you will notice that the second most used
programming language in the repo (C being the first) is Objective-J! I wonder
why?

~~~
jfager
Because the julia source file names end in .j, probably.

~~~
StefanKarpinski
bingo.

------
bandarman
This is very exciting - congratulations on the launch of Julia. Now good luck
as you move towards building a strong community!

------
bandarman
Anyone else having compile issues? I'm on Ubuntu 11.10 (Server x64) and make
fails consistently.

~~~
ViralBShah
Is it due to openblas not building? This may help:

<https://github.com/JuliaLang/julia/issues/370>

~~~
bandarman
Thanks for your suggestion. I tried that, but it wasn't it. I'm still getting
lots of build errors, culminating with this:

Saving to: `lapack-3.4.0.tgz'

2012-02-19 01:03:26 (55.3 KB/s) - `lapack-3.4.0.tgz' saved [6127787/6127787]

    make[3]: gfortran: Command not found
    make[3]: *** [lsame.o] Error 127
    /bin/sh: ./testlsame: not found
    /bin/sh: ./testslamch: not found
    /bin/sh: ./testdlamch: not found
    /bin/sh: ./testsecond: not found
    /bin/sh: ./testdsecnd: not found
    /bin/sh: ./testieee: not found
    /bin/sh: ./testversion: not found
    make[2]: *** [lapack_install] Error 127
    make[1]: *** [lapack-3.4.0/INSTALL/dlamch.o] Error 2
    make: *** [julia-release] Error 2

~~~
olaf
Is package gfortran installed?

~~~
bandarman
@olaf: I was missing gfortran and libncurses5-dev. It's working now, sweet. :)

------
andrewcooke
a bit late to the party here, but i see no-one has mentioned gpus. since julia
supports calling c libraries i guess this will work, but i wondered if anyone
knew of any work on closer integration?

(also, julia is not a great name for googling....)

------
remmers
Is Julia homoiconic? Could someone give an example of a Julia macro?

~~~
absconditus
<http://julialang.org/manual/metaprogramming/>

------
jberryman
Their site seems to be taking a hammering. Any mirrors?

~~~
StefanKarpinski
We're GitHub hosted so hopefully this doesn't take GitHub down! (j/k)

------
jtchang
How does one open up a socket in Julia?

------
cad
Do you also want ice cream on top?

------
k-a-r
awesome language, but the manual needs some work on user-friendliness

------
yarapavan
Congrats! Great job!!

------
mjwalshe
And the moon on a stick :-) Not sure many "technical" programmers would take C
as the desert island programming language.

Sounds interesting though. If it can make Hadoop easier to use, I'd take that
as a win. I don't think it's going to replace Fortran as a HPC language.

~~~
coffeeaddicted
C as desert island programming language but making all indexing 1-based: "the
first element of any integer-indexed object is found at index 1, not index 0,
and the last element is found at index n rather than n-1, when the string has
a length of n." Also begin/end blocks - maybe there was a Pascal ninja hidden
in the syntax committee room?

~~~
jfager
Matlab, Octave, Mathematica, and other mathematical programming environments
use 1-based indexing, it's a well-established norm for the domain.

~~~
ced
Does 1-based indexing have any advantage, beyond familiarity to scientists?

~~~
jballanc
With 1-based indexing, length == last index.

Does 0-based indexing have any advantage, beyond ability to do pointer
arithmetic with indexes?

~~~
scott_s
In 0-based indexing, n % length is a value inside the range.
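A concrete illustration in Python: with 0-based indexing, wrap-around
(ring-buffer) access is a single modulo, with no off-by-one correction:

```python
def wrap_get(xs, i):
    # 0-based: i % len(xs) is always a valid index, so cyclic
    # (ring-buffer) access is a single modulo operation.
    return xs[i % len(xs)]

# With 1-based indexing the same access needs an off-by-one dance:
#   xs[((i - 1) % n) + 1]
```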

------
rwmj
They want something like OCaml ...

------
kib2
No Windows support, really ?!

~~~
simonw
That's pretty common for new language implementations - Node.js was around for
a couple of years before the Windows port was ready.

To be honest, if you're targeting early adopters of a programming language
Linux and Mac support is probably a lot more important than Windows. Smart
Windows users can always run Linux in a VM.

~~~
mjwalshe
and most users of this sort of language aren't going to be running it on
Windows but on Unix systems.

~~~
kib2
Are you insinuating that Windows users are too stupid to use those languages
(and only use rotten ones like VB, etc.)?

Please, try to be at least somewhat objective and constructive.

~~~
ajdecon
I doubt it was intended as an insult. Julia (and other scientific languages)
are often used in clustered environments, and Linux is much, much more common
than Windows in HPC.

------
nnnnni
Obligatory xkcd: <http://xkcd.com/927/>

It's about standards, but it still applies.

~~~
StefanKarpinski
I realize this is just a comic, but it's one of the reasons that we try to be
as compatible with C and Fortran libraries as possible — that's where the vast
bulk of high-quality scientific computing work has been done. The goal here is
not to duplicate all
the work that's already been done, but to allow existing high-quality
libraries to be smoothly and easily used together in one environment. That's
very much the same philosophy that NumPy and SciPy have, and indeed, it has
much the same goal. The biggest issue with NumPy, IMO, is that Python arrays
aren't
designed with linear algebra and interoperability with libraries like BLAS and
LAPACK in mind. This leads to the somewhat maddening distinction between
Python arrays and NumPy vectors and matrices — and lots of conversion back and
forth between them.

~~~
nnnnni
I also posted that comment without reading the article -- that was just my
first response to "yet another programming language".

I'm sure that it's filling a real need. I just wish that groups could
cooperate to make a handful of languages/libraries better rather than having
100 competing ones.

~~~
ViralBShah
Actually, there is much more co-operation than is visible. Almost all the
groups share the same libraries, same bugfixes, same patches, etc., which
typically will account for much of the user experience.

The different language approaches will typically compete on things like
syntax, speed, etc., and this leads to innovation and cross-pollination of
ideas. I personally prefer to have some choice, and use a number of languages
for scientific computing myself - but I guess too much choice makes things
confusing for the newcomer.

