Why I'm Betting On Julia (evanmiller.org)
493 points by mistermcgruff on Jan 23, 2014 | 252 comments



When out with friends recently, one of them mentioned how awesome Julia is. I was surprised to hear it come up at all, even from another person in science. She turned and gushed about how great it was and how supportive the community is, even though she was "not really someone who likes programming." And she liked it so much she was telling her friends about it at a bar!

If you make a programming language that people who don't like programming love enough to spread by word of mouth away from a computer, and that technically oriented people also love, that's a lot like the OS X blend of a terminal plus a nice GUI.

That's a pretty rare thing. And for collaborative science it's pretty important. Often, you'll have people in a bio lab who are very proficient in their area of biological expertise, but who would be solving the wrong problem by spending two years trying to become C++ hackers. On the other hand, there are a lot of people who write computational libraries but know they have to translate them to Matlab, or write a Matlab wrapper and pray that their users can get it to compile. That might sound simple to folks here, but it's really frustrating for less computationally oriented people when something goes wrong.


To be fair, there's also Python+NumPy and R in that space, not just Matlab. Besides the "tinker with LLVM" thing, what does Julia offer that Python (or Cython for speed) + NumPy does not?


Writing fast code in Julia requires less effort than it does in Python or R. You don't have to drop down to Cython or Rcpp to get good performance. If you write an algorithm in Julia the same way you'd write it in C, it will achieve equal performance. If you write it the same way you'd write it in Python or R, it may not be optimal due to the cost of memory allocation, but it's still faster than Python or R.
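
For instance, a sum written as a plain C-style loop needs no vectorization tricks to be fast (a minimal sketch; the function name is mine, not from the thread):

    # An explicit indexed loop, written the way you would write it in C.
    function mysum(x)
        s = 0.0
        for i in 1:length(x)
            s += x[i]
        end
        return s
    end

    mysum(rand(10^7))   # compiles to a tight native loop on first call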

Julia is more concise than Python. The language was designed to work with multidimensional arrays; it wasn't bolted on afterwards. There is no difference between a 2D array and a matrix; * always does matrix multiplication and .* always does element-wise multiplication. There is no awkwardness involving differences between NumPy arrays and Python lists. Everything has a type, and you can make a multidimensional array that efficiently stores values of any type. You can define your own types and define how arithmetic operators act on them with a minimal amount of code.
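
For example, a usable custom type with its own arithmetic really is only a few lines (a sketch with a hypothetical Point type, in current syntax; 2014-era Julia spelled struct as immutable):

    # A hypothetical 2D point; a Vector{Point} stores these inline and efficiently.
    struct Point
        x::Float64
        y::Float64
    end

    # One line per operator is enough.
    Base.:+(a::Point, b::Point) = Point(a.x + b.x, a.y + b.y)
    Base.:*(c::Real, p::Point) = Point(c * p.x, c * p.y)

    P = [Point(1.0, 2.0), Point(3.0, 4.0)]
    2 * P[1] + P[2]    # Point(5.0, 8.0)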

Julia's type system makes writing efficient algorithms easy without sacrificing any performance. If you define your own immutable type and define your own operators on it, Julia can make those operators run as fast as they would on ordinary values. In addition to general matrices, we have diagonal, symmetric, and tridiagonal matrices. The same routines that work on general matrices work on these as well with the same syntax, just more efficiently.
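
Concretely, that looks like this (a sketch; in current Julia these types live in the LinearAlgebra standard library, while in 2014 they sat in Base):

    using LinearAlgebra

    A = rand(3, 3)
    D = Diagonal([1.0, 2.0, 3.0])

    A * D         # the same * syntax as a general matrix product...
    D \ rand(3)   # ...but dispatch picks an O(n) solve for the diagonal case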

Julia uses multiple dispatch instead of traditional class-based OO. Methods are not a part of the object; instead, they operate on the object. Different methods with the same name can be defined to work on different types, or a single method can operate on a set of types, but the functions it calls may be implemented differently for each of these types. This is a better fit for technical applications, where the data doesn't change much but the methods do.
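
A toy illustration of dispatch on argument types (the function name is made up):

    f(x::Int, y::Int)       = "two integers"
    f(x::Number, y::Number) = "two numbers of some kind"
    f(x, y)                 = "anything at all"

    f(1, 2)      # "two integers" -- the most specific method wins
    f(1, 2.5)    # "two numbers of some kind"
    f("a", 2)    # "anything at all"

Note that the chosen method depends on both arguments at once, which single-dispatch class-based OO can't express directly.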

Julia is homoiconic, which is more useful than this article makes it seem :). It's easy to write code that writes code. If built-in language features aren't enough to get good performance with concise syntax, you can write a hygienic macro that does this for you.
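
As a toy example, here's a hygienic macro that checks that an expression is positive and reports the expression's text on failure (a sketch; the macro name is mine):

    macro checkpos(ex)
        quote
            val = $(esc(ex))   # hygiene: `val` is renamed by the compiler,
                               # so it can never clash with user variables
            val > 0 || error("not positive: ", $(string(ex)))
            val
        end
    end

    x = 3
    @checkpos x - 1   # returns 2
    @checkpos x - 5   # error: "not positive: x - 5"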


In defense of NumPy: a matrix from linear algebra and a 2D array are not exactly the same thing. In Python they are distinct, mutually convertible types, and in practice I think this is hardly a drawback. That the multiplication operation is overloaded in the mathematical world with the same symbol as "normal" multiplication is unfortunate; NumPy's solution is as good as introducing two different operators (with one being an awkward .*).

I love the multiple-dispatch part of Julia, though.

My fear, however, is that unlike Python, Julia will lack enough libraries, especially on the unscientific side (GUI, databases, networking, all the other stuff you need).


> That the multiplication operation is overloaded in the mathematical world with the same symbol as "normal" multiplication is unfortunate; NumPy's solution is as good as introducing two different operators

The thing is that the multiplication operation for matrices is matrix multiplication, not elementwise multiplication. When you apply a polynomial like x^2 + y to matrices, you do not want to apply the polynomial elementwise – you want to square the x matrix and add the y matrix to it.
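
In Julia the two readings stay syntactically distinct (a small sketch):

    X = [1 1; 0 1]
    Y = [1 1; 1 1]

    X^2 + Y     # matrix square plus Y: [2 3; 1 2]
    X.^2 + Y    # elementwise square, requested explicitly: [2 2; 1 2]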


I cheered out loud when I first started tinkering with Julia, tested whether multiplication did the right thing with matrices and vectors, and saw that it did. Multiplying two row vectors should give an error! Element-by-element operations should get their own operator, not the other way around.
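
That behavior is easy to check at the REPL (a quick sketch; the exact error text varies by version):

    [1 2 3] * [4 5 6]     # ERROR: DimensionMismatch -- two row vectors
    [1 2 3] .* [4 5 6]    # elementwise, asked for explicitly: [4 10 18]
    [1 2 3] * [4 5 6]'    # a proper inner product: a 1x1 matrix containing 32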

But as a long-time R user, I'm hesitant to bet the farm on Julia for a project at work where it would be ideally suited. Maybe there's a way to squeeze it in on the side.


Sometimes an array of numbers is just an array of numbers. The language shouldn't presume too much about what you mean to do with them.

The moment you need an extra dimension (or anything other than 2, really), Matlab's ‘everything is a matrix’ approach falls apart. Matlab is a toy language in so many ways, and this is just another one.

It's a pity that Julia adopted Matlab's pop matrix semantics instead of some solid and general principles from APL or J. Even modern Fortran would have been a better model for an array DSL. From what I've read of the Julia docs, they actually want you to write loops. But Julia looks great otherwise. With macros and a good compiler, maybe the array features can be fixed at some point.


How exactly does Matlab's approach fail? I haven't had that much experience with it, but I do vaguely remember that it supports n dimensions.


Yes; adverbs, the rank operator, and some other parts of the APL/J approach are things I miss a bit and find tedious to emulate or work around.


If you are doing linear algebra, I agree. Yet linear algebra is not the only thing I want to do with numbers. I think most of the time I do use the elementwise operation, such as:

    x = linspace(0,10, 1000)
    y = (x<5)*4.0
    z = x**2 + y
Of course I can just use matrices, and then I have the information at hand that I am doing linear algebra right now:

    x = matrix([[3, 0],
                [9, 5]])

    In [28]: x**2 + x
    Out[28]:
    matrix([[12,  0],
            [81, 30]])

I think this is not too much boilerplate, and it gives nice semantic information within the source code. However, if you want to perform this with a 2D array object, you can also use its dot method:

    In [ 1]: a
    Out[ 1]: 
    array([[ 9,  0],
           [72, 25]])

    In [ 2]: a.dot(a)
    Out[ 2]: 
    array([[  81,    0],
           [2448,  625]])

    In [ 3]: a * a
    Out[ 3]: 
    array([[  81,    0],
           [5184,  625]])
The approach of the array object nicely takes into account that the operands of a "normal" multiplication `*` commute, so elementwise multiplication fits the picture here. Whereas matrix multiplication - which is non-commutative in general - is performed by a different method.


If the multiplication operator weren't such a big issue, the creators of NumPy wouldn't have attempted to get a new operator for matrix multiplication into Python. For a dynamic language such as Python, operator overloading for such closely related types is big trouble. If I write a function that uses multiplication, either I use member methods such as "dot" or I check for the type explicitly; otherwise there is no guarantee what will happen. The worst part is that the errors are strictly logic errors, and the only way to debug is to trace from the end result all the way back to the point of object creation; it isn't pretty.


This isn't intrinsic to matrices vs. number arrays. The matrix-multiplication issue is just a mathy version of the plus-as-string-concat troubles ("Foo: " + 1 + 1 makes "Foo: 11" while 1 + 1 + " Foo" makes "2 Foo"). There are always holy wars about whether "+" should be string concatenation because of that.

Both approaches have their merits.


I can rarely think of functions

    foo(arg)
where I would like to pass either strings or numbers to an operator + that polymorphically concatenates or performs addition. In this respect I like languages that offer special string-concatenation operators (like Haskell's ++ or Lua's ..).

More generally: I think that + and * should always commute for the types they're applied to, and mixing them should follow the rules of distributivity.


I meant non-commutative


> Julia will lack enough libraries, especially on the unscientific part (GUI, databases, network, all the other stuff you need).

Yes, but this is rapidly improving. Julia has Gtk bindings that have seen a lot of improvement over the past two months. There are ODBC and SQLite interfaces, a MySQL interface is in progress, and probably others.

Julia has the advantage that you can write fast bindings in pure Julia, which alleviates the extra cognitive and tooling overhead of writing extensions in C.
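
For example, calling straight into a C library takes one line and no compilation step (a sketch; how the library name resolves varies by platform):

    # Declare the signature at the call site and call libc's clock(3) directly.
    t = ccall((:clock, "libc"), Int32, ())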

Building a language ecosystem is a bit of a Ponzi scheme - but it has real potential for a great payoff at the end!


It is pretty standard for elementwise operations to use a dot prefix.

In Matlab you have .*, ./, .^, and probably more, for good reason.

I don't really see it as awkward; it's actually very useful when needed. It can be confusing if you're learning the language and think .* might be the dot product, though.


Check out Julia's PyCall library. It allows arbitrary Python calls from inside Julia, with nice autogenerated bindings from Python objects to Julia types.
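
Usage looks roughly like this (a sketch using the 2014-era @pyimport macro; be sure to Pkg.add("PyCall") first):

    using PyCall
    @pyimport math as pymath
    pymath.cos(pymath.pi / 4)   # calls Python's math.cos, returns a Julia Float64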


Check out the Python JIT compiler Numba. It compiles annotated Python and NumPy code to LLVM (through decorators). It's been wicked fast for my use cases: http://numba.pydata.org/


Everyone's answer to that question will be different. In my opinion, there are lots of things to love about Julia.

First class arrays and array literals. It's a wonderful thing… like Matlab but very smartly designed.

The type dispatch system makes so much sense for mathematical work. It's simply how math is done. And Stefan Karpinski (co-creator) often compares it to linguistic grammars, too, which may be a stretch but I think there's some truth to it. It just feels right. And it makes things very extensible, right down to the core language.

And the core language is indeed mostly Julia itself, in contrast to NumPy, where things are often implemented in C or Cython. I've tried to hack on some Cython things in NumPy and was immediately turned off. It was so hard to debug and run interactively.

Julia's interactivity is wonderful. The IJulia project brings over some of the best user experience of NumPy (in my opinion)… which is not NumPy but IPython.

And the community is great and supportive. The package system is such an asset and really lowers the barrier to entry.


Interesting that you mention IJulia. My concern with it is that when you are trying to develop a new technique or algorithm, the idea of introducing extra layers of code running in another system (in this case IPython) seems like a lot to deal with. Maybe I'm just a wimp ;)


But it's not running through Python at all (as I understand it). The kernel is all implemented in Julia. It just uses the IPython frontend and architecture.

See this post on the implementation of IHaskell[1] for more details. My understanding is that IJulia uses the same concepts. Notably, when running IJulia, you can't even use %%magics to change back to Python mode.

1. http://andrew.gibiansky.com/blog/ipython/ipython-kernels/

(Also, it's firmly endorsed by the language creators. Stefan recently gave a talk using IJulia).


Thanks for the explanation and link. I think you are correct that it doesn't appear to be using any Python. The demo I saw was given by Fernando Perez [http://www.youtube.com/watch?feature=player_embedded&v=F4rFu...] where he demonstrated a cross-language example which, whilst technically impressive, wasn't something I felt I would attempt. Like I said, I'm probably just a wimp.


Agreed, Python with NumPy is definitely a key player in this space, probably more significant these days than Matlab. Don't forget Octave, which continues to hold its place as an open, Matlab-compatible(ish) option. Whilst I'm a fan of R for experimentation and prototyping, it is often let down by poor performance, particularly on matrix calculations. R's forte is really in providing reference implementations of an amazing array of statistical methods, often by the author of the technique.

One of the advantages of Julia touted by the authors is that much of the Julia system is written in the Julia language, making it easy for users to understand many of the algorithms and contribute to the system. In practice I don't know how true that is (it seemed to spend a long time compiling C/C++ code when I last built it), but I can see the rationale.


Julia has a community that doesn't feel threatened that their language is waning in popularity in some fields, and therefore doesn't feel the need to defend it every chance they get.


No need to make this personal. It was the original blog article that concentrated on the negative things and did not do a whole lot to explain Julia's benefits.

I really love Scipy and friends and I also think Julia is a promising system.


I remember there being talk of eventually being able to call Julia from within Python. I've also been quite happy using Numba as an alternative to Cython for some things when I need speed. It's a lot more light-weight with less boilerplate, although still a little rough around the edges.


I don't know about calling Julia from within Python, but you can call Python from within Julia. This makes it very easy to wrap and use Python libraries for things that Julia doesn't have good support for yet.


Would be awkward to run Django from inside Julia, though...


This already exists as a prototype as part of IJulia, see Leah Hanson's awesome blog post about it here: http://blog.leahhanson.us/julia-calling-python-calling-julia...

The collaboration between the scientific Python and Julia communities in recent months has been awesome to watch.


I use R day to day, but it's NOT as easy as Matlab. There just isn't the same level of documentation and clarity.


Julia offers freedom from wondering whether you're betting on the wrong horse by coding for Python 2.7 or 3.x.


I had the opposite response: I played around with Julia and found myself so frustrated that I went back to Octave. However, it was at an early stage of development and I bet there were a ton of bugs still around. I'll have to give it another shot.


Do! The improvement over the past ~6 months has been staggering.


Julia is good, in ............ (10,000 words omitted). I think any serious programmer understands the importance of specifying context. For this article, the author should write: I don't care about type safety, security, etc., when I ......... (10,000 words).

The title is flaming and shallow by any measure of programming-language discussion...


Um, you know that Julia has type annotations, right? They are optional, but if you are a shop that really really wants type safety, you can get it.
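
For instance (a quick sketch):

    loose(x) = x + 1           # no annotation: works on anything that supports +
    strict(x::Int64) = x + 1   # annotated: accepts only 64-bit integers

    strict(3)     # 4
    strict(3.0)   # MethodError at the call site -- the checking is there if you opt in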


I really don't like the anti-intellectual tone of the beginning.

"The problem with most programming languages is they're designed by language geeks, who tend to worry about things that I don't much care for. Safety, type systems, homoiconicity, and so forth."

can be rewritten as:

"The problem with most software is that they are designed by computer geeks, who tend to worry about things that I don't much care for. Information security, thread safety, modularity, hardware acceleration, system design, and so forth."


I'm glad I'm not the only one. I couldn't make it past the first paragraph. He says he doesn't care for safety and type systems, and then says what he cares about is making it work and making it fast, both of which are significantly aided by safety and type systems.


All he's saying is that he doesn't care about the features, just what they let him do. Type safety in and of itself isn't interesting to the author, but he appreciates its benefits.


If that's the case, the author would be well-served by rewriting the first paragraph to make that clear. The way it's currently written does not communicate that to me at all.


Safety/type systems may significantly aid making it work/fast, but the end result is more important to him than more abstract concepts.

Some people are really interested in making compilers, right? While others (such as me) just want to do cool things with them.


The language may have been grating, but as someone who also comes from more of a scientific computing background, I read this as "I want a language to be a tool that I can use to solve scientific and engineering problems with minimal conceptual overhead due to minding language features." I read it as saying "Finally, a language designed for someone like me," which I also find true personally.


The irony is that the creators of Julia paid attention to all of those computer science things in order to create the language he wanted to use.


"I want a language to be a tool that I can use to solve scientific and engineering problems with minimal conceptual overhead due to minding language features."

Among compiled languages, isn't that Fortran? Fortran 90 and later have multidimensional array operations, like Julia, NumPy, and Matlab.


It should read the other way round - Julia, NumPy, and especially Matlab have array operations like Fortran 90, since they were designed by people from that background.

The problem with Fortran is interoperability with other libraries; if you leave scientific computing, you're basically out of luck. Writing a GUI is just not going to happen. It is really fast, but it is not as easy for prototyping, since it is compiled like C. Finally, Fortran does carry a stigma from older versions like 77 - and it's still really easy to write REALLY bad and unmaintainable code with it if you're not minding the newer language features. OOP especially (since Fortran 2003) feels a bit tacked on.


It is of course, but there is a lot of convenience in using an interpreted language with a REPL and a decent standard library.


Exactly - I read it as an expression of a tool designed as a tool, rather than an elegant and intellectually fulfilling exploration of toolness.


His point is: while these things can be nice, no one cares about them other than language designers. They are only a means to an end, which is user experience. Sure, some car buyers may know or care what alloy their cylinder block is made of, but a lot don't know, and don't care, how many cylinders there are. They only care (somewhat) about how it drives. And it is certainly possible to have a language with all the theoretically nice features that offers a horrible user experience for a specific purpose. Part of it is just marketing, but part of it is also optimization with a different purpose in mind.

For example, his purpose, and Julia's main use case, is often different from that of either a language designer or a software engineer: performance and fast prototyping are first-order concerns (together with a decent scientific library), and everything else - longevity, future code reuse, simplifying the work of teams - is way, way down the list. The reason, of course, is that ~90% of code is written by one person, for himself, to effectively run once and produce one paper, and to never be touched by anyone again. At least, this is what I see in my field, which is largely dominated by Matlab (and Matlab-like syntax is definitely a huge asset here).


Hmm, this is an accurate description of how a lot of research work is run. But then there are the cases where the data analysis that was put together within a week is suddenly used by the whole research group, amended, extended, and used with more and more data. Sometimes someone even sticks a GUI on top, plots and consecutive analysis steps are added, etc.

And this is when you really want to have a language that can be used to build up real abstraction.


And then you want to interface your code to someone else's, written in something that is not your pet language, and you find yourself spitting text or binary matrices through pipes. IME it's easier to decide on a data serialization format at the start, and use whatever language works best for each piece.


The problem with most cars is they're designed by professionals, who tend to worry about things some people don't much care for. Safety, security, reliability, and so forth.


That is exactly how many people will pick a car. They assume the cars already have these properties as a minimal requirement, and then proceed to pick one based on how it looks and how comfortable it is &c.


I didn't pick up an anti-intellectual tone.

His point is the same way I feel: stuff like type systems and homoiconicity (I don't even know what that means) don't interest me. I'd rather think about the work that I'm doing.

Also, these things are a step or two above my level of understanding (but probably not the op's).

I do care about security, thread safety, modularity, etc... but I can't really contribute to the debate about how we get there.


I personally care deeply about strong static typing as well as the work that I'm doing.


How is this anti-intellectual? Most languages are designed by language geeks by definition. And language geeks tend to worry about the things mentioned. And he doesn't care about these things because they've been largely irrelevant for the work he's had to do.


But then the rest of his article is precisely about how they're relevant. For example he says he doesn't care about type systems, and then his only code example is showing off Julia's type-specialization feature. Maybe he just means he doesn't care about the theory behind why it works, e.g. he likes to fly on planes but doesn't care to learn aerospace engineering himself (which is fine).


Because those things are actually hugely important, he spends the rest of the article showing how important they are. He is simply too ignorant to recognize his own ignorance, and revels in this fact. He starts out with "I dun need no fancy book learnin!" and you don't see how it is anti-intellectual?


There's two issues here:

1. A pretty significant amount of code is PHP serving broken HTML + Javascript, stuff which favours pragmatism over purity (to the degree that even some of the most pragmatic people hate it with a passion). These languages are popular because the authors focused on delivering results, not navel-gazing.

2. When a language community starts talking up its theoretical features with a passion, it's a red flag. Odds are, the documentation will be obtuse, and the community will bite noobs who don't know the theory. Even if they try to be nice to beginners, it's against their instincts to give simplified (if technically incorrect) answers. If your high school math teacher told you that "differentiation finds the slope of a graph, by computing (f(x+e) - f(x)) / e where e is really small", she was lying, but it's a good kind of lie.

A pedantic explanation will just confuse people, and stop most of them from understanding it well enough to learn how to appreciate the technicalities.


Your issue number 1 actually does not (as I understand it) apply to Julia, since it is really well designed, not tinkered together. Matlab is the PHP of scientific computing ;)

About 2: The Haskell community is one of the friendliest I know, yet it has one of the strictest theoretical backgrounds.

PHP was popular for a number of reasons - I think mostly because of the easy mix of HTML and PHP tags, as well as the ready-to-run Apache/PHP/MySQL setup. From a maintainability standpoint, a lot of people suffer from this.

It's the same point that people have made about "clean code" and unit tests. Some think they keep them from getting work done. When a unit test of mine suddenly fails, I silently know that some other person has just broken their code too and doesn't know it yet; they will find out eventually, but won't immediately see why. While they're hunting for their bug, I have already fixed mine and implemented tons of features in that time.


And he misunderstands the "cowboy" reference in "cowboy programmers." The reference, I believe, is to a lone operator who views himself as having superior skills and so does not integrate with the group or follow the coding rules and procedures, and generally makes work for other colleagues, often while ignoring their communications or responding to them with condescension.


I'm excited by Julia, but I don't think this article makes a very good sell. It's neat that you can dump the generated assembly, but I'd rather see a demonstration of a robust profiler so that I know which functions I need to dump in the first place.

I also disagree that the popularity of Node stems from "getting disparate groups of programmers to code in the same language". From what I've observed, it's not that back-end programmers are suddenly giddy at the prospect of getting to use Javascript on the server, it's that front-end programmers get to apply their existing knowledge of Javascript to back-end development.


There is a built in profiler: http://docs.julialang.org/en/latest/stdlib/profile/.

It's also possible to run Julia with some of Intel's advanced profiling tools like VTune:

http://software.intel.com/sites/default/files/blog/477490/ju...

More info: http://software.intel.com/en-us/blogs/2013/10/10/profiling-j...
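
Usage is only a couple of lines (a sketch; in current Julia the Profile module is a standard library, while in 0.2-era Julia @profile lived in Base):

    using Profile

    function work()
        s = 0.0
        for i in 1:10^7
            s += sqrt(i)
        end
        s
    end

    work()            # run once first so compilation isn't profiled
    @profile work()   # run again under the sampling profiler
    Profile.print()   # print the call tree with sample counts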


I tend to agree with you about tools. I have yet to meet a language feature that's more important than library availability, profiling, autocompletion, documentation, debugging, etc. Then again, I don't face the script/C/CUDA choice everyday (most of his detractors on this thread don't either, I'd be willing to bet) so his circumstances are probably different enough to justify a different priority list. His argument might be perfectly valid for the HPC community which is a powerful constituency among academic programmers.

However, he did address library availability, and that argument resonates with me. I've never met a FFI I didn't come to loathe. I've had java, ruby, and python FFI libraries fail to satisfy my needs despite half a dozen bugfixes between them. What this man says about having to write wrappers, despite abundant and loud promises to the contrary, is completely true. You don't have to wander far off the beaten path before a typical FFI goes belly-up. POD structs usually suffice (Sure, we support POD structs! Oh, you want to nest them / align them / make arrays of them / have them hold pointers / ...? We don't support that "yet". Worse: they support it but it's buggy.). Heavens help you if your argument has (gasp) an initializer or one of the arguments is a reference. Maybe things have changed in the last ~5 years, but I doubt it.

If Julia's intimate connection with LLVM makes it practical to implement a better FFI or hybridize FFI + wrapper code when necessary, it will have a very valuable advantage over python for purposes of scientific computing. Maybe even enough to displace it in the long run.

EDIT: By "hybridize" I mean that the ability to embed asm,C,C++ in Julia with the same ease that you can embed asm in C/C++ would be a KILLER feature.


That's in the works: https://github.com/JuliaLang/julia/pull/5046 for inline LLVM IR in Julia, with potential future extension to other languages.


> I've never met a FFI I didn't come to loathe

I've felt that pain. With so many new programming languages popping up, I've been wondering if the next killer programming improvement isn't strictly a programming language at all, but rather something that rethinks the linker, manages execution, and facilitates interfaces between larger blocks of code (maybe in multiple languages).


Yep, me too. People are too quick to dismiss the cross-language issues, which in my experience can slowly erode the benefit. Typically you use an FFI to access particular functionality in libraries/classes only available in another language. In my experience it is only a temporary solution, a does-it-work test. After that you need to recode. Having said that, in my experience Lua(JIT) does this pretty well with C libraries, but Lua was designed as the scripting companion to C from day one.

The JVM offers a better place to tackle these issues. Scala and Java cross calling is often more than adequate. Perhaps a Julia compiler for the JVM could be a step in the right direction.


I've heard many very smart people (professors, leaders of large HPC efforts, "language geeks") express the same sentiment over the years. Unfortunately, it's a b*tch of a problem, as attested to by the veritable graveyard of half-baked solutions out there. Apple's BridgeSupport is the closest thing I've seen to success (I haven't worked with MS's CIL) and it leaves much to be desired.

I'm not sure there is any way around the impedance mismatches between languages. They're all slightly different for a reason. Perhaps clean C APIs are the best we can hope for.


I keep hoping that some accessibility upgrade comes to the linker, akin to how LLVM made compiler infrastructure more accessible; that might make FFI bridging more automated. For example, if GDB can access and take apart C structs in an automated way, why does FFI coding always seem to require programmers to do a lot of mechanical interfacing work?

Not that the JVM, CIL, etc. aren't approaches that yield some improvements, but something that upgrades capability or accessibility at the ABI linkage level is going to have a wider impact.


> I'm not sure there is any way around the impedance mismatches between languages.

The CLR has an approach for that by defining what is known as the CLS, the Common Language Specification.

Likewise they have a similar approach in WinRT, known as language projections.

However, these also have their impedance mismatches, as you are only allowed to use types that are usable by all languages targeting the runtime.

The benefit is that they still allow for a higher level of code abstraction than pure C functions.


You mean like OS/400 TIMI, .NET, or the JVM?


COM was actually not a bad model. Sadly the closest thing is HTTP and JSON right now.


s/was/is/

COM is the basis for most Windows APIs since XP, and the basis for the new WinRT runtime.

Let's see how it looks with Windows 9.


FFI = Foreign Function Interface


The main reason we at clara.io use Node is so that front-end code can run in the back end. Imports, exports, and renders are done by workers that are essentially headless clients that happen to have access to first- and third-party binary libraries.


What's the benefit of this? Performance?


Not having to write the same code twice.


But Node.js does glue together things written in C (by backend engineers, maybe) that are then used in Node via JavaScript (by frontend engineers, like you said).


Not to mention the Node package manager, the ability to host web servers in a couple of lines of code, and a nice implementation of the single-threaded pump pattern, making it a very scalable platform. Seems quite ingenious to me.


The author and I like Julia for nearly opposite reasons. (I write Julia for the language geek reasons. The power of homoiconicity is amazing for writing static analysis in the language you're analyzing.) It's really cool that Julia can appeal to people with nearly opposing priorities tho. :)

I'm looking forward to giving the workshop at UChicago. It'll be my third time presenting an Intro to Julia workshop.


Will you be webcasting?


No, unfortunately not. We are hoping to record it, tho.


Would love to see it put online :>


The reason to bet on Julia is disassembling a function? This is a standard feature in Common Lisp (ANSI-standardized in 1994):

  CL-USER> (defun f(x) (* x x))
  F
  CL-USER> (disassemble 'f)
  L0
           (leaq (@ (:^ L0) (% rip)) (% fn))       ;     [0]
           (cmpl ($ 8) (% nargs))                  ;     [7]
           (jne L33)                               ;    [10]
           (pushq (% rbp))                         ;    [12]
           (movq (% rsp) (% rbp))                  ;    [13]
           (pushq (% arg_z))                       ;    [16]
           (movq (% arg_z) (% arg_y))              ;    [17]
           (leaveq)                                ;    [20]
           (jmpq (@ .SPBUILTIN-TIMES))             ;    [21]
  L33
           (uuo-error-wrong-number-of-args)        ;    [33]


This is also possible to a degree in Python, though you only get the bytecode:

    >>> def f(x):
    ...     return x * x
    ...
    >>> import dis
    >>> print dis.dis(f)
      2           0 LOAD_FAST                0 (x)
                  3 LOAD_FAST                0 (x)
                  6 BINARY_MULTIPLY
                  7 RETURN_VALUE


And the bytecode is just calling polymorphic methods. All the real work is done in the object implementations of type(x). I was very bummed years ago to realize how shallow the bytecode representation in Python is. There is no sub-interpreter, just C.



    (jmpq (@ .SPBUILTIN-TIMES))

So, this is going to be really slow inside a loop. Would the compiler be able to optimize it into a single multiply instruction if it could prove that the input had to contain integers?


  CL-USER> (defun f (x)
             (declare (fixnum x)
                      (optimize speed (safety 0) (debug 0)))
               (the fixnum (* x x)))

  CL-USER> (disassemble #'f)
  ; disassembly for F
  ; Size: 19 bytes
  ; 0337CE2F:       488BCA           MOV RCX, RDX               ; no-arg-parsing entry point
  ;       32:       48D1F9           SAR RCX, 1
  ;       35:       480FAFCA         IMUL RCX, RDX
  ;       39:       488BD1           MOV RDX, RCX
  ;       3C:       488BE5           MOV RSP, RBP
  ;       3F:       F8               CLC
  ;       40:       5D               POP RBP
  ;       41:       C3               RET
  NIL


To each their own, I guess, but I wanted to say that I don't see "safety, type systems and homoiconicity" and other theoretical "geek" stuff as orthogonal to a programming language's ease of use, productivity, and expressiveness. If anything, they complement each other. The theory provides a consistent framework, so that you minimize the mixing of different paradigms and can express ideas in a more uniform way. I very much doubt that a language where you just throw stuff in would be easy to use. If Julia is a great language, it is precisely because of all the thought that went into it; the ideas behind it didn't just materialize in someone's brain.


I don't think his point is that "safety, type systems and homoiconicity" don't matter. His point is that those things don't interest him as much as getting things done do.

Those things may help him get things done, but they're for other people to worry about while he works on his own stuff.

Also, am I the only one that doesn't know what 'orthogonal' means? I assume from the context it means that these things aren't mutually exclusive.

Not really sure about 'homoiconicity,' either.


Orthogonal literally means "perpendicular"; it refers to two things that aren't related at all. So non-mutually-exclusive is part of it, but not the whole picture.

FYI


Was Perl not a language that just had stuff thrown in? It wasn't difficult to use, but difficult to master I would say.


"Julia was not designed by language geeks — it came from math, science, and engineering MIT students"

This statement is built on a false dichotomy. And it is not really true of Julia; take the type system, for example: sophisticated AND unintrusive.


Jeff and I were slightly miffed at being called "not language nerds" ;-)


Language nerds (or geeks), definitely! Maybe he meant something like "language dweebs" or "language snobs."

Julia's great strength, I think, is that it was designed by folks with very good grounding in language design, but who prioritized practicality.


I cringed when I read that in the blog. I came across the benchmarks on the home page and I was thinking there was no way that it was possible to write a language that looks that good and performs that well without being a "language nerd."


> "Julia was not designed by language geeks — it came from math, science, and engineering MIT students"

This makes me a bit cautious about the language. Scientific computing people are often very smart, but they are not programmers or computer scientists, and they may do funny things that a computer scientist would not, like one-based indexing of arrays in Julia. This is not a big deal, but I'm a bit wary that there may be some nasty surprises for a language-geek computer scientist like me :)

Another example is the byte addressing of UTF-8 strings, which may give an error if you try to index strings in the middle of a UTF-8 sequence [1]. s = "\u2200 x \u2203 y"; s[2] is an error, instead of returning the second character of the string. I find this a little awkward.

There's a flip side to this too, if you're dealing with scientific computing there seems to be a wide variety of scientific computing libraries available in Julia [2].

Overall I find this language very interesting and it is on my shortlist of new languages to take a look at when time permits.

[1] http://docs.julialang.org/en/latest/manual/strings/#unicode-... [2] http://docs.julialang.org/en/release-0.2/packages/packagelis...


> Another example is the byte addressing of UTF-8 strings, which may give an error if you try to index strings in the middle of a UTF-8 sequence [1]. s = "\u2200 x \u2203 y"; s[2] is an error, instead of returning the second character of the string. I find this a little awkward.

Yes, it's a little awkward, but to understand why this tradeoff was made, think about how you'd get the nth character in a UTF-8 string. There is a tradeoff between intuitive O(n) string indexing by characters and O(1) string indexing by bytes.

The way out that some programming languages have chosen is to store your strings as UTF-16, and use O(1) indexing by two-byte sequence. That's not a great solution, because 1) it takes twice as much memory to store an ASCII string and 2) if someone gives you a string that contains a Unicode character that can't be expressed in UCS-2, like 🐣, your code will either be unable to handle it at all or do the wrong thing, and you are unlikely to know that until it happens.

The other way out is to store all of your strings as UTF-32/UCS-4. I'm not sure any programming language does this, because using 4x as much memory for ASCII strings and making string manipulation significantly slower as a result (particularly for medium-sized strings that would have fit in L1 cache as UTF-8 but can't as UCS-4) is not really a great design decision.

Instead of O(n) string indexing by characters, Julia has fast string indexing by bytes with chr2ind and nextind functions to get byte indexes by character index, and iterating over strings gives 4-byte characters. Is this the appropriate tradeoff? That depends on your taste. But I don't think that additional computer science knowledge would have made this problem any easier.
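
Concretely, the byte-indexed API looks like this (a sketch; chr2ind is the 0.2-era name, and current Julia spells these helpers nextind and eachindex):

    s = "\u2200 x \u2203 y"   # the first character, '∀', is 3 bytes in UTF-8

    s[1]                # '∀' -- indexing at a character boundary works
    # s[2]              # ERROR: byte 2 is the middle of a character
    s[nextind(s, 1)]    # ' ' -- the next valid index after byte 1

    for c in s          # iteration always yields whole characters
        println(c)
    end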


It's also essentially the same approach that has been taken by Go and Rust, so we're in pretty decent company. Rob Pike and Ken Thompson might know a little bit about UTF-8 ;-)


The problem I have with these design choices is that I predict lots of subtle off-by-one bugs and crashes from non-ASCII inputs in Julia's future. I hope that I am wrong :)

> Yes, it's a little awkward, but to understand why this tradeoff was made, think about how you'd get the nth character in a UTF-8 string. There is a tradeoff between intuitive O(n) string indexing by characters and O(1) string indexing by bytes.

I understand the problem of UTF-8 character vs. byte addressing and O(n) vs. O(1) and I have thought about the problem long and hard. And I don't claim to have a "correct" solution, this is a tricky tradeoff one way or the other.

I think that Julia "does the right thing" but perhaps exposes it to the programmer in a bit funny manner that is prone to runtime errors.

> The way out that some programming languages have chosen is to store your strings as UTF-16, and use O(1) indexing by two-byte sequence.

Using UTF-16 is a horrible idea in many ways; it doesn't solve the variable-width encoding problem of UTF-8 but still consumes twice the memory.

> The other way out is to store all of your strings as UTF-32/UCS-4. I'm not sure any programming language does this, because using 4x as much memory for ASCII strings and making string manipulation significantly slower as a result (particularly for medium-sized strings that would have fit in L1 cache as UTF-8 but can't as UCS-4) is not really a great design decision.

This solves the variable width encoding issue at the cost of 4x memory use. Your concern about performance and cache performance is a valid one.

However, I would like to see a comparison of some real world use case how this performs. There will be a performance hit, that is for sure but how big is it in practice?

In my opinion, the string type in a language should be targeted at short strings (up to a few hundred characters, typically around 32 or so) and have practical operations for those. For long strings (kilobytes to megabytes) of text, another method (some kind of bytestring or "text" type) should be used. For a short string, 4x memory use doesn't sound that bad, but your point about caches is still valid.

> Instead of O(n) string indexing by characters, Julia has fast string indexing by bytes with chr2ind and nextind functions to get byte indexes by character index, and iterating over strings gives 4-byte characters. Is this the appropriate tradeoff? That depends on your taste.

This is obviously the right thing to do when you store strings in UTF-8.

My biggest concern is that there will be programs that crash when given non-ASCII inputs. The biggest change I would have made is that str[n] should not throw a runtime error as long as n is within bounds.

Some options I can think of:

1) str[n] returns the n'th byte

2) str[n] returns the character at the n'th byte, or some not-a-character value

3) Get rid of str[n] altogether and replace it with str.bytes()[n] (O(1)) and str.characters()[n] (where characters() returns some kind of lazy sequence if possible, O(n))

You're right, this boils down to a matter of taste. And my opinion is that crashing at runtime should always be avoided if it is possible by changing the design.

> But I don't think that additional computer science knowledge would have made this problem any easier.

There is a certain difference in "get things done" vs. "do it right" mentality between people who use computers for science and computer scientists. The right way to go is not at either extreme but in some kind of delicate balance between the two.


I think it's more like Julia is what happens when "language geeks"/experienced programmers write a language that's for technical computing, with a deep understanding of their problem domain and empathy for their users.

Strings in Julia are meant to be addressed in for loops; they index by byte, not character, because it's slow to index by character once you include Unicode. Julia tries, in general, to give you control over low-level things rather than hiding them with magic.

I like Julia because it's homoiconic, because of its type system, because multiple dispatch is fun and new, and because it's just plain fun to write. I do static analysis, not math/science.


>This makes me a bit cautious about the language. Scientific computing people are often very smart but they are not programmers or computer scientists and may do funny things that a computer scientist would not.

Most languages, from C and C++ to Python and Java were not created by "computer scientists".

Usually it's either programmers who studied math or people who came from some other profession (physicists, linguists like Larry Wall, even philosophers).

>Another example is the byte addressing of UTF-8 strings, which may give an error if you try to index strings in the middle of a UTF-8 sequence [1]. s = "\u2200 x \u2203 y"; s[2] is an error, instead of returning the second character of the string. I find this a little awkward.

That makes perfect sense if Julia cannot yet handle indexing strings on graphemes.

In essence, there is NO "second character" that you're getting when byte-indexing a string. You might get one (if it's ASCII all the way), or more likely you'll just get an invalid part of a character as a byte.

In other languages with similar limitations (like PHP) you get a broken result with no warning at all.


Can Julia be a competitor to R? I love R in concept (interactive environment for statistical analysis) but the language just drives me crazy in its multitude of types and the loosey-goosey ways it converts between them.

A friend of mine is really proficient with R; when I walked him through some of the R patterns that are very confusing/irregular to me, he sort of laughed: he could see what I was saying, but he said "with R you can't worry about things too much, you kind of just have to go with it."

If Julia can serve some of the same use cases but in a better-designed way, sign me up!


The biggest thing R has is just an incredible number of really well-documented packages that are quite frequently cutting-edge (unless you want to do any deep learning work). Not to mention that base R has a tremendous amount of useful stuff baked in.

I've kept an eye on Julia and would love to use it in my everyday work, but also know that for now that's just not possible because of how many built-in functions and packages I rely on.

However, solving this is just a function of time and community (Julia just needs its Hadley Wickham). I remember when people scoffed at Python because it had nowhere near the ecosystem that Perl did.


This is, I think, the biggest hurdle for Julia.

R's strength is not its language, it's the people. You need methods articles with supplements written in Julia, not R, for people to switch.


Very true.


Already in 2008, one of the R creators, Ross Ihaka, talked about how R is fundamentally slow and memory-inefficient, and how a "next-gen R" would be needed; he was considering Common Lisp as the underlying language and compiler. See "Back to the Future: Lisp as a Base for a Statistical Computing System".

https://www.stat.auckland.ac.nz/~ihaka/?Papers_and_Talks

But now Julia apparently also fulfills Ihaka's requirements for the basis of the "new R" system, so I wonder if the need-for-speed part of the R community is considering a switch to Julia instead of building a "new R" from scratch.



Lobbying Ross Ihaka to endorse Julia as the new R would hopefully bring more people over to the language. Instead of just the Matlab folks, you would also renew interest among the slightly larger audience using R today.


FWIW, there is a Julia library that allows you to call out to R (https://github.com/lgautier/Rif.jl). I'm not sure how well-developed it is, though. There are also a lot of R-inspired Julia libraries, such as DataFrames (https://github.com/JuliaStats/DataFrames.jl).

There is also a pretty good Julia-Python interface (https://github.com/stevengj/PyCall.jl) and bindings to Matplotlib (https://github.com/stevengj/PyPlot.jl).


Patrick Burns' R Inferno [1] enumerates a lot of R's eccentricities and workarounds for them. I think R gets a lot better once you switch to the libraries Hadley maintains, like plyr and ggplot. I still think proficiency in R is akin to a type of Stockholm syndrome.

[1] : http://www.burns-stat.com/pages/Tutor/R_inferno.pdf [PDF]


I believe it has been one of the intentions for a while. How far it has advanced I'm not sure; I haven't looked at it for almost a year, so it's time for a refresher. But R may be a bit tricky to compete with directly at this point: it has been designed as a statistical language from the ground up, and it would take a while for any language to catch up to R's library. But I still see many people doing statistical computations in Matlab, and that cannot be very hard to beat, especially given how similar the syntax is and how ugly Matlab is at statistics (from what I am reading in the comments here, vectorization still carries a performance penalty, though, which is a real pity).

Just curious -- what patterns bothered you the most with R btw?


> Just curious -- what patterns bothered you the most with R btw?

It's mostly the multitude of subtly different types and the ways you convert between them. I also remember strange things like lists having named attributes in addition to list members, which just seemed totally wrong and confusing to me.

I wish I could give you better specifics but it's been several years since I've done anything with R.


> The problem with most programming languages is they're designed by language geeks, who tend to worry about things that I don't much care for. Safety, type systems, homoiconicity, and so forth. I'm sure these things are great, but when I'm messing around with a new project for fun, my two concerns are 1) making it work and 2) making it fast. For me, code is like a car. It's a means to an end. The "expressiveness" of a piece of code is about as important to me as the "expressiveness" of a catalytic converter.

You want a fast car, but don't care much for having an aerodynamic design, hmmm..

EDIT: In retrospect I now think he means he wants to be able to create the project fast, and this is not about performance.


This is a confusing rebuttal, because cars have an aerodynamic design primarily for performance reasons, and Evan is very clear in this article that his primary concern is performance. I think you've misread him.


And yet he doesn't care about type systems, which are largely implemented to help with optimization, you see.


That's arguable - I hear more people talking about how types reduce bugs in code than about how they improve performance. Besides, that's one of the three examples he mentioned, and the other two are not features involved in said optimisation.


Well, it certainly optimises my ability to get shit done if I don't waste it on subtle type-conversion debugging.


I don't know if this an argument for or against static typing and type safety but there are two sides to this coin.

In dynamic programming languages it is definitely easy to get shit done, at least initially.

However, as a project progresses to the point where a lot of refactoring takes place and there's more than a handful of people working on it, a good statically typed language will make sure that shit keeps getting done and things won't break due to a subtle typing error. Things will be caught by the compiler before you even run the test suite.


I agree.

In other words, automatic program-correctness checking is a crucial feature as a project grows larger. And type checking is actually one of the simplest, easiest, and fastest ways to achieve that.

But most dynamic languages don't provide type checking. Really sad.

Adding type annotations to a dynamic language is kind of the best mix of the two worlds, and Julia seems to push this approach even further - deriving static types for the JIT from the type annotations.


That's a strong vs. weak issue, not a static vs dynamic issue.

  irb(main):001:0> 1 + "hello"
  TypeError: String can't be coerced into Fixnum
          from (irb):1:in `+'
          from (irb):1
          from /usr/bin/irb:12:in `<main>'


Static typing prevents type errors from propagating, which is particularly important when using generic functions. In Ruby you can have a type error four functions back pass silently until you do something ungeneric with it, which makes debugging harder than it should be.


Hah! Agreed.


Really? My impression is that it's largely correctness.


There are plenty of optimizations that are only available when you know the types of the objects you're dealing with (mostly relating to aliasing). See:

http://scholarworks.umass.edu/cgi/viewcontent.cgi?article=10...


Virtually every optimization even dynamic languages make is about narrowing down exactly what type something is and which method is called at runtime. At that point you can do things like inlining and other high-powered optimizations.


But that allows for smarter optimizations.


Of course it does, and I never said it didn't. I just said that my impression is that the primary motivation of language people is correctness. However, I have published several papers in the area of software verification, so perhaps I am biased.


He's saying that he wants both: fast design and fast performance.

I think what he means is that 'safety, type systems, homoiconicity...' may or may not be important, but he's more worried about the end result.

So if other people want to work on those things, good for them. He's going to be working on his own stuff.


This is about the ability to complete a project fast, which is typically about both the convenience of fast prototyping and performance (i.e. you don't want to wait days for results to be computed before changing something in the code, and you want that change to be easy).

And probably few buyers who want fast cars care about aerodynamic design per se -- they care about speed; sure, if better aerodynamics is what's necessary then so be it, but they would also prefer a fast car with poor aerodynamics and a huge engine to the very aerodynamic and fuel efficient, but slow one.


As is usually the case with language design, performance and optimization are often on the opposite side of the scale from code learn-ability and usability. More than likely, the "huge engine" would be some other burden the language has that he doesn't want in exchange for faster prototyping.


I'd actually say both Julia and Matlab are often quite decent on learnability, usability (at least for their purpose, especially as Julia's libraries develop), and performance. Certainly on par with other newly designed languages. Sure, you can go faster with Fortran, but you can do that when you see that it is really needed...


It's more like he wants a fast car but doesn't want to deal with servicing it. So it might fall apart in six months, but that's something he is OK with.


Without presuming to speak for Evan --

Julia's target audience is technical computing, and a large fraction of software in this space is built to solve a particular problem that only might matter for 6 months or a year. You might be trying to simulate the behavior of an experiment you just designed, for example, or trying to analyze a very specific property of a data set. These codes are often very tightly coupled to the scientific problem, and are only ever used in the context of a particular short-lived project. You do the experiments, write the paper, and move on with your life.

To be clear, I don't think Julia itself encourages this pattern any more or less than another language. But it's a very common pattern for scientists, so they often don't care about long-term maintainability.

Granted, this sometimes comes back to bite them later, if they discover the old code is good for a newer experiment, or they need to go back and re-validate results. But this doesn't always happen, and it's not like they're running a live service with customers -- when they finish a given paper, it's actually not unlikely that no one will ever need to use that software again. It's not easy to argue that they should care about maintainability when there's a decent chance this is one-off code.


Just yesterday I decided to start seriously developing in Julia. High-level languages are a bottleneck for computational biology. We need to be able to write things fast, and have them run fast. So far no language really does this. But Julia looks like the one.

I'm going to put together a BioJulia team if anyone is interested in playing.


You might also look at biogo for inspiration https://code.google.com/p/biogo/


I had BioPython, BioRuby, BioPerl and BioJava on the list - hadn't thought of BioGo! Thanks.


I am starting a Ph.D. in evolutionary biology in May. Julia looks like a pragmatic solution to a lot of the woes in computation these days. If you need folks to do things, I would be happy to assist. I have extensive experience in C and Python. I hack in 20 other languages too, but have not yet completed any serious projects in Julia.


Python is very popular. I need to explore Pandas / Numpy more, but I was under the impression that they are closely linked to the underlying C arrays to provide high performance.

In my opinion the problem with computational biology is that most biologists are not keen to improve beyond a basic level of programming.


That is a problem for a large number of biologists who are working with bioinformatics, not for computational biologists, who tend to be computer scientists working in biology. I'll admit there are a good number of crap computational biologists as well, but that's not a reason to stop the rest of us from having good tools. In fact we should be trying to propagate tools that help them do what they are trying to do with minimal friction. Like Julia.

Numpy/pandas are good. But as evidenced by the Julia benchmarks, Numpy is comparatively slow. Also, most things that slow down computational bio have to do with much broader aspects of the language than linear algebra libs. Most successful standalone sequence analysis software is written in C or C++ for this reason.


I suppose I need to explain my position on the involvement of Julia in bioinformatics a little bit more than just a short "I want to play" statement. What follows is a bit of a rant that attempts to explain why things are bad and why we need to work to make them better.

I have been in the field of computational biology for (practically) 3 years. In this time, I have seen my fair share of bad tools and silly approaches to very basic problems. A lot of the computer science folks may not realize it, but there is a lot of trouble of the basic software engineering sort in computational biology. There is a lot of old, unmaintained code, messy projects implemented in multiple languages, and (of course) bugs. It does not appear that anybody checks or maintains their code after publication - the projects often die after they appear in a journal once.

There are a number of reasons the situation is the way it is. One of the sadly obvious ones is that academics do not have the time or desire to maintain their code. Some of the projects would require full-time coders to be maintained - and that is indeed the case for some of the bigger and more popular tools. This means that some projects never take off or live up to their potential - for the simple lack of time. The wasted effort means that a lot of work is being re-done, and science in general stagnates because of it. There is no easy solution to this problem (other than centralizing the efforts somehow - but that is a question of community, not tools).

The issues that can be made better are the following (and I will start with the most obvious ones first):

1) We need a language that is both easy to write in and fast enough for production. Too many times, projects are written in multiple languages. I have myself partaken in a few of those. The high-level code is usually written in Python or Perl (I shiver at the thought), while the heavier numerical things are done in C or C++. This creates a rather large divide in terms of who does what - quite often folks only know a single high-level language - so the numerical implementation stays opaque, with only a single person knowing how it works. This means that projects of that sort quickly become unmaintainable. There is also a lot of glue code written - and God help you if you need to understand how perl-guts work. If there were a single sane implementation language for both the high-level and the numerical stuff, it would solve a lot of those problems (see the sketch after this list).

2) We need a fast language. Building on the previous argument - the reason for splitting is quite often performance. A fast language means we can use a single language for both layers - and we get the "Node.js" effect (I'm not sure whom to attribute this phrase to) - both front- and back-end stuff come together. This also means that you are not penalized for using complicated data structures in your numeric code - so one level of separation falls away automatically.

3) We need a language with a large number of capabilities. The Julia community is aiming to replicate a lot of the functionality of R. That means it is already possible to use Julia for, say, an undergraduate statistics course. There is absolutely no reason (other than the historical one, of course) why R is used by statisticians. It was written by statisticians for statisticians - and has a lot of nice features. However, that also means a lot of efficiency considerations have been missed. R does a whole lot of data copying - which is ridiculous for large data sets. For the growing crop of statisticians it will barely make a difference which language is used - I would go as far as to say that a lot of undergrads will not even notice the difference, but those who will, will thank us later.

4) We need a functional language. Or rather, we need a multi-paradigm language that has a strong functional basis. The advantages of the functional approach are too many to name here - and I am afraid this is already becoming incomprehensible. A lot of formal math and stats translates really easily into a functional mindset - and that is a great boon if you are trying to implement an algorithm out of a math paper. Also, Julia does not restrict you to think in a particular way - it is very adaptable to your thinking patterns.

These are just a few of the reasons I would give to support the case for a new language in the scientific community.


Just as a conclusion, I would like to point those interested to the currently available library implementations of the bio-related stuff (I might be missing other stuff out there):

1) BioSeq: https://github.com/diegozea/BioSeq.jl

2) FastaIO: https://github.com/carlobaldassi/FastaIO.jl

3) Phylogenetics: https://github.com/Ward9250/Phylogenetics.jl


We have a Julia and iJulia app on https://koding.com. It's going to be used by Harvard & MIT students soon. It's public and everyone can try it by simply logging in to Koding. The best part is you can easily try it online, without installing anything. Here is a screenshot of how it looks (iJulia and Julia inside Terminal):

http://d.pr/i/MsZt

The source of this app can be found here:

https://github.com/gokmen/julia.kdapp

I'm happy to answer any questions :)


You can also use Julia on the Sage Math Cloud. https://cloud.sagemath.com/

They don't have IJulia (yet) though.


I find the fact that I can't even find out what koding.com is when using IE 9 pretty obnoxious. Love it or not, many people are forced to use older browsers. Sending them to https://koding.com/unsupported.html without any explanation at all is a great way to make people not want to bother finding out what you're offering.


But you do know and understand why they're not interested in supporting IE, right? And you'd feel the same way in their shoes, yes?


This sounds like premature optimization to me.

Maybe it's just me, but in the apps I write in dynamic languages, the bottleneck is rarely in the language. It's usually in some IO.

EDIT: some sentence in the article gave me the impression he was using this for non-math-heavy stuff which is why I said this


It is "just you"[1] in this case - the people most enthusiastic about Julia are the people doing statistics, simulations and similar tasks with massive datasets requiring heavy number crunching. I expect fast code and cache issues are the more likely bottlenecks in their situations.

[1] Of course, you might still represent the majority of people using dynamic languages, but you get my point.


Julia's primary purpose is as a scientific language, which means lots of number-crunching on large data sets, complex computations, etc. IO is unlikely to be the bottleneck in these situations.


I'm not sure I agree with the second sentence. Any kind of crunching on large data sets has I/O bottlenecks as one of its main issues. When you're crunching on a terabyte of data, pretty much the most important thing is your precise strategy for handling that terabyte of data. I'll agree the asm can be interesting still there in some cases, though, if you think of memory-bandwidth-and-latency issues as part of I/O. There are definitely scientific simulations where compute throughput is the only real issue, but I think of them as a bit different kind of setting than big-data processing (stuff like solving complex sets of equations, which has low I/O but high computational requirements).


Yeah, I shouldn't have conflated big-data issues with numerical computing. Good catch.

Mostly I was responding to the idea that there is a relationship between dynamic languages and IO bottlenecks. This is certainly often the case in things like web development, where dynamic languages dominate, but under the hood, Julia has relatively little in common with Python/Ruby/JS/PHP/etc, in terms of how it's implemented or, especially, what it's intended to do.


> There are definitely scientific simulations where compute throughput is the only real issue, but I think of them as a bit different kind of setting than big-data processing (stuff like solving complex sets of equations, which has low I/O but high computational requirements).

^^ This is the use case for Julia.


In production code paths yes, but I have to admit that the 3 seconds it takes a bare Rails environment to start up has over time made me really pine for a Go or Haskell project. Rapid prototyping is great, but responsiveness at the command line also is another form of rapidity that also serves developer productivity in appealing ways (auto testing harnesses for instance).


You're also not writing real-time apps or scripting games.

There are a lot of reasons to want speed in a dynamic language.


What on earth kind of programs are you writing where the dynamic language is blazingly fast but you have a bunch of IO bottlenecks? Are you writing a database driver in Ruby or something?



I'm pretty confident that in 55 more years we'll be at the point where the state of the art in programming language will be to simply rediscover Common Lisp.


Yeah, it's funny that the author has a dig at Lisp and then is amazed by something Common Lisp has had for eons.


> my two concerns are 1) making it work and 2) making it fast.

What about maintainability? "Code as if the next guy to maintain your code is a homicidal maniac who knows where you live." -Kathy Sierra and Bert Bates

In my experience, making something work and making it (relatively) faster is easy. Making it easy to read is hard.


Seems like that's exactly what he doesn't care about. If you're prototyping or writing a lot of one off operations (for data analysis, maybe) then maintainability is less important.


IME, you only know that with hindsight. I have made one off programs I never needed to look at again. I've also made what I thought were one off programs that I needed to maintain for a while.

Even in a prototype, you may need to rework a particular piece of code multiple times before it works correctly. Even with a prototype, you may need to use it as a reference for your official version. Even with a prototype, you may end up having to use that as the official version (usually not by my choice).

Also, caring about speed 2nd is shocking to me (but maybe I just come from a different world). What if it isn't fast enough on your first attempt? Won't you wish your code was maintainable so you could change it to be faster?


> What if it isn't fast enough on your first attempt? Won't you wish your code was maintainable so you could change it to be faster?

From my experience, this isn't the case at all. A lot of the time my first attempt is in Python. The first attempt is really more of a prototype or a proof of concept. If the code works and I want to productize it, the code needs to be sped up. I have (at least) 2 choices: (1) make the Python code as fast as possible or (2) rewrite the whole thing in C/CUDA. If I take option 1, performance gains will be marginal and I still probably won't be happy with the performance of the software. Option 2 might take a bit longer, but at least I'll get something performant out. As I'm just throwing out and rewriting the first attempt, I don't actually care at all if it was maintainable code. I don't even care if the ideas in it were well explained/commented, because they're all my ideas and they're still fresh in my mind and I'm just going to rewrite the code and then document/clean up the fast version.

The appeal of Julia is that I no longer have to do this rewrite to make my code fast. Furthermore, if I don't have to do this rewrite, it is actually in my interest to make my first version of the code be maintainable and well documented.


Fair enough. But you ignored/changed the foundation my reply was built on. I'm replying to TFA saying "speed is second" and maintainability is... not considered. You're talking about "correctness being first" and speed/maintainability being not considered.


Honestly, all these languages, even Matlab, are quite decent at maintainability at the scale at which they are typically used. Almost anything would be decent at that scale. And speed matters, but you can always rewrite a particularly crucial piece in Fortran once you know it all works.


Some people build software as a product, for example SaaS companies, software vendors, or enterprise systems programmers. Others build software to automate difficult tasks or to interact with data. I think the first group cares deeply (or at least should) about maintainability; the second group just wants it to work and work fast. If it's easy to update and fix later, that's a bonus but not the main point. I fall into the first class of developer, but I can understand the second's point of view.


Also a lot of scientific computing things are very much it works or it doesn't, and once it does it is a reification of some fundamental mathematical algorithm: a black box that should never need to be opened again.


Which is terrible science. Job #1 of good science is reproducibility. Crapping out black boxes and claiming "I've proven my theory" in a way no one else can analyse or reproduce undermines the fundamentals of the scientific method.


I would argue that having two people independently crap out black boxes and comparing them is far more scientific than having one, open box that's never reproduced.

A former co-worker of mine was having trouble understanding the results of her experiment. The simulation software she was using had been the gold-standard implementation for over a decade. The code was clear, well documented, and well engineered. However, my co-worker decided to re-invent the wheel and write her own. The results of her code exactly matched the results of her experiment. Thus, she designed a new experiment and predicted the results with the standard code and her own. After performing that experiment, her simulation was vindicated. It eventually came out that the standard code made assumptions that were invalid in a huge portion of the phase space.

It's important, as a scientist, to be able to perform the same experiment twice and get the same result. However, it's far more important to perform two different experiments and get the same result. Measuring my body temperature a hundred times with the same thermometer isn't nearly as useful as measuring it twice with two different thermometers. Having one piece of code that runs on a hundred different computers, giving the same result every time, isn't as useful as having two different, independent code bases.

I do my best to make my code maintainable. I have everything up on github. I'm constantly trying to improve the documentation. However, if my code is still being used ten years from now, we have failed as scientists. What should happen is that a new code base should be written that does the same things that my code claims to do. If we get the same results, then great. If we don't, then we find out why.

But that's not happening. There are no plans for an independent re-interpretation. Everyone keeps using my code, because it's clear and it "works". If my code were less maintainable, that re-implementation would eventually occur and they would be able to check my results. Only then would we truly know if my code works or if it just "works". I'm not going to deliberately make my code less maintainable, but I'd understand the reasoning behind it.


I'm referring to cases where you are, say, porting or adapting a well-understood algorithm. Once the algorithm is at parity in terms of inputs and outputs, the implementation details are relatively irrelevant from that point forward.


Often, especially with languages in the statistics, maths and scientific computing fields, you're only writing code for yourself and maybe 1 or 2 others. Everything you write is throwaway, or just functions that can be called and maintainability takes a second seat to ease of use for non-programmers writing code.


I think the idea is you can hack together your prototype in Julia, and then instead of re-writing it in C, you can either rewrite or hopefully just refactor your existing code into something presentable.


I've tried Julia out a few times and been very impressed. From what I've seen it really does a great job of bridging the gap between easy-to-use and high-performance. It kind of seems like D in that way. I can definitely see lots of situations where a language like this is desirable.

I'm in Chicago (and a U of C grad!). I might come to the meetup if I can.


I'm also in Chicago and probably going to the meetup. I haven't tried Julia, but I'd be interested in trying it with the help of experts.


I've used Julia for a couple of projects and it's amazing. I seriously believe that Julia is better - in several ways - than all of the widely used dynamic languages like Python, Ruby, Clojure, Octave or Lua. It's a brilliantly designed language. There are so many things to like about it.


Wait a minute! Can you embed Julia into a C program like Lua? Can it interface with complex C types cleanly?? This might be the scripting language I've been looking for in my side project!


Wow. Okay, yes. It can be embedded[1]. It can call C code[2].

Julia may have just saved my project (which was dying because it needed a good scripting language that was fast)!

[1]: http://docs.julialang.org/en/latest/manual/embedding/

[2]: http://docs.julialang.org/en/latest/manual/calling-c-and-for...


And embedding (and its documentation) will likely get much better very soon.

https://github.com/JuliaLang/julia/pull/4997


One big pain point so far is that it won't be easy to actually embed Julia into my app. So users of my app will have to install Julia (probably via homebrew) before they can script the app with it.


I foresee Julia overtaking PyObjC, RubyCocoa, Nu, and MacRuby as the scripting language of Mac apps. This looks incredibly perfect. You don't lose performance, so you can even do the hard stuff in Julia. Which makes bridging much less painful. Hoorah!


I see the Julia home page lists multiple dispatch as one of its benefits. Since my only real exposure to multiple dispatch was when I inherited some CLOS code where it was used to create a nightmare of spaghetti, I'm wondering if any Julia fans here would care to elaborate on how they've used multiple dispatch for Good™ instead of Evil™


Multiple dispatch lets you make math operators work like they do in math. That means you can use `+` the same way on ints, floats, matrices, and your own self-defined numeric type. If `x` is a variable of your new numeric type, OO languages make it easy to make `x + 5` work, but `5 + x` is super hard. Multiple dispatch makes both cases (equally) easy. This was, as I understand it, the major reason that Julia uses multiple dispatch.

Multiple dispatch can make interfaces simpler: you can easily offer several "versions" of a function by changing which arguments they take, and you can define those functions where it makes sense, even if those places are spread across multiple modules or packages. Julia provides great tools (functions) that make methods discoverable, help you understand which method you're calling, and help you find the definition of methods.

Looking at some Julia code (the base library or major packages) might give you a better idea of how Julia uses multiple dispatch.


Totally makes sense. Thanks for the info. Will also dig around in the code.


When I read the opening paragraph, I immediately thought of the author as a Blub programmer [1].

"The problem with most programming languages is they're designed by language geeks, who tend to worry about things that I don't much care for. Safety, type systems, homoiconicity, and so forth. I'm sure these things are great..."

Yes, those things are great. They ultimately aid in helping the programmer tackle the inevitable complexity that arises when building systems in a maintainable way.

[1]http://www.paulgraham.com/avg.html


You're assuming the author wants to use Julia to build complex systems. Much of scientific computing has no need to do so, but is more concerned with discovering new knowledge and testing new ideas. Once the knowledge is gained, products may be built around that knowledge or new inquiries may be launched from it, but scientific computing usually uses programming languages as research tools rather than development tools.


This. The amount of code I write that never gets touched again after a paper goes to press is staggering.


mini ASK HN: would there be any interest in supporting Julia in Visual Studio? (as a free/oss plugin).

I lead the Python Tools for Visual Studio project at MSFT and would be curious if there is interest.

As a side note, if you use Python & require Python/C++ debugging, PTVS now supports it: http://www.youtube.com/watch?v=wvJaKQ94lBY#t=10


Are you using the IPython protocol for communication with Python? If so, extending it to Julia should be fairly straightforward.


We are, for the integrated IPython REPL. But my question was more in terms of IntelliSense, debugging, profiling, mixed Julia/C++ debugging, etc. - i.e., a fully integrated experience in VS.


The fully integrated experience you mention would do a lot for the Julia community in terms of gaining users tied to the GUI elements of MATLAB and VS for other languages. Integrated debugging and inspection in particular is a long-requested feature that has yet to see much attention.

Integrating with Pkg as Julia Studio does would be another important feature, as well as providing some sort of integrated plotting/graphics widget (a backend canvas along with plot navigation and image export, ideally supporting more than one of Julia's plotting backends).

I would certainly contribute to an alpha- or beta-testing effort :)


That would be fantastic, and I'd love to see that.


That would be actually quite awesome.


I think that would significantly increase the odds that my company moves from MATLAB (as one of the big arguments against is the lack of a good IDE), so yes, I would be incredibly interested in Julia support.


That would be amazing!


Does Julia have an AOT compiler which produces a binary that can be linked into a C program? I am asking because I have to consider availability on iOS - a platform that prohibits JIT.


Alas, last time I checked this was still on the todo list. With sufficient annotations and type inference there is no reason this would not be possible, but the way it sounded, it would take rewriting or creating large chunks of core code and algorithms.


There's a lot to love in Julia, but my biggest nitpick is the 1-based array index. I can see where it comes from, but it's not something I can praise. I use R on a daily basis, where the aim is mostly interactive analysis, and still I cannot see any reason to use 1-based indexes. For a language that is instead mostly oriented to programming, I would not have gone for the "familiarity" argument.


I'm guessing whoever was responsible for that decision was either a hardcore FORTRAN or MATLAB user, since both of those have 1-based indices. Or by a similar token, they could have chosen that because they expect that many of their users might be coming from either of those languages. I guess you'd get used to it, but I agree it's a big drawback.


I love Julia. Coming from the Ruby world, it was very easy to get into.

It was easy to see how useful and expressive the language is by just doing a few Project Euler problems.


The interesting thing is that what excites me about Julia is that it is clearly a scientific computing language designed by people who are language geeks. The feature set seems very clean and well thought out to me.


I don't really see the need for the author to cast himself as a "cowboy" coder and point out how cowboys ignore all those valuable insights and enlightenments of language designers.

Julia is a kind-of-fine language that is designed to appeal to Matlab users first of all by its syntactical looks, just like JavaScript was designed to appeal to C and Java users by imitating their look.

Under the hood, Julia is quite a smart development, not only in terms of code generation, but also in terms of datatypes and object models.

Multiple dispatch is something that more or less only Lisps (and Dylan) typically offer natively. When working with types (especially in dynamically, strongly typed languages) it is often something I miss in other languages. Consider Python:

    if isinstance(x, Y):
        ...
    elif isinstance(x, Z):
        ...
In Julia, each branch of such an isinstance chain becomes its own method, dispatched on the argument's type. This feature alone shows that the authors of Julia are thoughtful, language-loving designers.

So I would like to leave the small scope of the article and look at the bigger picture: Julia and its competitors. There are actually quite a few on the market. A few domain-specific numerical libraries exist for C/C++/Fortran for scientific purposes (ROOT at CERN, etc.); they are more or less falling out of fashion. For a long time, Matlab has been dominant in some faculties for evaluating and working with data, processing signals and images. It is not by accident that Matlab was created as a convenient wrapper around Fortran libraries at the time. From a software developer's perspective, Matlab is for Cowboys.

Next to its high price (and the vendor lock-in forced upon college and university students, who are trained on Matlab when suitable open source alternatives exist), the most appalling thing about Matlab is how poor it is as a programming language. While it's easy to write small scripts, solve linear algebra problems and plot a few things, I have hardly ever seen well-organized Matlab code, and I suspect it is impossible. And while Matlab licenses cost heaps of money, support is not good, and upon a version change you have to spend considerable amounts of work getting around API changes.

The Matlab clones available (Octave) are generally unimpressive. I think this has to do with the big effort of copying Matlab and the need to develop the whole tool stack (parser, interpreter, libraries). Contributors are hard to find because Octave hardly offers any benefit over the original; like ReactOS with Windows, Octave can only react. I still value the effort of the Octave folks - they have done some great work!

Scientific Python has chosen a slightly different path. Taking the fairly uncontroversial programming language Python, the authors created an infrastructure of thematically separated modules. By eliminating the need to design and implement their own programming language, a lot of work could be spent on building useful libraries instead. Also, existing libraries were reusable (databases, XML, etc.), and Python is a really convenient programming language for both newbies and professional software developers. So with this pragmatic approach, the contributors have created one of the best environments for scientific software development, and it would be my suggestion for anyone at the moment who just wants to use one system.

What still amazes me: While working in an ipython notebook (http://ipython.org/notebook.html) on some numerical calculations, I can just pull up Sympy (http://sympy.org) and perform some symbolic computations (Fourier transforming some function analytically or taking the derivative of some other, etc.).

Oh, and have I told you about how Scipy can replace R for really cool statistical analyses?

The part where Julia kicks in now is that Matlab has a lot of market ground, especially with engineers who are not extraordinarily passionate about programming. For some people the burden of learning another syntax is just too big; they are not full-time programmers but spend their time more on acquiring data and using the results. I really hope that some of them who are not willing to switch to scientific Python can agree on switching to Julia.

Full disclosure: I have occasionally been forced to work with Matlab (so I do have some experience with it without being an expert) and it was not fun. This is one of the reasons I would like all scientists to have the chance of choosing a good environment that is suitable for them. If it's Matlab for some, so be it ;-) I have never looked back.


> "The part where Julia kicks in now is the point that Matlab has a lot of market ground, especially with engineers who are not extraordinarily passionate about programing. For some people the burden of learning another syntax is just too big, they are not full time programmers but spend their time more with acquiring data and using the results. I really hope that some of them who are not willing to switch to scientific python can agree on switching to Julia."

As I see it, this will be Julia's main market. Younger engineers (read: "non-CS engineering students", i.e. electrical, mechanical, civil, etc) may encounter Python in college and become proficient in it, but because of historical reasons most of their assignments require some combination of Matlab, C, or Fortran. Even in group projects where the students have more independence with their choice of tools, if only one person in the group knows Python, the group will probably default to one of the common tools. When time is a scarce resource and time spent learning Python doesn't show much promise of improving your class performance, most students will neglect it.

Julia, at first glance, looks very familiar to a practicing engineer or scientist who is experienced with Matlab or Octave. It's the sort of thing that you could teach yourself in a weekend, and teach others at work if need be. Not necessarily the low level cleverness of the language or some of the more advanced uses of it, but enough to Get Stuff Done(TM). And that's what matters to most technical types without a background in CS. They will appreciate elegance and safety when they see it, but they're not going to decide what tools to use based on those factors.


You cannot overstate how strongly old folks who grew up with Fortran refuse to accept array indices starting at 0 instead of 1.

Then, because professors demand it, colleges buy Matlab campus licenses and "encourage" their staff and students to use it and to incorporate it in teaching and research.

Sadly, when the student is no longer on campus, he/she cannot re-evaluate old data, so in the new job they then demand a Matlab license. It's the Matlab tax.


Btw, Fortran arrays start at 1 by default, but the lower bound can be specified by the user. For example,

    real :: x(-10:10)

is a real vector of 21 elements indexed from -10 to 10.


Interesting. I have so far only read some Fortran code, never really dived into it in more detail.


> The Matlab clones available (Octave) are generally unimpressive. I think this has to do with the big effort of copying Matlab and the need to develop the whole tool stack (parser, interpreter, libraries). Contributors are hard to find

Nah, we have no shortage of contributors:

http://hg.savannah.gnu.org/hgweb/octave/

http://hg.savannah.gnu.org/hgweb/octave/file/052cc933aea6/do...


love to see free software with a lot of contributors ;-)


> What still amazes me: While working in an ipython notebook (http://ipython.org/notebook.html) on some numerical calculations, I can just pull up Sympy (http://sympy.org) and perform some symbolic computations (Fourier transforming some function analytically or taking the derivative of some other, etc.).

You can certainly do that in Matlab (provided you have purchased the Symbolic Math Toolbox, of course).

> Oh, and have I told you about how Scipy can replace R for really cool statistical analyses?

Sure, it can for some things, but why?

My thing with Python is that it kinda loses to Matlab for non-statistical work (except perhaps for select fields, like network analysis or language processing), and to R/SAS/Stata (depending on type of job and personal preference) for statistical stuff. Of course all of these (other than R) are proprietary and not cheap, but most universities have all of them anyway, and businesses just buy what they need.

btw: did you first learn Matlab, or Python? In my experience, there is a tendency for people who start with Matlab to dislike Python, and for people who start with Python to dislike Matlab :) Probably has something to do with some basic things being just so slightly different, and therefore bothersome.

Also, the Matlab IDE these days is actually quite decent -- does Python have something similar?


Sure enough, there are areas where Matlab is without serious competitors. I think, however, that the majority of needs are covered in Python, although I know that I do not represent everyone's needs.

A quick Google search yielded http://networkx.github.io/ - I don't know how it compares, though.

I switched to scientific Python at the point where I had difficulty treating time series data with Matlab. I know there is a toolboxy thingy from MathWorks, but either it was not available or I did not find the documentation. Anyway, I quickly got started with Pandas. I had prior knowledge of Python and other mainstream programming languages.

Spyder is a Matlab-ish IDE with a variable explorer, etc. Some like the IPython Notebook, which I think is great for demonstrations and teaching but eventually does not scale when projects grow bigger.


Yeah, networkx is pretty good -- I actually meant that network analysis and text processing are the two areas where I'd be quite comfortable recommending Python over Matlab.

For time series data, I'd personally pick R though, or perhaps SAS if it's large enough -- at least if any statistical analysis is involved...

I need to check out Spyder.


R is a piece of software that I still need to check out.

I think Enthought offers commercial tooling and support, and also an IDE platform with tools for data storytelling, etc. I usually use vim.


R's design is somewhat like Perl's, in the sense that there are usually a lot of ways of doing anything. This includes time series, of course. That being said, the last time I did time series I used xts and was quite happy with it.


Continuum Analytics (http://continuum.io) also offers commercial tooling and support.


I grew up on Matlab; my entire PhD thesis work was all Matlab. I started using Python after leaving academia because it's much easier to put Python into production, and I've never looked back. There are only 3 benefits to Matlab: 1. Simulink, if you do that kind of stuff (I don't). 2. Some numerical algorithms in Matlab are more efficient, though some of the Python ones are more efficient too. 3. Matlab array syntax is a bit more concise.

Python destroys Matlab in all other regards. Once you've tried it for a while, you'll understand the value of a general-purpose programming language with advanced numerical capabilities.


The biggest strength of Matlab is the libraries, but you can very easily use those from Python via an integration library like mlabwrap, and still have all the benefits of Python.

And there's a Python IDE called Spyder which is similar to the Matlab IDE.


Not to be an asshole, something I have to preface a lot on here... but, uhh,

"Safety, type systems, homoiconicity, and so forth. I'm sure these things are great, but when I'm messing around with a new project for fun, my two concerns are 1) making it work and 2) making it fast."

Uhhh... Call me crazy, but wouldn't the "so forth" be what you care about if #2 is that important to you?


How good is the interactive plotting experience?


I haven't had the smoothest experiences getting plotting in general to work (it's getting progressively better). The plotting in IJulia using PyPlot (matplotlib wrapper) has been good for me.

The Julia plotting packages are Winston, Gadfly, and Gaston. You can find detailed discussions of which one to use on the julia-users mailing list.


Thank you. I can appreciate that it's difficult to implement good interactive plotting, especially across platforms. It's also very very important!


This is odd, as a much better post on Julia vs. R vs. MATLAB vs. Python, etc. has gotten little attention: http://slendermeans.org/language-wars.html


I'm betting on Scratch.

The benefits of automation are mostly denied to me because I haven't the time to learn Julia or to properly use the Python skills I already possess. I do, however, have the time to link and configure objects à la Scratch and Apple's Automator, or the first generation of what was once Allaire's ColdFusion. It's not just me, either. The demand for automation tools is pervasive in business and education, but the time and innate skills needed to program effectively belong to only a subset of the needy. Bring me a language that is truly a means to an end and take my money.


Is there reason to believe Julia is actually fast outside of microbenchmarks? Their strategy of aggressive specialization will always look good in microbenchmarks, where there's only one code path, but could blow up in a large codebase where you actually have to dispatch across multiple options. I've never seen a Julia benchmark on a big piece of code.


I've had some problems with Julia performance for a finite element method implementation (https://github.com/scharris/WGFEA), mostly I believe because of (1) garbage generated in loops causing big slowdowns and (2) slow calls of function closures. Functions defined at the top level which don't close over environment values are fast; closures, however, are quite slow, which is really painful for situations where closures are so useful, e.g. passing integrands to integration functions.

There is a GitHub issue about the closure slowdown, but I don't have it handy. Both problems can be worked around by writing in a lower-level style, e.g. using explicit loops acting on pre-allocated buffers, avoiding higher-order functions, etc. The pre-allocated buffers can be a lurking hazard, though (Rust avoids that danger in the same strategy with its safe, immutable "borrow" idea). I felt like these workarounds were giving up too much of the advantage of a high-level approach for my own tastes.

I have converted to Rust to avoid the garbage collection entirely, and I'm extremely pleased with the performance. It would be nice to have a REPL, though - I do miss that. And I do intend to stay involved with Julia; I'm sure the situation will improve.

Good high performance garbage collectors aren't easy (and they are easy to take for granted after being on the JVM for a while) - that's probably the biggest challenge for Julia as a high performance language, IMO.


How does Julia interface with C? Is it easy to interface Julia with C because all it does is compile the C with Clang/LLVM?


Julia does not compile the C code; it links against shared libraries. It helps that Julia types can model C structs easily.

Manual entry on calling C: http://docs.julialang.org/en/latest/manual/calling-c-and-for...

Blog post on passing Julia callback functions to C code: http://julialang.org/blog/2013/05/callback/


> the real benefit is being able to go from the first prototype all the way to balls-to-the-wall multi-core SIMD performance optimizations without ever leaving the Julia environment.

That sounds like someone who has not had to maintain any kind of software for more than 2 days.


I'm betting on Julia and Rust, really: Julia for scientific programming and Rust for systems programming.


I want to write audio VST plugins in this language! Somebody please make that easy. :-)


Based on your comments about Cowboys, you have obviously never been rode hard and put up wet.


Is there a Julia forum anywhere? Like with a hierarchy of topics and subtopics, and with hierarchical threads at the bottom level like HN? Optimally something that remembers what you've read.


Not a forum exactly, but julia-users and julia-dev are hosted on Google Groups which does threading and remembers what you've read. See the homepage for signup (julialang.org)


Thanks. A good variety of things there but I'm hoping someone puts together a phpBB (or similar) site such as Oculus Rift uses here: http://oculusrift.com/

Such an organized site where you could drill down to topics of interest through broader categories would really be helpful. It would be best, of course, if it was set up and accessed through julialang.org as something official.

I don't know how to do that or I would. It isn't reputed to be very difficult to set up a phpBB site and I'm sorta hoping some enthusiast who does know how picks up on it.


I like what I see so far at this page [1] and will watch closely to see whether Julia catches on.

One thing -- can we all agree that dictionary literals begin and end with '{}', that arrays are zero-indexed, and that an index into a unicode string is properly a character and not a byte? Or are we doomed to permute endlessly on details such as these? I wish new languages would set aside a large set of tempting innovations and just go with the flow on the smaller points.

[1] http://learnxinyminutes.com/docs/julia/


Well, in many ways, they are going with the flow. They're targeting mathematicians, and R/Matlab users… all of whom use 1-indexed arrays. And, really, the kinds of dictionaries you're used to are constructed with {} braces. The square braces hold more specifically-typed keys and objects. It's a very clever analogy to their typed/untyped arrays. And wonderful for performance.

    ["one"=> 1, "two"=> 2, "three"=> 3] # -> Dict{ASCIIString,Int64}
    {"one"=> 1, "two"=> 2, "three"=> 3} # -> Dict{Any,Any}


Ok, good to see, but what can I do with it that I can't with another language? -.-



I agree that Julia is great. But it's not there yet, either.


Figlet sighting. Font: big. :-)


> but it's poised to do for technical computing what Node.js is doing for web development

I stopped right there. Node.js has only a few great use cases where it shines and in the real world, the vast majority of shops have not switched to using it.


> Node.js has only a few great use cases where it shines and in the real world, the vast majority of shops have not switched to using it.

Of course not. 'Switching' is usually more pain than it's worth, especially if your previous solution works. New start-ups are likely the ones who will be using it, just as Rails took off in the start-up world.

Likewise, R and Python are going to continue to be in use in existing projects, and Julia is the potential future...


In my experience, most people are turned off by using JavaScript on the backend in web development. Your experience may be different, but I just don't think the analogy made in the original article is a very good one.


I was about to say, they ought to be turned off by JavaScript on the back end when there are so many other options. Then I remembered that people use PHP more often than not for back-end work.


They may be turned off, but in my experience setting up a Node.js-backed site is incredibly easy with the way the frameworks, libraries, and everything else are written. That alone will appeal to a lot of people. Front-end-only people could easily put together a Node.js-backed app in no time at all.

Or you could use something like CoffeeScript from front to back. It's a very easy ecosystem to get into.


Rails took off among startups because it made it so much more efficient to iterate on design, but node.js is nothing like that. I actually tried node.js for a while and came back to rails because it's simply not convenient enough. Like the guy below said, it's great for certain use cases but overall it doesn't bring much else to the table.


I'm not a Node.js guy, but aren't you comparing two things that aren't comparable? Would it be fairer to compare, say, Express and Rails?


There are all these "frameworks" on top of Node you can use to emulate Rails, of which Express is the most popular. But I am talking in terms of tools to use when I'm building a database-backed REST app. Node.js was hot, so I took some time building apps entirely in Node - some with Express, some with other "more advanced MVC" frameworks - but they're all mediocre at best compared to Rails. I am not comparing Node with Rails. I'm saying Node is not good for building web apps, because all you have is these inferior web development frameworks: either you write too much boilerplate code, or you learn half-baked frameworks that will never get anywhere near as much traction as Rails.


I'm not a Node.js user, and I've certainly gotten the feeling that it's rapidly proliferating and shaking things up. Given that it's less than 5 years old, I think that's pretty cool.


I think he might mean "shaking things up" (which Node certainly has done) rather than "taking over the world" (which, as you point out, it has not).


>I think he might mean "shaking things up" (which Node certainly has done)

How has it "shaken things up"? It is yet another irrelevant mess of crap a few dumb web monkeys use. There's one of those every other month.


All they're missing is a cool interaction mode like SLIME.


Looks strangely similar to Lua


Well, most of the syntax he shows here is just calling functions. A lot of dynamic languages have similar syntax for function calls. But you're right that Julia's syntax is superficially similar, with function definitions like

    function foo(bar, baz)
    end
and 1-based indexing of arrays (although Julia's use of that was to be similar to MATLAB).

