Consider the Nimrod Programming Language (geetduggal.wordpress.com)
70 points by geetduggal on Mar 12, 2014 | 70 comments



Nimrod is very interesting. As a former Delphi hand, Nimrod looks very much like a modern upgraded version of ObjectPascal, with some heavy influence from Python and possibly Ruby.

The language is actually pretty amazing — it's expressive and flexible (e.g., type inference lets you omit types in most places, to the point that much of your code ends up looking something like Python/Ruby/Lua), and it's just amazingly fast, but not at the expense of ease of use. In almost every way, Nimrod is the "speed of C/C++, ease of Ruby" replacement that we have been looking for. Go's performance has been too disappointing, while Rust is a wee bit too complex. Nimrod feels just right.
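
To give a flavour of that inference in practice (variable names made up):

    var count = 0                  # inferred as int
    let greeting = "hello"         # inferred as string
    let primes = @[2, 3, 5, 7]     # inferred as seq[int]

    for p in primes:
      count += p
    echo greeting, " ", count      # prints: hello 17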

As I wrote in a different thread, I and a colleague have been looking a little bit at using Nimrod as a game-logic language, as a replacement for Lua. Lua is pretty fast, but Nimrod is considerably faster; I'm personally also not a fan of Lua with its very weak table-based OO and rather idiosyncratic syntax, although I know it gets a lot of love on HN.

What makes Nimrod particularly suited to game development is that it's designed to integrate well with C. Nimrod compiles to C, and the generated C code provides the necessary bindings so that you can easily call Nimrod functions from C without an intermediate translation layer. In other words: If you write your core engine in C, scripting the game with Nimrod is (from what I can tell) as easy as with Lua. Very exciting.
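
To give a concrete sketch of what that looks like: you mark a Nimrod proc with the exportc pragma, and the generated C exposes it under the name you chose, plus a NimMain() initializer that the engine calls once at startup. The proc name, C symbol and build flags below are assumptions of mine, not from the article:

    # gamelogic.nim -- built as a library, e.g. `nimrod c --noMain --app:staticlib gamelogic.nim`
    proc onEnemySpawn*(x, y: cint): cint {.exportc: "gamelogic_on_enemy_spawn".} =
      ## callable from the C engine as `int gamelogic_on_enemy_spawn(int x, int y);`
      ## after the engine has called NimMain() once to set up the Nimrod runtime/GC
      result = x + y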


> Go's performance has been too disappointing

How so? Everything I've read online has been roses and unicorns in terms of its speed and build times.


Build times are fantastic. Performance is still quite a bit worse than Java, actually. It has its own compiler, and so it doesn't benefit from the huge amount of work put into optimization in GCC and LLVM. Definitely not roses and unicorns.


Go has MULTIPLE compilers, because Go is a language spec, not a compiler. Hence gccgo (which, for example, takes advantage of some of those GCC optimizations) can be used.

Additionally, in most benchmarks, Go does fairly well and is young and constantly improving: http://www.techempower.com/benchmarks/

You will also find Nimrod / Jester in those benchmarks, and can compare the performance for yourself.


Pedantry. When people talk about Go, they usually mean the official implementation.

gccgo has too many issues (lags behind release, requires gcc recompile, etc.) to be useful to most people.


I disagree. Most Go shops I have dealt with have at least looked into gccgo. We use gccgo in the wild (for all our image processing components), because it gave us a nice performance boost. With good testing, you can easily do A/B testing with gccgo to see where it makes sense for you in production.

We have to dig through 30TB of images -- so the performance benefits we got from gccgo were worth the trivial effort it took to implement.


What is "quite a bit worse"? Is that 2-3x slower? Or 20-30x slower? It's my understanding that it's the former.

And then there's memory consumption...


The former.


Go is pretty much competing with Java, not with C, C++, FORTRAN, and Ada. Nimrod is up there with the best of them. Go is also somewhat designed around reducing build times; Nimrod is not. However, C and C++ (especially C++) pretty much manage to be designed around increasing build times, so Nimrod is much faster to compile than they are.


What is too complex about Rust, in your view?


Pointer semantics. The syntax in general bugs me. It's too close to C++ for my taste -- too many braces and brackets and sigils.


Well, the pointer and reference distinction is there because you can't do safe memory management without it, short of a thread-safe garbage collector, which will hinder your ability to reach C/C++-level performance. But yes, if you're fine with the performance tradeoff, GC is certainly more convenient.

I wouldn't say the syntax is complex because it's close to C++, but it isn't to everyone's taste (and that's fine; lots of people prefer that syntax and there's room for different languages for different folks).


These days the only complaints I hear about Rust are pointer semantics and C++-like syntax. Neither will be fixed (design decision). I guess that means Rust has achieved some sort of optimality now. :)

More likely, people hate those two points so much that they forget to mention other problems. Which is a pity. Please tell us about the other problems.


I think the pointer semantics being unfamiliar is a reasonable complaint. But I think it's what you get from the pointers that makes it appealing: safe, deterministic, GC-free, thread-safe and race-free memory management. I think if the pointers bother you, you're really objecting to the "safe, GC-free" space of languages as a whole, not the Rust language itself -- as I still haven't seen any approach to memory safety without GC that doesn't use smart pointers and references.


You and I disagreed about Rust's complexity in the past. In the meanwhile, I've actually come to appreciate Rust for the reasons you outlined above. It really is the only new (non-research) language with modern features that also includes deterministic, safe, GC-free memory management, which fills a big hole. And it's a cool language, with a nice, functional feel, particularly with how the standard library is shaping up.

However, let's be clear that the pointer semantics do bring a great deal of complexity to the language.

That said, I'm very impressed that the managed pointer sigil has been eliminated and the feature moved to the stdlib. That and the fact that Rust's team has clearly pushed the message that owned pointers are the idiomatic way to deal with memory in Rust have combined to bring a lot of clarity to the language. Also, as stated above, I really like the direction that the standard library is taking. So congrats on a job well-done.

Edit: Let me also congratulate you on the Rust compiler error messages, which are almost invariably clear and actionable compared with other languages I'm playing with in this space.


The pointer semantics do bring a great deal of complexity to the language, but it's inherent complexity. Other languages with manual memory management also have these concepts, but they're not baked into the language semantics. So you still have to understand them, but the language gives you no help.


Seeing as how both Nimrod and D have invested into metaprogramming (and also "attempt a speed-elegance unification"), I think a comparison with D would be interesting. D doesn't have AST manipulation, though; the "parallel for" idiom is instead accomplished rather mundanely, using a custom iterator (http://dlang.org/phobos/std_parallelism.html#.TaskPool.paral...).
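
For what it's worth, Nimrod also ships a fairly mundane route to a parallel for: the `||` OpenMP iterator in the system module. A minimal sketch, assuming the program is built with `--passC:-fopenmp --passL:-fopenmp`:

    var squares: array[100, int]
    for i in 0 || 99:          # OpenMP "parallel for" over 0..99
      squares[i] = i * i       # iterations are independent, so no locking needed
    echo squares[99]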

Another similarity is uniform call syntax (which D calls uniform function call syntax, UFCS for short).
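
In Nimrod that looks like this (the double proc is just a made-up example):

    proc double(x: int): int = x * 2

    echo double(5)     # regular call syntax
    echo 5.double      # the same call, written method-style
    echo 5.double()    # also equivalent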


I'd say Nimrod feels like Python, while D feels more like what Java 9 or 10 will be.


Nimrod is indeed pretty similar to D, although I think it is a bit more uniform because of the way the object model works. Also, "death to mustache braces".


Thanks for the suggestion, CyberShadow!


What is Scala doing in the same list with Rust, Julia, Nimrod, and C++? You do realize there is a JVM preventing it from being a systems level language, right?

Also, you should need no more reason to avoid Scala than this: https://www.youtube.com/watch?v=TS1lpKBMkgg

If you're interested in Scala, learn Haskell. It's faster, its design is reinforced with proven mathematical concepts, and it doesn't have the worst syntax ever.


There are lots of reasons to choose Haskell over Scala, but a blanket statement that it is "faster" is idiotic.


But it is faster. It's both faster to compile and faster to run. I think it's a valid reason to prefer Haskell.


Except that it isn't faster at runtime (http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...).


If you want to prove your point, using totally unrealistic micro-"benchmarks" such as those is by far the worst way to go about it. You're better off giving no evidence whatsoever than referring to those.


Are there any more realistic benchmarks that we can refer to? Because I've read a lot about how Haskell supposedly runs faster because of its purity, but I've not seen a lot of evidence actually supporting that notion.


"[B]etter off giving no evidence" than pointing to measurements that provide a known context -- source code, implementation version, command lines, measurement scripts...

Nonsense.


Almost all the Haskell entries in the shootout are just using the FFI to call into C libraries. None of them are remotely close to idiomatic Haskell.


Everyone knows, of course, that these benchmarks don't tell the whole story. However, it's ludicrous to assert that providing no evidence is better than providing an array of microbenchmarks across a variety of different languages, machines and implementations. Further, the assertion that Haskell is "faster" (in some blanket sense) than Scala is completely evidence-free.


@profdemarco -- sure, and they're still not faster than their Scala counterparts.


Perhaps his intention is not to just compare system level languages.


Holy crap the music in that video's intro makes me feel like I'm about to watch the Hunger Games. And then it's just a guy talking.


The video you've linked makes a lot of general and philosophical statements but really not that many concrete examples... I stopped before the end because I got a bit fed up with general sentences like "complexity is the enemy". From what I've read about the astonishing number of Scala features, I am more than ready to believe him, but I would really need more examples...


He does go into detail. Keep watching.


He also goes on to say that no language meets his expectations today and despite that speech, he's still active in the Scala community.

https://github.com/paulp?tab=activity


Except that Paul Phillips has stated many times that he still programs routinely in Scala and, despite its flaws, considers it to currently be the best option. His arguments about what's wrong with Scala are compelling, but his continued use of it and desire to create his own collections library also speaks volumes.


It's a language that "attempts a speed-elegance unification", as the author specified before that list. It's not as if Julia is a systems language at all, but it also definitely belongs there.


> You do realize there is a JVM preventing it from being a systems level language, right?

You do realize there are quite a few commercial JVMs that generate native code AOT like any of the referenced languages, right?


I presume they still have a largish runtime attached, which complicates FFI, especially if callbacks are involved. I've found Java FFI to generally be a pain.

Also, I think a lot of organizations would never approve using these in production: too likely that behavior will differ from Oracle's or the OpenJDK VM, that the company won't be around in a few years, etc. And gcj is very out of date from what I gather.


I did not mention gcj; it has been a dead project since 2009.

I don't know, but I guess enterprises do trust Aicas, Aonix, IBM, Excelsior and quite a few others.


That's too terrifying to imagine.


I fail to see why.


Nimrod's syntax is quite similar to Scala (and Pascal!). Haskell can be really fast, but fast Haskell and easy/safe Haskell tend not to overlap as much as one would like.


> Nimrod's syntax is quite similar to Scala

What are you talking about!? Have you even looked at either?

Nimrod:

    for i in 1..100:
        if i mod 15 == 0:
            echo("FizzBuzz")
        elif i mod 3 == 0:
            echo("Fizz")
        elif i mod 5 == 0:
            echo("Buzz")
        else:
            echo(i)

Scala:

    for (x <- 1 to 100) println(
        (x % 3, x % 5) match {
            case (0, 0) => "FizzBuzz"
            case (0, _) => "Fizz"
            case (_, 0) => "Buzz"
            case _      => x
        })

In Nimrod you'll notice a lot fewer parens, curly braces, and '=>' arrows.

> fast Haskell and easy/safe Haskell tend not to overlap as much as one would like

Not even remotely true. Where did you even get that idea?


It really depends on what part of Nimrod and Scala you look at. The similarities are perhaps more apparent when looking at the function declaration syntax, generics and/or the variable declaration syntax.
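
For instance, the declaration forms line up fairly closely; a made-up sketch, with rough Scala equivalents shown only as comments:

    # Scala:  def add[T](a: T, b: T): T = ...
    proc add[T](a, b: T): T = a + b

    # Scala:  var count: Int = 0
    var count: int = 0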


"fast Haskell and easy/safe Haskell tend not to overlap"

I'm sorry, but the only people I've heard say this are those who are slightly open-minded about Haskell but truly believe it usually can't be remotely as efficient as imperative alternatives. None of the people I've heard make these statements have had even a few months' experience writing Haskell.

If you can qualify this, I'd be pretty interested. Or if you've had an experience with Haskell that led you to believe this, I'd like to see if there is another solution.

I'm very interested in these limitations people always talk about with Haskell, but none of them have held up so far. I've been evaluating Haskell for a while and am very interested in testing any limitations you've faced.


Also, Julia has no business as a systems language either. I think it's a more general language comparison.


If your plan was to make Haskell users look like douches, you are doing a pretty good job around here.


I think the reason I don't like Nimrod is the same as the reason I don't like C++(11): it has too many features. Rust's (relative) simplicity and memory safety make it much more attractive to me.


I like Nimrod because it has a sane syntax. I can write a function declaration in one line of Nimrod that would be 6 lines of C++. I also like Nimrod's focus on static dispatch and easy-to-use generics.
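
For example (a made-up sketch, not from any real codebase):

    # one line of Nimrod; the C++ version needs a template<...> header, braces and a return
    proc largest[T](a, b: T): T = (if a > b: a else: b)

    echo largest(3, 7)         # static dispatch, instantiated for int
    echo largest("ab", "cd")   # instantiated for string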

Rust makes me uneasy because the whole lifetime tracking adds a lot of complexity, and I fear it may make some programs harder to write or some libraries harder to use. Also, I think that garbage collection, if fast (which Nimrod's very much is), is the right way to go in some situations. I don't actually want to go and spend time freeing a bunch of strings right after I use them.


> Also, I think that garbage collection, if fast (which Nimrod's very much is), is the right way to go in some situations.

Definitely in some situations garbage collection is the right thing. But Nimrod's garbage collector is not thread safe...

> I don't actually want to go and spend time freeing a bunch of strings right after I use them.

That isn't how Rust works. The compiler automatically frees things via RAII.


RAII is sorta by definition immediate. Objects are freed right when they exit scope, which can be annoying for large tree structures.

Yeah, the GC not being thread safe can be annoying. CSP works OK, and if you need more performance a thread-safe GC may not be fast enough anyway. If Rust can pull off ownership tracking between threads in an easy-to-use way, then I would be super excited.


Well, you don't have to free in your destructor eagerly: you could incrementally release if you wanted to, and still use RAII. But you're right of course that RAII makes it simplest to eagerly deallocate.

Eager deallocation is usually what you want anyway, because of its cache effects and because you can just release the memory back to the free list (or the OS) and forget about it. If you repeatedly allocate and deallocate a similarly sized block, for example, eager deallocation means that your block will stay in L1 if it fits.


Um, what are you talking about?

There's a separate heap per thread and data sharing among threads isn't allowed. It's perfectly thread safe.


The language doesn't enforce that GC'd pointers don't pass between threads. The GC can silently free stuff you didn't want freed if you pass objects between threads. A thread-safe GC wouldn't do that.


Yes, but in practice practically everything in Nimrod is a value, not a ref, so GC'd pointers are an extremely rare duck.
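
A quick sketch of that distinction (type names made up):

    type
      Vec2 = object         # value type: copied on assignment, no GC involvement
        x, y: float
      Node = ref object     # ref type: heap-allocated and traced by the GC
        value: int
        next: Node

    var a = Vec2(x: 1.0, y: 2.0)
    var b = a                 # an independent copy
    b.x = 9.0
    echo a.x                  # still 1.0

    let n = Node(value: 1)    # only refs like this one are the GC's business
    echo n.value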


Ultimately though, if you don't have GC, you _must_ deal with lifetimes, whether or not the compiler helps you with them.


Well yes, this is true. But Nimrod has (very fast) GC. And it also has destructors and RAII, so if you really, really need unique_ptr style memory management you can do that. Rust has lifetime tracking, but I would be interested to know how it performs against Nimrod's GC when the ownership gets really tricky.


I think one issue is that people have different opinions on whether "ownership gets really tricky" is a common case or an edge case. Many people with unique_ptr experience seem to assume that Rust's owned pointer is more of the same and not really applicable when ownership is even slightly complex, but this is not the case. Rust's owned pointer is vastly more powerful than unique_ptr, and in my experience (and in the experience of the Rust compiler and the Servo browser), "ownership gets really tricky" is an edge case.


> Rust has lifetime tracking but I would be interested to know how it performs against Nimrod's GC when the ownership gets really tricky.

You can use GC or reference counting in Rust — the same as Nimrod uses. (You could use DRC like Nimrod does in Rust if you wanted, but it's not my preferred form of memory management due to the cost of making it thread safe.)


I think Rust is very interesting, but a language with a "periodic table of types" is definitely not one of the simpler languages out there. Just like the zero-cost abstractions of C++, the memory safety in Rust comes at a price of increased complexity.


I'd assert that the "periodic table of types"[^1] radically overstates the complexity of the Rust type system, largely because of the rampant misuse of the "periodic table" as a means of presenting groups of things. A "periodic table" of (for example) food items or Perl operators tends to have a bunch of distinct elements in its cells arranged by loose groupings[^2], whereas the Rust periodic table (like the periodic table of elements, but unlike other pop-culture periodic tables) has cells whose contents are for the most part[^3] a function of their row and column.

Tabular nomenclature aside: Rust has values and two kinds of pointers, and a handful of built-in types, and you can have pointers to those types. While the semantics of the pointers are unusual, I don't think that makes it that much more complicated. It should also be noted that many of the proposed language changes make said table even more regular (e.g. replacing the [T] syntactic sugar for vectors with a more conventional Vec<T>.)

Rust is in some ways complicated, and I'll grant that a beginner to the language will likely curse the borrow-checker more than a few times. But I don't think Rust is nearly as complicated as it is sometimes reputed to be.

[^1]: http://cosmic.mearie.org/2014/01/periodic-table-of-rust-type...

[^2]: http://www.ozonehouse.com/mark/periodic/ is an egregious example.

[^3]: There are exceptions (e.g. the closure types) but they are few and easily memorized.


As someone without experience with linear types who learned Rust in the last week while porting quickcheck, I think my experience supports your claim here. In the first few days I was cursing the borrow-checker in every way imaginable. I was comfortable with everything except the pointer types and their borrowing semantics.

After a few days, the borrowing logic really started to click and I was fighting much less with the compiler. Admittedly, I still have some fogginess about some areas (particularly the borrowing/sending semantics of closures), but I understand that there's still some work left to be done on that before 1.0. But the point I want to stress here is that once the borrowing logic clicked, Rust became a reasonably simple language. Which is really important to me.

But, I now get to say that I just wrote quickcheck without manually freeing memory and without the use of a garbage collector. That's pretty damn cool if I say so myself.


Hey I think I was just playing with your quickcheck library and gave it a star earlier today! Small world. Nice project, I'm looking forward to using it for stuff :)


Ah great! Thanks :-) Feedback/criticism is appreciated. Still kind of learning the ropes of the Rust ecosystem.


Nimrod is pretty awesome for metaprogramming. If you can dig around on the Nimrod website and find docs about the html-producing language (same general goal as haml), that's a decent quick glimpse into the possibilities.
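
The html macro itself is too long to paste here, but even plain templates give a taste of the style; a small sketch adapted from the tutorial's withFile example:

    template withFile(f, fn, mode, body: untyped) =
      # expands at compile time into the usual open/try/finally boilerplate
      var f: File
      if open(f, fn, mode):
        try:
          body
        finally:
          close(f)
      else:
        quit("cannot open: " & fn)

    withFile(txt, "greeting.txt", fmWrite):
      txt.writeLine("hello from a template")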


I must confess I got rather distracted by the number of spaces between sentences; I found everything from one space to six.


You have successfully spawned my OCD thread. I have attempted to fix a number of them, but it's more difficult in Wordpress.com land than I thought it would be.


I have used Nimrod daily for several months now, and I fully agree. It is actually one of the most productive languages that I have encountered so far.

Perhaps you should add two more things to your feature list:

1.) Native Perl syntax

2.) C imports for seamless use of C functions:

    # assuming printf has been imported from C along these lines:
    proc printf(formatstr: cstring) {.importc, varargs, header: "<stdio.h>".}

    const txt = "x"
    const x = 123

    printf("%s is %d\n", txt, x)   # or:
    printf("%s is %s\n", txt, $x)


I have used quite a few different languages and systems over the years, from SQL to C/C++ to OCaml, C#, D and more recently JavaScript and then CoffeeScript/ToffeeScript. And some others I'd better not mention. The last few years I have mostly been enjoying dynamic languages and the fact that I don't have to declare types.

I have been doing programming mainly with ToffeeScript in Node.js recently. I was worried that I might find dealing with some types and an actual compiler for a static language to be burdensome.

With Nimrod I have a syntax that's even cleaner than ToffeeScript and Python, which includes type inference to make type declarations minimal, and I actually find it easier to program with the type checking at compile time. Also my code is compiled to C and then to native, so it is very fast.

And it has a very powerful standard library.

So what I have found is that not only does my Nimrod code execute vastly faster with much less memory than Node.js code, it builds much more quickly, and it is easier to read and maintain the code. One aspect of this is the fact that you don't have to use namespace qualifiers when you use module calls. Initially I was worried about that, since it is such a significant characteristic of Node.js code and point of pride for some. However, I think Nimrod has proven that this is the way to go. I can easily create my own DSLs with modules and extend the base programming system. This is something I tried to do with CoffeeScript because DSLs are powerful, but it was impossible to do it elegantly.
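
A sketch of what I mean about module calls (the vec module is made up):

    # vec.nim -- a made-up module
    import math

    type
      Vec2* = object
        x*, y*: float

    proc length*(v: Vec2): float = sqrt(v.x * v.x + v.y * v.y)

    # main.nim
    import vec

    let v = Vec2(x: 3.0, y: 4.0)
    echo length(v)    # no vec.length qualifier needed; v.length works too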

I have not yet attempted to learn the metaprogramming features, but from what I can see, few, if any, other languages with such high performance and clean syntax can compare with Nimrod in terms of metaprogramming.

I am quite glad that I found out about Nimrod before I got too caught up in Go or Rust. The syntax is much cleaner than Go, and the performance is better. The syntax is _much_ much cleaner than Rust, and overall it's a much more practical language.

What I am doing now, and plan to continue doing as much as possible, is moving my web programming into Nimrod code. Basically I am doing this because I need less memory usage in an agent I am going to install for a service. I briefly considered using C or something like that, quickly decided to take advantage of a newer language, started looking at Go, briefly got excited about Rust, saw Nimrod and fell in love. This is the language that is going to make it most practical to achieve my goals, as well as allow me to stay engaged by learning its more in-depth features as time goes on.

The only reason that Nimrod doesn't have instant uptake and popularity is that people don't evaluate things logically. Most people aren't capable of it, but even among the ones that are, peer pressure generally takes over. Humans are herd animals. They follow the crowd. If most people drive cars four times as large as they need to work every day, when they could just sign on to the internet and save gas, time, and the planet, then most people will keep doing that, no matter how horrible it is. If most people choose a particular programming language even though it is inferior, then most other people are going to do that as well.

But you can still be like the smart guy who was driving the small, efficient, fast electric vehicle before electric vehicles were cool.



