We just switched from Rust to Nim for a very large proprietary project I've been involved with, after rejecting proofs of concept in Go and Erlang earlier in the process. This despite the fact that we had formally decided on Rust, had tons of code developed in it, and only stumbled upon Nim completely by accident one day. Even though many large components were already developed in Rust, we have already reached parity and shot ahead of where we were. I don't want to diss Rust in any way (or Go for that matter), but I figure they both have corporate backing and can easily speak for themselves, whereas Nim does not- so I don't feel too bad offering our somewhat informed opinion :-)

In a nutshell, Go ended up not really being a systems language and Rust has the beautiful goal of memory safety and correctness without garbage collection but in the end feels like it's designed by committee.

Nim, with Araq as its benevolent dictator, has had the opportunity to be highly opinionated. Much of the time that approach seems to simply allow projects to either shoot themselves in the foot or stagnate, but occasionally it produces true gems- and I really feel like that's what Nim is.

Some aspects of Nim that keep blowing me away more and more:

* Truly allows you to be close to the metal/OS/libraries (most beautiful and simple FFI ever- see the sketch after this list) AND simultaneously allows you to think about the problem at hand in terms of the problem- i.e., high-level abstractions and DSL creation like you can do with Ruby (but even cleaner in many ways). I always thought those were two opposite ends of a spectrum and didn't expect a language to even attempt to bridge the chasm- then along comes Nim and not only attempts but succeeds beyond what I ever would have thought possible. To illustrate: it has become my go-to language for quick scripting, faster to whip something together than shell or Ruby or Perl (and many orders of magnitude faster to run- often even including the automatic compilation the first time)- while also being the fastest, cleanest, safest way to do something low-level, like an mmapped memory-mirrored super-fast circular buffer or CPU-cache-optimized lock-free message passing...

* Even though it has C as an intermediate step, it doesn't feel or act anything like C. While not as strong as Rust in this area (I know, Rust goes straight to LLVM's IR instead of C- but the risks are the same)- it generates code that is much safer (and more concise) than writing it straight in C. I know, there are other languages that do this well too- but usually by sacrificing the ability to easily and cleanly do system-level coding- like a raw OS system call, for example.

* On a related note, despite feeling more high level than Rust, it can do so much more at a low level. For example, using musl instead of glibc (which at least last time I checked wasn't possible in Rust due to some accidental coupling).

* So fast. While premature optimization is considered the root of all evil, and linear optimization often does not end up adding significant real value, I've been reminded more in the last few weeks than ever before that when speedups are around 2+ orders of magnitude _it is often a complete game changer_- it often allows you to model the problem in a completely new way (I don't have a generally understandable example for Nim offhand, but take Docker vs dual-boot for an extreme non-Nim example- VMs were a mostly linear improvement over dual-booting and other network-based workarounds, but only with Linux containers and their orders-of-magnitude faster "startup" time, snapshotting, etc. etc. was Docker able to say "these are so cheap and fast we can assume a single running process per 'image'").

* Even though it's not as formally safe as Rust yet, in practice it feels and acts as safe, without the cognitive overload.

* Rust gets tons of new commits from tons of contributors every day. I worried when looking at Nim that the lower volume of commits meant that the language was less... alive or long-term-tenable. But on closer evaluation I realized that there was literally less that needed changing, and that the changes that seemed to need code to change all over the Rust repository would, given an equivalent scope in Nim, require trivial changes to just one or two files (for example, Rust's massive [and extremely slow] bootstrapping pipeline and parsing architecture, vs Nim's built-in PEG syntax and a 5-line loop that keeps compiling the compiler with the last iteration's output until the output stops changing).
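
To make the FFI point in the first bullet concrete, here's roughly what a raw libc binding looks like- a minimal sketch with illustrative names, nothing from our actual project:

    # Two one-line bindings straight into libc -- no wrapper generator,
    # no build-system glue; Nim emits the #include for you.
    proc printf(fmt: cstring) {.importc: "printf", header: "<stdio.h>", varargs.}
    proc getpid(): cint {.importc: "getpid", header: "<unistd.h>".}

    printf("hello from pid %d\n", getpid())

Declare the signature, say where it lives, call it- that's the whole interop story for a simple case.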

In short:

- All the simple beauty (IMO) and conciseness of a Python-like/pseudocode-like syntax without all the plumbing and the pedantic one-way-to-do-it attitude that comes with Python. In other words, beats Python at Python's own strengths (not even mentioning speed or language features etc...)

- Top-down modeling and DSL affinity like Ruby with less feeling of magic- e.g., better "grep"-ability and tracing (see the small template sketch after this list). In fact, the DSLs end up being cleaner than idiomatic Ruby ones. So beats Ruby IMO at one of Ruby's core strengths.

- Seems as safe as Rust with much cleaner syntax and simpler semantics. (this is something we're actively quantifying at the moment). Easily 10x the productivity. _Almost_ beats Rust at its core strength.

- Easiest FFI integration of any language I've worked with, including possibly C/C++.

- All the low-level facilities of C/C++ without the undefined-behavior, with better static checking, much better meta-programming, etc. etc. Beats C/C++ at their core strength.

- Utterly intuitive obliteration of autotools/configure/makefile/directives type toolchains required for system development using C/C++.

- Its very fast and efficient per-thread garbage collection (when you want it) essentially allows it to eat Go's and Java's lunch as well.

- It's a genuinely fun language (for us at least).
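
To give a flavor of the DSL/grep-ability point above, here's a throwaway sketch (a made-up `times` template, not anything from our codebase):

    # A Ruby-style `3.times` block, defined in three lines of plain Nim.
    template times(n: int, body: untyped) =
      for _ in 1..n:
        body

    3.times:
      echo "hello"

Because it's just a template, the call site stays grep-able and the expansion is ordinary Nim code- that's the "less magic" part.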

I've already been too verbose and am out of time, but for some sense of completeness I should include some current weaknesses of Nim. None of these ended up being remotely show-stoppers for us, but at least initially we worried about:

* Too much attention to Windows (doing *nix and even Linux-only development is so easy it has turned out to be a non-issue).

* Less safety than Rust when doing manual memory management (hasn't been an issue and we believe unlikely to be an issue in practice).

* Lack of (implied) long-term corporate support (already more stable than some others and community is strong where it counts. Also, no matter how much corporate support they get- Rust will still be more verbose and designed by committee and Go will still fall short of being a true to-the-metal systems programming language and neither will ever have Ruby and Lisp's moldability and endless potential to get out of your way).

* Smaller community/package ecosystem than many others (so easy to do FFI or to even _reimplement something done in C using 1/5 the code_ that it also has turned out to be a non-issue. Now I worry that it's so easy to code that it will have a library-bloat issue like Ruby requiring something like ruby-toolbox)

* "How have I not heard about this before?? Why isn't it bigger? Is something wrong with it?" I start worrying about things like D's standard-library wars or lisp & scheme's utter flexibility combined with poor readability... Nope, just new and only spread through word-of-mouth, like Ruby, Python, and Perl long ago...

[there, once I've written three separate lists in a single comment I can safely say I've said too much and should get back to work].




> - Seems as safe as Rust with much cleaner syntax and simpler semantics. (this is something we're actively quantifying at the moment). Easily 10x the productivity. _Almost_ beats Rust at its core strength.

> Less safety than Rust when doing manual memory management (hasn't been an issue and we believe unlikely to be an issue in practice).

One major safety issue with Nim is that sending garbage collected pointers between threads will segfault, last I checked (unless you use the Boehm GC). Moreover, Nim is essentially entirely GC'd if you want (thread-local) safety; there is no safety when not using the GC. So Nim is in a completely different category from Rust; Rust's "core strength" is memory safety without garbage collection, which Nim (or any other industry language, save perhaps ATS) does not provide at all.


So since this is such an amazing language.. I wonder why it doesn't have a Wikipedia article? Oh, wait.. I remember. It's because Wikipedia admins are an incestuous cabal and will do anything to avoid admitting one of them was wrong. Or because Wikipedia in general is a joke.

Think I am exaggerating? I believe this is the best programming language out there. Just try to add a Wikipedia article. Not in a million years.

The Wikipedia notability rules and process are ridiculous and completely unfair, when every porn star, popular smut video on the internet, rare mushroom, and Pokemon DVD has an article.. But the best programming language in the world cannot.

This is an example of what is wrong with our society.


I love how Nim is coming along, but I am rather afraid to put it in production.

The number of compiler bugs is a bit scary.

https://github.com/Araq/Nim/labels/High%20Priority

And also from what I've heard, the tooling isn't very good. Autocomplete isn't context-sensitive, and when using GDB a variable like "foo" actually shows up as "foo_randomnumber".


> The number of compiler bugs is a bit scary.

I see similar lists for GCC when a branch is underway. I'm actually impressed with the number of active contributors, and I find the design very compelling. I'll be keeping an eye on this project.


> The number of compiler bugs is a bit scary.

This is actually one of the things that keeps turning me away each time I try Nim. All software has bugs, got it, but in my mind a language nearing 1.0 should squash some of that list (or remove/feature-gate the things causing them) before even thinking about a 1.0.


In my opinion, 1.0 is about the language specification becoming stable. That said, some experimental features have actually been gated in preparation for the 1.0 release.


As for scripting: I just tested on Debian Jessie, and at least for trivial code (read: not nim itself) -- nim seems quite content to work with tcc. Tcc is a pretty awful choice of C compiler in general -- and my current desktop is a little too fast to tell for sure -- but nim w/tcc was typically as fast on a first compile (read: compiler not cached in RAM) as clang/gcc were on a second run. Honestly, on this box:

    time ./bin/nim --cc:tcc c -r examples/hallo.nim
vs

    time ./bin/nim --cc:gcc c -r examples/hallo.nim
    time ./bin/nim --cc:clang c -r examples/hallo.nim
is a toss-up -- and they all lose against:

    time python3 -c 'print("Hello, world")'
(by an order of magnitude that ends up being almost insignificant: it's ~0.5 seconds for clang/gcc on first run, ~0.2 seconds for tcc and ~0.02 seconds for python). But the binary tcc makes runs in ~0.001s -- or basically too fast to time -- and the gcc/clang versions are presumably faster.

Normally I think the startup time for python is less than instant (especially without dropping some standard includes/search with -sS) -- but apparently when running on a quad-core i7 at ~4Ghz with the OS on an SSD -- it makes no practical difference. I'll try later on my slower laptop (which is slow enough that python -c "print('hello')" doesn't feel quite instant) -- but the main point I wanted to make was that nim c -r with --cc:tcc makes for a quite usable "scripting" tool, thanks to tcc's compilation/startup/parsing speed (if nim w/gcc/clang wasn't fast enough already).


Wow, that should have been its own post! Great to hear how you get along with Nim.


Have you read the following article?

https://gradha.github.io/articles/2015/02/goodbye-nim-and-go...

I'm curious about your thoughts on it, given that the author touches on some of the areas you're talking about.


Could you elaborate a little about why you rejected Erlang?


I actually coded a predecessor to the system under development a few years ago and have used Erlang and more recently Elixir a lot, but in the end it just wasn't low-level enough, fast enough, or runtime-free-enough for our current project (for example we need to generate real-time processing kernels that can run on GPUs and FPGAs-- a realm not even really contemplated at the Erlang level).


I also apologize beforehand for a long reply. Long posts get long replies.

I can definitely understand this point of view, but I just can't agree. The parent probably wants his/her claim that Nim seems as memory-safe as Rust to not be interpreted literally, as a literal interpretation would make the statement false (by any fair comparison using idiomatic code from both languages to accomplish the same thing).

What the parent is surely talking about is how it pans out in practice. Different languages have their different trade-offs here with different pitfalls, and denying that Nim can crash and burn due to memory management mistakes would be false. Denying it with respect to Rust would also be false, due to Rust's optional unsafe features, but the important distinction is how easy it is to make these mistakes in idiomatic code and what the consequences will be. Only time will tell, which is why anecdotes are of interest, of course - both the parent's and everyone else's.

However, I find some choices of words to be a bit disingenuous (though hopefully unintentionally so).

The claim about being able to do "so much more at a low level", like e.g. being able to switch out libc variants, which allegedly is not possible in Rust due to accidental coupling. Is this a temporary difference? If so, it may only be relevant in the short term. I can't answer this question, but it would be interesting if someone did.

Most importantly: "Even though it's not as formally safe as Rust yet, in practice it feels and acts as safe, without the cognitive overload." Yet? Making Nim as formally safe as Rust would require completely changing key aspects of the language. Feeling as safe is possible, and acting as safe is possible too...

... until it doesn't anymore, that is, because the team grew (as it always does, some leave, some join, etc) and the code base ballooned and someone made a simple memory management mistake somewhere that is now a serious debugging problem and no code can be eliminated beforehand from the necessary auditing because the entire code base is vulnerable to these classes of errors.

Memory management errors have a way of resulting in seriously trashed core dumps, etc, sometimes severely complicating and limiting debugging possibilities. Where's my stack trace? Oh, we seem to have been executing data and not code. Where did we come from? Oh, no intelligible stack frames. No valid return address in the register, etc. I've been there, as I'm sure many of us have. Memory management errors can lead to complete debugging nightmares, and that's if they're even reproducible by developers. If they're only triggered at the customer's site due to their unique circumstances, good luck. Having a deterministic test trigger it and being able to run it through valgrind until it's solved is the optimal cakewalk scenario, but that's not real life most of the time.

Rust can step quite easily from low-level stuff to high-level features and meta-programming too, and I feel no real comparison is made by the parent, only talk of Nim's features. The central premise as always for Rust is that it provides what it can provide while still maintaining memory safety. Rust without this prerequisite would not be Rust, and the constraints for everything else flow from it.

The repeated claim of design-by-committee is also not the best one. Having followed Rust's back-and-forths for years, I have to say I feel the discussion has been extremely well functioning, and most importantly: the choices have been very pragmatic within the constraints of preserving the key safety features of the language.

Personally, having gone through many languages all over the abstraction level spectrum and specifically having spent quite some time in embedded C/C++, I am terribly, horribly tired of fatal runtime errors in general and memory management errors in particular. They can cost so much time to debug and fix that development time can swoosh past what it would have been in a language with a type system preventing them in the first place. Your mileage may vary, of course!

There is something to be said for languages that simply eliminate these classes of errors compile-time, and that something is actually a lot. For the small programs, tooling, scripts... I can write them in anything. There are hundreds of choices. That's not what this is about. For the software that matters, that ships and that others will expect to work, I no longer have the patience or tolerance for these error classes.

Many languages with such safety guarantees (and Nim is not one of them) have already existed for a long time, but very few that can be applied to all the use cases that Rust can. That is what it's about. This is why people are excited.

Software development is a form of art and a form of engineering at the same time. A lot of software doesn't have to be as reliable as space shuttle firmware, and I'm not claiming it has to be, but the general bar could sure as heck be raised several notches. We know how the world works, and yesterday's quick hack or proof of concept is today's firmware shipment for use in live environments. Successful software lives for a long, long time. Software is eating the world, and society is now at its mercy.

Personally, I will sleep so much better knowing that these error classes were wiped out at compile time in 99.?% of the code I shipped to those customers, while maintaining on-par performance with the C code it replaced.

These are of course my $0.02, and I hope it didn't come across as combative as that was definitely not my intention - only passionately conveying my own perspective. :)


Thanks for the thoughtful response- it didn't come across as combative at all to me.

The true, provable safety of Rust was what drew me to it as well. I've always hated having to choose between un-principled memory management (with its security and functionality vulnerabilities that can lie dormant for many years before kicking your butt) and garbage collection forcing you away from the metal and removing deterministic reasoning about memory usage, runtime behavior, and runtime overhead.

I've been going through the academic papers, forerunners, and source-code for Rust's static memory routines and borrowing semantics. My hope and suspicion is that it can be added to Nim without core changes to the language like lifetimes. It's definitely not a guarantee, but with lots of experience in both languages now I feel very strongly that adding region-based-memory-management to Nim is possible while adding Nim's clarity, abstractions, and efficiency to Rust feels impossible.

I agree that Rust is the only responsible choice right now if provable memory safety is a primary concern, but I suspect that will change. In the meantime, for us anyway, the price in productivity was too high once we discovered that we could do manual memory management in Nim in very well-considered isolated places and confidently use Nim's fast, real-time, deterministic per-thread garbage collection for everything else without a noticeable performance penalty.
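
To be concrete about "well-considered isolated places": it's nothing fancier than dropping to raw allocation where a hot path calls for it- a hand-wavy sketch rather than our actual code:

    # One audited, GC-free buffer for a hot path; everything
    # around it stays on Nim's per-thread GC.
    var buf = alloc0(4096)   # raw zeroed block, invisible to the GC
    # ... fill/consume the buffer ...
    dealloc(buf)             # our responsibility, not the GC's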

Having said that, I don't think I actually disagree with anything you said (:


> I've been going through the academic papers, forerunners, and source-code for Rust's static memory routines and borrowing semantics. My hope and suspicion is that it can be added to Nim without core changes to the language like lifetimes. It's definitely not a guarantee, but with lots of experience in both languages now I feel very strongly that adding region-based-memory-management to Nim is possible while adding Nim's clarity, abstractions, and efficiency to Rust feels impossible.

I'm not so sure. The trickiest part of getting memory safety without garbage collection working is not the lifetimes but the borrow check, which relies on inherited mutability and, most importantly, the lack of aliasable mutable data. The APIs and libraries of garbage collected imperative languages invariably depend on aliasable, mutable memory. Consider something as simple as a tree or graph data structure with mutable nodes. Or consider taking two mutable references to different indices of an array, or splitting an array into mutable slices with dynamically computed indices. These are all things you (presumably) can do today in Nim, and a borrow checker would break them. The likelihood that the library APIs depend on being able to do it is very high.
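
A contrived few lines of Nim-flavored pseudocode to show what I mean (hypothetical types, but the pattern is idiomatic in any GC'd language):

    type Node = ref object
      value: int
      next: Node

    var a = Node(value: 1)
    var b = a          # second mutable handle to the same node
    b.value = 2        # mutation is visible through both aliases
    echo a.value       # prints 2

A Rust-style borrow checker rejects that second live mutable handle, which is why retrofitting one tends to break existing APIs rather than slot in underneath them.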

I never say never: you could implement multiple types of references, some GC'd and some not, and copy the Rust borrowing semantics. But they would be incompatible with most existing APIs and libraries. I don't think it can be realistically retrofitted onto a language without breaking most APIs: aliasable, mutable data is just too common.

Regarding efficiency/performance, what in particular seems impossible to add to Rust?


Thanks for your reply! An enjoyable exchange in the midst of what often feels like a bit of a very tiring flame war.

I am constantly on the lookout for languages that could be suitable for replacing (or greatly diminishing) the use of C/C++ in my work, and so far Rust is one of the front runners.

However, I am also very much aware of some of the troubles I would most likely face in convincing my colleagues, like language complexity and productivity, and I completely respect the decision that it may not be worth it, depending on a wide variety of factors.

I try to keep an open mind, and I look forward to reading more about the improvements to Nim you envision! Thanks again (and good night). :)


To be fair to Nim, I don't see any reason why it couldn't be made memory safe by using the Boehm GC (though I'm not an expert in Nim by any means). Of course, using the Boehm GC negates the advantages of the thread-local heaps, but I don't think that Nim's implementation of them scales up to large-scale software in any case for the reasons I detailed in my other comments. IMHO, if you have a garbage-collected, multithreaded language that must compile to C (and doesn't need interoperability with a reference-counted object system like e.g. Swift does), the Boehm GC is the best choice.
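
(For anyone who wants to try that, switching collectors should just be a compile-time flag, something like:

    nim c --gc:boehm -d:release myprog.nim

with myprog.nim standing in for your own module; --gc:boehm selects the Boehm collector with its single shared heap, if I remember the option name right.)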


Thanks for correcting me, Patrick. I certainly didn't mean to be unfair (especially as I replied to a comment I felt wasn't being completely fair itself, intentionally or not), and I should have been more precise about the use cases.

I agree about the GC considerations. I meant my points to mainly apply to the use cases where safety such as that offered by Boehm is eschewed in order to achieve other powers at its expense, which I feel is brought up a lot by Nim proponents as strengths during these discussions.


> The claim about being able to do "so much more at a low level", like e.g. being able to switch out libc variants, which allegedly is not possible in Rust due to accidental coupling. Is this a temporary difference? If so, it may only be relevant in the short term. I can't answer this question, but it would be interesting if someone did.

It's definitely intended that the Rust standard library can compile against many libc's. I personally hope that it can eventually be completely self contained and not even link to libc in certain configurations.


Thanks for clearing that up!



