Judging by the upvotes there is a lot of interest in hearing answers to this question and yet there are few comments so far. I will share my thoughts but keep in mind that I am (extremely) biased as I am one of the core Nim devs.
> How has your experience been compared to your previous tech?
Before using Nim I was primarily using Python. This was a few years ago now, but recently I was working on a project in Python and found myself yearning for Nim. There were multiple reasons for this, but what stuck with me was how much I missed static typing. The Python project used type hints, which I found rather awkward to use (the fact that we didn't enforce their use didn't help, but they felt like such a half-baked solution). Dependencies very often required multiple guesses and searches through Stack Overflow to get working. And the resulting program was slow.
As far as I'm concerned, Nim is Python done right. It produces fast dependency-free binaries and has a strong type system with support for generics.
Of course, that isn't to say that Nim is a perfect language (but then, what is?). For example, JetBrains has done a brilliant job with PyCharm. Nim could use a good IDE like that, and with its strong typing it has the potential to work even better.
> How mature is the standard library?
To be honest, the standard library does need some work. In the next release we do plan on making some breaking changes, but we always err on the side of keeping compatibility, even though Nim is still pre-1.0. Of course, sometimes this is not possible.
> How abundant are third party libraries?
Not as abundant as I would like. The great news is that you can help change that :)
The Nimble package manager is still relatively new, but you can get a pretty good idea of the third party libraries available by looking at the package list repo[1].
Hope that helps. Please feel free to AMA, I'd love to introduce you to our community.
Yes indeed. I, for instance, cannot wait for Kotlin/Native to materialize because... because Kotlin already has a usable IDE. Nim is a great language, although rough around the edges. Add the lack of libraries (or the need for wrapper generation) to the equation, plus the lack of a decent IDE. I am privileged enough to have the liberty of picking my battles, so one of my requirements for getting work done is to also feel pleasure doing it. Writing Nim code in a glorified notepad with some IDE features that may or may not work (auto-complete from nimsuggest is really iffy, especially in the compiler's own source code) is suboptimal. Add a debugging experience which, for various reasons, is not the greatest, and it becomes hard to justify using Nim over C++ or Python. It is a shame, because the language is really nice, and with some polish it could be a great alternative to already established languages. Keeping my fingers crossed for some brave soul coming forward and making an IntelliJ plugin for Nim.
I love the simple and clean syntax of Nim. The first time I was really impressed with Nim was when I used a Wikipedia XML dump parser. What would have literally taken days to complete in Python (at least on my machine, and god forbid if it had crashed somewhere in the middle) was done in a few minutes by Nim (again, on the same machine). I was blown away!
The one thing that concerns me a bit is that identifiers are partially case-insensitive, right?
(That means only the first letter is compared case-sensitively. The other letters are compared case-insensitively, and underscores are ignored.)
Does it hinder when you need to do refactoring?
Also, I work in vim, and grep for stuff when I need. I am not sure if autosuggests and locating definitions/usage for variables would work. I am sure I must be wrong as I believe the core developers would have thought about it :)
I would just like to know how.
Identifiers are "style-insensitive", so case changes and single underscore insertions don't actually change the identifier. There is exhaustive tooling, such as nimgrep and IDE support built into the compiler (find-symbol-at-point equivalents, for example), to help when a convention differs. It is a strong philosophical position, but it does allow style consistency across a code base even when dealing with foreign symbols.
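To make the comparison rule concrete, here is a rough sketch in Go of how such a style-insensitive comparison could work. The `normalize` function is my own illustration of the rule described above (first character compared case-sensitively, the rest case-insensitively, underscores ignored), not Nim's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize maps an identifier to a canonical form, following the rule
// described above: keep the first character as-is (case-sensitive),
// lowercase the remaining characters, and drop underscores.
func normalize(ident string) string {
	if ident == "" {
		return ""
	}
	rest := strings.ReplaceAll(strings.ToLower(ident[1:]), "_", "")
	return string(ident[0]) + rest
}

func main() {
	// Same identifier: only casing/underscores differ after the first char.
	fmt.Println(normalize("parseInt") == normalize("parse_int")) // true

	// Different identifiers: the first character differs in case.
	fmt.Println(normalize("parseInt") == normalize("ParseInt")) // false
}
```

Under this rule, `parseInt` and `parse_int` name the same thing, so a snake_case codebase can call a camelCase library in its own style, while `parseInt` and `ParseInt` remain distinct.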
>It is a strong philosophical position, but it does allow style consistency across a code base even when dealing with foreign symbols.
Every language seems to have a bizarro decision that kind of breaks it for many people. This is Nim's. It could be worse; it could be "Who needs generics? They're costly anyway", like Go's...
IMHO if such a superficial aspect is what's breaking it for someone, I'm thinking they're a bit hung up on trivia, the rest of Nim is really very good and other languages should copy it more. In practice the symbol thing really isn't an issue...
Seconded. Being able to insert an underscore and have the code still compile sounds like a strange design choice to me. dom96, could you write more on this?
A lot of people have this reaction, but I have yet to find a single person who gave Nim a proper try only to decide that it is not for them because of style insensitivity.
It's definitely weird and I personally could take it or leave it. But don't let it dissuade you from trying the language.
The idea behind this feature is to let you choose which style you want to write code in.
Python's significant indentation is also off-putting for a lot of people who are no longer newcomers to the language. I think some people are also unhappy with its effects without realizing that the significant indentation is contributing to those effects, such as how significant indentation interacts with other language features to force trade-offs between some programming techniques and longer lines of code.
Code I write for work these days (not including Puppet config, which is Ruby, or shell scripts) is all Python, and I still dislike the significant indentation. I do not dislike significant indentation per se, but I do dislike some of its effects in concert with other language design decisions. I did have a knee-jerk negative reaction to Python's significant indentation by itself when I first encountered the language a long time ago, but I got over it when exposed more to a couple other languages with significant indentation where it felt less off-putting. For instance, I never had a problem with Haskell's use of the off-side rule for specific block types (though I'm far less familiar with Haskell in general than Python), and did not even realize the language used the off-side rule for a while because it blended so seamlessly and naturally with the "obvious" way to organize code in every case where the rule applies. To repeat and clarify my feeling on the matter, though, I still dislike the off-side rule in Python, because of its interactions with other language features.
I don't want to distract from Nim discussion to create a whole separate debate, so I may not respond to further questions on the subject here -- I just wanted to point out that significant indentation is not devoid of its trade-offs for the benefits some people enjoy. I understand that some people like significant indentation so much that the other concerns that arise might end up beneath their notice, and others may hate it just out of some stubborn attachment to the conventions of other languages, but it is worth noting that it really does come with trade-offs, and there are also many people who dislike it specifically for the effects those trade-offs have on their code, and that this is a perfectly valid and reasonable position to take (even for those who find it difficult to identify or articulate the specific, undesirable causes of those effects they dislike). There are problems in practice, but not problems so big that one cannot reasonably ignore them if one likes the positive effects of significant indentation.
My usual solution to the problems Python's significant indentation solves would just be to fire people who lack such attention to detail that they cannot learn to format code nicely, though. As such, the language enforcing it offers less benefit for me than it might for others.
> My usual solution to the problems Python's significant indentation solves would just be to fire people who lack such attention to detail that they cannot learn to format code nicely, though.
While I might not go as far as you here (firing), I think your underlying point about attention to detail is entirely correct and generally doesn't get the attention it deserves.
Lack of attention to detail re code style issues is probably an early (i.e. obvious) indicator for lack of attention to detail in other more important areas.
(Note: I say indicator, because it's entirely possible the developer just doesn't care about code style, or has a different preference, yet still has attention to detail for other more important areas)
Significant whitespace has some real benefits (enforcing proper indentation, no lines wasted for "}" or "end", etc) too.
Case-insensitive and underscore-ignoring identifiers, not so much -- they just add mental burden, encourage different styles in the same codebase, make things ungreppable without special tools, etc. And for what?
If the best one can say about a feature is "people still like the language despite it", it might be better to axe it.
Actually, case sensitive identifiers encourage this. If a library uses camelCase but your code base uses snake_case then you will be using a mix of camelCase and snake_case in order to use that library.
That is a real benefit and is entirely the point of this feature.
>Actually, case sensitive identifiers encourage this. If a library uses camelCase but your code base uses snake_case then you will be using a mix of camelCase and snake_case in order to use that library.
Which will be totally fine, as it involves different variables / things pointed at.
Whereas case/underscore-insensitive identifiers let you have 10 different versions of the same variable name in the same codebase -- even without using any third party "library".
Besides, the problem of "third party library uses a different style" is easily solved by having a tool like go fmt that enforces a unique single style. End of story.
This. I can't count the number of times I've used two libraries with different casing, only to have my own code end up as a mess of both...
I can see the point though that grepping, and things like editors which highlight the selected variable would have to take these rules into consideration. It's a valid concern but I feel the benefit of clean unified code is more important.
>This. I can't count the number of times I've used two libraries with different casing just to have my own code end up as a mess of both..
Whereas referencing the exported identifiers from the library with a different style than they are written with in the library's code sounds more satisfactory to you?
Yes, that way my code looks more like one entity than a mess of different things.
I can see why you are skeptical of it, but once you've used it for a while it becomes second nature. And you can always mix if you want to. The thing is that you can choose how you want to do it. And there are extremely few cases where the casing actually makes the identifiers different (keeping in mind that the first character is case sensitive).
Significant indentation comes with trade-offs -- real benefits, and real detriments. I agree it's entirely understandable that a language designer would include significant indentation, such as Python's use of the off-side rule (even though I don't like it in Python).
Style insensitivity for labels (e.g. variable names) is indeed weird, and likely to be far more off-putting for many than significant indentation, though both put together will likely have an even more negative effect on adoption, in my estimation.
I can imagine benefits to the style insensitivity, but I think those benefits could be had without a style insensitivity, and they come with trade-offs as well when you use style insensitivity to get those benefits (for instance, some typos could be ignored altogether by the parser, but cause the human eye to miss them or cause codebase searches to fail to find all instances of a label when using tools external to the language's implementation itself).
I think some of the benefits of style insensitivity are worthwhile, though, and should be pursued (preferably by other means). For example, a language feature that allows one to define (with a single line of code, perhaps a pragma declaration) a standard for label formatting, used to make library functions conform to your project's code style, would be a much more palatable solution to the problem of alien code looking alien than style insensitivity, in my opinion. I think a (proposed? in development?) feature of Nim -- syntax "skins" -- might offer exactly that, and it could result in effectively being able to turn off style insensitivity in your projects. If so, I would expect that to become the standard way to handle code style in Nim projects.
> I am still yet to find a single person that gave Nim a proper try only to decide that it is not for them because of style insensitivity.
Unfortunately, it's unlikely that you can collect any meaningful data on how many people refuse to try it altogether (or put enough time into it for anyone in the Nim community to hear about it) because of style insensitivity.
The "style insensitivity" decision dissuades me much more strongly than some case sensitivity style enforcement in the language would.
It seems odd to me that the Nim developers chose to go with both "style insensitivity" (which I would have predicted would be highly controversial and very discouraging for many, many developers) and indentation sensitivity (which is less controversial, but still more controversial than indentation insensitivity; shockingly large numbers of people dislike it, including me, at least when combined with some other language features).
If the goal was widest possible appeal and adoption, those two decisions should have been swapped, I think. If the goal is related to some specific philosophical stance, however, I suppose every language developer who doesn't just want to recreate PHP or VisualBasic needs to pick the hill on which he or she is willing to die, and only time will tell whether it will be worth it.
The goal of any language designer is not to appeal to the widest possible group of people but to create a language that appeals to them. In this case style insensitivity is something that Araq wanted in his language and so he implemented it.
It has almost never been an issue for me and I don't use any case-insensitive refactoring tool. Just write variables in a consistent style and let completion help you stay consistent.
Because Nim has an excellent 'compiler service', symbols can be identified at the AST level instead of the textual level. In practice, most code bases stick to some conventions, and the possible ambiguity is rarely an issue, in my experience.
Go does support multiple return values, commonly (T, error).
If you think about it for a while, you will realize an exception is just a type of return value.
Regarding generics, the more I used Go, the less I needed them. Also, there are solutions if you insist, such as https://github.com/cheekybits/genny
Sure, they are, but they're sums, not products. Go's failure to acknowledge this leads to infecting every type with the Billion-Dollar Mistake in an effort to encode sums as products (and loses out on compiler checking of it, in the process). The syntactic aspect of exceptions is nice, but can be handled without compiler magic if you have nice syntax for monads in general.
> Go does support multiple return values, commonly (T, error).
Sure, and Nim supports multiple return values too. But having to handle this extra value at every function call seems like it would get old really fast. I don't want to worry about every single exception when I am prototyping for example. Also, without stack traces, how do I know where an error comes from?
> Regarding generics, the more I used golang, the less I had a need for generics. Also, there is solutions if you insist, such as https://github.com/cheekybits/genny
Indeed, like I said, I haven't used it much so perhaps I would be able to live without generics. It is still very nice that Nim offers them though.
> I don't want to worry about every single exception when I am prototyping for example.
You are free to ignore the returned error -- comparable, for example, to a situation in another language where a function can raise an exception but you don't wrap the call in a try-catch:
    val, _ := funcThatCanFail()
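To expand that one-liner into a self-contained sketch: `funcThatCanFail` below is a hypothetical stand-in for any Go function returning the conventional `(T, error)` pair, showing both explicit handling and discarding the error with the blank identifier:

```go
package main

import (
	"errors"
	"fmt"
)

// funcThatCanFail is a hypothetical stand-in for any Go function
// that returns a value plus an error, the conventional (T, error) pair.
func funcThatCanFail(fail bool) (int, error) {
	if fail {
		return 0, errors.New("something went wrong")
	}
	return 42, nil
}

func main() {
	// Explicit handling: check the error at the call site.
	if val, err := funcThatCanFail(false); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("got:", val)
	}

	// Ignoring: discard the error with the blank identifier, roughly
	// like calling a throwing function without a try-catch elsewhere.
	val, _ := funcThatCanFail(true)
	fmt.Println("got (error ignored):", val) // val is the zero value, 0
}
```

Note that when the error is ignored, the caller is left with the type's zero value and no record of what went wrong, which is the stack-trace concern raised below.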
> Also, without stack traces, how do I know where an error comes from?
No, they are part of the syntax. The compiler (in Nim and other languages) can track at compile time which exceptions are allowed to be raised in a procedure and which aren't.
I don't see these items as shortcomings. Go is simple, and simplicity is elegance -- something that cannot be found in most modern languages. Just look at what happened to C++; it has become more like C# and Java.
Regarding Nim, I believe it's a nice language and I have done a few small projects with it, but as I mentioned, there are 10000 ways to do a single task, and that is sometimes confusing.
Being fluent in Python and Go, I definitely prefer the explicit error handling in Go as opposed to the try-catch error handling in python where most of the time you can never really be sure of all the possible errors which can be thrown. I spent some time learning Haskell, and one of the points that stuck was that in languages with exception handling, a function will have some kind of declared result, but all the different exceptions which can be thrown are also basically alternative return values. Any caller of the function then has to know all possible return types to effectively handle all errors, but since most code doesn't document which errors might be thrown, it's a crap shoot. Functional languages basically force the return to always be the declared type, and go mostly follows this model.
Even though I don't like Go's explicit error handling, I think this is a very reasonable comment.
I don't like Go's explicit error handling because (1) I think it clutters up your main code path with error-handling logic, and (2) it forces you to always handle errors locally (even if that local code does not have the context to know how to handle the error) or return the error code through multiple layers of functions (back to where it can be handled).
That said, I completely agree that exceptions are also problematic. As you say, you can never really be sure of what exceptions can be thrown. Some languages have a "throw" keyword, in which you're meant to enumerate the list of possible exceptions; but of course, that's a headache to maintain, and is affected by inner code (such as library code) that might be completely out of your control or review. And when should the "throw" keyword be enforced, at compile time or runtime? And what should happen if the "throw" keyword's list of possible exceptions is violated?
Then of course there is the other problem with exceptions, the flip-side to being forced to handle an error locally: As your exception unwinds the stack, it might obliterate some local context that is needed to decide how to handle the error; or it might obliterate some local context that is needed to continue with your original task after the error has been handled!
I see C++'s `std::set_new_handler` function [0] as C++'s less-capable equivalent to Lisp condition handlers. But the function invoked by `std::set_new_handler` lacks the ability to assess local context to decide how to handle the problem.
EDIT: I would love to see a language like Nim incorporate something like Lisp's condition system. Nim already offers "procedural types" (function pointers) [1] and closures [2] to capture variables from the enclosing scope. Nim even offers "anonymous procs" [3], to avoid the need to define new error-handling functions everywhere.
> That said, I completely agree that exceptions are also problematic. As you say, you can never really be sure of what exceptions can be thrown. Some languages have a "throw" keyword, in which you're meant to enumerate the list of possible exceptions; but of course, that's a headache to maintain, and is affected by inner code (such as library code) that might be completely out of your control or review. And when should the "throw" keyword be enforced, at compile time or runtime? And what should happen if the "throw" keyword's list of possible exceptions is violated?
This is something that Nim offers via the `raises` pragma[1]. It is enforced at compile-time and in my experience works rather well.
Can we please let go of this meme? Many polyglot developers complain about the lack of generics, and how Go code ends up being more verbose and less expressive than languages like Python.
Things like interfaces and pointers to interfaces, structs vs slices and maps are not consistent.
>Things like interfaces and pointers to interfaces, structs vs slices and maps are not consistent.
Yeah, and inconsistencies are one of the things that make for a more complex language, not a simpler one. In fact, Go has generics, but they only work for magic, built-in types. I would argue that's less simple than just supporting generics, and building your standard library out of language features.
Indeed. It compiles to C so you have a lot of flexibility. glibc/musl isn't much of a dependency (except when you compile on a newer glibc leading to issues [1]), but I am still amazed at how easily I was able to produce a cross-platform executable for Windows (XP+ !!), macOS and Linux[2].
1 - https://github.com/nim-lang/packages/blob/master/packages.js...