Nim 1.4 (nim-lang.org)
278 points by narimiran 5 days ago | 140 comments





A bit of perspective from a game developer:

We (Beamdog) are using nim in production for Neverwinter Nights: Enhanced Edition, for the serverside parts of running the multiplayer infra.

nim is uniquely good in providing an immense amount of value for very little effort. It gets _out of my way_ and makes it very easy to write a lot of code that mostly works really well, without any serious traps or pits to fall into. No memleaks, no spurious crashes, no side-effect oopses, anything like that. Its C/C++ interop has been a huge enabler for feature growth as well, as we can partly link in game code and it works fine. For example, our platform integrates seamlessly with native/openssl/dtls for game console connectivity. And it all works, and does so with good performance. It is by now a set of quite a few moving components (a message bus, various network terminators, TURN relays, state management, logging and metrics, a simple json api consumed both by game clients and web (https://nwn.beamdog.net), ...).
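For anyone curious what that interop looks like: wrapping a C function is about one line. (Illustrative sketch only - the symbol and header names below are made up, not our actual code.)

    # Hypothetical sketch of Nim's C FFI; names are invented.
    proc dtlsHandshake(conn: pointer): cint {.importc: "dtls_do_handshake",
                                              header: "dtls.h".}

    var conn: pointer = nil  # would come from the C library in real code
    discard dtlsHandshake(conn)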

We're still lagging behind and are on 1.0.8, but that is totally fine. It's tested and works, and there's no real incentive to move to 1.2 or 1.4 - yet!

Usage for our game has expanded to provide a few open source supporting utilities (https://github.com/Beamdog/nwsync) and libraries (https://github.com/niv/neverwinter.nim/) too. The good part about those is that they are cross-platform as well, and we can provide one-click binaries for users.

OTOH, there have been a few rough edges and some issues along the way. Some platform snafus come to mind, but those were early days - 0.17, etc. Some strange async bugs were found as well, though they were fixed quickly.

Good and bad weighed up, at least for me, nim has been a real joy to work with. If I had the chance to message my 3-years-younger self, I'd just say "yea, keep going with that plan", as it turned out to save us a lot of development time. I suspect the features we've put in wouldn't have been possible in the timeframe we had if it had all been written in, say, C++.


Interesting, thanks for sharing.

May I ask, did you consider Go and decide against it for some reason? Your requirements - quick development, cross-platform support, interoperability - are all guaranteed features of Go, which should have given better peace of mind for a production application.


Go was a consideration, and a few internal libraries for Go already existed at that point, but nim in the end won out on acceptance, and it convinced us - despite the known risks of using an only semi-mature language - that it was feasible to get it done in time for the initial game release.

The PoC was incredibly quick to manifest, and iterating on it quickly proved itself a good way forward.

Peace of mind was a judgment call. Despite being a rather sizeable project now, what we had back then was already very stable and reliable (even under heavy benchmark load), and there weren't any great unknowns souring the call.


Go code runs much more slowly than Nim code in my experience. Nim achieves better ergonomics than Python in some ways (UFCS, command syntax, user-defined operators) with performance & lightweightness as good as C/Rust.
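For readers who haven't seen those features, a trivial illustration of all three:

    import std/strutils

    proc shout(s: string): string = s.toUpperAscii & "!"

    echo shout("hi")    # regular call
    echo "hi".shout     # UFCS: any proc can be called method-style
    let s = shout "hi"  # command syntax: parens optional
    echo s

    proc `+!`(a, b: string): string = a & " " & b  # user-defined operator
    echo "hello" +! "world"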

I'm honestly surprised Nim is not the secret weapon of many start-ups. Nim is much more "open architecture" instead of pushing some "single, canned turn-key fits most users" solutions.

Having a juggernaut like Google marketing/pushing/subsidizing something might provide false as well as true peaces of mind. :-) { Such peace is a multi-dimensional concept. :-) }


The advanced features Nim offers are in stark contrast with Go, where simplicity is valued. But the power those features give you when you really need them is undeniable. There's no need to dance around certain problems.

I did move to Go from Python briefly for performance reasons, but once I found Nim, there was just no going back. Simplicity might have real value in large projects, but I just don't like having my hands tied.


The statement on performance is surprising; I haven't personally benchmarked Nim against Go, so I couldn't say.

But I started using Go for the same reasons pointed out in the parent, and after getting tired of rewriting Python code in C to resolve performance issues.

How about concurrency?


Nim is on par with C in a lot of benchmarks, e.g. https://github.com/kostya/benchmarks. Go is the king of concurrency, so whatever you compare to it loses. That said, Nim has async/await for concurrency, and I find threads and threadpools easy to use for parallelism.
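A minimal taste of each (the second half needs --threads:on):

    import std/asyncdispatch

    proc tick(name: string) {.async.} =
      for i in 1 .. 3:
        await sleepAsync(100)
        echo name, " ", i

    waitFor(tick("a") and tick("b"))  # both make progress concurrently

    import std/threadpool   # parallelism: compile with --threads:on
    proc slow(x: int): int = x * x
    let fv = spawn slow(7)  # runs on another thread
    echo ^fv                # blocks until the spawned task finishes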

Let's not get carried away with Go advocacy:

https://www.techempower.com/benchmarks/#section=data-r19&hw=...


There is a link in another of my comments here, and because Nim is pretty open-architecture you can roll your own basis for what you want (more than in most languages), such as: https://github.com/mratsim/weave

I can't speak about go vs nim generally, but:

Our particular design is a bunch of single-threaded apps on a message bus. Each app (network ingress/egress, data handling, relays, etc.) sits on the bus and can use async/await for IO concurrency, but at the app level there's no threading and no locking to observe.

Each app also has an Erlang-inspired supervisor task system, where each task is just an async proc kept alive. It's proven to be very robust (from the standpoint of service availability) in the face of bugs or input validation mishaps.
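A stripped-down sketch of the supervisor idea (not our actual code, just the shape of it):

    import std/asyncdispatch

    proc supervise(name: string; task: proc (): Future[void]) {.async.} =
      ## Keep one async task alive forever, restarting it on failure.
      while true:
        try:
          await task()
          echo name, ": exited cleanly, restarting"
        except CatchableError as e:
          echo name, ": crashed (", e.msg, "), restarting"
        await sleepAsync(1000)  # back off before restarting

    proc worker() {.async.} =
      await sleepAsync(100)
      raise newException(IOError, "simulated bug")

    asyncCheck supervise("worker", worker)
    runForever()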


The performance shouldn't be surprising, as Nim compiles via C (so it benefits from 50 years of work on C compilers), and almost all of the language constructs compile to essentially equivalent C code.

Nim also has an emit pragma where you can just inline C code (or code for the JavaScript or C++ backends, etc.). So, if there is some poorly optimized (for whatever reason) hot inner loop, you can fix it right there, though you start sacrificing portability (often the trade-off for optimal performance). You can even do SIMD intrinsics right in Nim, no problemo, just using the FFI Nim has for C calls.
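The emit pragma looks roughly like this (essentially the example from the Nim manual):

    {.emit: """
    static int cvariable = 420;
    """.}

    {.push stackTrace: off.}
    proc embedsC() =
      var nimVar = 89
      # Nim symbols are interpolated into the emitted C via the array form:
      {.emit: ["""fprintf(stdout, "%d\n", cvariable + """, nimVar, ");"].}
    {.pop.}

    embedsC()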

Python is written in C, and yet ...

Compiling to C really isn't relevant. "50 years of work on C compilers" is not at all relevant--languages that compile to LLVM get all the advantages of the optimization work.


Written in C is not the same as compiled via C. If you want Python compiled via C you can use Cython, and without much surprise, you usually get a huge speedup (e.g. https://notes-on-cython.readthedocs.io/en/latest/std_dev.htm...).

What's the dev experience been like? Do you feel like there is good editor support for debugging and such?

I've been using vscode+nim. Debugging was mostly just writing correct code in the first place! ;) The only gripe I have is nimsuggest sometimes hanging at 100% CPU use, so I have to kill it manually.

Even with the rather oldschool approach of echo/logging.nim usage, things tend to turn around quickly. I have not felt the need to be able to attach a debugger to the process, mostly because our architecture is very pluggable. Almost all events/interactions are on a message bus and can be hooked/handled individually.


https://nim-lang.org/blog/2017/10/02/documenting-profiling-a... -- Not sure how good the VSCode debugging integration is, but all the requirements are there including gdb/lldb. Word is JetBrains folk wrote a new Nim plugin too. Personally, I like the `writeStackTrace` bit.

What build system are you using if not MSBuild then?

Our production stuff runs on Linux, so we just wrap it into docker and that's that.

Personally, I develop on Mac and it runs natively there the same as on Linux.

I used to build Windows binaries for the tooling releases with a cross-compiler, but more recently, GH Actions looks attractive enough to take over that role.

Edit: Sorry, could have been clearer. The build system is just running the compiler directly via nim cpp -r in development, and for production it's nimble. The dockerfile is handcrafted, but of trivial complexity.


So you use Docker to keep the build system unchanged between builds? That’s genius if so and cuts deeply into MSBuild’s main advantage (comprehensively delineated system settings).

I think there might be a misunderstanding. I'm just talking about the multiplayer service infrastructure, not the whole game.

Despite doing a lot and having a lot of smaller moving parts, the mp infra is of moderate code size and build complexity is not a concern. The whole thing compiles in less than 5 minutes, and can be done with nimble (the package manager). We have a docker builder image that spits out the final production image containing the apps, and a docker-compose setup then runs those as needed.

Any kind of per-platform specifics are handled in nim itself (when defined(Linux): etc.) and via nim.cfg/config.nims, to link in platform libraries.
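For the curious, that looks something like this (library names invented for illustration):

    # In the code:
    when defined(linux):
      const tlsLib = "libssl.so"
    elif defined(macosx):
      const tlsLib = "libssl.dylib"
    else:
      const tlsLib = "ssleay32.dll"

    # In config.nims, to pass linker flags per platform:
    when defined(linux):
      switch("passL", "-lssl")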


I use the Nim and GCC compilers from Debian in order to have guaranteed stable build system for 5+ years.

Since Nim has hit the front page twice in the past day, let me just say: if you're at all curious about the language, try it out over the weekend. Nim isn't quite as simple as Zig (to compare to another compiled language with a smaller ecosystem), but the more advanced features stay out of the way until you need them. If you've worked with any static language, you probably know 80%-90% of what you need to write productive Nim code. The dev team has worked hard over the past year to improve documentation, and the website links to some great tutorials.

"Nim isn't quite as simple as Zig"

My experience is the complete opposite. I find Nim to be very simple and Zig to be not simple. What's the problem with Zig? I find the documentation to be chaotic. I believe that this is largely due to the rapid pace of changes (including changes that break earlier code).


That makes sense. I haven't tried Zig, but it's still pretty early in its history. That means a lot of syntax changes, similar to Rust (which is settling down) or Swift. Nim's been around long enough that it settled its syntax a while back.

Before I started really digging into Nim, it seemed like it was always changing the language a lot (feature churn). However, most of those changes have been compiler support for different GCs and other backend languages, which don't generally break existing code. I've tried Nim code from 4 years ago, and it's almost the exact same syntax. Sometimes stdlib names changed. I think syntax change is the part that trips people up the most in terms of daily "complexity".

It actually reminds me of the feel of Python 2, before Python 3 started adding new syntax & language complexity every release. Well, Python 2 but with a more solid language theory (e.g. everything can be a statement, etc.). The trickiest parts of day-to-day Nim are: var vs no-var, object vs ref object, and some iterator annoyances.
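A tiny example of the var and object vs ref object distinctions, for anyone who hasn't hit them yet:

    type
      Vec = object       # value type: assignment copies
        x, y: int
      Node = ref object  # reference type: assignment shares
        next: Node

    proc bump(v: var Vec) =  # 'var' parameter: mutates the caller's value
      v.x += 1

    var a = Vec(x: 1, y: 2)
    let b = a           # b is an independent copy
    bump(a)
    echo a.x, " ", b.x  # prints: 2 1

    let n1 = Node()
    let n2 = n1         # n2 and n1 refer to the same object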


My understanding is that Zig is also strongly committed to working out as many kinks as possible so that 1.0 can be an extremely stable release, so they are running as fast as possible to make mistakes in the language. In that sort of an environment it doesn't really make sense to solidify docs.

Maybe I'll give it another try. I might have gone too deep into the docs, but Nim seemed to get very complicated with all the {.annotation.} to remember

If you're just starting with the language, my recommendation would be to ignore the pragmas. None of them are necessary for basic functionality, and a lot of them are only useful for very specific optimizations. The most useful pragmas in my experience are the pragmas for FFI, async, and memory/side effect tracking.
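For example (all three are standard; the strlen wrapper is just the classic FFI demo):

    import std/asyncdispatch

    # FFI pragma: bind a C function.
    proc cStrlen(s: cstring): csize_t {.importc: "strlen", header: "<string.h>".}

    # Async pragma: turns the proc into one returning a Future.
    proc wait() {.async.} =
      await sleepAsync(10)

    # Side-effect tracking: 'func' is shorthand for a proc with {.noSideEffect.}.
    func add2(a, b: int): int = a + b

    echo cStrlen("hello")  # 5
    waitFor wait()
    echo add2(1, 2)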

Nim and Zig have vastly different philosophies and targets.

I'm going to do a little bit of a shameless plug as a way to show off just what Nim is capable of. If you've ever played one of the many IO games, it might seem familiar to you. Basically I have used Nim to create a multiplayer game that can be played in the browser[1].

I'm planning to write up a more detailed post outlining the architecture of this. But suffice it to say, Stardust is written 100% in Nim. Both the servers and the client running in your browser are written in Nim. The client uses Nim's JS backend and the server Nim's C backend. The two share code to ensure the game simulation is the same across both. Communication happens over websockets.
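A hedged sketch of how the sharing works - the same module compiles with `nim js` for the browser and `nim c` for the server:

    # Shared simulation code, identical on both backends:
    proc step(pos, vel: float): float =
      pos + vel / 60.0

    when defined(js):
      echo "client: compiled with the JS backend"   # nim js game.nim
    else:
      echo "server: compiled with the C backend"    # nim c game.nim

    echo step(10.0, 3.0)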

It's been a lot of fun working on this and I cannot imagine another language being as flexible as Nim to make something like this possible. I already have an Android client up and running too, it also is built from the same code and I plan to release it soon.

1 - https://stardust.dev/play


Not to disparage Nim, but I can think of a number of other languages where this is possible:

* JS, obviously

* Rust

* C/C++

* Anything else that can compile to WASM


Fair enough. I think then we get into how easy this is to do in each of these. For example, can you really target the web browser vs node from the same JS codebase by simply changing the compiler invocation?

This looks really amazing! Would love to read an architecture post for this. Adding this in your book would also be pretty cool

> Android client up and running too, it also is built from the same code

How does that work, and is it an alternative to dart/flutter?


I wouldn't say so. Since my game mostly just needs a canvas I make use of SDL2 to target Android (and plan to use it for iOS/desktop as well). I created a thin library which targets either HTML5 canvas (JS) or SDL2 (everything else): https://github.com/dom96/gamelight

An economic argument for GC instead of ARC/ORC.

Commercial web application in Java. Java GC is concurrent, on other cpu cores than the cores processing customer requests. Modern GCs such as Red Hat's Shenandoah or Oracle's ZGC have 1 ms or lower pause times on heap sizes of terabytes (TB). Java 15 increased max heap size from 4 TB to 16 TB of memory.

Now the argument. A thread running on a cpu core which processes an incoming request has to earn enough to pay the economic dollar cost of the other threads that do GC on other cpu cores. But that thread processing the client request spends ZERO cpu cycles on memory management. No reference counting. Malloc is extremely fast as long as memory is available. (GC on other cores ensures this.) During or even after that thread has serviced a client request, GC will (later) "dispose" of the garbage that was allocated when servicing the client request.

Obviously, just from Java continuously occupying the top 2 or 3 spots for popularity over the last 15 years -- Java's approach must be doing something right.

That said, I find Nim very interesting, and this is not a rant about Nim. I am skeptical of an alternative to real GC until it is proven to work, in a heavily multi threaded environment. And there is that economic argument of servicing client requests with no cpu cycles spent on memory management -- until AFTER the client request was fully serviced.


> Obviously, just from Java occupying the top 2 or 3 spots for popularity over the last 15 years -- Java's approach must be doing something right.

I'm not sure if this argument holds. Java's high memory usage is frequently cited as a downside of Java. GUI applications written in Java have a reputation of being memory-hungry, and I know plenty of people struggling with memory usage of server application (e.g. ElasticSearch). You will also find C/C++ on the same popularity lists…

That said, I do agree that a tracing GC is a better solution (for most programs) than reference counting these days. The improvements in the pause times by the Java GCs are really impressive, and the throughput is great. One example would be ESBuild: The author created a prototype in both Rust and Go, and found that the Go version was faster allegedly because Rust ended up spending a lot of time deallocating [source: https://news.ycombinator.com/item?id=22336284].


The default tracing Nim GC is capable of quarter-millisecond pause times while the new ARC/ORC stuff is more like single-digit microseconds: https://forum.nim-lang.org/t/5734

That is interesting.

> Java's high memory usage

I would rather optimize for performance and energy use at the cost of higher memory use.

See recent HN:

https://news.ycombinator.com/item?id=24642134

https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sle...

Memory is a one time capital cost and gets cheaper over time.


High memory usage forces you to only run one JVM process. If you need to do something that doesn't require a significant amount of memory you can't use Java at all.

> Memory is a one time capital cost and gets cheaper over time.

Except if you're operating in the cloud.


True, yet not everyone is trying to be the next FAANG.

Besides, plenty of GC enabled languages support value types and ways to do deterministic deallocation, use the tools Luke.


The majority of all GC research goes to java and (primarily the hotspot) jvm. ZGC and kin are like the zfs of garbage collectors: insanely good, but also insanely complex and not readily replicable. It's not practical to expect somebody with fewer resources than oracle to create something similar.

Reference-counting strategies are much easier to optimize; so if you have fewer resources available to throw at your compiler it's the way to go.


You mean tracing GC, as reference counting is still GC.

Adding value types and deterministic deallocation doesn't require endless GC research and was already available in languages like Mesa/Cedar and Oberon, features that Nim also has anyway.


Reference counting is still GC, but I stand by what I said that the majority of all GC research goes into java. The GP was talking about using a tracing GC—like java uses—in nim; which I argue against.

Which is why Java is getting value types, as that research has proven that there is only so much that automatic escape analysis is capable of.

Bare bones tracing GC in Java were the state of the art like 25 years ago.

Java GC's are powering quite some interesting stuff, one just needs to open their mind beyond OpenJDK.

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/

Still, eventually even Java will enjoy the value types and deterministic destruction capabilities of languages like D, or, going back to the days before Java, languages like Modula-3 and Eiffel, which they should have paid more attention to for Java 1.0.


There are a lot of gotchas with the new GC that make me nervous about this release:

> As far as we know, ARC works with the complete standard library except for the current implementation of async...

That's not a great endorsement...

> If your code uses cyclic data structures, or if you’re not sure if your code produces cycles, you need to use --gc:orc and not --gc:arc.

Seems like this is a big onus to put on the user -- it's tough to prove a negative like this.


I think you missed this detail:

> ARC was first shipped with Nim 1.2... [ORC is] our main new feature for this release

Seems like they should phrase it like "use ORC unless you know you don't have cycles" rather than "use ORC if you're not sure you have cycles", but that's a reasonable responsibility to take on if you're choosing to use an alternative garbage collector.


I'd prefer no phrasing at all. Just pick one that will work well in all cases, and don't give me any footguns.

To clarify: By default Nim will still use the GC which doesn't have any "footguns". ARC and ORC are new features which have different trade-offs, and some limitations. You opt-in to using them if you're interested.

A way of dealing with memory that works well in all cases! That would be great.

Yes. Or at least put them in the appendix.

I believe the phrasing might not be clear to everyone, so here's a reduced version:

* Nim's current async implementation creates a lot of cycles in the graph, so ARC can't collect them.

* ORC is then developed as ARC + cycle collector to solve this issue, and it has been a success.

* This 1.4 release introduces ORC to everyone so that we can have mass testing of this new GC and eventually move towards ORC as the default GC (see the concrete cycle example after the footnotes below).

TL;DR: ORC works with everything† and will be the new default GC in the future. Your old Nim code will continue to work, and will just get faster‡.

† We are not sure that it's bug-free yet, which is why it's not the default for this release.

‡ Most of the time ORC speeds things up, but there are edge cases where it might slow things down. You're encouraged to test your code with --gc:orc against our default GC and report performance regressions.
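To make the cycle point concrete, here's the classic shape that plain ARC cannot reclaim but ORC can (a minimal sketch):

    type Node = ref object
      next: Node

    proc makeCycle() =
      var a = Node()
      a.next = a  # self-cycle: the refcount never drops to zero

    makeCycle()
    # nim c --gc:arc app.nim  -> this leaks
    # nim c --gc:orc app.nim  -> the cycle collector reclaims it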


Your explanation is very clear. Thanks.

Looks like it's still an opt-in experimental option at this point rather than the default. Presumably they'll fix these caveats before recommending it for general use.

If I recall correctly, I read that the intention is to eventually make ORC the default, not ARC. (ORC is like ARC, but it avoids problems with cycles.)

Yes, you're totally right! That's the exact reason ORC should eventually become the new default GC.

Sorry if you got misled by the article - ORC is what you should use if your program has cycles. ORC deals with async just fine.

Does ORC stand for something?

I could be wrong, but I believe the acronyms are:

ARC = Automatic Reference Counting (NOTE: not Atomic)

ORC = Optimized Reference Counting

The loopy/circular character of the letter 'O' may be a secondary mnemonic since it collects cycles.


I meant "character" as in "nature" not as in the prog.lang "character type"..I just realized that in context this might be confusing. Lol.

At an ELI5 level, does anyone know why Swift can have ARC and async but Nim's ARC doesn't work with async? Is it just implementation details of Nim's async specifically instead of anything more fundamental to ARC? Just asking out of curiosity.

Yes. There is nothing fundamental preventing Nim's ARC from working with Nim's async. There are attempts to make Nim's async cycle-free.

Note that what we're referring to as "async" for Nim is actually "async await". AFAIK Swift doesn't support that, does it?

Swift doesn't claim hard realtime capabilities like Nim's ARC does.

I'm not familiar with Nim. Does it also support weak references?

A significant portion of the problems with cycles in ARC are parent references.


Weak references are possible with the .cursor pragma applied on local variables and object fields. This release introduces cursor inference too, enabled by default with --gc:arc|orc, but only for local variables.
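For example (the typical parent back-reference case):

    type
      Node = ref object
        parent {.cursor.}: Node  # non-owning: doesn't keep the parent alive
        kids: seq[Node]

    var root = Node()
    let child = Node(parent: root)
    root.kids.add child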

Is there an msan/asan equivalent that can detect leaks in your test suite?

Yes, we use valgrind for the compiler and stdlib test suite. The testament tool supports this, however it's not in widespread use outside of the compiler and stdlib, so documentation is rather sparse.

Since Nim uses the C compiler to generate executables, you should be able to use `--passC:-fsanitize=memory --passL:-fsanitize=memory` to enable msan. For maximum effectiveness the flags `-d:useMalloc --gc:orc` should also be used.

`-d:useMalloc` tells Nim to allocate memory using libc's malloc instead of our TLSF implementation. This should provide adequate compatibility for use with external inspection tools. We do `--gc:orc` because this is one of the only GCs that supports -d:useMalloc (the other being arc).
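Putting those flags together, a full invocation would look something like:

    nim c --gc:orc -d:useMalloc --passC:-fsanitize=memory --passL:-fsanitize=memory -r app.nim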


Please put this in the docs, if it is not there already

Awesome to see 1.4 finally released, and to have a version that should hopefully build cleanly out of the box on OpenBSD and FreeBSD and mostly "just work"[1]. Next target is NetBSD[2] then DragonFly!

1. https://github.com/nim-lang/Nim/issues/14035 2. https://forum.nim-lang.org/t/6610


I was skeptical towards Nim. Then I wrote a small program that uses SDL2, and compared it to the same program written in over 10 other programming languages. Nim has an excellent combination of ease of use and performance.

Nim is my mistress. I work with Elixir and Javascript, but something about Nim is so pure and exact and precise. I love it.

I'm hoping someone builds a great web framework with it, and a library like Ecto for better postgresql access. This language has great potential to build faster software.


Can you output WebAssembly/Emscripten project that uses SDL2?

This seems quite possible from everything I've explored, and it's also something I want to try soon. It's also interesting to use Nim through JS as a scripting layer for an application written in Nim, where they both communicate through shared data structures or something.

Here's a repo I found: https://github.com/Jipok/Nim-SDL2-and-Emscripten


Yes. People have done this.

So one of the biggest improvements to my day-to-day life as a programmer was having compiler-checked nullability in Kotlin. It is the most significant feature that makes programs more reliable and code more readable compared to Java. I never want to miss it again.

Why did Nim decide to allow null pointers? They must have had a very good reason, given its young age?


You need null pointers for system programming.

That said, you can already declare `type MyPtr = ptr int not nil`, but the compiler is still clunky on proofs and needs a lot of help.

It is planned to have a much better prover in the future, and ultimately Z3 integration for such safety features: https://nim-lang.org/docs/drnim.html
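A small sketch of what that buys you today:

    {.experimental: "notnil".}

    type MyPtr = ptr int not nil

    proc deref(p: MyPtr): int =
      p[]  # no nil check needed: the type guarantees non-nil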


Minor correction Re: "young age" - Nim dates back to 2006 with a first public release in 2008 and is older than Go and Go is older than Kotlin.

Nil checking is being worked on: it might be part of 1.6.

Interop I imagine.

Great to see that Nim grows ever faster and more stable. Been a fan of the language for quite a while now, and while I miss the old days of new and exciting changes with every release, it's much easier to target Nim now that it has become this stable. Great work all around!

I played around with Nim a little and was amazed at how small the executables were: 10s of KB for something simple. Even Rust spits out 100s of KB, or even >1MB by default.

In the end I still went with Rust, simply because it's more popular, but my initial impression was that Nim is a really fun language to work in, and much much easier to pick up than Rust.


"and much much easier to pick up than Rust."

That's my impression so far, too. Previously, I already had some experiences with C, Pascal, and Python. Then learning Nim just feels natural.

Not so much with Rust. Well of course it's not surprising, with memory safety as one of its goals.


>Well of course it's not surprising, with memory safety as one of its goals.

Where is this idea coming from that memory safety has to be complicated? Nearly all languages (basically everything except C/C++/Assembler) people are using are memory safe. And usually it's not complicated at all.


Well Rust's complication is memory safety and the lack of garbage collector. You see similar issues in a language like Swift, but Swift lets you create a memory leak very easily if you're not aware of the issues. Which can be fine when you're just starting out, you can always learn about it later. But Rust isn't so charitable: you must learn about memory safety upfront. The plus side is that it's significantly more difficult to accidentally create a memory leak.

I've only played with Rust a little bit, but I think something that often goes under-remarked is that its ownership system doesn't just give you "static GC" -- it also guarantees no data races for multithreading. This is pretty awesome.

> no data races for multithreading

This is simply not true. Rust's borrow checker has no notion of memory models, i.e. sequential consistency, acquire, release, or relaxed semantics.

The Tokio team needs an additional model checker to ensure no data race in Tokio:

- https://github.com/tokio-rs/loom

If you want to ensure no data races you need formal verification not a borrow checker, I've compiled my research and a RFC for Nim formal verification here:

- https://github.com/mratsim/weave/issues/18

- https://github.com/nim-lang/RFCs/issues/222


Yes, I think for me Nim could be a great answer for when I want to throw together a quick script. Right now I'll often use JS/Node for that and I'm not going to switch to Rust because it'll take a lot longer to get a rough and ready script going.

Nim is indeed a great answer for this.

Should you want to give that script a nice, traditional CLI then you can do so with very low effort using https://github.com/c-blake/cligen

Should you want "script like" edit-test development cycle ergonomics you can use the TinyCC/tcc backend for compiles in ~200..300 milliseconds. Once it's how you like it you can compile with gcc & -d:danger for max performance in deployment.


I have so many little nim scripts all over the place.

Unmentioned so far is the NimScript mode `nim e`. It's a sort-of portable alternative to Bash scripting ("sort of" because of the presently much longer start-up time...).

I have never seen anyone say that Rust was easier to pick up than Nim...(Famous last words, I know!)

Great stuff! GC ORC is an amazing improvement over existing memory management models :) I hope to see more Nim projects adopt it!

Congratulations! And thanks for all the progress on ARC/ORC. But I must say, even though I'm excited about ARC/ORC, I'm even more happy to see the nice list of bugfixes and quality standard library additions!!

Nim is a lot of fun to write, and the language is small enough that you could build something fun / useful in a weekend. The community is also very responsive. Definitely worth checking out!

Web development.

I’ve said it before and I’ll say it again ... I wish more folks would use Nim for web development.


100% agree. I'm developing an ORM [1] in my spare time specifically with the vision to make it easier to create web apps with Nim. Created a PoC with Jester in backend, Karax on the front, and Norm for model definition, and it turns out to be very much usable, accessing the same models from backend and frontend and all that.

[1] https://norm.nim.town


Maybe my lack of lower-level language knowledge will show here, but how does that compare to Rust? I keep seeing and hearing about these new-ish languages - Rust, Nim, Zig, etc. - that all claim C/C++-level performance with a better developer experience. Are any of these preferred for API/web development? Do they yield much advantage over something like Elixir, which already provides a significant perf increase over a Python (Django/Flask)/RoR stack?

> Any of these is preferred for API/Web development?

I often wonder why anyone would use a language like Zig or Rust for Web development. I am very much biased in favour of Nim here, but to me in general a non-GC'd language seems like overkill for web development. So I would rule those languages out straight away.

I can't speak to Elixir, likely the main difference will be the lack of a mature Django-like framework. I'm assuming that Elixir has one, whereas Nim doesn't. If you're looking for a fun project, I would love to see that made for Nim and happy to give pointers if you need them :)


Indeed Phoenix is a great web framework for Elixir. Idk that I'd be qualified to tackle something like building a web framework but I'd be curious enough to look into it :)

In my mind, and be aware of my bias here, these languages split up into three different categories:

* Has a GC, but you can remove it. This is Nim and D.

* Relies on pervasive refcounting. This is Swift, and Nim if you choose that implementation.

* Has no GC. This is Zig and Rust. (Though obviously you can use refcounting in these languages, it comes as a library.)

While this focuses on a specific aspect of these languages, I think it also represents their philosophies pretty well. Nim and D start from "what if we had a GC" and then try to make things nicer down the stack. Rust and Zig start from nothing and ask "how nice can we make this?"

There are also additional factors that may or may not play in here, depending on what your needs are. Arguably, Rust is starting to break out of the "niche language" stage and move into the "significant projects and is sticking around" phase, whereas many of these other languages aren't quite there yet. This can matter with things like getting help, package support... some people love the open frontiers of new languages, others want something more mature. https://nimble.directory/ has 1,431 packages at the time of writing, https://crates.io/ has 48,197.


How many of those 48,197 have over 50 lines of code? How many of them have had changes in the last 1 or 3 years?

Same questions apply to Nim too of course, but I believe Rust's focus on newbies and pretty much trivial crates/cargo new pkg addition has led to a lot of cruft in there. Not to mention squatters (even if those are probably not a majority).

Also, I would challenge your classification of ARC being in the same category as Swift, memory-handling wise. Nim's ARC has hard realtime capabilities, would that be possible with pervasive RC?


I agree that sheer package count is a weak metric especially for very hyped up "can I put it on my resume" prog.langs, and in the context of "maturity" (as opposed to "popularity"). I also agree that programmer attrition rate (or its complement retention rate) are a bit better.

While I cannot speak to Rust's packages, as part of testing a new Nim package manager against the published ecosystem, I happened to survey the freshness of Nim's ecosystem just a week or three ago. About 80% of those 1400 Nim packages have been updated in the past 2 years. By last git update, 50% have not changed since 2019/October and 30% have not had a commit since 2018/November. 50..80% fresh (with apologies to Rotten Tomatoes!) is much higher than I would have guessed naively. I am not sure even Python would score so highly. I realize these numbers are but a start down a more real analytical road. Maybe someone could measure Rust in this way.

In terms of package quality, the one time I re-wrote something that existed in Rust in Nim, my Nim version ran like 10x faster than Rust. That was a couple years ago (https://github.com/c-blake/suggest if anyone cares) and is just one data point. I do think Rust unfairly enjoys a presumption of performance when almost any language has ways to make code run fast. Less ambitious or knowledgeable programmers can always make things slow, but they also seem biased against picking up Rust. So, there is also a sample selection bias, but this has probably all been said before.


On the other hand, many many packages on https://pypi.org/ have not been updated in the last 5 years since they are essentially feature-complete. They don't need active development, the feature-set in the last published version still works.

It's a fair point that sometimes software is "just done". Combining the dependency graph with version freshness may yield even more informative retention/attrition metrics. { EDIT: as in "either updated in N months or a dependency of something which was". Of course, this can easily neglect reverse dependencies not in the package graph, but perfect can be the enemy of the good. :-) }

It's funny, because everyone wants different metrics. Some people would argue that no changes in the last 3 years is a sign of maturity!

Anyone can argue any metric, and that's 100% fine. Any of these things can only be a really rough measure of anything.


Yes, Common Lisp people say that a lot. I'm sure you know why :)

Nim doesn't require that you use a gc; one of its many gc options is `none`, which is, as it sounds, no garbage collector at all.

Right, that was the “you can remove it” part.

But then strings and most of the stdlib aren't available to you unless you want one big massive memory leak.

Sibling talks about memory management. Some other notes:

- Nim and rust have macros.

- D has very high-quality metaprogramming (probably better than any other language without macros).

- (Afaik swift and zig have fairly normal templates. I don't know as much about those.)

- D and zig have compile-time function execution (think c++ constexpr on steroids on steroids).

- Swift is likely to be the slowest of the bunch; like go, though it's technically compiled to native code, its performance profile is closer to that of a managed language. The others should be generally on par with each other and with c.


I would say that Nim has one of, if not the, highest quality metaprogramming among these languages. Indeed, compile-time function execution, AST manipulating macros, declarative templates and term rewriting macros are some of the metaprogramming features in Nim.
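Two small tastes of that - compile-time function execution and an AST-level macro:

    import std/macros

    proc fib(n: int): int =
      if n < 2: n else: fib(n - 1) + fib(n - 2)

    const f10 = fib(10)  # evaluated entirely at compile time
    echo f10             # 55

    macro times(n: static int, body: untyped): untyped =
      ## Repeat a statement list n times, at the AST level.
      result = newStmtList()
      for i in 1 .. n:
        result.add copyNimTree(body)  # copy so each repetition is independent

    times 2:
      echo "hi"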

Nim beats D on metaprogramming.

I said 'better than any other language without macros'.

Nim has macros.


And I said something else.

I am using nim this week on a data file scraping project. If you can Google and write python you can just start with nim and learn as you go, very easy.

I am super bummed that there is (effectively) no debugging. I am too lazy to mess with VS code to get gdb working, it should just work already. Someday, I guess. Maybe jetbrains will save us. With real IDE support nim would sweep the nations.


From my experience, there really is not that much need for debugging in the sense of adding breakpoints and attaching to a running process with Nim. Honestly, `debugEcho` suffices, give it a shot. I mean, you may need more debugging tools when your project gets large but since you're just starting with Nim, you can just relax and keep going.

Relax is my strategy right now but I am a bear of little brain and spent years with Matlab stopped in the debugger while I went to lunch. Peering into data at will is something I can't easily give up. Debug trace was how I spent the stressful young years of my programming life and for data intensive, rather than logic intensive transactional code, I don't prefer tracing.

> The reason is that only after the new(result, finalizer) call the compiler knows that it needs to generate a custom destructor for the CustomObject type, but when it compiled tricky the default destructor was used.

I'm wondering why people underestimate graphs that much. It's a lot easier to explicitly represent dependencies between your definitions as a graph; you not only avoid such issues but also get rid of unnecessary passes. I did that in my compiler and it works great.


I am intrigued by Nim. Anyone know the motivation for Nim? I was not able to find that on their website.

From this nice interview with Araq (language creator): http://157.245.209.254/andreas-rumpf-on-creating-and-growing...

> I started the Nim project after an unsuccessful search for a good systems programming language. Back then (2006) there were essentially two lines of systems programming languages:

> The C (C, C++, Objective C) family of languages.

> The Pascal (Pascal, Modula 2, Modula 3, Ada, Oberon) family of languages.

> The C-family of languages has quirky syntax, grossly unsafe semantics and slow compilers but is overall quite flexible to use. This is mostly thanks to its meta-programming features like the preprocessor and, in C++'s case, to templates.

> The Pascal family of languages has an unpleasant, overly verbose syntax but fast compilers. It also has stronger type systems and extensive runtime checks make it far safer to use. However, it lacks most of the metaprogramming capabilities that I wanted to see in a language.

> For some reason, neither family looked at Lisp to take inspiration from its macro system, which is a very nice fit for systems programming as a macro system pushes complexity from runtime to compile-time.

> And neither family looked much at the upcoming scripting languages like Python or Ruby which focussed on usability and making programmers more productive. Back then these attributes were ascribed to their usage of dynamic typing, but if you looked closely, most of their nice attributes were independent of dynamic typing.

> So that's why I had to create Nim; there was a hole in the programming language landscape for a systems programming language that took Ada's strong type system and safety aspects, Lisp's metaprogramming system so that the Nim programmer can operate on the level of abstraction that suits the problem domain and a language that uses Python's readable but concise syntax.


Also relevant this other interview (video), a bit more recent: https://www.youtube.com/watch?v=-9SGIB946lw

Great! Where's the announcement on https://twitter.com/nim_lang so I can retweet it?


Nim and Red (also on HN today) both seem to have an interesting and intersecting feature set. Anybody here used or heavily evaluated both who can comment?

Nim and Red are quite different languages, both in terms of semantics and in terms of the projects' goals and scope. Perhaps you should elaborate on the intersection that you see between the two.

I can speak only for Red: it takes its heritage from Lisp, Forth, and Logo; has an embedded cross-platform GUI engine with a dedicated DSL for UI building, a C-like sub-language for system-level programming, an OMeta-like PEG parser, and a unique type system with literal forms for things like currencies, URLs, e-mails, dates, and hashtags; all of that fitting in a one-megabyte binary and valuing human-centered design above all.


Rebol/Red is more like Python/Cython - able to be fast but falling back to an interpreter for dynamic things. As such there are probably more "performance footguns" (though any such thing is ultimately subjective based on programmer awareness). Nim feels more like what C++ (or Python) should always have been. Not sure if this helps. It's kind of a "big" question.

No, it's actually the other way around: Red and Rebol are interpreted by default, and Red has a bootstrapping AOT compiler capable of bridging it with Red/System (a C-like sub-language), which in turn targets machine code. However, since Red is highly dynamic, the compiler cannot preserve semantics across the whole language, so it leaves some parts for the interpreter to process at run-time. And sometimes that's the only option, since code can be generated on-the-fly or pulled over the network: Red is homoiconic and has quite powerful meta-programming facilities on top of that.

Nim or Julia?

As someone who doesn't do software development as part of routine day-to-day work but has played with both, I'd describe Julia as "Fortran for Python developers", while Nim feels like "C for Python developers".

My interest in scientific applications pushes me towards Julia, but the user experience has so far been strictly worse than Python, so I just don't bother with it as much as I might like to.

On the other hand, I am drawn to experiment with Nim (and to some extent Rust as well) because they feel better constructed, having more professional feeling tools and approaches to packaging. The downside is that their core strengths are in use-cases which aren't so aligned with my interests.

The strength of the Python packaging ecosystem makes me doubtful of the impact Julia can have. Meanwhile for Nim, it feels to me like awareness and adoption suffer a fair bit from competing with Rust for mindshare.


You can pick both; they are very different. Julia is a dynamic language looking for a compromise between interactivity and performance (from its origins in R/Matlab), while Nim is one of the new batch of static languages looking for the ideal balance between ease of development and safety/speed.

For example, if you want to do something more exploratory (like some research or data analysis) that can still easily scale up to HPC you can use Julia, if you want to create something reliable with small binaries and no start-up issues or use in more resource constrained environments you can use Nim.


Not that @ddragon exactly said you could not, but you can also use Nim for data science, such as https://mratsim.github.io/Arraymancer/ or https://github.com/Vindaar/nimhdf5 or many others and there is a REPL INim mentioned in another comment here. If you want everything "done for you already", Julia has more libraries and it is (for better or worse) more fundamentally dynamic.

Julia has its specific niche and Nim does not collide with this niche. Nim is more of a general language by far.

It is nice to see so much innovation now in the ahead-of-time compiled languages camp.

What worries me is the fragmentation, and the fact that no one language seems to check all of the (subjective set of) boxes for a general purpose high-speed, ahead-of-time compiled language [0].

E.g., Crystal seems to be the only one supporting a modern concurrency story (similar to Go's), but it has a huge problem with compile times.

Nim looks nice in many respects, but last I checked, they don't have anything like Go-style concurrency. Maybe not on everyone's wishlist, but as the world moves toward flow everything/everywhere[1], I personally find this to be a problem.

[0] https://docs.google.com/spreadsheets/d/1BAiJR026ih1U8HoRw__n...

[1] https://www.amazon.com/Flow-Architectures-Streaming-Event-Dr...



Also INim at the Nim-2020 conference for a REPL: https://www.youtube.com/watch?v=Qa_9vut4TzQ&list=PLxLdEZg8DR...

In short, I think at least for the linked to table you mentioned, Nim does check all the boxes.


Thanks! This is interesting, although I'd be worried about how stable this is for the long term. Would be happy to have my worries annihilated of course :)

It depends how good/low level a programmer you are, but it's really not that hard to spin up your own impl. I did an (incomplete) Python multiprocessing like dealio in like 100 lines of code (https://github.com/c-blake/cligen/blob/master/cligen/procpoo...) just to make my `only.nim` run faster because libmagic/file are so CPU intensive. So, one way to annihilate your worries is to just roll your own.


