
Nim vs. Crystal - open-source-ux
https://embark.status.im/news/2019/11/18/nim-vs-crystal-part-1-performance-interoperability/index.html
======
RX14
Nice article!

There's one small error, where the article says "With Nim, we were also able
to link both the Nim and C files into the same executable, which Crystal sadly
cannot do." However, this is not true! You've passed the object file you
created directly to the linker, so it is in fact included directly in the
executable.

I'm a core team member of Crystal, feel free to ask away.

------
mratsim
Status dev here (i.e. a colleague of the benchmarker) and a prolific author of
high-performance Nim libraries.

Note that the Nim standard library prioritizes low maintenance cost, as the Nim
team is small. When raw performance is needed, Nim gives us the tools to reach for it.

For instance at Status we use our own json serialization/deserialization
library: [https://github.com/status-im/nim-
serialization](https://github.com/status-im/nim-serialization) and even Araq,
Nim language creator, has his own JSON library:
[https://github.com/Araq/packedjson](https://github.com/Araq/packedjson)

This allows Nim to be in the top 10 for JSON parsing in TechEmpower:
[http://www.techempower.com/benchmarks/#section=data-r18&hw=p...](http://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=json)

Disclaimer: Status is the main sponsor behind Nim
[https://our.status.im/status-partners-with-the-team-
behind-t...](https://our.status.im/status-partners-with-the-team-behind-the-
programming-language-nim/) / [https://nim-lang.org/blog/2018/08/07/nim-
partners-with-statu...](https://nim-lang.org/blog/2018/08/07/nim-partners-
with-status.html)

Status is committed to Nim but we do have a pretty diverse stack (i.e. most of
our code is in Clojurescript and we have part of the codebase in Go that we
are migrating to Nim at the moment) and teams work very independently from
each other, so other teams are sharing back their experiences with Nim.

~~~
modernerd
Have you written about why you're migrating from Go to Nim?

~~~
mratsim
We did cover part of it here: [https://our.status.im/status-partners-with-the-
team-behind-t...](https://our.status.im/status-partners-with-the-team-behind-
the-programming-language-nim/)

But to give more detailed explanations:

- We are a blockchain company and a close partner of the Ethereum Foundation.
Ethereum research is done in Python, and using Nim allows us to quickly
transcribe research into fast code.

- We focus on resource-restricted devices: for now mobile phones, but running a
blockchain node on a router might make sense in the future as well, so we need
tight control over memory allocation.

- We can reuse C and C++ tooling (Valgrind, LLVM sanitizers, gdb, lldb).

- The biggest bottleneck we (and all Ethereum clients) have is cryptography.
Using Nim means we can reuse cryptographic libraries written in C or C++ (or
even C++ templates) through one of the easiest FFIs around, or roll our own
implementations.

- WASM is going to be very important for blockchain development, and the most
important metric there is generated code size (because it lives forever on the
blockchain). Go and Rust struggle with WASM code size at the moment; Nim is
very good at it:
[https://www.youtube.com/watch?v=QtsCwRjtbrQ](https://www.youtube.com/watch?v=QtsCwRjtbrQ)

- Being the main sponsor of a language also means being able to shape it to
suit our needs from scratch.

So this was why we started using Nim in the first place.

Our Nim team was started two years ago, and the experiment has been successful.
For example, our Ethereum client was kickstarted by translating Python to Nim
with an automated tool: [https://github.com/metacraft-
labs/py2nim](https://github.com/metacraft-labs/py2nim).

Our main Go codebase supports our own fork of the major Ethereum client,
go-ethereum. But this is costly for us, as upstream and our teams have
different goals. Now that the Nim Ethereum client is more mature, moving
completely to Nim with a homegrown client will give us better APIs, better
logging, and controlled release cycles, and will avoid the cost of syncing with
upstream.

~~~
steveklabnik
The smallest wasm binary produced by rustc is ~110 bytes, Rust struggles with
this far, far less than Go, or any language that has a significant runtime.

This doesn’t mean that the rest of your reasons are invalid, of course.

~~~
RX14
Hopefully wasm's gc proposals will be implemented and get LLVM support,
allowing languages like Crystal to sanely support wasm.

------
nicholaides
I wrote some Crystal recently for a hobby project. I have to say, I really
like it.

• Nice syntax (mostly just that blocks/procs are easy to use— Ruby-like syntax
is nice, but it’s not really that important)

• straight-forward class/object model

• type system is simple but powerful (union types + type inference are a great
combo)

• syntax and std lib that enables functional-style programming, but isn’t
strictly functional

• Pretty darn fast— compiles to machine code via LLVM, and seems like it’s not
far behind C, C++ and Rust in most benchmarks, despite being garbage collected

What other languages offer a similar profile? D? Swift? Kotlin (via LLVM)?

~~~
smt88
F#. I hate the syntax, but anyone who enjoys ML will like it. Curly braces are
optional.

It also meets all of your other requirements, including the "isn't strictly
functional" part (which is somewhat rare).

Scala has most of these qualities, but people say it's extremely complicated
and supports too many different paradigms in a single language. I've never
used it.

~~~
agumonkey
Does F# offer native builds? I thought it was CLR-only.

Also, Scala is about to jump to a new compiler (and maybe a new language). It's
a weird time for Scala (from the little that I know).

~~~
throwlaplace
>Does F# offer native builds? I thought it was CLR-only.

I would guess that it will eventually support AOT:

[https://mattwarren.org/2018/06/07/CoreRT-.NET-Runtime-for-
AO...](https://mattwarren.org/2018/06/07/CoreRT-.NET-Runtime-for-AOT/)

------
SeekingMeaning
The biggest turn-off with Crystal for me is the lack of native Windows support.
It's kind of a big deal.

> **However**, if you are a Windows user, _for the time being_ you are out of
> luck

We’ve been waiting for _years_.

~~~
fizixer
I read your comment and realized I haven't thought much about what it means for
a programming language to have native Windows support. A Crystal issue page
[1] discusses two ways to support Windows:

- Cygwin (POSIX)

- Native Windows API support

Though I wonder if targeting .NET would be something to consider. It would not
save much effort, compared to Windows API, but in principle, .NET is a
portable platform (like POSIX).

Also I wonder why the LLVM project doesn't make it relatively easier to
support multiple platforms (including Windows). The whole promise of LLVM is
that your prog-lang just targets LLVM (so to speak) and doesn't have to worry
about specific platforms.

[1] [https://github.com/crystal-
lang/crystal/issues/2569](https://github.com/crystal-lang/crystal/issues/2569)

~~~
smt88
If Crystal added CLR[1] (what you likely meant by .NET) or WebAssembly as a
target, I guarantee its popularity would vastly increase.

LLVM would be smart too, but I suspect that having access to the complete .NET
ecosystem from a Ruby-like language would be very appealing to devs who worked
on Ruby at home (or at startups) and are now at big enterprises that demand
big-name stacks.

1. [https://en.wikipedia.org/wiki/Common_Language_Runtime](https://en.wikipedia.org/wiki/Common_Language_Runtime)

~~~
gdxhyrd
The GP is talking about standard library support etc. for the compiled
objects, while still being native.

------
dom96
Something to keep in mind while looking at the benchmarks in this article:

* Nim's `json` module will parse the full JSON file into memory. I might be mistaken, but AFAIK Crystal doesn't do this. The JSON module in the stdlib is good enough for most use cases, but there is also `packedjson`[1], which could be more performant for some use cases

* Regarding the base64 benchmark, you may want to read this: [https://forum.nim-lang.org/t/5363](https://forum.nim-lang.org/t/5363) (this article does not include the patches made by treeform)

Benchmarks are easy to game; the only takeaway from an article like this is
that, on average, the performance of software written in Crystal and Nim
should be about the same.

1 - [https://github.com/Araq/packedjson](https://github.com/Araq/packedjson)

~~~
shaki-dora
FWIW, my experience with quite a few (usually small(ish)) tasks I implemented
in Crystal is in line with the article's.

Once, a CSV-parsing and string-wrangling data batch job in Python indicated it
would take 27 hours to finish. I got annoyed and wrote a line-by-line
translation in Crystal. Writing it took about 40 minutes. It took a total of
2.5 minutes to run: that's between two and three orders of magnitude.

As I said, this was a line-by-line translation, so whatever mistakes the
Python version had, the Crystal version would have had too. There are,
however, a bunch of specialised Python libraries (numpy et al.) that weren't
used, and I guess you could achieve some significant performance increases
that way. Coming from Ruby, I just happen to be quite productive in Crystal,
to the point where I stumbled over the article's description of it as a
"systems language".

~~~
juki
I'm not sure why you're comparing Crystal to Python here, but just in case
you've misunderstood something, Nim is a completely different language from
Python. It borrows some bits of syntax (like I suppose Crystal does with Ruby,
but I haven't used either of them), but otherwise it has nothing to do with
Python, so a performance comparison of Crystal vs. Python doesn't say anything
about Crystal vs. Nim.

------
zmmmmm
I really like this new generation of "C replacements". I guess these are what
I was hoping Go would be like but although it got part way there, it never
quite fully achieved the combination of power, simplicity and ease of use I
was looking for. It would be awesome if one of these would gain enough
mainstream support that it didn't feel irresponsible to bring it into mainline
use for work projects. For now though I just have to play with them on the
side.

------
buster
I was curious to reproduce the "benchmark" in Rust, which got me to this code.
Mind you, I don't have much experience in Rust, so I'd think this is the most
straightforward translation of the Nim code in the blog post for a beginner.

    
    
      use std::fs::File;
      use std::io::Read;
      use std::path::Path;
      use serde_json::Value;
      
      fn main() -> Result<(), Box<dyn std::error::Error>> {
          let mut json_file = File::open(Path::new("1.json"))?;
          let mut json_content = String::new();
          json_file.read_to_string(&mut json_content)?;
    
          let json_value : Value = serde_json::from_str(json_content.as_str())?;
          let coords = json_value["coordinates"].as_array().expect("Coordinates");
          let len: f64 = coords.len() as f64;
          let mut x: f64 = 0.0;
          let mut y: f64 = 0.0;
          let mut z: f64 = 0.0;
      
          for value in coords {
              x += value["x"].as_f64().expect("x conversion");
              y += value["y"].as_f64().expect("y conversion");
              z += value["z"].as_f64().expect("z conversion");
          }
      
          println!("{}", x / len);
          println!("{}", y / len);
          println!("{}", z / len);
    
          Ok(())
      }
    

(FYI: the program ran in 2 seconds on my 7-year-old laptop.)
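For readers who want to sanity-check the arithmetic without a Rust toolchain, here is the same averaging task in plain Python (stdlib `json` only). The inline data is a tiny hypothetical stand-in for the article's `1.json`, which is not reproduced here:

```python
import json

# Tiny stand-in for the benchmark's 1.json file (the real file is much larger)
raw = json.dumps({
    "coordinates": [
        {"x": 1.0, "y": 2.0, "z": 3.0},
        {"x": 3.0, "y": 4.0, "z": 5.0},
    ]
})

# Parse the whole document into memory, then average each component
coords = json.loads(raw)["coordinates"]
n = len(coords)
x = sum(c["x"] for c in coords) / n
y = sum(c["y"] for c in coords) / n
z = sum(c["z"] for c in coords) / n
print(x, y, z)  # 2.0 3.0 4.0
```

Printing all three components also avoids the duplicated-output slip that is easy to make when copy-pasting the three println lines.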

~~~
unlinked_dll
You can make this a lot faster assuming the schema is fixed, and clean it up a
bit with iterators, i.e. something like this:

    
    
        // imports the original snippet omitted (assumes the serde and
        // serde_json crates with the "derive" feature)
        use serde::Deserialize;
        use std::error::Error;
        use std::fs::File;
        use std::io::Read;

        #[derive(Copy, Clone, Deserialize)]
        struct Point {
            x: f64,
            y: f64,
            z: f64,
        }

        #[derive(Clone, Deserialize)] // a struct holding a Vec can't be Copy
        struct Schema {
            coordinates: Vec<Point>,
        }

        fn main() -> Result<(), Box<dyn Error>> {
            let mut json_content = String::new();

            File::open("1.json")? // don't need Path
                .read_to_string(&mut json_content)?;

            let schema = serde_json::from_str::<Schema>(&json_content)?;

            let len = schema.coordinates.len() as f64;
            let (x, y, z) = schema
                .coordinates
                .iter()
                .fold((0.0, 0.0, 0.0), |(xsum, ysum, zsum), p| {
                    (xsum + p.x, ysum + p.y, zsum + p.z)
                });
            println!("x: {}, y: {}, z: {}", x / len, y / len, z / len);
            Ok(())
        }

~~~
jiofih
While I appreciate the intent, that’s turning beautiful concise code into a
very much less readable... thing.

How much faster is that?

~~~
dwb
It would be more polite and accurate to express that as your opinion rather
than objective fact. Personally, I prefer to read the second version. I also
expect the techniques shown in it, especially parsing against a type-level
schema, would be less error-prone than the original in a larger program.
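The "parsing against a type-level schema" point can be sketched in Python with stdlib dataclasses; this is a rough analogue of serde's derived deserializers, not the Rust mechanism itself. Constructing a typed value fails loudly if a field is missing, instead of yielding a silent wrong result later:

```python
import json
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float

def parse_points(raw: str) -> list:
    # Point(**obj) raises TypeError on missing or unexpected fields,
    # so schema violations surface at parse time rather than deep in
    # later computation
    return [Point(**obj) for obj in json.loads(raw)["coordinates"]]

pts = parse_points('{"coordinates": [{"x": 1.0, "y": 2.0, "z": 3.0}]}')
print(pts[0])  # Point(x=1.0, y=2.0, z=3.0)
```

Feeding it `{"coordinates": [{"x": 1.0}]}` raises immediately, which is the error-proneness argument in miniature.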

------
sam0x17
I'm a big fan and long-time user of crystal. It's like the good parts of ruby
minus the dangerous parts, with actual performance. The OOP system is also
very opt-in, in that you can get as granular as Java is if you want, or use
modules for everything. The introspection offered by macros is absurdly good
as well, and allows for constructions you would only think are possible in
interpreted languages. Slices make everything nice as well, especially with
string and byte manipulation.

Generally, crystal will let you do what you want to do, often in a number of
ways. I definitely couldn't say that about Rust, though I like Rust.

~~~
ksec
>It's like the good parts of ruby minus the dangerous parts

I like this description. Others said Ruby allows you to shoot yourself in the
foot; I often think maybe there should be gun control in the first place.

Edit: _This list is a little old, The proliferation of programming languages
(all of which seem to have stolen countless features from one another)
sometimes makes it difficult to remember what language you’re currently using.
This handy reference, although not authored by me, is offered as a public
service to help programmers who find themselves in such a dilemma._

TASK: Shoot yourself in the foot.

C: You shoot yourself in the foot.

C++: You accidentally create a dozen instances of yourself and shoot them all
in the foot. Providing emergency medical assistance is impossible since you
can’t tell which are bitwise copies and which are just pointing at others and
saying, “That’s me, over there.”

Algol: You shoot yourself in the foot with a musket. The musket is
esthetically fascinating, and the wound baffles the adolescent medic in the
emergency room.

Perl: There are so many ways to shoot yourself in the foot that you post a
query to comp.lang.perl.misc to determine the optimal approach.

After sifting through 500 replies (which you accomplish with a short perl
script), not to mention the cross-posts to the perl5-porters mailing list (for
which you upgraded your first sifter into a package, which of course you
uploaded to CPAN for others who might have a similar problem, which, of
course, is the problem of sorting out email and news, not the problem of
shooting yourself in the foot), you set to the task of simply and elegantly
shooting yourself in the foot, until you discover that, while it works fine in
most cases, NT, VMS, and various flavors of Linux, AIX, and Irix all shoot you
in the foot sooner than your perl script could.

Then you decide you can do it better with the new, threaded version…

SNOBOL: You grab your foot with your hand, then rewrite your hand to be a
bullet. The act of shooting the original foot then changes your hand/bullet
into yet another foot (a left foot).

FORTRAN: You shoot yourself in each toe until you run out of toes, then you
read in the next foot and repeat. If you run out of bullets, you continue with
the attempts to shoot yourself anyway because you have no exception-handling
capability.

Pascal: The compiler won’t let you shoot yourself in the foot.

Ada: After correctly packing your foot, you attempt to concurrently load the
gun, pull the trigger, scream, and shoot yourself in the foot. When you try,
however, you discover you can’t because your foot is of the wrong type.

COBOL: Using a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place
ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to
HOLSTER. CHECK whether shoelace needs to be re-tied.

LISP: You shoot yourself in the appendage which holds the gun with which you
shoot yourself in the appendage which holds the gun with which you shoot
yourself in the appendage which holds the gun with which you shoot yourself in
the appendage which holds the gun with which you shoot yourself in the
appendage which holds the gun with which you shoot yourself in the appendage
which holds…

Scheme: You shoot yourself in the appendage which holds the gun with which you
shoot yourself in the appendage which holds the gun with which you shoot
yourself in the appendage which holds the gun with which you shoot yourself in
the appendage which holds … but none of the other appendages are aware of this
happening.

FORTH: Foot in yourself shoot.

Prolog: You tell your program that you want to be shot in the foot. The
program figures out how to do it, but the syntax doesn’t permit it to explain
it to you.

BASIC: Shoot yourself in the foot with a water pistol. On large systems,
continue until entire lower body is waterlogged.

HyperTalk: Put the first bullet of gun into foot left of leg of you. Answer
the result.

Motif: You spend days writing a UIL description of your foot, the bullet, its
trajectory, and the intricate scrollwork on the ivory handles of the gun. When
you finally get around to pulling the trigger, the gun jams.

APL: You hear a gunshot, and there’s a hole in your foot, but you don’t
remember enough linear algebra to understand what happened.

APL: You shoot yourself in the foot, then spend all day figuring out how to do
it in fewer characters.

Unix:

    $ ls
    foot.c foot.h foot.o toe.c toe.o
    $ rm * .o
    rm: .o: No such file or directory
    $ ls
    $

csh: You can’t remember the syntax for anything, so you spend five hours
reading man pages before giving up. You then shoot the computer and switch to
C.

Ada: If you are dumb enough to actually use this language, the United States
Department of Defense will kidnap you, stand you up in front of a firing
squad, and tell the soldiers, “Shoot at his feet.”

Concurrent Euclid: You shoot yourself in somebody else’s foot.

370 JCL: You send your foot down to MIS and include a 400-page document
explaining exactly how you want it to be shot. Three years later, your foot
comes back deep-fried.

Assembler: You try to shoot yourself in the foot, only to discover you must
first invent the gun, the bullet and the trigger. And your foot.

Assembler: Using only 7 bytes of code, you blow off your entire leg in only 2
CPU clock ticks.

Modula2: After realizing that you can’t actually accomplish anything in this
language, you shoot yourself in the head.

Visual Basic: You’ll really only appear to have shot yourself in the foot, but
you’ll have had so much fun doing it that you won’t care.

SNOBOL: If you succeed, shoot yourself in the left foot. If you fail, shoot
yourself in the right foot.

Paradox: Not only can you shoot yourself in the foot, your users can, too.

Access: You try to point the gun at your foot, but it shoots holes in all your
Borland distribution diskettes instead.

Revelation: You’re sure you’re going to be able to shoot yourself in the foot,
just as soon as you figure out what all these nifty little bullet-thingies are
for.

Maybe this list should be updated with more modern programming languages.

------
komuher
These benchmarks seem kinda biased: you just took the best of Crystal and the
worst of Nim from
[https://github.com/kostya/benchmarks](https://github.com/kostya/benchmarks)

~~~
shaki-dora
That statement seems wrong in the strict sense, since the "Havlak" benchmark
from your link has a more extreme difference between Crystal and Nim and was
not included in the article's tests.

The word "bias" has several meanings. One of them is an accusation that those
two benchmarks were chosen specifically to produce the desired outcome.
Considering the above, and that Base64 and especially JSON parsing are very
common, I think it would be fair to allow for the possibility that the
article's author chose them accidentally and not with any intent to deceive.

FWIW, I was surprised by the article's focus on interfacing with C, something I
have not done in 20 years of programming, and was wondering if it was a
(possibly subconscious) choice to set up Nim to win.

------
glofish
There are interesting summaries there, but when it comes to performance, the
article compares the implementation speed of the _json parsing_ and
_base64 encoding_ libraries in Nim and Crystal. It is not clear at all whether
those lessons apply to all/most code as well.

~~~
perturbation
Additionally, the Nim code was not compiled with all optimizations turned on
(i.e., it was built without -d:release):

    
    
        $ nim c -o:base64_test_nim -d:danger --cc:gcc --verbosity:0 base64_test.nim
    
        $ nim c -o:json_test_nim -d:danger --cc:gcc --verbosity:0 json_test.nim
    

IIRC the -d:danger flag enables some extra optimizations (like disabling
bounds checking), but -d:release is necessary for most optimizations to be
enabled.

Edit: It appears I'm incorrect, -d:danger does imply -d:release in newer Nim
versions.

------
ternaryoperator
When will you have first-class support for Windows?

~~~
RX14
Unfortunately there's no concrete timeline, it's being worked on on-and-off by
the community.

If you have free time, or free money, it'd be a great help to the effort.

~~~
sedatk
Since Windows support isn't at parity with the other platforms, how can we make
sure that our monetary contribution goes entirely to the Windows work? Or is
it that only 1% of it will make it into the Windows budget?

~~~
faitswulff
I can't imagine donations working that way for any organization unless you're
donating a considerable sum. Every organization has its own priorities and
donations help them meet those priorities. If you have a cool million lying
around it might help to convince them to make Windows support a priority, but
if not, then all you can do is to donate in hopes of helping them focus on
knocking out priorities so that they can get to your pet issue sooner.

~~~
zaphirplane
True, in that small individual donations are unlikely to fund it. But
sponsoring a feature is a thing in open source, either through a bounty site
or a crowdfunding site.

~~~
iridius
Bounties have existed for a long time now; why is progress so slow?

~~~
moe
Maybe there just isn't much interest in windows support?

Crystal is a fantastic language for server-side programming where it pairs the
performance of Golang with the expressiveness of Ruby.

I imagine most Crystal users love it for exactly that reason and have little
interest in development resources being diverted to a platform that they have
no use for.

------
stanislavb
I'd more than love to see Crystal grow in popularity. That'd be a great option
for Ruby devs to tackle tasks requiring high performance.

------
skocznymroczny
I'm curious how both would compare to D.

~~~
petre
Crystal is a bit faster at prime-number crunching than D. I haven't done any
optimizations, just a loop, math, push, and hash ops. It's on par with Go,
which is also very fast. Compilation with the release option is slower.

~~~
vips7L
I really like some of the features of D for userspace programs: an optional GC
for hot paths, the ability to write raw ASM if needed, and familiarity for
Java/C++ devs. I really wish it would take off more.

~~~
petre
I like it as well. It would probably be my first choice of a general use
statically typed compiled language.

------
qwerty456127
Does Nim have anything like numpy for SSE-accelerated vector math?

~~~
treeform
Yes.

You can even use numpy in nim: [https://forum.nim-
lang.org/t/4102](https://forum.nim-lang.org/t/4102)

But why use that when you can use Cuda and OpenCL accelerated vector math:
[https://github.com/mratsim/Arraymancer](https://github.com/mratsim/Arraymancer)

If that's too much speed, you can just roll your own for loops. Because Nim is
compiled with battle-tested GCC, LLVM, or VC++, the backend will try to
SSE-optimize your code if you pass the right switches. If you know what CPU
your computer/server has, you can compile with the newest high-performance
instructions like AVX2, or even the newest AVX512VNNI...

~~~
qwerty456127
Because my laptop's GPU is an Intel GMA 4500M HD, it supports neither CUDA nor
OpenCL.

------
st3fan
> As you can see; in this case Crystal is the more performant language –
> taking less time to execute & complete the test, and also fewer Megabytes in
> memory doing so.

Yeah, no: all you have proven is that one JSON library is less efficient than
the other. I bet that if someone spent some time optimizing the heck out of
the slower one, the two implementations could be equally fast and use memory
in the same range.

------
pull_my_finger
I feel like these are 2 very different languages with different goals. Both
are cool, but I don't think it's really fair to compare them.

------
fiatjaf
See also
[http://rosetta.alhur.es/compare/nim/crystal/#](http://rosetta.alhur.es/compare/nim/crystal/#)

------
IshKebab
JSON is a really bad way to benchmark a language because it's often
implemented in a different language.

Much better to implement a small program entirely in that language, along the
lines of the benchmark game.

~~~
RX14
JSON parsing is actually implemented entirely in Crystal for Crystal, and I'm
pretty sure it's the same for Nim too.

For a language which claims to be fast, having to use a C-based JSON parser
would be a bit of a cop-out :)

~~~
sweeneyrod
But using an existing parser that happens to be written in C could definitely
be sensible.

~~~
imtringued
That sounds like a pretty awful idea. If you're using a C library for parsing
JSON, then you must add an additional C-to-Nim/Crystal conversion step that
requires additional RAM and CPU time.
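The conversion step being described can be made concrete in Python, whose C FFI does pay that cost. This is only an illustration of the general boundary-crossing overhead, not a claim about any specific Nim or Crystal JSON binding:

```python
import ctypes

# Load the C runtime already linked into the current process
# (works on Linux/macOS; Windows needs an explicit DLL handle).
libc = ctypes.CDLL(None)

s = "hello"
# The encode() call is the conversion step: the interpreter's string
# must be marshalled into a C byte buffer before strlen can read it,
# costing an extra allocation and copy on every call.
n = libc.strlen(s.encode("utf-8"))
print(n)  # 5
```

A language whose data already lives in C-compatible memory skips that marshalling entirely, which is mratsim's point in the reply below about Nim compiling to C.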

~~~
mratsim
Nim compiles to C, there is no cost in calling C.

