Nim binary size from 160 KB to 150 Bytes (hookrace.net)
145 points by def- on May 4, 2015 | 66 comments



Today I looked at Nim in a bit more depth because it keeps popping up. I have a slightly uncomfortable feeling about it that I hope is unfounded!

To me it looks like it makes the unsafety of C more accessible because of better tooling and nicer syntax. Looking at e.g. [1] there are still pointers, null pointers, etc., just like in C. So now you have a language that looks superficially simple but is actually very dangerous. Compare this to e.g. Rust, which was the most painful thing I learned recently, but I also know that it brings something fundamentally new to the table.

Anyway, there's a lot I don't understand about Nim and I'd be happy to see evidence to the contrary.

[1] http://nim-lang.org/0.11.0/tut1.html#advanced-types-referenc...


You are correct in saying that Nim does have C-style pointers (the ptr keyword). These are unsafe, but are meant to be used as part of the FFI. So when developing ordinary applications you should not be using them, unless you absolutely have to.

Nim also has references (ref keyword) which are traced by the GC and therefore safe.


Does Nim allow statically flagging procedures as unsafe to assist in the separation of FFI and safe procedures?


So Rust has some cool safety features, especially for concurrent code. But, and perhaps I'm just uninformed, I never really understood the safety benefit of Rust's 'never nil' design. Nil is a useful modelling tool, even in Rust where it exists via Option<>/None, correct? Perhaps by forcing you to be extremely explicit (and enforcing `match` always handles all conditions) you gain some arguable safety, but at what cost? It's certainly not easier to use and reason about, IMO. And it seems just as likely you'll end up crashing your program due to a bounds-check error (which may happen more often since Rust encourages indexing over references due to this very design.. at least, so I've read).

It seems to me the design was chosen more as a way to ensure memory lifetime could be better predicted by the compiler rather than any strong argument for safety.. but then, I'm not well read on the subject, and it's very likely there are good safety arguments for it I'm not aware of.. either way, in my experience nil-deref errors are rarely a painful thing. They happen often, but are also fixed quickly.


> Nil is a useful modelling tool, even in Rust where it exists via Option<>/None, correct?

It's not that null is not useful. It's that most pointers can never be null, so nullability is the wrong default. And it is useful for the compiler to force you to handle the case in which pointers are null.

> Perhaps by forcing you to be extremely explicit (and enforcing `match` always handles all conditions) you gain some arguable safety, but at what cost?

There's basically no downside to having no null pointers. With constructs like Option::map the code is usually even less verbose than the equivalent code with null.
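
Roughly, as a made-up sketch (the user-lookup names here are invented for illustration, not from any real codebase):

    // Hypothetical user lookup; find_user returns Option<User> instead of a
    // possibly-null pointer.
    struct User { name: String }

    fn find_user(id: u32) -> Option<User> {
        if id == 1 { Some(User { name: "alice".to_string() }) } else { None }
    }

    fn main() {
        // The null-check-and-branch is folded into a single map/unwrap_or chain.
        let name = find_user(1)
            .map(|u| u.name)
            .unwrap_or_else(|| "anonymous".to_string());
        println!("{}", name);
    }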

> It's certainly not easier to use and reason about, IMO.

You never have to worry about your program failing whenever you type "." or "*". With null, the semantics of the language are that an exception can be thrown [1] whenever those constructs are invoked. That's pretty much objectively easier to reason about.

> It seems to me the design was chosen more as a way to ensure memory lifetime could be better predicted by the compiler rather than any strong argument for safety

Huh? Lifetimes are totally independent. We could have had null pointers with the lifetime system (and there were languages like Cyclone that had both). The system exists precisely because of safety.

We also get some really nice optimizations out of it that are impossible to get in C. All pointers in Rust are dereferenceable per the LLVM definition, which opens up some really neat optimizations like loop invariant code motion on loads.
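
A contrived sketch of the kind of code that benefits (whether the hoist actually happens is up to the optimizer):

    // Because `limit` is an ordinary reference, it can never be null, so the
    // compiler is free to hoist the load of *limit out of the loop instead of
    // re-checking and re-loading it on every iteration.
    fn count_below(xs: &[i32], limit: &i32) -> usize {
        let mut n = 0;
        for &x in xs {
            if x < *limit {
                n += 1;
            }
        }
        n
    }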

> in my experience nil-deref errors are rarely a painful thing. They happen often, but are also fixed quickly.

Not in my experience. They show up in production all the time.

[1]: Or you could do what Nim does, and make dereferencing null undefined behavior instead of guaranteeing that an exception is thrown, but that strikes me as worse than what Java does.


> it is useful for the compiler to force you to handle the case in which pointers are null.

Well I agree that it's very useful (and we have that in Nim), but..

> With constructs like Option::map the code is usually even less verbose than the equivalent code with null.

I'm still not convinced of this part. It certainly hasn't been the case with the admittedly small amount of Rust code I've seen. However, I'll look for more comparisons in the future (or offer Nim comparisons to Rust snippets anyone posts). Point is, nil is still a useful and commonly used tool. So the argument for verbosity and convenience is relevant, IMO.

> With null, the semantics of the language are that an exception can be thrown [1] whenever those constructs are invoked. That's pretty much objectively easier to reason about.

That completely depends on how often you want to use nil refs, and how easy they are to use. Like I said in another response, I agree Rust's design may be better for some domains, but I certainly wouldn't call it "objectively" easier to reason about in a general sense.

> Huh? Lifetimes are totally independent.

Well like my post implied, I was only guessing as to the design. And it's interesting to hear that it takes advantage of special compiler optimizations. That said, I still don't see how it's completely decoupled from the lifetime system.. you're saying that if I have an Option<> reference to a mutable list in Rust, the compiler can determine whether or not the list is 'frozen' based on the runtime state of that reference?

> Or you could do what Nim does, and make dereferencing null undefined behavior.

I didn't think derefing nil was undefined behavior. I thought only dereferencing a pointer which points to once-valid-but-now-free memory was undefined behavior, and that situation is covered by GCed refs. Can you explain this a bit?

EDIT:

> Not in my experience. They show up in production all the time.

I did say 'rarely', and I drew a comparison to bounds-check crashes, which surely also show up in production.


> Point is, nil is still a useful and commonly used tool. So the argument for verbosity and conveniences is relevant, IMO.

The only advantage of having null references is that the pattern "if this reference is null, throw an exception; otherwise dereference it" is shorter. But the question is: how often do you want that pattern? In a robust program, the answer to that is "rarely".

Put another way, it would be trivial to add sugar for the ".unwrap()" pattern to Rust (perhaps with the "!" operator) if it were necessary, gaining back the only verbosity-related advantage of null pointers. But nobody in the Rust community is asking for it. That's because this pattern is rare. If it were a problem, someone would have at least submitted an RFC by now!
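
For reference, the pattern in question looks like this (with a made-up value; the `!` sugar itself is hypothetical and doesn't exist):

    fn main() {
        // `unwrap()` is the explicit "I claim this isn't None; panic otherwise"
        // escape hatch -- the moral equivalent of blindly dereferencing a
        // nullable pointer.
        let maybe_port: Option<u16> = Some(8080);
        let port = maybe_port.unwrap(); // the hypothetical `!` sugar would just abbreviate this call
        println!("{}", port);
    }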

> I certainly wouldn't call it "objectively" easier to reason about in a general sense.

If you write down, formally, what the star or dot operators do, there are strictly more steps involved when you have null pointers. That's why a language without null is objectively easier to reason about.

> if I have an Option<> reference to a mutable list in Rust, the compiler can determine whether or not the list is 'frozen' based on the runtime state of that reference?

I don't know what this means. Lifetimes rule out dangling pointers. They don't have anything to do with nullability. The borrow checker only cares about the structure of your data enough to construct loan paths.

> Can you explain this a bit?

Dereference of null is undefined behavior in C, and Nim compiles to C code that blindly dereferences pointers without inserting null checks. So dereference of null is UB in Nim too. In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

> I did say 'rarely', and I drew a comparison to bounds-check crashes, which surely also show up in production.

Actually, Rust does try to prevent indexing-related issues by preferring iterators to raw array indexing. But, in any case, the comparison isn't relevant for a couple of reasons. First of all, in a general sense if you have big problems A and B, the fact that you can't solve B isn't an excuse to not solve A. More specifically, though, the amount of type system machinery needed to fully eliminate bounds check failures is much higher than that needed to eliminate null pointer exceptions—you basically need dependent types, whereas to eliminate null pointers all you need are bog-standard algebraic data types, which have existed since the 70s.
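
For the curious, a sketch of how little machinery that takes (this mirrors the standard library's definition of Option, renamed here to avoid shadowing it):

    // Roughly how the standard library defines Option: a plain algebraic data
    // type with two cases, no special compiler support needed.
    enum Maybe<T> {
        None,
        Some(T),
    }

    fn main() {
        for v in [Maybe::Some(3), Maybe::None] {
            // Exhaustive match: the compiler insists both cases are covered.
            match v {
                Maybe::Some(n) => println!("got {}", n),
                Maybe::None => println!("got nothing"),
            }
        }
    }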


> there are strictly more steps involved when you have null pointers.

Well yes, and both Nim and Rust have non-nil pointers.. I suppose I misread your original statement as "Rust is objectively better ..." when you actually just said non-nil vars are an objectively better design pattern in general. My mistake.

Our argument seems to stem from two assertions (one from you, and one from me): "nil vars are rare (in optimally written code)" and "Rust's way of working with 'nil' vars is verbose". I suppose I'll concede that non-nil vars are a better default (though I will hold reservations until I see more real statistics; I don't find "no RFC yet!" hugely convincing), but I also feel Rust could do a better job of giving access to "nilable" vars when they're needed.

> I don't know what this means. Lifetimes rule out dangling pointers...

I mean, Rust prevents you (via compile-time mechanisms) from mutating a variable while it's borrowed by another reference.. If that reference is Option<>, it's only known at runtime whether or not a reference has actually borrowed said variable. Rust must either treat every Option<> reference as a potential 'loan path', which would significantly diminish their usefulness as references, encouraging indexing for these scenarios, which leads to almost identical potential for out-of-bounds crashes... or it's relying on some kind of more complex mechanism (lifetime vars maybe?).. or additional runtime overhead.

I really don't know enough about Rust to know how far off-base that is. So any clarity is appreciated.

> In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

I remember this comment, but I didn't remember it achieving UB in debug code.. I'll look through the history and take another look.


> Rust must either treat every Option<> reference as a potential 'loan path', which would significantly diminish their usefulness as a references, encouraging indexing for these scenarios, which leads to almost identical potential for out-of-bounds crashes... or it's relying on some kind of more complex mechanism (lifetime vars maybe?).. or additional runtime overhead.

Can you give a concrete example of this? I'm a bit confused, but it might just be a terminology thing. In Rust, `Option<T>` does not imply a reference. If you have an `Option<i32>` there are no references involved. An `Option<T>` also owns the `T` if there is one. You can get a reference to it, but you have to check that it indeed holds a `T` (via `match` or `match`-based functionality).
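
For concreteness, a toy sketch with an owned value:

    fn main() {
        // An Option<i32> owns its i32 directly; no pointer, no null.
        let maybe: Option<i32> = Some(42);

        // Borrowing the contents requires acknowledging the None case.
        match maybe.as_ref() {
            Some(n) => println!("have a reference to {}", n),
            None => println!("nothing to reference"),
        }
    }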

Note: I should clarify: What's confusing me is the indexing stuff. I'm not sure if this is referring to something about the `Option<T>` or something else.


> I should clarify: What's confusing me is the indexing stuff. I'm not sure if this is referring to something about the `Option<T>` or something else.

By indexing, I meant as an alternative to references.. For example, if you had a Sprite type which held a 'reference' to a Texture in your game's Texture list.. as soon as you allocate a Sprite it must borrow a reference to a Texture, preventing any future mutation of the Texture list for the lifetime of the Sprite, which obviously is too restrictive for most games.. so the alternative is to have the Sprite simply hold an index into the array instead, but this basically comes with the same pitfalls as nilable refs (ie, if you accidentally change it, your program can crash due to bounds-checking errors.. or end up with visual glitches.. not sure which would be more annoying).

The other alternative is to use an Option<&Texture> instead. However, I'm not familiar enough with Rust to know of the restrictions here, or even if that's possible (taking a look at the docs, it looks like it's possible, but life-time vars come into play, which could complicate things).


Rust solutions would probably be the following: some kind of runtime assistance (`Rc<T>`, `Arc<T>` et al), or using indices as you mentioned (though with `list.get(index)` you'd still have to deal with the fact that it might not be valid, since `get` will return an `Option<&T>`)[1]. Another solution might be to allocate the textures in an arena that lives outside of the scope of your game logic, and have both the texture list and sprites contain borrows (note I'm not sure about this, as I haven't done much with arenas yet).

Although I'm unsure where the `not nil` as discussed above comes into play here. What part in Nim would be `nil` here where Rust would have `Option<T>`? The difference between `Option<&Texture>` and `&Texture` is that you have to somehow deal with the possibility of no texture when handling the former.

[1] I should note that actual indexing behavior (`list[index]`) will assume you know there is one in there and panic if it isn't. This is one of the things I dislike and hope there will be an optional (no pun intended) lint post-1.0.
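
To make that concrete (a toy sketch, names invented):

    fn main() {
        let textures = vec!["grass", "stone", "water"];
        let index = 7; // a stale or miscalculated index

        // Checked access: you're handed an Option and must deal with the miss.
        match textures.get(index) {
            Some(name) => println!("texture: {}", name),
            None => println!("no texture at {}", index),
        }

        // Indexing-style access: still bounds-checked at runtime, but a miss
        // is a panic rather than a value you can handle.
        // let name = textures[index]; // would panic: index out of bounds
    }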


> I remember this comment, but I didn't remember it achieving UB in debug code.. I'll look through the history and take another look.

It's: https://news.ycombinator.com/item?id=9050999


So I remembered correctly, Nim does not reach UB in non-release code (or rather, code compiled with checks on); it throws an exception. I still think this is a reasonable solution. We catch these errors during development iteration, or enable the checks for safety-critical portions of release code (or the entire project).. and we can opt out of these checks if we need the performance and safety isn't as important (games, simulations, etc).

I remember Rust does not bounds-check its iterators, so you don't really need to disable bounds checks (indeed, you cannot), while Nim, currently, does this more naively and loses some performance for it. That's a nice thing Rust does, but not something Nim can't eventually catch up to. See this comparison for further reference: http://arthurtw.github.io/2015/01/12/quick-comparison-nim-vs...


> So I remembered correctly, Nim does not reach UB in non-release code (or rather, code compiled with checks on); it throws an exception.

That's not really correct. It's undefined behavior either way; you're just getting lucky because the compiler doesn't happen to take advantage of the undefined behavior to perform optimizations at -O0.


I'm not sure what you're implying.. you can turn on most optimizations and still keep nil checks on in Nim (either for the whole project via --nilChecks:on, or for select portions of code via {.push.}).

Unless you're claiming your example was still hitting UB, even with nil checks on, and just happened to throw an exception by random chance, I'm not really sure how you figure UB is still happening here (since the exception will be thrown, preventing the deref). Nothing is preventing you from using nil checks in production code.


> In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

This intrigued me so I found the comment: https://news.ycombinator.com/item?id=9050999


It's not so much about "never nil", but rather "never accidentally null". Rust's compiler prevents you from moving data into a method which then nulls it out, leaving a dangling pointer in the calling code. At best this dangling pointer will look at garbage and cause a crash or undefined behavior. At worst, it will look at other, actively-used memory and cause a security vulnerability. Rust will just refuse to compile until this problem is fixed.

Static analysis of these lifetimes allows a whole class of errors to be avoided (dangling pointers, double-free, iterator invalidation, etc).

Rust's Options are handy for a lot of stuff (async APIs, concurrent code that may or may not succeed, error codes, etc). But they are just icing, really, not the main thrust of Rust's memory model.

> It's certainly not easier to use and reason about, IMO.

To offer a counter-viewpoint, I find that Option<> (and Result<>) are very easy to reason about. They tell you exactly what to expect from a function, and you don't have to guess if you need to catch exceptions or let them throw higher up. Everything is explicit, the only surprises are panics which are cataclysmic anyhow.

> And it seems just as likely you'll end up crashing your program due to a bounds-check error (which may happen more often since Rust encourages indexing over references due to this very design.. at least, so I've read).

If you use iterators in Rust, you never need to worry about out-of-bounds errors. In fact they skip range checking altogether, since you are guaranteed that the value you are iterating over won't change under your feet (no iterator invalidation, etc), which generates more efficient code.[1]

If you use explicit indexing, then yes, you can have a runtime panic if you go OOB. But that's the nature of explicit indexing. It also has to include those safety checks, so the code will be slower.

[1] https://doc.rust-lang.org/book/iterators.html
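
A small, contrived illustration of the two styles:

    fn sum_indexed(xs: &[i32]) -> i32 {
        let mut total = 0;
        // Each xs[i] is (semantically) bounds-checked; an off-by-one here
        // would panic at runtime.
        for i in 0..xs.len() {
            total += xs[i];
        }
        total
    }

    fn sum_iterated(xs: &[i32]) -> i32 {
        // No index to get wrong: the iterator hands out each element exactly once.
        xs.iter().sum()
    }

    fn main() {
        let xs = [1, 2, 3, 4];
        assert_eq!(sum_indexed(&xs), sum_iterated(&xs));
    }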


> Rust's compiler prevents you from moving data into a method which then nulls it out

Just for clarity, we have this in Nim too, eg:

  type
    Foo = ref object
    Bar = object
      val: Foo not nil
  let
    f = Bar() # Error, 'val' must be set
  
  proc foobar(f: Foo not nil): Foo not nil =
    return nil # Error, can't return nil
  
  foobar(nil) # Error, can't pass nil

> At best this dangling pointer will look at garbage and cause a crash or undefined behavior. At worst, it will look at other, actively-used memory and cause a security vulnerability.

In C/C++, yes, but this isn't so applicable to Nim where we have GCed references and 'not nil' constraints.

> If you use iterators in Rust, you never need to worry about out of bounds errors

Well I was not talking about iterating through a list, but rather maintaining arbitrary indexes into a mutable list. Eg, a Sprite which contains an index into a Texture array. In that scenario it's just as easy to miscalculate and crash your program via a bounds-checking error as it is to crash by nil-deref.

> To offer a counter-viewpoint, I find that Option<> (and Result<>) are very easy to reason about..

It's good that Rust works for you, truly. And like I said in another post, I agree Rust's design here may be better for some domains. However, Nim's design still feels more elegant and straight-forward to me. Luckily, we both get a powerful language that suits us, regardless of which one we prefer :)


Oh, sorry, I should have been clearer: I wasn't trying to disparage Nim at all (it's on my list of languages to play with). I was just clearing up some points about Rust :)

Edit:

> Well I was not talking about iterating through a list, but rather maintaining arbitrary indexes into a mutable list. Eg, a Sprite which contains an index into a Texture array. In that scenario it's just as easy to miscalculate and crash your program via a bounds-checking error as it is to crash by nil-deref.

The solution here is to just use a reference instead of an arbitrary index. If you hand out references you can lean on the compiler to enforce memory safety -- the compiler won't let you access data that is no longer alive, won't let you accidentally share across thread boundaries if you don't explicitly want that, etc. And if that was a shared mutable list, it's doubly important to let the compiler help you reason about it, since shared, mutable state is the main source of data races.

This is one of those cases where leveraging the compiler allows you to write better, safer code.
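
In sketch form, with invented names (the lifetime parameter is what ties the sprite to the texture it points at):

    struct Texture {
        id: u32,
    }

    // The lifetime parameter ties each Sprite to the Texture it borrows, so the
    // compiler can reject any use of the sprite after the texture goes away.
    struct Sprite<'a> {
        texture: &'a Texture,
    }

    fn main() {
        let grass = Texture { id: 1 };
        let sprite = Sprite { texture: &grass };
        println!("sprite uses texture {}", sprite.texture.id);
    }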


No worries. I also wasn't implying you were trying to discourage Nim, and I hope my post didn't come off as accusatory. Cheers!

EDIT:

> The solution here is to just use a reference instead of an arbitrary index.

Ah, sorry, I should have said "mutable Texture array". In that case Rust's borrow checking will 'freeze' the array, preventing Textures from ever being changed during the lifetime of your Sprites. So you're left with either Option<> or indexing as a solution, each with its own merits, but neither as.. practical as nilable GCed references, IMO (again, just my opinion.. others seem to find it easy enough).


Out of curiosity: How do you convert from a nilable `Foo` to a `Foo not nil`? As in, what do you do if you have a function maybe returning a `Foo` and want to pass its value to a function taking `Foo not nil`?


You prove to the compiler that the nilable var is not nil via an if statement. Eg:

  proc foobar(f:Foo not nil) =
    discard
  
  let f = Foo() # nilable ref
  let b: Foo not nil = f # Error, can't prove 'f' is not nil

  if f != nil:
    let b: Foo not nil = f # now 'f' can be assigned to a non-nil var
    foobar(f) # or can pass 'f' directly


Ah, good to know. So Nim does static analysis on conditionals where Rust would use an explicit pattern match to get at the contained value.

Might be a good example for http://nim-lang.org/manual.html#not-nil-annotation


As for null in general, you can hear it from the horse's mouth here: http://www.infoq.com/presentations/Null-References-The-Billi...

There are a number of ways to approach this topic, so I'll just give you one: in languages with more advanced static type systems, you try to encode as much semantic information as possible in the type. As you've said, the idea of null can be useful, so it deserves a place in the type system. You want to separate things that may be null from things that should never be null. This is because not-null is by far the common case. Allowing everything to be nullable by default optimizes for the lesser-used semantic, which is where errors with null come in: you assume that something isn't null, when it actually is.


I agree the concept of 'non-nil' vars is very useful (and we have that in Nim), but I'm not entirely convinced by the rest of that argument. Namely, I don't agree that nil is rare enough to justify the verbosity Rust uses for it. Non-nil vars may be seen more often, but that doesn't mean nil vars aren't also often used, either. In Nim, both nil and non-nil vars are at roughly the same reach.. while in Rust non-nil vars are significantly easier to work with. You may see that as a positive argument for Rust's safety (and you may be right for some domains), but I see it as more of a negative argument for Rust's practicality.


> Namely, I don't agree that nil is rare enough to justify the verbosity Rust uses for it. Non-nil vars may be seen more often, but that doesn't mean nil vars aren't also often used, either.

It's not verbose. "Option" is 6 characters. ".map" is 4.

> In Nim, both nil and non-nil vars are at roughly the same reach.. while in Rust non-nil vars are significantly easier to work with.

Option values are really easy to work with. Just use map or unwrap if you don't care about handling the null case. If you do care about handling it (which you should, after all!) the code using "if let" is the same as the equivalent "if foo == null".
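
Side by side, with a made-up value, just to show the shapes:

    fn main() {
        let maybe_name: Option<&str> = Some("nim");

        // Handling the missing case explicitly -- analogous to `if foo != null`.
        if let Some(name) = maybe_name {
            println!("hello, {}", name);
        } else {
            println!("nobody here");
        }

        // Not handling it: unwrap() panics on None, map() threads the value through.
        let shouted = maybe_name.map(|n| n.to_uppercase()).unwrap_or_default();
        println!("{}", shouted);
    }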


> It's not verbose. "Option" is 6 characters. ".map" is 4.

I just want to note that verbosity isn't just about symbol length, but also about operator noise and the number of available or required commands used to achieve a goal. Just counting these characters isn't very relevant, and isn't even the best Rust can do (as someone pointed out you can use 'Some()' to get an Option var, which is only 4 chars).

That said, I agree this is rather subjective, and can't be well compared outside the context of the rest of the language.


> (as someone pointed out you can use 'Some()' to get an Option var, which is only 4 chars)

Some() is 6 chars.


I was measuring with pcwalton's ruler.


Fair enough. My argument here is more general than Rust itself, it's relevant to all languages with an Option type and no null. The verbosity can of course vary by language.


What's the rust name/syntax for non-nil vars?


    let foo = 5; // cannot be null
    let bar = Some(5); // technically also can't be null, but could be None
                       // instead of Some(val)
`foo` has the type `i32` here, and `bar` has the type `Option<i32>`.
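
And since the question was about references: the same idea applies there (a quick sketch):

    fn main() {
        let x = 5;
        let always_there: &i32 = &x;          // a plain reference can never be null
        let maybe_there: Option<&i32> = None; // a "nullable reference" is spelled Option<&T>
        println!("{} {:?}", always_there, maybe_there);
    }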


> I agree the concept of 'non-nil' vars is very useful (

non-zebra numbers[1] are also very useful. But why have a "non-zebra number" when I can just have plain numbers?

[1] A "number" which is either a number, or a zebra


...because zebras are not a universally useful modelling tool to programmers like references are. Thus, the absence of a reference, ie nil, also becomes a useful, commonly used modelling tool. If we all wrote software using African wildlife metaphors, 'non-zebra' might then be just as useful.


> ...because zebra's are not a universally useful modelling tool to programmers like references are.

References are not the zebras. Nil-references are.

We might as well say that the underlying numbers that references are represented by are useful modelling tools, in some circumstances. That doesn't mean that you want pointer arithmetic on references all the time.


> Nil is a useful modelling tool, even in Rust where it exists via Option<>/None, correct?

Yes. So? A lot of things are useful modelling tools, but that doesn't mean you necessarily want to include them in every reference-like type. A tuple of two things is a useful modelling tool; should that then be infused into every type? An "either value or error" is a useful type; should that be infused into every type?

No? Then what makes "Either something or nothing" so special?


I've tried Rust, Nim and Go and I prefer Nim. But this could also be because of my background as a C/C++ programmer, and my particular requirements (general purpose programming language that doesn't try to hold my hand too much).

There are things I don't like about the language (eg. case-insensitivity), but overall if I had to choose a newish language for a new task, I'd choose Nim over Rust and Go. (However, if you threw D into the equation I'd probably go with D simply because I feel it's slightly more mature).

Incidentally, the way I tried to teach myself Nim (and to see if the language was usable for creating small Windows apps) was to write a WinApi program. It took about the same or less effort as what it would have taken me in C/C++, but it just felt much safer and more pleasant to work with.


Do you have some directions on the WinAPI in Nim?

I felt a bit overwhelmed with the whole wrapping thing.


You don't have to wrap anything. It's exactly like using it from C, except maybe a bit easier/safer. Just import windows. Then you can write code like this:

    hWndMain = CreateWindowEx(
        WS_EX_TOPMOST,              # Optional window styles.
        CLASS_NAME,                 # Window class
        WINDOWNAME,                 # Window text
        windowStyles,               # Window style

        # Size and position
        centerX, centerY,
        APP_WIDTH, APP_HEIGHT,

        cast[HWND](nil),        # Parent window    
        cast[HMENU](nil),       # Menu
        hInstance,              # Instance handle
        cast[LPVOID](nil)       # Additional application data
        );


Comparison summary between the Rust inspiration and the Nim version:

* Nim/GCC gains 2 bytes by smartly reusing the previously set AX register's value to set DI where Rust/Clang uses an immediate

* Nim can't express that stuff after the EXIT syscall is unreachable and wastes a byte on a RETQ.


I read this and wonder why software has gotten so fat. Any simple application these days is easily a few dozen MB, most are a few hundred, and a few are several GB in size. Why aren't we streamlining software to reduce its size? I understand we have gotten "rich" on storage, but if the trend continues...

I am sure many portable devices would benefit if applications were trimmed down.


My experience writing KnightOS has given me the understanding that we are wasting the obscene amount of resources available to us from modern computers.


For me, it is easy to understand why: it has to do with shipping code sooner to your customers. It takes a lot of resources and time to optimize your code, and the longer your customers have to wait, the more money you lose. In many cases, the extra size is due to frameworks added to the programs to speed up development.

Frameworks tend to be heavy in size because they need to accommodate various tasks as well as the many platforms they support. It's not easy to make them modular, so that you can pick what you want and leave the rest out to shrink the size.

For customers: would you rather wait a few days for a feature that works well enough, or wait a few months for a feature that works great?

The competition is intense, wait too long to ship a feature and you lose to competitors that managed to get it out sooner than you. So, it's tough to balance each feature and tough to say no to customers, so that you could stay focused and lean.


Because computers can take it and we have limited time / can be lazy? I'm just playing with a barcode reader I made that uses OpenCV to get the image from the camera. That's 100 MB of code to do something you could probably do with <1 MB, but it makes things easy and works.


I love these articles; I make sure to bookmark them just in case one day I want to build binaries that do nothing.


In case you haven't seen it, here's a similar article on making a tiny ELF executable. [1]

[1] http://www.muppetlabs.com/~breadbox/software/tiny/teensy.htm...


This post is Nim specific, but the key ideas for getting to a small binary (optimize for size, remove the standard library, avoid compiler main() / crt0 baggage by defining _start, use system calls directly) are the same in C, C++, Rust, etc.
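
As a rough sketch of that recipe in Rust (present-day syntax, x86-64 Linux assumed, and built with something like -C panic=abort and -C link-arg=-nostartfiles; this is not the article's code):

    #![no_std]
    #![no_main]

    use core::arch::asm;
    use core::panic::PanicInfo;

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        let msg = b"Hello!\n";
        unsafe {
            // write(1, msg, len): syscall 1 on x86-64 Linux
            asm!(
                "syscall",
                inlateout("rax") 1usize => _,
                in("rdi") 1usize,
                in("rsi") msg.as_ptr(),
                in("rdx") msg.len(),
                out("rcx") _, out("r11") _, // clobbered by the syscall instruction
            );
            // exit(0): syscall 60 on x86-64 Linux
            asm!(
                "syscall",
                in("rax") 60usize,
                in("rdi") 0usize,
                options(noreturn),
            );
        }
    }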


I like the end result. However, it makes me wonder just why it's so acceptable that simple programs like this even compile down to a 160KB executable in the first place.

The actual active code is essentially some text and an interrupt. That much, at least, should be language independent. Are modern compilers incapable of discarding unreferenced code, or am I missing something?


The first compilation, which is 160 KB, is totally unoptimized and contains all kinds of debugging info and checks. It's just meant for your own use during development of the program.

There is also some overhead that every Nim program has. But if you get to bigger programs you'll see that Nim's binary size is just fine; for example, a NES emulator is just 136 KB: http://hookrace.net/blog/porting-nes-go-nim/#comparison-of-g...


Good to know. Also, pretty cool emulator!


> (1 byte smaller than in Rust)

Nice achievement! The article is quite the journey through various build parameters, switching gcc for clang and glibc for musl along the way. In the end, the secret sauce is syscalls and custom linking, though (as always with this kind of thing).


This seems mostly useful in highly constrained embedded environments (AVR, MSP430, ARM M0, PIC, etc.). Unfortunately, none of these "modern" system languages (Nim, Rust) seem to be putting much effort towards embedded platforms :(


You're right that most Nim users don't do it, but you should be able to use Nim for embedded environments as a replacement for C:

- http://nim-lang.org/nimc.html#nim-for-embedded-systems

- https://github.com/sirlantis/pebble-nim


Also useful for reasonable computing going forward. See more details: https://twitter.com/lix/status/589171043010412544


Rust isn't putting a lot of specific effort into embedded, but we already work on many embedded platforms. As the language matures, I expect that support to grow.


Embedded and especially bare-metal applications really are 2nd class citizens in the rust ecosystem though. You basically can't use cargo (or at least not without a whole bunch of hacks) and many important features for low-level code are still gated and won't be available for 1.0.

I think it's a bit of a shame because that's basically the #1 differentiator with languages like Go or Java as far as I'm concerned.

But beyond that it's true that the language itself has a lot of potential for embedded applications. The runtime can be made almost as tiny as C's and with libcore you get a much nicer and safer "bare metal" environment than what you'd get in C. And thanks to LLVM you can easily target a whole bunch of architectures.


> You basically can't use cargo (or at least not without a whole bunch of hacks) and many important features for low-level code are still gated and won't be available for 1.0.

Yup, both of these things are true. This is what I meant by increased support: we have a long way to go to make it as nice to use, and on a stable release of Rust. But the fundamentals are in place.


Why can't one use Cargo for bare-metal applications?


See all the extra work that had to be done here: https://github.com/Ogeon/rust-on-raspberry-pi


> Who needs error handling when you can have a 6 KB binary instead

Haha!

What I found most impressive was the small binary even without the tricks.


"The speed optimized binary is much smaller..."

Did I miss where he optimized for speed?


Including `-d:release` optimizes it, so that's probably what he meant. It's one of the first things in the [tutorial](http://nim-lang.org/0.11.0/tut1.html).


Writing "--opt:size" means to try to optimize for size, trumping optimizations for speed you get with plain "release" mode.


I'm guessing -d:release implies --opt:speed (on top of disabling various runtime checks)


WOW. 10/10


Hey this isn't a constructive comment & you'll probably get downvoted for it. It happens to me whenever I comment in a thread that mentions Node.js because of my name. It is the best, but not everyone understands.



