A Programming Language Underdog (totallywearingpants.com)
356 points by dom96 on Sept 22, 2018 | 232 comments



Since I have used Nim in a small team effort creating an enterprise data analysis application, let me put in a good word. The ease of coding and speed of execution were what caught our attention. I was expecting the worst (from past experiences with new languages), but we were so surprised that we could complete our project entirely in Nim. These are a few things that I love about Nim:

- One's knowledge of Python jibes really well with Nim. Thinking in Pythonic ways, a small number of Nim constructs gets you a long way without gotchas that block you (a small sketch of that carryover follows this list). For example, you can integrate Numpy in minutes.

- Statically typed, fast compilation, readable code, good module import system, binary executable that you can move around easily without installing anything more at distributed sites, and a built-in test and integration framework.

- An efficient web server comes along with it and supports a practical, useful template system for small-team web applications. Concurrency: async dispatch/coroutines give you back-end options to scale.

- Nim's Postgres database client driver is glitch-free, easy to use, and it turned out to be a real workhorse.
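
To give a concrete flavor of the Python-to-Nim carryover mentioned in the first point, here is a minimal sketch (purely illustrative; this is not code from the project above):

    import strutils, tables

    proc wordCounts(text: string): CountTable[string] =
      # counts words, much like Python's collections.Counter
      result = initCountTable[string]()
      for word in text.toLowerAscii.splitWhitespace:
        result.inc(word)

    when isMainModule:
      echo wordCounts("the quick brown fox jumps over the lazy dog")["the"]  # 2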


What about run-time safety? E.g. are there pointers? Does it have built-in array bounds checking? (Heck, does it even have arrays?)


Yes to all of these -- Nim also has those intrinsic benefits that statically checked and compiled languages bring.

Arrays? As pleasantly nimble as Python arrays, to say the least. Pointers are a lengthy discussion, but suffice it to say that pointers are handled smartly to avoid their pitfalls at runtime (while still integrating with external C if you really need to).

Time and again as I went through using Nim, what really stuck with me was that the designers had thought through these practical matters very well, and even in its current pre-1.0 state the language is remarkably complete and robust for practical programmers.


Thanks. That's very interesting. I don't suppose you're going to tell me that it has Clojure's amazing data structures too. :-)


Well: https://github.com/PMunch/nim-persistent-vector But not built in. The basic data types are very light-weight in their implementations to cover a large range of needs.


Nim is my favorite language right now. If you look at the different metrics that we use when discussing programming languages, it might not be the best in any category, but it's "good enough" in every category. It might not have as powerful a type system as Haskell, but it's safe enough. It might not be as fast as C, but it's fast enough. It might not be as easy as Python or Ruby, but it's pretty easy to get started with. It might not be the best web server / embedded / native UI language, but it can do all of those things. It's primarily procedural, but supports a lite version of FP and OOP, so you can program in a comfortable paradigm. It's truly a jack-of-all-trades language, and I think with a little time it might start mastering some.


That's exactly how I felt about the language when I used it: it's a lot like Python, both superficially and philosophically. It's not the perfect language for anything, but its pragmatism means you don't have to know so many languages.


What does it use for desktop UI dev?


There are a couple different libraries, not sure which one is the best though. I know an official one is in development as well. https://github.com/VPashkov/awesome-nim/blob/master/README.m...


Thanks.


I had tried Nim a few years ago, and I liked it, but I am more of a Lisp or C person. I am trying to learn Zig [1] which is intended to be a C replacement - no GC, manual memory management, but avoiding C's pitfalls. Just recently I have been playing with Terra [2] for low level stuff.

I am a fan of underdog languages - J, picolisp, shen, xtlang (extempore). [3,4,5,6]

  [1] ziglang.org
  [2] terralang.org
  [3] jsoftware.com
  [4] picolisp.com
  [5] shenlanguage.org
  [6] extemporelang.github.io


If Nim’s compile-to-c is attractive, and you like lisp, how about Chicken Scheme?: https://www.call-cc.org/

What little I’ve done is a pleasure. I like some of Racket’s post-Scheme language features better, but Chicken has a lot of Get Stuff Done libraries (eggs), and compiling a single executable is pretty killer.

Racket will bundle up an executable pretty well too, but it’s hard to compete with Scheme -> C -> statically linked executable for some things.


The way Chicken Scheme compiles to C is pretty neat actually. It gets turned into a giant chain of function calls, so that the stack never actually returns.

So this Chicken code:

    (foo)
    (bar)
    (qux)
gets turned into C where each function ends by calling into the next through its continuation, so no call ever returns:

    foo() { ...; bar(); }   /* bar() acts as foo's continuation */
    bar() { ...; qux(); }

When it hits the C stack's limit, it resets the stack and unwinds to the beginning, and starts over again. I think it was done this way to allow call/cc to work without having to use setjmp/longjmp.


It also, assuming it follows Baker's original paper[1] (which I believe it does), allocates all objects on the stack, maintaining the invariant[2] that nothing on the heap points to anything on the stack, and nothing in an older stack frame points to anything in a newer stack frame; and, to maintain this invariant, it kicks things out of the stack and onto the heap whenever necessary. The stack thus serves as a nursery in a kind of generational GC.

[1] "CONS Should Not CONS Its Arguments, Part II: Cheney on the M.T.A." http://home.pipeline.com/~hbaker1/CheneyMTA.pdf

[2] "CONS Should not CONS its Arguments, or, a Lazy Alloc is a Smart Alloc" http://home.pipeline.com/~hbaker1/LazyAlloc.html


I was using Gambit many years ago, and I remember hitting issues with Windows and Chicken. I do like IUP and Canvas Draw though! I am not a Nim fan, but I don't like Python either, so it's just a subjective syntax thing.


I feel like a few years may have cleared up issues on Windows. Compilers and Windows vs. The Rest of the World felt wilder and weirder not long ago. In general I feel like F/OSS tooling on Windows has gotten lower friction.

I base this not on any specific understanding, but on needing to compile something for Windows every few years and feeling now that the hoops to jump are fewer.


I agree. I've noticed the same. I should give Chicken another try before the year ends.


I think Zig is the most promising of these - they've been able to do very cool stuff early on, such as outperforming C for certain problems, and generating absurdly small binaries.

Last time I checked though they hadn't quite nailed their memory-management model and things were still in a state of flux.


Eh, aside from embedded development binary size isn't really a problem.

And at this point, having experience with ATS + Rust, the fact that the memory-management model is still wonky is kind of disappointing.


FYI putting the links in a code block prevents them from being links


I read the article. Still no idea what Nim is good for and why it's better than some other language.. and for which use cases?


A year ago I started experimenting with Nim as a "faster Python" — put some declarations, change some keywords and you're ready to go — your "compiled Python" is now ~30x faster.
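
As a minimal sketch of that workflow (a hypothetical example; the Python original is in the comment):

    # Python: def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)
    proc fib(n: int): int =
      if n < 2: n
      else: fib(n - 1) + fib(n - 2)

    echo fib(30)  # same shape as the Python, but compiled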

Recently I've used Nim for the first time for an official project at my job (at university). Instead of doing a simulation with Python+Numpy, I've decided to do it with Nim, and just plot the results with matplotlib. The whole experience was very pleasant.

Speaking of interoperability with Python, there is a great Nim library called Nimpy [0], which makes it possible to use Nim as a more pleasant Cython — you can keep writing Python, and just use Nim for the intensive/slow stuff.

[0] https://github.com/yglukhov/nimpy
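
For reference, the basic Nimpy workflow looks roughly like the example in its README (module and proc names here are illustrative):

    # mymodule.nim -- build with: nim c --app:lib --out:mymodule.so mymodule.nim
    import nimpy

    proc greet(name: string): string {.exportpy.} =
      "Hello, " & name & "!"

    # then, from Python:
    #   import mymodule
    #   mymodule.greet("world")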


Did you use language interop for matplotlib, or saving/loading data files?


I saved the data in .csv and then used that, because not only did I need/want to just plot the results, but I also wanted to explore and further analyse the data in a Jupyter Notebook.


This might sound strange but Nim is good for everything. From writing backend services to creating web apps running on the client side. I've built emulators, CLI apps, a full blown forum and much more in it.

It's particularly great for systems programming tasks that require high performance and zero dependencies. For example, Status is currently writing an Ethereum 2.0 sharding client for resource-restricted devices[1] in Nim.

Nim is also awesome if you've got an existing C or C++ codebase, the interop that Nim offers is one of the best (if not /the/ best), especially when it comes to C++ interop. As far as I'm concerned no other language can interoperate with C++ libraries as well as Nim can.

1 - https://github.com/status-im/nimbus
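
To give a flavor of that C++ interop, here is a minimal sketch of wrapping std::vector<int> with Nim's importcpp pragma (compiled with `nim cpp`; a sketch only, not a polished binding):

    type
      IntVec {.importcpp: "std::vector<int>", header: "<vector>".} = object

    proc initIntVec(): IntVec {.importcpp: "std::vector<int>()", constructor.}
    proc pushBack(v: var IntVec, x: cint) {.importcpp: "#.push_back(@)".}
    proc size(v: IntVec): int {.importcpp: "#.size()".}

    var v = initIntVec()
    v.pushBack(42)
    echo v.size()  # 1

The `#` and `@` in the patterns stand for the first and remaining arguments, so the calls compile down to plain C++ method invocations.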


Interesting claim. For me, D has the best C++ interoperability. Small test which D failed:

Create a std::vector<Foo> in Nim. Create some Foo objects and append them. Pass the vector to C++, create some Foo objects there and append them as well.


I think this is possible. I'm boarding a flight right now but I'll try to get back to you with an example later :)


What are Nim’s main weaknesses?


Post author here. A few things that make me sad in the pants:

- js doesn't have source maps (kind of a big deal to me)

- some error messages are head scratchers (I seem to remember the error for trying to add things to an immutable array not being clear)

- docs could use love (eg seeing more examples of macros in action)

- devel (their nightly compiler) can be rough (e.g. I found the "strings cannot be null" cutover a bit rocky -- my own damn fault, I can't go back to 0.18 after being on 0.18.1)

- the big one I think, however, is adoption. I keep hearing "i'll just use rust or go". That's legit as they're also awesome.

nim's stdlib is massive (too big?) and there are tonnes of high quality packages out there. You won't be left thinking... well, crap, looks like I need to roll this redis layer myself.

EDIT: Formatting. How does it even work?


I have implemented source maps for the js backend: https://github.com/nim-lang/Nim/pull/7508

They aren't merged into the upstream compiler yet because I wasn't sure if I wanted to refactor the jsgen with them, but otherwise they are almost there: I use them in a personal project in a forked branch.


The docs issue and devel being rough are both due to the pre-1.0 status and the smaller community. It's a little bit of a catch-22; you need adoption to gain contributors but people won't adopt until there's enough contributions to make it stable.


yeah, which is usually where big companies help out. Their respect in the tech communities makes people take note of the new technology.

I really really think "underdog" is the best way to describe Nim because of this.


> nim's stdlib is massive (too big?)

It has fewer modules than Python.


These affect people in different ways but off the top of my head:

- No Nim v1.0 yet, despite this we do our best to create a deprecation path for everything that's possible.

- No big company like Google/Apple supporting the language.

- Community is smaller than that of Go/Rust.


A biggie is that it only really has a single developer, and no big corporate users. Relatedly, the library ecosystem is relatively weak. It also has a GC, so it can't be used for really low-level stuff where that is a problem.

It's looking pretty promising though. Especially if you are a fan of python's syntax.


That is not totally true. Nim's own GC is written in Nim. You can turn the GC off.


I use Nim for Cuda, OpenCL and cache-sensitive/memory-bound multithreaded computation.

You can mix and match manual memory management and GC-managed types in the same codebase.
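
A minimal sketch of that mixing:

    type Node = ref object  # ref types are traced by the GC
      val: int

    var n = Node(val: 1)    # freed by the GC eventually

    # manual allocation lives alongside it, untraced
    var buf = cast[ptr array[1024, float]](alloc(1024 * sizeof(float)))
    buf[0] = 3.14
    dealloc(buf)            # freeing is your responsibility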


In this very article, which is pretty light on details, one of the few facts presented is that you can disable GC.


Agreed. I purged a bunch of stuff and still feel like I left too much in.

A couple of bonus facts for you:

- they've got an effects tracking system where you can have the compiler track (and whistleblow!) which functions are pure or not (see the sketch after this list)

- their multi-phase compiler allows you to read in source code at build time (from files or external programs!)

- their macro system is typesafe as it operates at the AST level

- the guy who created it will always tell you how he feels

- again with their macro system... there are FP libs, pattern matching libs, and OO libs that can "literally" transform the language to fit your preference

- and one more just for you: they don't support tail call optimization (ducks)
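
The first two bullets are easy to show concretely. A minimal sketch (it assumes a version.txt beside the source file and git on the PATH):

    # effects tracking: the compiler rejects this proc if its body does I/O etc.
    proc pureAdd(x, y: int): int {.noSideEffect.} = x + y

    # build-time inputs: file contents and program output become constants
    const version = staticRead("version.txt")
    const gitRev = staticExec("git rev-parse --short HEAD")

    echo pureAdd(40, 2), " ", version, " ", gitRev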


> - the guy who created it will always tell you how he feels

What does this mean?


I'm guessing, but I think the OP means that the Nim creator is a bit like Linus Torvalds. He will tell you if he feels your idea is silly, especially if he's argued against that idea hundreds of times already.


it sounds awesome


Two developers at least. One of whom is in this very thread. It's a concern, yes, but actually has been around for more than a decade, and is showing absolutely no sign of slowing down or going into hibernation.

I have just seen a ... competing language team spending developer time on purging the code of what they call ableism. Apparently it's now offensive to talk of a sanity check or to facetiously refer to OCD in a comment.

At least we may hope the Nim team lacks manpower for such idiocy.


Which competing language team?

All I found on Google was this gist of someone saying "sanity check" should be avoided ("health check", too) [1], and some issues and pull requests in projects that were not languages.

One of the latter is clearly trolling to test Linus Torvalds' resolve to be polite [2][3][4][5], complaining about "ableist/saneist" terms, including "silly", on several of Linus' repositories.

You can tell it is a troll because it is just copying/pasting the exact same complaint, just changing the name of the project. It does not even bother to change the list of alleged problematic words and their counts, so for example it claims that pesconvert has 144 occurrences of "sanity check" when it actually has 0. In fact every single claim on that one is wrong. The only word from the complaint actually in pesconvert is "stupid", which occurs one time, not the six times claimed. The second sign that it is a troll is that it is from a GitHub account created just before the complaints were posted.

[1] https://gist.github.com/seanmhanson/fe370c2d8bd2b3228680e388...

[2] https://github.com/torvalds/uemacs/issues/16

[3] https://github.com/torvalds/linux/pull/595#issuecomment-4236...

[4] https://github.com/torvalds/pesconvert/issues/4

[5] https://github.com/torvalds/test-tlb/issues/5



Submitted by the same person who is trying to troll Linus. Looks like the troll is happy with the result:

> Thank you so much, it's so much more inclusive now. My rabbi will be pleased.


Trouble is, the commit is there: the thing has been taken at face value.

This galloping madness is beginning to scare the shit out of me.


No it hasn't.

>We realise it's a troll, but we had an internal discussion and we decided that we wanted to remove these anyway. We're not being terrorised into change just because a (bad) troll appeared, they just happened to bring attention to a real issue.


"We didn't do it because the troll said so, but because the troll said so".

Either way, the decision is ... whichever derogative may not yet be blacklisted, sorry, interdicted.


It has no algebraic data types or pattern matching.


Someone wrote pattern matching as a macro - Nim is pretty powerful.



That's the one.


Do you have a reference for that immediately to hand, or should I search for it ...


What algebraic datatypes is Nim missing?

It has generics, tuples, tagged and untagged unions, it even has C++20 concepts.
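
For the tagged-union part, Nim's object variants cover a lot of what sum types do; a minimal sketch, with a case expression standing in for pattern matching:

    type
      ShapeKind = enum skCircle, skRect
      Shape = object
        case kind: ShapeKind
        of skCircle: radius: float
        of skRect: w, h: float

    proc area(s: Shape): float =
      case s.kind
      of skCircle: 3.14159 * s.radius * s.radius
      of skRect: s.w * s.h

    echo area(Shape(kind: skCircle, radius: 2.0))

Accessing a field from the wrong branch raises a runtime error rather than being rejected at compile time, which is the usual complaint compared to real sum types.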


This is enough to make me keep using C++ for the use cases where Nim is meant to shine, or choose Rust instead. If I'm to learn a modern language, I really expect it to support modern paradigms.

I can see why it'd be interesting to someone with no C/C++ knowledge to get into systems programming though.


Tabs are not allowed for indentation.


Nice. I don't mean to be flippant, but I would consider that a feature.


Preferring spaces for indent?


Being opinionated


My preference is for spaces as well. Sadly, go uses tabs.


const hand* ^^ is that a pointer? Are we moving backwards?


No, it isn't. `*` after a name means that the variable/function/type/etc. is public and can be seen/used when you import a module.

So the whole expression just means "a public constant string (automatically inferred type) named `hand`".


oh, that was a surprise. thanks


How about the garbage collector? For really performance sensitive applications you'd turn it off and resort to manual memory management?


Yes, but the GC is very flexible so you might get away with using its soft real-time features[1].

1 - https://nim-lang.org/docs/gc.html
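
A minimal sketch of the soft real-time knobs those docs describe:

    GC_setMaxPause(100)   # ask the collector to keep pauses under ~100 us

    GC_disable()          # no collections inside a time-critical section
    # ... latency-sensitive work ...
    GC_enable()
    GC_step(50)           # explicitly do up to ~50 us of GC work now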


Nim also has a stack based region allocator, but it didn't seem too well documented when I tried to use it.


You can also mix GC-managed and manually allocated types in the same codebase.


Is it the best?

I read Rust is basically a safer drop-in replacement for C.


Learning Rust is difficult, and takes time. It's debatable whether it's worth it, but learning a new paradigm (automatic memory management via borrow checking) hardly qualifies as a drop-in replacement if you don't know how to use it already.


I like Nim, although I don't think it's great for everything and I do think it fills a particular niche. For me, that is a systems-programming language akin to Go or Rust with a familiar syntax. Coming primarily from Python, sometimes I need a language that is a) safe b) fast c) easily capable of producing a cross-platform executable. Nim provides all of these with a Pythonic syntax and style, and that is really the main reason I like it. It lacks the sponsorship and audience of the other two, but is still a well maintained language with an active community and all the core functionality you would likely expect. That being said, a few years ago I was a strong proponent of Nim, while today it is clear that Rust and Go have captured the mindshare and continue to grow momentum. In fact it becomes hard to recommend Nim when I think of where these 3 languages will likely be in 5 years.


I've played with it, and have written a basic fasta/fastq bioinformatics library in nim (https://github.com/jhbadger/nimbioseq)

What I find appealing is that it seems to reach the goal of "a compiled typed language that feels like a scripting language" more so than other similar languages like Kotlin and Swift. There is just so little "boilerplate" code that always seems to have to be included in compiled languages, and yet it seems to generate decent smallish binaries (via C). There's also a Javascript backend for web development, but I haven't used that.


I've tried it out and read the Nim book, but I'm not using it for any larger project right now. So here is a biased opinion.

Pros: Nim is about as easy to program as Python, has the same speed as C, a working FFI, and all the usual bells and whistles of modern "batteries included" languages, like a package manager with lots of packages. It is garbage collected, which is good -- about this, some will disagree, of course. It has nice high level constructs and doesn't attempt to reinvent OOP or something like that; the language is fairly straightforward. It also has a powerful macro programming facility, which is cumbersome to use though.

Cons: The community is too small and so far has not attracted CS people or many professionals, and there is the usual bulk of abandoned or undocumented packages you get with these kinds of languages (more on that below). It has a few controversial syntax choices (e.g. identifier case rules) and also a number of semantic misfeatures that ease compatibility with C. It has support for native threads but no green threads like Go, and consequently also no green thread -> OS scheduling, which would be ideal. (You'd like the language to do some flow and dependency analysis and parallelize to green threads automatically, which are then mapped to OS threads, but AFAIK only a few experimental languages can do that so far.) Its garbage collector is not optimal and not as performant as Go's, I believe. It uses whitespace for blocks, like Python.

Overall, Nim is a pretty good general-purpose programming language.

I should say that I have a long-term interest in esoteric languages and have been working on my own for a while. However, my main use case is desktop application development, and unfortunately so far there is not a single new language that I would really recommend for developing desktop applications, unless you're fine with bindings to some monstrous web interfaces (Electron, Sciter, etc.) and want to program half of your application in Javascript.

Rust, Go, Nim, Elixir, Julia, Crystal, etc. do not have GUI frameworks that are ready for prime-time use in production, except maybe for a few interfaces to web apps. Their native libraries (like Nimx for Nim, duit for Go, Conrod for Rust) are unfinished, limited or simply too impractical, and bindings to Qt and wxWidgets are either undocumented, incomplete, or suffer from weird license restrictions (like a Go Qt binding I've taken a look at, I forgot its name). Some of the libraries also create monstrously large executables.

For command-line tools you can use any of them, just like thousands of other languages. For web programming, you can also use them, but then there is also Common Lisp, Racket and plenty of other languages good for that. For desktop applications with modern GUIs, on the other hand, you will be too limited with any of these languages and constantly chase some incomplete bindings or try to figure out how the bindings work. (Most docs for bindings simply assume that you've used the respective library a thousand times in C or C++, in which case you could, frankly speaking, probably save yourself the trouble and do it in C or C++ anyway.)

For this reason, I've decided to use Lazarus for my GUIs. Qt with C++ or Python is also a good choice. I also use Racket, but its native GUI is too limited, and still have hopes for Go.


I'm writing a Scheme interpreter in C++ with the express purpose of easily using Qt from it. It supports call/cc efficiently, which I hope will allow me to write Scheme programs handling GUI ops which use signals/slots behind the scenes in direct mode (basically have (potentially many) Scheme threads of execution instead of callbacks). Another thing that I'm vaguely interested in experimenting with is offering a "reactive"/functional/immediate mode layer on top of Qt with it, but I don't have enough experience yet to know how possible that will be.

The interpreter currently relies on refcounting for simplicity and to provide deterministic latency behaviour. It's (currently) a simple s-expression interpreter partially because (at least at first) I want to use it to interoperate with another Scheme system via sending s-expressions forth and back, and then that's the lowest latency option. I've got various ideas in the areas of debugging and typing that I'll try to implement, and if I succeed on that path and start wanting to use it as the main Scheme system then I'll certainly move on to compiling to byte code or use a JIT.

I just started a month ago, and have to nail down licensing with my employer; when that's settled I'll publish the code.


That's a very interesting project. Still, if you're aiming at real-world usage, please seriously consider postponing your own language+implementation (or keep it as a side project) and writing a library for Chez Scheme instead. That would be awesome.

Chez is very mature and probably the fastest Scheme available. It's so good that the Racket team is currently converting Racket to Chez as the backend language and compiler.


I know about Chez and Racket's effort. The reason I'm rolling my own is that as mentioned I've got ideas to help debugging and typing that I find easier to implement when knowing the system and the system is simple. And that I want to control GC pauses, and get easy interop with C++ (e.g. I can use Qt strings as Scheme strings that way). I also don't want to be tied into a particular Scheme implementation (e.g. some of the GUI apps that I write I will want to run client side in the browser, too). I would be happy to meet with any Chez users/implementors though to learn about its internals etc. I'm looking for Scheme and CL people to meet in London, BTW (https://www.meetup.com/London-Metaprogrammers/ - first meetup soon).

PS. consider it to be a Scheme really optimised for doing work with Qt. It appears that binding Qt well into another language is difficult (Python being about the only one where that was done successfully?), so I'm taking the approach of working "close to the metal" (C++), do things that are better done in C++ (like subclassing widgets) there, then making interfaces to scm as I go (i.e. make it so easy to make interfaces that this is a reasonable approach to do). That way I don't have to do all the work of binding the entirety of Qt, I can use qtcreator as I see fit, I can decide on a usage basis how to interface to a widget (modal in the Scheme view but not modal in Qt's view?, when does it need destruction in that case?). I guess the knowledge or abstractions coming out from this might be portable to other implementations, though.

Also again, I also do have code in another implementation (Gambit) that will have to stay there for the time being; communicating with the GUI via sexprs will of course be an indirection, but given that web programming could work the same way (the Scheme implementation on the server communicating with the one in the browser) it looks like the right approach to try for me. That will also mean that the approach will work with any Scheme implementation as the server (Chez, Chicken, whatever).


I am curious to know what you find limiting about the Racket GUI. There have been some interesting[1] things[2] being developed with it recently. I have been able to use it to write a stock trading simulator[3] with price charts.

[1] https://alex-hhh.github.io/2018/06/a-racket-gui-widget-to-di...

[2] https://alex-hhh.github.io/2018/05/running-and-cycling-worko...

[3] https://github.com/evdubs/chart-simulator


Same as OP but posted from another account. Well, it always depends on the application. Here are the main limitations:

1. no internal drag&drop from control to control in a frame or from frame to frame, like from an editor snip to a listbox, or from a listbox item to a text field or canvas

2. text% and editor-canvas% are too slow for some applications, esp. for displaying lots of data fast or styling snips

3. text% does not allow associating arbitrary data with ranges (strangely enough, list-box% has this)

4. text% uses a nonstandard format (neither RTF, XML, nor HTML), and rich text is not easily drag&droppable or copy&pastable to other applications in a platform-compliant way without writing your own converter

5. no images in list-boxes, and generally speaking no advanced custom "grid" control (e.g. also no editable fields in list-boxes or similar table features)

6. no images in menu items

7. no toolbar, you have to make your own and it will not be platform-compliant (macOS)

8. no docking manager or other advanced user configuration controls (we could implement these easily if we had frame-internal drag&drop, but we don't, so we can't)

9. no built-in input validation for text fields, like limiting one to integers, floats, dates, you have to do that on your own

10. it appears that some icons are not properly installed by Racket's deployment functions even if you specify them in the #:aux argument of create-embedding-executable

11. no access to tray icon

12. no way to obtain system colors from the system color scheme for custom controls, so you cannot create theme compliant custom controls

13. clipboard operations are limited (unless this has changed since I checked last time), meaning e.g. you cannot easily implement a "receiver" for some mime type data

14. related to the previous one, only whole frames can receive drag&drop objects and you basically just get a file path

Those are all the points off the top of my head. For me, only 1, 3, 4, and 5 are problematic, and 1 is a show-stopper. 3 is also important, since implementing this on your own can lead to a vast range of problems (you'd have to constantly maintain a data structure in sync with the snips in the editor).


Consider posting this on the Racket mailing list. If no one asks for it, we don't know what to implement next.


I added this feedback to racket/gui. Not sure if that's helpful.

https://github.com/racket/gui/issues/115


> It uses whitespace for blocks, like Python.

Why is this a con?


It's arguably more error-prone, e.g. Rob Pike justifies Go's decision to use curly braces like so:

>Some observers objected to Go's C-like block structure with braces, preferring the use of spaces for indentation, in the style of Python or Haskell. However, we have had extensive experience tracking down build and test failures caused by cross-language builds where a Python snippet embedded in another language, for instance through a SWIG invocation, is subtly and invisibly broken by a change in the indentation of the surrounding code. Our position is therefore that, although spaces for indentation is nice for small programs, it doesn't scale well, and the bigger and more heterogeneous the code base, the more trouble it can cause. It is better to forgo convenience for safety and dependability, so Go has brace-bounded blocks.


I never understood this. Whenever I see similar criticism, I wonder: what the heck does its author use to edit programs? Notepad?

(Well, given it's Rob Pike and looking at some of his work[1], this time it may well be close to the truth... ;))

[1] http://acme.cat-v.org/


And then you do both, indents and braces: indents so that it looks nice and braces to satisfy the machine. Then you can just omit the braces... If I had a penny for each time I've counted braces...


What Crystal is for Ruby devs, Nim is for Python devs, I guess.


In theory yes, in practice it's not that clear.

You make a fast Rails in Crystal and a large percentage of the Ruby community will jump in. In Python you can't do that, the community is fragmented. And you already have Julia...


In some ways, yes. But in practice Crystal is far closer to Ruby than Nim is to Python.


I read the article, and I’ve been spending the past 20 minutes looking at the SPA framework.

I’m still not sure what it is or why anyone would use it. It looks extremely complicated and verbose.


Karax is very promising to me. The Nim forum was re-built with it recently: https://github.com/nim-lang/nimforum However, it has no documentation at all, and seems super alpha.


Don't know why you're being downvoted. I had the same experience reading through Karax GitHub. I might be spoiled by other docs.


Karax needs a promotional website, badly. It's a really great SPA framework and IMO is totally competitive with React.


By removing the C standard lib and using a custom linker you can get the Hello World down to 150 bytes. Here is a fun read: https://hookrace.net/blog/nim-binary-size/.
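
The rough shape of the recipe from that post (exact sizes vary by platform and Nim version):

    nim c -d:release --opt:size hello.nim   # optimize for size instead of speed
    strip -s hello                          # drop symbol tables
    # the post then removes the GC (--gc:none) and the C runtime
    # (--os:standalone), and finally drives the linker by hand to get
    # down to the ~150 byte range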


Incidentally, this kind of thing could get you down to 151 bytes in Rust: http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-... (linked in that post)

But that was before 1.0. Updating the example and applying some more tweaks, it’s down to 145: https://github.com/tormol/tiny-rust-executable


Rust is also terribly overcomplicated and a mishmash salad of different languages, so much so that in the end, it's just simpler to call ld(1) on a .o file directly.


Other than a sort of ELF golf why would dropping the C lib be useful?


In high-assurance fields, the mindset is all code is malicious, buggy garbage until proven otherwise by rigorous analysis. That means everything we include will cost us. If we don't use it, we don't include it by default to save us work.


> ELF golf

actually turns out to be important in embedded work


Embedded development.


Really excited to see Nim here on top of HN.

I wish it would gain more traction.

I like the pythonic syntax and easy 'fast code'.


I like the concept of Nim, as in: fast, statically typed, inferred, compiled etc. but I do not enjoy whitespace-sensitive languages, so am keeping an eye on Crystal more than Nim.

If that doesn't bother you, Nim is really neat.


Actually, I'm curious why people don't like whitespace-sensitive languages. I get the tabs v. spaces thing, and there are certainly a couple of other downsides, but none of these seem like deal breakers to me. Given that Python was the second language I learned, it's possible that I drank the Kool-Aid early and I'm blind to some things that are truly egregious.

So the question is: why do folks completely avoid a language for a single relatively bland syntactic feature? Is there some cost I'm not aware of, or is it just stylistic/aesthetic?


> why do folks completely avoid a language for a single relatively bland syntactic feature?

Personal preference isn't a good enough reason?

I don't like white space sensitive languages because I've seen what happens in python when somebody accidentally adds a couple of lines formatted with spaces into a file formatted with tabs. I've seen git and svn mangle tabs. Long blocks are harder to track. Refactoring functions and nested ifs are much harder to keep track of. If you somehow lose all of the formatting in a block or a file, it's much more difficult to recreate the code if the only block delimiters are whitespace.

Essentially, white space delimiters are just one more thing that can go wrong and ruin my day. I try to keep those to a minimum. That said, Nim is my new go to for short scripts. I wouldn't write anything large in it for the reasons mentioned above.


Nim disallows tabs entirely, and in Python 3 it's an error to mix the two in the same file. So those errors can't happen anymore.

Out of your list, the only one that seems like a real problem is recreating blocks if the code lost all formatting.


You just described two errors that do actually happen and in the next sentence say those errors can't happen anymore. What am I missing here?


The comment I replied to was talking about errors arising from mixing tabs and spaces and incorrect indentation levels that arise from it.

If a language either disallows tabs entirely or will refuse to run/compile code that mixes tabs and spaces in the same source file, you obviously can't get errors related to mixing tabs and spaces.


Losing parts of your code is bad. The same goes for braces; if you lose them in a big C program your day is ruined as well.


To me, braces are simpler and more explicit, but then I drank the C/C++ Kool-Aid.

Whitespace is intended for human readability, with spaces and tabs not having any implicitly contradictory meaning. In a whitespace sensitive language, you have to set your text editor to make those invisible characters visible to make certain to only use the correct invisible character, then employ multiple such characters based on the necessary level of indentation to do the work of a single set of braces.


I know this has been said before but for me it's as simple as:

"Format your code as you would have done anyway but just leave out the curly braces".

It reduces rather than increases the number of things I have to think about.


My problem with this is that you have to make sure every contributor has the same editor settings. You have to also configure every editor before you can use it to write code in such a language, which is sometimes impractical.

Curly braces make this not an issue and they're visible. I don't want to depend on non-visible characters for behavior, but it's only a personal preference.


> you have to make sure every contributor has the same editor settings

You have to do that in any language. Ever worked on C/C++ files where the indentation is different from your settings? I see only 2 choices: either you temporarily adapt your settings, or you just cringe your way through.

The third alternative (use your own settings anyway), is just lazy and mean.

> I don't want to depend on non-visible characters […]

There's an easy solution. First, either forbid or mandate tabs for indentation. Second, forbid trailing whitespace. That way all spaces will be visible.


> My problem with this is that you have to make sure every contributor has the same editor settings. You have to also configure every editor before you can use it to write code in such a language, which is sometimes impractical.

I'm not aware of ever having to do any of these things. I'm not even sure what you mean by "configure". Every editor I've installed has always done the right thing out of the box and every contributor who isn't completely incompetent has done the right thing naturally.

Compared to my experience in curly-brace languages, where indentation holy wars abound and it's actually painful to read code with a brace style you're not used to - I have more respect for the wisdom embedded in Python and PEP8 daily.


For me, the process is "just write the braces, the editor/tooling will do the formatting for you", so no difference.


Except my code looks prettier than yours. ;-)

Joking aside and as silly as it is to talk about "objective aesthetics" - surely you can see an argument for "less clutter == better" - as much as you've trained yourself to not see the braces, they add nothing that indentation doesn't already provide other than visual noise.


Yeah, no, I think code with braces looks better....

Objective aesthetics? As far as I'm concerned, the tabs vs. spaces debate has basically proven that it doesn't exist for programming languages... (I'm rabidly pro-tabs, by the way). Maybe some of it is "trained myself not to see the braces", but it looks wrong without them.

All that aside, you just moved the argument from "My way is faster/less work" to "My way looks better", which is somewhat objective -> subjective.


> you have to set your text editor to make those invisible characters visible

Or you set it to replace one with the other, and not bother you.


Still more work than just using braces.


>Still more work than just using braces.

How so? It's a one time change to a setting in your editor, vs thousands and thousands of keystrokes.


I don't have to change anything in my editor all to use braces, I can just use them.

No extra work is less work than even a little.


It really bothered me in python, but for some reason I didn't mind it in F#. Of course, some of that may be related to the fact that I enjoyed F# so much that I was willing to put up with significant whitespace. Or perhaps it's just been too long since I've used python and today it would bother me less.


I thought F# was roughly an Ocaml clone. What significant whitespace does it have?


I'm pretty sure whitespace in F# is one way to control blocks. So anything further indented than an if is part of the if block. I'm definitely not an F# expert though, so I'm sure there's more to it than just that.

I also think there's a way to write F# that doesn't have significant whitespace, but uses a lot more keywords. Verbose syntax, I think that's called. I almost never see examples written that way, though.


I can't speak for F#, but it does have its origins in Ocaml, and whitespace doesn't matter there. Try putting that entire if-block (or any other statement) all on one line, just to see what happens.


Last I checked, Crystal had a tendency to use union types where it should use sum types. For example, if you have an array and you pop it and get nil, there's no way of knowing if you got nil because the array was empty or because the popped element was nil.


Nim is my first whitespace-sensitive language. I had no plans to love it. We're now engaged.


When's the wedding? :D


exactly, they're just better ;)


> I do not enjoy whitespace-sensitive languages

Out of curiosity, why? What’s wrong with using whitespace to organize things?


I find the framing of "no big backers" kind of weird, considering they do have one big backer in a cryptocurrency startup that is providing the overwhelming majority of its funding. Sure it's not the big coffers of Google, but it's concerning that the entire language seemingly depends on the fates and desires of an incredibly volatile field.

https://nim-lang.org/sponsors.html


This is very recent, and it will only serve to accelerate development.

But yes, we do have sponsorship now. I don't consider it at the same level as the likes of Google/Mozilla though. If it wasn't for those companies I doubt Rust/Go would be alive today. You've gotta give us some points for our persistence :)


It is not only that the companies pushing those languages have lots of money, it's that they already have lots of users. Often, they publish their language as if it were an "upgrade" from what they were already using, with an even greater chance of success if it is compatible with existing ecosystem.

On the other hand, a language like Nim is starting from scratch. They don't have a specific target audience. There is so much competition in that crowded space that languages which are roughly on par in features and use similar paradigms are fighting for the attention of the geeks who would actually take the time to look, rather than it being bestowed upon them by the people already providing their tools.

Languages are not general purpose in the truest sense. They have their own little ecosystems where they're expected to be used and their proponents are often bubbled in that ecosystem. It's easier to migrate to a new language in your ecosystem than to move to a new ecosystem. Someone just hoping to solve a specific task will pick the tool that has an established history of being practical in that domain and won't take risks with new languages.

Anyway, language choice is often more about what's fashionable than what's worthwhile. Did we get stuck with Javascript because of its superior features or the big company backing it? People want to learn whatever is de rigueur, often to improve their job prospects. Does having Nim on your resume give you an edge?

Personally, I've looked at Nim and find it interesting, but not novel enough that I feel I need to use it. What are the killer features that only Nim provides and you feel you can't do without them after using it?


Compile-time function evaluation, superb metaprogramming and easy to write at a very low-level while you can enjoy the GC for non-perf critical parts. I use those daily in cryptography, VM/interpreter and bigint libraries development and in Nim it's a blast.
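
A minimal sketch of the compile-time evaluation part:

    proc squares(n: int): seq[int] =
      result = @[]
      for i in 0 ..< n:
        result.add i * i

    const table = squares(10)   # evaluated in the compiler's VM
    static: echo "known at compile time: ", table.len
    echo table[7]               # 49, read from data baked into the binary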


A total investment of $19k probably doesn't qualify as a "big backer" when comparing to things like swift and typescript.


It is 19k a month.


For the last 8 years (or so) Nim lived without big backers.


At the time of me writing this, the link to nim at the top of the article is wrong. I let the author know, but it ought to be:

https://nim-lang.org/

The first reference to the FQDN ends with "-" before the TLD, apparently inadvertently.


Right on, it's fixed now.


Does nim have a concurrency story yet? That’s what gave me pause last time I checked it out.


Yep. Concurrency is achieved via async await. Parallelism via `spawn`.

Here is an example from my book that uses both: https://github.com/dom96/nim-in-action-code/blob/master/Chap...
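
A minimal sketch of both, assuming the program is compiled with --threads:on for the spawn part:

    import asyncdispatch, threadpool

    proc fetch(id: int): Future[int] {.async.} =
      await sleepAsync(100)           # concurrency: yields to other tasks
      return id * 2

    proc heavy(x: int): int = x * x   # parallelism: runs on a pool thread

    echo waitFor fetch(21)            # 42
    let fv = spawn heavy(6)
    echo ^fv                          # 36; blocks until the FlowVar is ready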


Where is supporting use of multiple cores on the roadmap?


Compile with `--threads:on` and everything in the threadpool module is available. You can also use `createThread` for long-lived threads.

Alternatively, there is an OpenMP for loop that you can use with `||`:

    for i in 0||1000:
      doParallelSomething()


Exactly the same story with Dlang.


All the languages mentioned have gotten quite a lot of publicity in recent years. I wouldn't call any of them obscure...


I tried to phrase it as "wandering down the obscurity path" because I agree with what you're saying: they aren't obscure. They're just not the default choice.


I do not think all this language fragmentation is a good thing. A million little obscure languages that all at the end of the day do the same thing.

Yeah, we need language research to keep devising new features and more efficient ways of programming, but this is different.

I wish the world would get behind a couple well thought out languages that cover most programming needs (functional, systems/bare metal, scripting) and stick with those.

I've seen some ridiculous and unsustainable stacks at some shops because everyone got to pick their favorite language. Then the morale of subsequent hires is in the toilet because there's such a cognitive load to learn all these little crap languages.

I feel like some of these languages come about because someone needed to do some task and didn't understand or take the time to learn how to do it in an existing language. Among the latest versions of the mainstream languages, there is no programming paradigm you cannot use.

And where these obscure languages REALLY fall down is tooling. Got debugger support for this? Got perftool support? No, of course you don't.

With a small number of languages, work can be put into serious tooling, and fixing compiler bugs, rather than a few devs spread thin trying to keep up with the bugs in their hobby language.


> I wish the world would get behind a couple well thought out languages that cover most programming needs (functional, systems/bare metal, scripting) and stick with those.

Uh, that's basically been the case for ... 20 years or so? I mean I dislike Java and C like the plague, but they're nonetheless the de facto enterprise standards at the moment for performance-critical work and ... everything else.

Just look at the TIOBE Index [1] for example. I'll admit that this is a pretty skewed statistic, but it's nonetheless pretty representative of the basic idea behind it.

There is, of course, lots of services / software in other languages -- and often with very good reasons (take Erlang, for example, for layer < 4 routing), but that's basically because every other language was insufficient for the use case they had. Or because they wanted to deliver the software today, and not next week (I'm looking at you, PHP).

As an aside: <5% is still a staggering amount of code, so please don't feel marginalized everyone ;)

[1] https://www.tiobe.com/tiobe-index/


Only Java and C? What about C++, JavaScript, Objective C/Swift, Python, etc.? And there's also stuff like MATLAB and R, which are the defacto standard in their own areas ...


> There is, of course, lots of services / software in other languages


It’s no different from ecosystem/library fragmentation within a language. For example, there are at least six approaches to accelerating numerical code in Python (weave, plain C + ctypes, Cython, f2py, numexpr, numba) each with its own cognitive load, interop and debug problems (Julia advocates rejoice, but it’s coming for them too).

It seems like a community needs huge incentives to avoid churn and fragmentation eg (a) strong backwards compatibility commitments, (b) big backing company to do the endless boring stuff, (c) strong benevolent dictator, ...

The same is true for languages except there’s no outer “scope”, except for platforms like iOS that might dictate toolchains or a company where CTO might make such choices.

But trying to avoid fragmentation among hackers seems like barking up the wrong tree.


"Julia advocates rejoice, but it’s coming for them too"

Why do you say that? The whole point of Julia is to create a language that's similar to Python in ease of development but is natively fast. Do you think it fails at this?


I think Julia was well designed to stay fast for a long time, for many tasks. But anything that isn't universally optimally addressable by a Lispish front end to LLVM JIT is going to grow multiple approaches that aren't fully compatible, in the same way that Python, not built for speed of execution, grew multiple approaches to fast code (just realized I forgot about PyPy in my list above). So I expect there to be multiple eventually incompatible approaches to AOT compilation, web frameworks, GUIs etc. Julia's youth restricts divergence in the short run, but long run I think divergence is a healthy part of any ecosystem, and not to be disparaged as in the post I originally replied to.


Due to GC Julia will never be as fast as C, C++ or Rust.


This isn’t true. GC’d languages can be as fast or faster than C. Especially LuaJIT.


>Especially LuaJIT.

In the general case that can't really be true, because the Lua interpreter is written in C.

With regards to GC languages in general, if you spend a lot of time working around the GC by doing things like object pooling, which is really just reinventing manual memory allocation, you can get close to a non-GC language in terms of performance.

GC languages are obviously fine for plenty of use cases, and for some code snippets they can be faster, but there is no way to make a GC free--there's going to be some overhead no matter what you do.


The point of a tracing JIT is that it runs code in an interpreter, then generates machine code for loops and hot spots. By doing this at runtime you can take advantage of knowledge that a C compiler doesn't have. This is why LuaJIT is often faster than C.


>This is why LuaJIT is often faster than C

LuaJIT can be faster than C for some code. Just like C can be faster than someone's naive hand coded assembly.

That doesn't change the fact that in the general case C is still faster, and there are classes of critical high performance code that have to be written in C (or Assembly, Rust, or even Fortran). Sometimes, manual memory management is necessary to get acceptable performance (also determinism is occasionally required).

All else being equal, GC is always going to be slower than non-GC because a GC introduces unavoidable overhead.

I've worked in this space btw and I've never seen any evidence that LuaJIT is actually faster than C for anything outside of very specific micro-benchmarks.


What benchmark would convince you? It's easy to dismiss any evidence as a very-specific micro benchmark.


Multiple large programs written in LuaJIT that have better performance than the same programs written in optimized C.

The vast majority of benchmarks I've seen are down to LuaJIT performing specific optimizations out of the box that the C compiler used in the comparison can perform but doesn't.

In particular, the last time I looked at LuaJIT vs C++ benchmarks, the C++ compiler flags weren't set to allow the use of SIMD instructions by default, but LuaJIT does.

There was another recent example I saw where LuaJIT was calling C functions faster than C in a benchmark. Then someone pointed out what the LuaJIT interpreter was actually doing, and how to implement the same speed up in C.

Java people made the same arguments years ago: "Java is just as fast or faster than C++". You'll notice that after 20 years of comparisons, no one who writes high performance code for a living makes that claim.


Java is just as fast or faster than C++ most of the time. No one who writes high performance code for a living makes that claim.

It's true though. https://lmax-exchange.github.io/disruptor/files/Disruptor-1....


Java is fast enough that the increased programmer productivity of the GC and other features wins out in many cases. People aren't choosing Java over C++ because it results in generally more performant code.

How many AAA game engines are written in Java?


People make the same argument against having dozens of Linux distributions and dozens of window managers. Having a lot of choice doesn't hurt anything, and benefits everybody in the long run.

> I wish the world would get behind a couple well thought out languages that cover most programming needs (functional, systems/bare metal, scripting) and stick with those.

For the most part, "the world" has got behind a small handful of languages. Java, C++, Python, Javascript and maybe a half dozen others make up the majority of real world projects.

> With a small number of languages, work can be put into serious tooling, and fixing compiler bugs, rather than a few devs spread thin trying to keep up with the bugs in their hobby language.

That assumption probably doesn't hold. If the authors of Nim weren't working on Nim, there's no guarantee they'd go work on tooling for some other language. Furthermore, who would decide which small group of languages people can work on?

With the exceptions of Java and Javascript, almost all of today's popular languages started off as somebody's small pet project. The good ones (by some metric, anyway) grew and their usage spread and the not so good ones died out. The best way to make forward progress is to try new things and see what works and what doesn't.


> Having a lot of choice doesn't hurt anything, and benefits everybody in the long run.

But it does hurt in many cases; there is only so much programmer time; if people would invest their time in fewer projects, they would move on quicker.

I am as guilty of this as probably most people here, having written frameworks, ORMs, parsers, libs, languages, game engines etc. instead of helping out on existing projects. I know I did it because I thought I could do better than what was there; usually that was false, however sometimes, at least I thought, it was true. For the bigger ones (frameworks, Linux distros, languages) it was always false, though, so I would say it does hurt; I wasted time and the world did not improve. Only I improved, as I liked it and learned, but I would have done as well if I had helped an existing project.

Another point to that is that a lot of people, especially people who solely use computers to make money to survive, really do not like choice in my experience. A lot of my colleagues ask me what to use and hate the fact that when they Google that there is a choice. They do not want choice, they want to use what is the de facto standard for the particular case in every case. With choice and fast change, the de facto standard isn’t there and even more experienced people experience the same angst beginners have when learning frontend web dev. It causes a lot of (probably underestimated) stress in the workplace.


Folks that I worked with had the same idea: “Fortran was good enough for our ancestors, it is good enough for the world. And COBOL”

This was before the obscure language C was invented.

And as far as debugging is concerned, debuggers are overused. Reading “Programmers at work” you see that 1. They used the equivalent of printf for debugging 2. Mostly they had not read TAOCP 3. They did not like C++.


These are great arguments for why we should all just use Java. Great tooling! Lots of high quality libraries! Nothing to make you get out of bed in the morning!

Why enjoy anything after all?


We forgot to add enjoyment to the unit tests and now it would require a refactor, sorry about that.


GC is a no-go for some stuff.


After twenty years of professional Python (mostly) development I have suddenly and recently become a neophyte Prolog programmer.

Bottom line: If you're not using Prolog you are almost certainly wasting your time.

Almost all PLs you're likely to be acquainted with can be thought of as syntactic sugar over Lambda Abstraction. Prolog, from this POV, is syntactic sugar over Predicate Logic. It's actually a fantastically simple language, both in syntax and in its underlying operation, which could be summarized as Logical Unification with chronological backtracking.

I have been working with Prolog for only about two months but I am already way more productive. Typically a piece of code that might have been five pages of Python will amount to a page or less of Prolog. I hasten to point out that 1/5 code means 1/5 of the bugs, but it is also much more difficult to introduce bugs into Prolog code, so the total ratio of bugs per unit of functionality is much lower. Further, Prolog code typically operates in several "directions", e.g. the Prolog append() relation (not function) will append two lists to make a third, but can also be used to extract all prefixes (or suffixes) of a list, etc.; a Prolog Sudoku program will solve, validate, and generate puzzles from one problem description.[1] So you get more functionality with less code and many fewer bugs. It's also very easy to debug Prolog code when you do encounter errors. I'm spending fewer hours typing in code, fewer hours debugging, and I'm still more productive than I was. Looking back, I estimate that as much as half of my career was wasted effort due to not using Prolog.

I'm implementing a sort of compiler in Prolog and I am impressed with the amount of work I haven't had to do. I'm beginning to suspect that most high-level languages are actually a dead-end. For efficient, provably-correct software generated by an efficient development process, I think we should be using Prolog with automatic machine-code generators.

Last but not least, Prolog is old. It's older than many of the folks reading this. Almost everything has been done, years ago, often by researchers in universities. Symbolic math? Differentiation? Partial evaluation? Code generators? Hardware modeling? Reactive extensions? Constraint propagation? Done and done. You probably can't name something that hasn't been explored yet.

[1] https://swish.swi-prolog.org/p/Boring%20Sudoku.swinb


Lovely paean to Prolog. Now do APL. :-)


Er, if most languages are syntactic sugar over lambda abstraction, APL is syntactic chocolate. :-)


C/C++, Java, Javascript, and Python make up over 50% of language use. Just use those. Massive tooling around all of them.


This. But also "Javascript" is a way overloaded term. The JS community is quite fragmented. Same could really be said of all of them. "Just use those" may come with a caveat of "also be conservative about what tooling you adopt around the language you choose."


I almost left Javascript off the list because of its peculiar (but interesting!) prismatic nature. Still, they all transpile to Javascript (I think?), so really it's all just Javascript ;)


In that case Nim should make the cut since it transpiles to C, C++, Obj-C, and JavaScript. :)
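Concretely, the backend is just a compiler subcommand (these are real `nim` invocations, though the defaults can vary by version):

    nim c app.nim      # via C
    nim cpp app.nim    # via C++
    nim objc app.nim   # via Objective-C
    nim js app.nim     # to JavaScript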


While we're at it, let's dispel the notion that we need tiny domain specific languages to accomplish daily tasks. One that covers HTML, Makefiles, SQL, and Awk is all anyone will need.


You know, I'll have to add to the disagreement. Partly for all the technical reasons (yes, your way would stop language development), but also because herding cats is a really counterproductive thing to do; every time somebody powerful enough to get the chance tries it, everybody loses in the end. (But you are free to keep thinking this way. Of course, I won't change to your preferred set, but if you want to reduce fragmentation, you are free to surrender your preferences.)

Anyway, perf tools are only a must-have when performance becomes a problem, step-through debuggers are way overrated, there are many features that no amount of libraries or tools will give you, and ecosystem quality matters about as much as size.


Realistically, what features can be provided by the language but not by libraries? If there are many cases, would that not indicate that the language's expressiveness is lacking?

IMO, the goal of languages isn't to provide many features, but to provide constraints, so that programmers can stick to a set of rules that their peers can understand and agree upon. The most powerful language is the machine code for whatever CPU you're using, because there are no constraints: you have all of the power of the processor.


I think it's great that a lot of these minor languages get some play in companies. If they're good enough to overcome the problems you mention, it improves things for everyone. If they're not, then the company dies and so does an evangelist for that language. It's a little like programming language Darwinism; a few companies need to die in the process, but ultimately it's better for programmers worldwide.


"I do not think all this language fragmentation is a good thing."

I both agree and disagree with this.

I love the creativity and imagination that goes into a language like Nim. But at one time I thought the same about Python! Sometimes obscure little languages become important.

On the other hand, the tooling statement is dead on. At the point I find out it doesn't have a step-through debugger I'm just reading the docs for fun and then moving on.


I recognize Common Lisp in your expectations :] http://lisp-lang.org/ is multiparadigm and efficient, with super tools (debugger, REPL), …

https://github.com/CodyReichert/awesome-cl


The honest truth is that it's easier to be famous as a big dev in a small pond than a small dev in a big pond. As communities stagnate, it becomes harder to get your name known, and advancement becomes more political than technical.

It is a social problem, not a technical one.


See, this is what I thought too. Turns out, by compiling to C or C++, you can use their toolchains. Debuggers, codecov, emscripten...


"Can use" is different from "can use productively." Technically if it's compiled to C/C++ you can use gdb/lldb etc, BUT the compiled version may be so drastically different from the input that it's effectively useless to track down logic errors in the original.


C has this nice preprocessor directive that indicates the actual source location:

   #line 42 "actual_source.lang"
Subsequent tools like gdb/lldb pick up on that and point to your source code instead of the intermediate C. So you don't care that the C code is wildly different from your own; the tools can still point to the right place.
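Nim, if I remember correctly, can emit exactly these markers into its generated C; something along these lines (flag names from memory, so check `nim --help`):

    nim c --lineDir:on --debugger:native yourprog.nim

After that, gdb breakpoints and backtraces refer to the .nim sources.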

Sounds pretty productive to me.


Who chooses?


Perhaps the decision should be centralized, perhaps by congressional committee. Or would it be better to put it on the blockchain?


I really like Nim, but one thing that's stopped me from using it for any recent project is the lack of Protobuf and gRPC support, which we use extensively. There are some Protobuf libraries, but they're all incomplete and unmaintained. Being able to generate code from gRPC proto files (either compile-time or runtime generation) is a must.


Sounds like a perfect Nim project for you ;)

In all seriousness, I would love to write this myself... but I physically cannot pour my heart into any more Nim projects. We need more people to help us out!


Definitely a chicken/egg problem for nascent languages like Nim -- someone has to furnish the house before people can properly move in.

The only way I could solve this and make Nim a viable language for these projects is to spend my spare time tinkering, and that's unlikely to happen.

The situation is particularly problematic for something like gRPC where you really have to know Nim well to create a good tool.

As cool as Nim is, I've gotten to a stage in my career where I just want tools that work. That's the attraction of Go these days (though Rust is looking nearly as good now) -- most libraries already exist, and you can focus on the task at hand, instead of being forced to invent a bunch of wheels first.


Might be nice for scripting games in Unreal Engine? Looks like someone else had the same idea:

https://github.com/pragmagic/nimue4


What's the status of Nim's garbage collector? https://nim-lang.org/docs/gc.html Last time Nim came up on HN someone said they were changing from reference counting to tracing. Also seems like the Nim developers are reluctant to make a choice here? GC is a show-stopper for some developers but an absolute necessity for others.


Nim has multiple garbage collectors you can change at compile time. Here's the --gc option from the documentation.

    --gc:refc|v2|markAndSweep|boehm|go|none|regions
I don't know if this is up to date or not. There's an open issue on github to improve the documentation on the garbage collector.

https://github.com/nim-lang/Nim/issues/8802
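Assuming that list is still current, switching collectors is just a compile flag, e.g.:

    nim c --gc:markAndSweep app.nim   # use the mark-and-sweep collector
    nim c --gc:none app.nim           # no GC at all; you manage memory yourself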


My impression from the article and docs is that Nim offers multiple choices specifically because garbage collection ain't one-size-fits-all.


Does it have arbitrary length integers? I've not been able to find any reference to such.

Edit: OK, there's a library called "bigints" - it's not clear how "natural" the resulting code will be, but I might experiment.
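For what it's worth, a quick sketch of how it reads (API names as I remember them from the bigints README, so double-check before relying on this):

    import bigints  # after: nimble install bigints

    var a = initBigInt(1)
    for i in 1..100:
      a = a * initBigInt(2)  # 2^100 would overflow int64; BigInt handles it
    echo a                   # operators read like ordinary int code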


Try "StInt: A fast and portable stack-based multi-precision integer library in pure Nim"

Being stack-based and having some nice compile-time evaluation features should make it very performant.

Here: https://github.com/status-im/nim-stint


That's excellent - thank you.

Assume I'm not a complete numpty, but are there directions for:

* Download

* Install

* Invoke

... anywhere? I'm not at all familiar with retrieving and installing libraries from git, nor with Nim and how/where to install code. All assistance gratefully received. Pointers to existing detailed instructions also very welcome. I can investigate all this myself with trial and error, but if someone's done it before it saves me the time and effort of making all the mistakes again.

Thanks.


Easiest way to start with Nim is via choosenim: https://github.com/dom96/choosenim

then `nimble install https://github.com/status-im/nim-stint`
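(On Unix-likes the choosenim README bootstraps with a one-liner; check the URL there before piping anything to a shell:

    curl https://nim-lang.org/choosenim/init.sh -sSf | sh

That covers "download" and "install"; `nim c -r yourfile.nim` covers "invoke".)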

Unfortunately I didn't have time to focus on documentation, but the easiest way to get started with Stint is to check the tests: https://github.com/status-im/nim-stint/tree/master/tests.

The casts are there to check binary representation compatibility in the tests and are not needed otherwise.

Alternatively you can check:

- https://github.com/FedeOmoto/nim-gmp, a GMP wrapper (very low-level/C-like and not updated since 2015)

- https://github.com/status-im/nim-decimal, an arbitrary-precision floating-point wrapper for mpdecimal (used by Python). Unfortunately it's very low-level/C-like at the moment.


How does Nim stack up against Go? I got the impression that Nim was very performant in the concurrency/parallelism space, but I'm wondering whether the switch is worth making.


Nim has a level of richness you won't find in Go. Optional GC, unions, enums, macros, compile-time function evaluation (so code generation and other compile-time magic can happen in Nim code instead of via reflection or the rather poor "go generate"), generics, async/await, real threads... Nim has much of Go's simplicity while managing to be a lot richer. But it's also less mature and has almost zero mindshare.
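A tiny sketch of the compile-time evaluation point, in plain Nim (nothing exotic, just an ordinary proc run by the compiler):

    # the compiler can evaluate ordinary procs during compilation
    proc fib(n: int): int =
      if n < 2: n
      else: fib(n - 1) + fib(n - 2)

    const answer = fib(20)  # computed at compile time; the binary just stores 6765
    echo answer

Go has nothing comparable short of external code generation.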


Depends what you're working on and whether you're happy with Go :)


The tutorial starts with a quote from a song by Rammstein.

"Der Mensch ist doch ein Augentier -- schöne Dinge wünsch ich mir."

Humans are just creatures of the eye -- Beautiful things are what I want


There is a race going on for the first language that isn't JavaScript, compiles to JavaScript/WebAssembly, and is actually worth using.

No, CoffeeScript, TypeScript, and the like don't really count; they are relatively thin layers. Several options are in the works: implementations of Python and .NET (Blazor), and languages like Rust or Nim.

The crucial point is that it needs to have a compelling framework like vuejs or react. Then the toolchain needs to be superb and result in small enough webassembly packages.

No project is currently achieving this, to my knowledge. But the race is heating up!


In my opinion, that race has been won.

OCaml, or its cousin ReasonML, is well worth looking into. Compile to JS with BuckleScript. Compelling frameworks are bucklescript-tea for an Elm-like experience or ReasonReact as a layer over React. Both frameworks are excellent.


BuckleScript doesn't really count, because it is a relatively thin layer on top of JavaScript, not even WebAssembly. It looks interesting, but the lack of popularity argues somewhat against it.

It may still be in the race...


> No, coffee-script, typescript and the like don't really count, they are relatively thin layers.

Why is being thin a disadvantage?


Both CoffeeScript and TypeScript have a respectable following, and for good reason. But their data model is completely tied to JavaScript, and thus limited. Also, they are pretty much stuck with JavaScript's performance.

Basically, WebAssembly was created for performance reasons. But you don't need WebAssembly just to compile JavaScript, TypeScript, or CoffeeScript, and you can't tap its benefits through these languages.

On the other hand, WebAssembly performs well in terms of both size and speed, and it enables the use of other data models, like those of C, Python, Java, .NET, or Rust. WebAssembly gives us the chance to quit the memory-management paradigm of JavaScript for other options, like manual management, ownership, or various garbage collectors.

But those advantages may not be good enough to justify leaving things like React or Vue.js (or similar) behind, or gluing the new paradigms onto those frameworks. Rust, for example, has a React-like framework for use in WebAssembly.


Another interesting upcoming language is JAI: https://inductive.no/jai/

Its purpose is to become a better C++ for game development (high performance, simplicity).


Jai is a perfect example of how fashion dictates programming language popularity. The language isn't even released yet and people are mentioning it as the next great thing. If it wasn't for Jonathan Blow's successful indie games nobody would bat an eye.

That being said, I've got huge respect for the guy. I asked him about Nim in one of his streams and his reply was very courteous and reasonable.


I kind of wonder why everybody is so negative about me posting JAI here. I mean, yes the compiler isn't released yet, but that's why I wrote 'upcoming'.

You write about the 'next great thing', but I was merely talking about 'a better C++ for game development', which narrows it down to a very specific use case. And I don't see what is wrong with looking at the tools someone uses when they have had success building games with them.

Besides that, looking at the few examples I have seen, I didn't find the language elegant, but it seems to have been inspired by other modern languages like Go (e.g., no parentheses around if conditions) while explicitly rejecting other ideas like GC.


> jai compiler – not released yet …

That's ... like the most important part?!?!??


Can anyone explain what "designed for good programmers" means?


> Can anyone explain what "designed for good programmers" means?

It means "programmers who agree with Jonathan Blow and his opinions on what good programming is."


Many programming languages seem as though they are designed to force the programmer to be "safe"; Rust's borrow checker is an example, but so is basically every GC language ever. What Jon means by "designed for good programmers" is that the language design doesn't have this mindset that it needs to prevent programmers from harming themselves, and instead focuses on letting people who know what they're doing be more productive.


In JAI you own the code, the data, the build system, everything. JAI runs at compile time, so you can introspect your code and generate custom builds or generate custom files, or structures, like dictionaries of similarly named functions.

JAI doesn't implement abstractions (like objects or a GC) that might get in your way. JAI supports gradual refactoring, and you can change your mind about the way memory is laid out (AoS vs. SoA, i.e. arrays of structs vs. structs of arrays; delegates) and still use the same code. Want to shoot yourself in the foot with uninitialized variables? Fine!

In general, if you know what you are doing, JAI will allow it. JAI is similar to C in spirit, but C is a rather poor implementation in comparison.


Doesn't even try to prevent mistakes that a good programmer would avoid anyway.

For instance, C++ has operator overloading, which when misused leads to horrible, unreadable code. Still, the feature can be useful in some cases (a vector/matrix library comes to mind). Java, on the other hand, avoids this feature because it doesn't trust programmers with it.
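(Sketching the "useful case" in Nim rather than C++, since that's the thread's topic; Nim also lets you define operators:)

    type Vec2 = object
      x, y: float

    # a user-defined `+` makes vector math read like arithmetic
    proc `+`(a, b: Vec2): Vec2 =
      Vec2(x: a.x + b.x, y: a.y + b.y)

    echo Vec2(x: 1, y: 2) + Vec2(x: 3, y: 4)  # prints (x: 4.0, y: 6.0)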

As for JAI, it mostly means Jonathan Blow and his peers. I mean, he's obviously making a language for himself and whoever he hires. There's no point in catering to a different, wider audience. (The reason there's no point is that he doesn't need a whole ecosystem to support his language; he's trying to make it worth the effort even if he were the sole user.)


No garbage collection.


Interesting!

Minus: yet another proprietary package manager, bypassing the operating system's software management subsystem (operational maintainability)

Plus: finally a language which compiles to binary executable machine code.

Plus: transpiles to multiple "backend" languages (but transpilers incur a performance penalty).

Plus: can link with shared object libraries, thus having instantaneous integration choices with an enormous corpus of existing software.

Plus: can turn off garbage collection, which is ideal for command line programs designed to be chained with the shell pipe mechanism.


> Minus: yet another proprietary package manager, bypassing the operating system's software management subsystem (operational maintainability)

False. Nimble installs packages only in the current user's home.

It's one of the few package managers that does not encourage the dreadful "sudo pip/npm/... install"


oh, guess I shouldn't implement that[1] then :)

To be honest, now that I'm thinking about it again, I think you're totally right.

1 - https://github.com/nim-lang/nimble/issues/80


If you implement that, you instantaneously take away the ability to deploy applications through the OS's automated deployment (like Kickstart, Jumpstart, or AutoYaST), and you create more work for every system engineer in existence, because they have to finish your work for you, namely the integration of your software (the Nim language) into the OS.

For every platform Nim supports, native operating system package(s) delivering into /opt/nim (with system-wide configuration in /etc/opt/nim) should be provided. This is not some whim; it's a formal system engineering specification. See the LSB FHS /opt, /etc/opt, and /var/opt sections, or the illumos filesystem(5) manual page for the same. (Why aren't you, as a third party, supposed to deliver software into /usr, as you intend in that GitHub issue? Read on.)

If you do not do it this way, it will impede adoption, because it's one thing for a developer to play around in a language and another for a system engineer to make it operational: system engineers' time is usually in extremely short supply, and if they have to package your language for you, they could easily just move on to writing software in a different language that they can mass-provision natively, and Nim will stay an underdog. Operational maintainability is the end goal: it means mass deployment using only native subsystems, without extra code or software like Docker. Sooner or later things must go into production.

Never bypass OS packaging. Unless you are an OS vendor, never deliver anywhere into /usr, not even /usr/local.


You seem to be confusing /opt and /usr.

> native operating system package(s), delivering into /opt/nim (and system-wide configuration into /etc/opt/nim) should be provided

No, native package managers should install binaries in /usr/[s]bin and libraries under /usr

> Never bypass OS packaging. Unless you are an OS vendor, never deliver anywhere into /usr, not even /usr/local.

+1


"You seem to be confusing /opt and /usr."

It is very unfortunate for me that it seems this way to you, but as someone doing precisely this kind of engineering for almost three decades, I can assure you that this is not the case.

"native package managers should install binaries in /usr/[s]bin and libraries under /usr"

Unless one is an operating system vendor (like Joyent, redhat, SuSE, Debian, Canonical, CentOS, ...) third party and unbundled applications must never be installed into /usr: by doing so, one risks destroying or damaging systems in production in the case where the vendor decides to deliver their own version of the same software, and the vendor's upgrade overwrites one's own software and configuration. Long story short: don't deliver your unbundled software into someone else's space. /usr belongs to an OS vendor and to that vendor alone (vendor in this context also includes volunteer operating system projects one is not a part of).

/opt, /etc/opt and /var/opt exist for a reason. Refer to the filesystem(5) manual page on illumos and the LSB FHS specification, sections 3.13., 3.7.4. and 5.12.:

http://illumos.org/man/5/filesystem

http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#opt...

http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#etc...

http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#var...

section 3.13.2., Requirements, explicitly states:

No other package files may exist outside the /opt, /var/opt, and /etc/opt hierarchies except for those package files that must reside in specific locations within the filesystem tree in order to function properly. For example, device lock files must be placed in /var/lock and devices must be located in /dev.

http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#req...


> as someone doing precisely this kind of engineering for almost three decades, I can assure you that this is not the case

Same here.

> Unless one is an operating system vendor (like Joyent, redhat, SuSE, Debian, Canonical, CentOS, ...) third party and unbundled applications must never be installed into /usr

That's what I wrote: "native package managers should install binaries in /usr/[s]bin and libraries under /usr"

"native" as in: provided by the distribution. Not third party.

https://github.com/nim-lang/nimble/issues/80 is about using /opt/nimble and you wrote "If you implement that, you instantaneously take away the ability to deploy applications through OS's automated deployment", hence the confusion.


""native" as in: provided by the distribution. Not third party."

It was a misunderstanding then: I thought native meant the packaging format native to the OS, not packages which come with the OS (bundled software).

Issue 80 from what I understood is about nimble, a packaging format proprietary to one programming language, providing system wide installation capability; this would indeed preclude using native provisioning technologies because those use the OS's native packaging format and hence know nothing of "nimble".


It's not false: installing any kind of software without using OS packaging, even in one's home directory, makes it a hack. It works for you in your home directory; but what if it now needs to work for everyone, or to be deployed without any human interaction whatsoever?


Nimble is used only at build time for libraries and a few tools. Packaging binaries for production is not its use case.


Ever worked at a financial, military or an intelligence institution? For example, what if the system where the software must be built has no connection to the internet?

Resisting creation of operating system packages, even for dependencies is silly, because doing so is not hard at all: all it requires is reading some documentation, but the knowledge gained can be reused over and over and over again for the rest of one's career. Learning different OS packaging systems is an investment which pays hefty dividends.


I fully agree. I worked on systems that are not connected to the Internet for security reasons and receive "offline" security updates using distribution packages.

It's also very common in security-sensitive environments to give blanket approvals to specific distributions and prohibit cowboy deployments.

(I'm not sure why you are replying to my post)


I must admit that I'm not a big fan of nimble (Nim's package manager) for a few reasons, but on the other side you have apt, pacman, yum, and what-have-you; AFAIK most of them are not even able to install two versions of the same library side by side. The situation is so bad that most applications these days are packaged with Flatpak or something similar, because Linux package management is so broken...

In contrast to e.g. npm, nimble is super fast.


> most of them are not even able to install two versions of the same library

That's entirely by design.

Distributions exist to provide a set of packages that are well tested, and work reliably, together.

And then guarantee that such set will stay the same and receive timely security updates for 3 or 5 or more years so that people can reliably use it in production.


And in the real world you do `configure; make install` then ;)

Edit: or Docker. Everyone I've met so far uses Docker as a kind of "package manager". Database? Oh yeah, use this Docker image. Elasticsearch & Kibana? Sure, Docker. What they actually want most of the time is an easy way to install these stacks, in a non-ancient version, where they can actually use the stuff they need.


"and in the real world you do configure; make install then"

In the real world, one gets an existing RPM .spec file, edits it for the required source one is about to build, and then runs:

  rpmbuild --clean -ba software.spec
Once one has RPMs, there is no need for Docker, as multiple applications can be cleanly installed, upgraded, or removed on the system, and the entire system can be automatically PXE-booted and provisioned by Kickstart without a single line of glue code.

Say no to hacking with Docker and `make install`; do formal system engineering instead.
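For illustration, a minimal .spec sketch for an unbundled /opt package (names, paths, and versions here are made up, and a real spec also needs sources and build steps):

    Name:     mytool
    Version:  1.0
    Release:  1
    Summary:  Third-party tool delivered under /opt per the FHS
    License:  MIT

    %description
    Unbundled software; stays out of the vendor's /usr.

    %install
    mkdir -p %{buildroot}/opt/mytool/bin
    install -m 0755 mytool %{buildroot}/opt/mytool/bin/

    %files
    /opt/mytool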


A shared object library is supposed to use linker mapfiles during its creation to declare interface versions, so that the runtime linker can request and obtain the correct version of the interface. This is how all the traditional UNIX operating environments solved it: no need for multiple installed versions; one library contains all the versions, and the runtime linker maps in the requested code. See how illumos does it; search for linker mapfiles.
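A versioned interface declaration is just a few lines of mapfile. Generic GNU-style version-script syntax, not tied to any real library:

    LIBFOO_1.0 {
        global:
            foo_init; foo_process;
        local:
            *;
    };

    LIBFOO_1.1 {
        global:
            foo_frob;
    } LIBFOO_1.0;

Pass it at link time with `-Wl,--version-script=libfoo.map` on GNU toolchains (illumos ld takes `-M libfoo.map`), and old binaries keep binding to LIBFOO_1.0 while new ones can request foo_frob from 1.1.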


I will, thank you.



