I've used Nim for this year's advent-of-code challenge. Never used Nim before, but there are some great tutorials [1].
It's a nice language with a great standard library. Implicit static typing is great once you get used to it. Some things like tuples or "Object Variants" are fun to use.
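For anyone who hasn't seen them, object variants are Nim's take on tagged unions; a tiny sketch (types are my own example):

    type
      ShapeKind = enum skCircle, skRect
      Shape = object
        case kind: ShapeKind         # the discriminator field
        of skCircle: radius: float
        of skRect: w, h: float

    let s = Shape(kind: skCircle, radius: 2.0)
    echo s.radius   # accessing s.w here would raise a field error at runtime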
The only thing I'm missing is strong debugging support. You can use GDB and there is even some basic support for debugging in VSCode (through gdb behind the scenes). But most of the time I had to fall back to simple "echo" debugging because names were sometimes mangled, variables were not displayed, or lines in Nim did not correspond to lines in the final output, making debugging a chore. Unfortunately, someone once started writing an embedded debugger (ENDB) for all platforms, but development stopped.
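The compile flags that are supposed to help here are `--debugger:native` (emit native debug info) and `--lineDir:on` (insert line directives mapping the generated C back to the Nim source):

    nim c --debugger:native --lineDir:on myprog.nim
    gdb ./myprog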
What's your opinion on this? In some ways I would love to use Nim more in the future, but I'm afraid that the larger the codebase gets, the more I'd suffer just because of debugging...
I wouldn't consider interactive step debugging an essential feature in large programs. My experience with C++ is that once a program becomes "large" or makes enough use of threads, gdb falls over anyway and you're stuck with logging and "print" debugging.
This largely depends on the IDE. The thing is that a language like Nim is 'competing' with languages like Java, C#, or Python, where interactive debugging is a given. I would highly prefer a language that compiles down to C code, but it's hard to give up an IDE like Visual Studio or Eclipse.
I know that many strong developers do not care that much about debugging or can use GDB without thinking. But unfortunately I'm not one of them. And I fear that Nim will stay a niche language because of that...
I'm wondering if the JS compile target could be used for debugging, at least in cases where it's the general logic of the app you want to debug and not some native-code-specific bug.
JS already has a pretty good debugger story for VSCode and other editors, one that has been tried and tested with other languages that compile to JavaScript. If Nim can output good source maps, the JS debuggers should pretty much just work.
I think this is a very good idea but it may depend on what you are doing. For example, if you are using Nim to talk to some C library, I believe you won't be able to compile the C part into JS for debugging (even if you are not debugging the C part of course).
Sometimes I think if you’re spending too much time in a debugger, you’re already lost. Your program is doing something that you didn’t correctly plan for, or build tolerances to handle gracefully.
And if you’re using a debugger to fiddle with variable manipulation at that level, then you’ll probably just make your program even more unstable.
But granted, sometimes you do need a debugger, such as when you're using someone else's library.
Very interesting. Is the memory model essentially a middle ground between GC-based and manual memory management?
> Scope-based memory management (destructors are injected after the scope) - generally reduces RAM usage of the programs and improves performance.
How does Nim handle references? Is this similar to Rust's lifetimes, but GC-ed instead of manually managed?
Is this essentially like Rust, with the main difference that lifetimes (references) are handled via reference counting?
> Shared heap - different threads have access to the same memory and you don’t need to copy variables to pass them between threads - you can instead move them
I think this is the same as Rust thread move semantics?
Seems Nim does some static analysis: if it can infer that a reference is locally scoped in some way, it deterministically injects a call to the destructor at the end of the scope. If not, it falls back to reference counting, which (if using orc) also gets an extra cycle-detection pass so that reference counting can handle cycles. Finally, Nim does some more static analysis on passing: even though a reference is passed to something else, if it can tell the reference isn't used afterwards from the previous place, it will share the data instead of copying it. But since it's not used by the previous owner, you can think of it as moving rather than sharing. If Nim can't be sure this is the case, or sees it isn't, the data is copied.
That's what I understood at least. So yeah, it does look like some kind of hybrid between statically inferable ownership semantics, reference counting (ARC), and GC.
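You can actually watch the compiler do this. A minimal sketch (names are mine); compiling with `nim c --gc:orc --expandArc:main sketch.nim` prints the transformed body of `main` so you can check what was moved versus copied:

    type Node = ref object
      data: seq[int]

    proc consume(n: Node) =
      echo n.data.len

    proc main() =
      let a = Node(data: @[1, 2, 3])
      consume(a)   # `a` isn't used after this, so the compiler can move
                   # the reference instead of bumping the refcount
      let b = Node(data: @[4])
      echo b.data[0]
      # destructor for `b` is injected here, at the end of the scope

    main()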
So if I understand that correctly: Nim will try to do a light version of Rust's borrow checking and then gradually move from static to dynamic solutions for memory management.
I read that Nim's GC is optional. Does this mean it can be deactivated, so that when the static memory management fails, I can manually handle the edge cases that would otherwise be handled by the GC?
> So if I understand that correctly: Nim will try to do a light version of Rust's borrow checking and then gradually move from static to dynamic solutions for memory management
I think that's an okay way to put it, though it might be best to think of it the other way around: it will default to GC and copying data around (pass by value), and will try to optimize toward more deterministic memory management and pass-by-reference where it can.
> I read that Nim's GC is optional. Does this mean it can be deactivated, so that when the static memory management fails, I can manually handle the edge cases that would otherwise be handled by the GC?
It isn't optional like that. It's more that by default all data is passed by value and allocated on the stack, and if you want to create things on the heap, you need to be explicit about it, at which point you can choose whether the pointer is managed by Nim's GC or manually. That's done by having two types of pointers: ref will be under the GC, and ptr won't.
Sort of. You either deactivate it completely with --gc:none, or you do it on a per-variable basis by using a "ptr" (pointer) type, which is outside the GC's realm. Any variable declared "ref" (reference) is within the realm of Nim's GC and will be tracked by the GC you chose (none, boehm, mark&sweep, arc, orc, ...).
Additionally, you can define type bound operators to help the GC do its job, but IIRC (and I'm not sure I understand correctly), that doesn't turn off arc/orc - rather, it tells them how to apply some things.
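To make the ref/ptr split concrete, a minimal sketch (type name is illustrative):

    type Managed = ref object   # traced by whichever GC you chose
      x: int

    var m = Managed(x: 1)       # freed automatically when unreachable

    var p: ptr int = create(int)   # unmanaged: you allocate...
    p[] = 42
    echo p[]
    dealloc(p)                     # ...and free it yourself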
Refcounted refs have copy and destruct semantics, and what you're talking about is basically move semantics over those refs. Refcount increments can be elided if the ref is "moved"; the increment is only needed on copy. Additionally, a moved-from value doesn't need to be destructed (in the case of a refcounted ref, destruction means decrementing the count and deallocating the pointed-at memory if the count goes to zero). A talk about similar concepts in the "Lobster" language explains it pretty well, I think: https://youtu.be/WUkYIdv9B8c
I meant the copy semantics of the ref itself. The copy operation on a refcounted ref increments the refcount of the object it's referring to. The references themselves are values. Another way of looking at this: in C++, the copy constructor of std::shared_ptr. Move semantics as generally applied across values, when applied to such references, is what gives rise to the refcount elision stuff (the move constructor of std::shared_ptr does no increment).
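Nim exposes this machinery directly as type-bound hooks, so you can sketch the shared_ptr-style behavior yourself. A rough illustration (not the compiler's actual implementation; hook names assume a reasonably recent Nim):

    type Handle = object
      rc: ptr int   # pretend this points at a shared refcount

    proc `=destroy`(h: var Handle) =
      if h.rc != nil:
        dec h.rc[]
        if h.rc[] == 0:
          dealloc(h.rc)

    proc `=copy`(dst: var Handle, src: Handle) =
      if src.rc != nil: inc src.rc[]   # copy increments...
      `=destroy`(dst)
      dst.rc = src.rc

    proc `=sink`(dst: var Handle, src: Handle) =
      `=destroy`(dst)   # ...but a move just steals the pointer: no
      dst.rc = src.rc   # increment, and the moved-from value is not
                        # destroyed again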
(I can't see the link, it's offline.) The difference is, though, that there are no expensive heap scans. That is new (it's called "trial deletion"; see the Bacon paper) and makes it "deterministic", independent of heap size, and suitable for hard real-time systems: https://nim-lang.org/blog/2020/12/08/introducing-orc.html
The relative simplicity of the language is a feature. I can imagine introducing Nim to embedded/microcontroller firmware developers that only know/use C99 now. I cannot imagine introducing Rust - while a great language, it has C++-level complexity, and that is not always warranted.
Nim fits great on microcontrollers! A benefit is that it compiles to C and therefore can run anywhere C can. It also lets you reuse the rest of the embedded C environment and build tools, which, given the nature of embedded toolchains, is handy for hitting the ground running. Rust's standard library/runtime binaries are really heavy compared to Nim's.
They're doing great! Nim has proven surprisingly stable, which was a fear I had at first. The core team has improved compiler support and I've not run into any issues updating so far. It bolsters my confidence in Nim.
Even if Nim development were to suddenly "stop" tomorrow, it'd easily be usable for embedded for a decade or more. There's improvement to be made in Nim, but it's a resilient system design. If the same were to happen to Rust or Crystal, for example, you'd have issues keeping the compiler up to date with LLVM, etc. Compiling to C gives a lot of stability for embedded work. Sort of similar to how Delphi Pascal is still usable today.
One surprise came when I had to do some optimizations and went from `-d:debug` to `-d:release`. The debug code added enough overhead that it accidentally satisfied a timing requirement of the high-end ADC we're using. Moving to release mode made the code too fast, and I had to add the delay the ADC datasheet calls for (I had simply forgotten it). Easy to solve, but worth noting.
> Do you have future plans for the new esp32 RISC-V processors?
If a RISC-V processor supports C, then it already supports Nim! The real question is libraries and/or build support for the target. An afternoon of work can get the Nim build set up to work with almost any C/C++ build system. Really, Nim on RISC-V just requires a target board and someone to sit down for a few hours. It's a fun "hacking exercise" :-)
As a side tangent, I'd like to make a first class "pure" FreeRTOS library as it's the most widely used RTOS. That's after dealing with a lot of annoying race conditions in the `esp-idf` and seeing the incredible amount of hacky C code in embedded systems.
Building on FreeRTOS but introducing esp32-like libraries for vfat/files/networking/etc that'd work across microcontrollers would make for an incredibly productive environment (for embedded work).
There's also the possibility of using DrNim [1] to add formal verification for certain algorithms! Nim's effect system is very flexible. Really useful would be using the "guards and locks" [2] to prevent deadlocks (and maybe avoid some locks?) when writing resource management and device drivers. A lot of the esp-idf issues I've run into are race conditions, or slowness, since even `echo`/`printf` must take a lock to protect the system UART. Those are open questions/problems, and I'm starting a new job next month, so it depends on what the needs there end up being.
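The guards part is already usable today; a minimal sketch following the manual's pattern (names are mine):

    import std/locks

    var uartLock: Lock
    var uartLineCount {.guard: uartLock.}: int   # may only be touched under uartLock

    initLock(uartLock)

    proc logLine(msg: string) =
      acquire(uartLock)
      {.locks: [uartLock].}:
        inc uartLineCount   # accessing it outside a locks block is a compile error
        echo msg
      release(uartLock)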
That's very insightful, thank you. Some more questions, since you are the person who shows up for esp32 and Nim in the whole wide world :)
Did you try out the esp-idf 4.3 release yet? I think you mentioned somewhere not to use 4.2, but 4.3 gives you the esp-managed MQTT service on AWS (rainmaker).
Do you have any thoughts on FreeRTOS vs Zephyr whilst using Nim?
> That's very insightful, thank you. Some more questions, since you are the person who shows up for esp32 and Nim in the whole wide world :)
You're welcome. lol, no one else did it so there you are.
> Did you try out the esp-idf 4.3 release yet?
No, my work requires stability, so I haven't tried any of the new esp-idf releases. Nesper does compile with 4.2 if you pass the flag for it, but it's not well tested. If none of the major APIs have changed, 4.3 should work.
It would be good to wrap the AWS MQTT service. I don't know if I'll have time for it, though. But it's really not too hard to wrap using `c2nim` on the header files. PRs welcome! :-)
> Do you have any thoughts on FreeRTOS vs Zephyr whilst using Nim?
I would like to try Zephyr sometime. FreeRTOS is used by AWS and the esp32 so it seemed a good start.
I think the main difference (benefit?) is that Nim has the convenience of a garbage collected language while retaining good and predictable performance (e.g. via ARC/ORC and thread-local GC). Rust has no GC so requires more from the programmer. Personally, I'd rather use a GC language to have less complexity in the code even though I have to spend some more time learning to use the GC correctly.
Nim's ARC is pretty different from black-box GCs you have to poke with a stick, like Go's or the JVM's. It's pretty deterministic where the compiler inserts the additional calls, and you can use --expandArc or look at the generated C.
As a single anecdote -- I've studied Rust before, though it's been a while, and I'm pretty familiar with C++ move-semantics stuff. I decided to read the Rust book again while learning Nim simultaneously, and I had an entt wrapper and graphics rendering working in Nim, without GC, before I finished the borrow-checking chapter of the Rust book.
It really depends on what you're doing. If you're developing an OS, web browser, or game engine, you're going to have to think about memory management very carefully in any event, and dealing with Rust's lifetimes just formalizes the work you'd be doing anyway. But most applications aren't optimized that heavily; as long as you avoid memory leaks and algorithmic pessimizations, you don't have to worry about performance in most cases, and a garbage collector saves you a lot of work.
People do game engines in Nim. ReelValley was even a commercial effort. (This is only to supplement the parent comment, not contradict.)
Your mileage may vary, but several times I've tried some single-threaded thing in Rust, C++, and Nim, and the Nim came out faster (e.g. [1] had the final Nim version at 5.0 ms, C++ at 27 ms, Rust at 42 ms) without much effort put into any of them, and the Nim is likely more readable to someone brand new to the language. Writing generic data structures & algos in Nim is also a true breeze/pleasure. Anyway, they are all "fast and maybe-safe by default" and all respond similarly to optimization care. There is no obvious performance disadvantage (and compile times are much better in Nim, yielding scripting-language-like code iteration).
Rust's approach is only one way to do automatic memory management, and it's regarded as quite restrictive. The scheme Nim implements (ORC) is innovative as well, and more permissive and unobtrusive. I hope it becomes the default in the next year.
For me, it's a full-stack language: write the frontend with karax https://github.com/pragmagic/karax or the react bindings https://github.com/kristianmandrup/react-16.nim, write the backend in pure Nim with any of the available web frameworks (jester/prologue/looper) and ORMs (ormin/norm), and share code and type declarations between the two.
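The backend side really is tiny to get started with; this is roughly the hello-world from jester's README:

    import jester

    routes:
      get "/":
        resp "Hello world"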
Well, it works with TinyCC/tcc, where you can get compiles on the order of 200 milliseconds. While tcc does do most of C99, I don't think Nim requires anything past C89. libc-wise, Nim doesn't even use all of standard C -- minimal stdio/string stuff. You can easily access any C libs, though. So a given program may have more dependencies, but that is up to you (or to your Nim package deps).
Full disclosure: not everything works - different backends tend to have slightly different bugs/coverage, but this also applies to the c++ and js backends. Still, I use tcc as my default backend (in my nim.cfg file) every day and only run into problems a time or two per year.
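For reference, that's a one-line setting (a project-local nim.cfg works too, if I remember the syntax right):

    # nim.cfg
    cc = tcc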
It doesn't depend much on the standard library -- almost not at all, and usually only behind a flag (e.g., if you don't pass -d:useMalloc, Nim will not use malloc).
I think it's no longer maintained, but there's a kernel written in Nim that boots on bare metal. Furthermore, people are running Nim on ESP32, Nintendo Switch, JS (browser, Node), iOS, Android, and more; from that alone it's clear that the C library can't be a significant dependency.
(And ... just about any C compiler around, including GCC, LLVM, TCC, MS C, Intel C, Zig C are supported when using C as a backend).
Nim looks Pythonish (and has some Python-inspired syntax), but it is very much NOT Python.
People coming from Python expecting a "compiled, faster" Python do find a compiled, faster language, but they often bring very weird expectations that this language will behave like Python, though it isn't Python.
When I first found Nim I was looking for a faster compiled Python. I've found something that is much better: static types, less of an emphasis on OOP, static dependency-free fast binaries and macros all work together well to make Nim awesome to work with but what makes Python great is retained: speed of development, ergonomic syntax, and a small learning curve.
> very weird concepts
I'm curious, what weird concepts are you referring to?
The Nim forum has recurring questions about deserializing JSON that only make sense if you assume Nim is runtime dynamic like Python.
And many random questions start with "Python lets you..." or "shouldn't we make this more like Python?", to which Araq rather consistently (and rightly) replies "no, because this is not Python".
> The Nim forum has recurring questions about deserializing JSON that only make sense if you assume Nim is runtime dynamic like Python.
Nim is a statically typed Python, and actually the way you can deserialise JSON is very Python-like, so I'm not sure where you got this from. Here is an example:
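(The code didn't survive here; a reconstruction consistent with the follow-up below, which implies j["foo"] is 42:)

    import json
    let j = parseJson("""{"foo": 42}""")
    echo j["foo"]   # prints 42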
But replace the last line with 3+j["foo"] in Python (and, respectively, echo (3+j["foo"]) in Nim), and the Python prints 45, whereas the Nim compiler greets you with ~40 lines of output asking what exactly you meant.
Just to clarify: I'm not complaining. I'm happy that the compiler bugs you to do 3+int(j["foo"]) if you're going to treat it as an int. But I don't consider Nim a statically typed Python.
(Also: am a very satisfied owner of a dead-tree version of the Nim book. Thanks! Highly recommended. And dom96 is awesome in general)
When I started learning Nim, I did not really expect anything, but time and again I caught myself thinking, ‘This is so much like Python!’ Eventually I lost interest because I do not need a faster Python that is not Python, and I didn’t find much else to be enthusiastic about.
I've written a large-ish amount of Nim code on personal projects while using Python day to day at work, and almost every time I'm working with Nim my brain is screaming at me at how unlike Python it is.
Nim took some inspiration from Python's syntax, but the similarity is only skin deep. The construct of the language is very, very different resulting in code that usually looks and is structured differently.
IMO going into Nim thinking it will be "like Python, but compiled! and faster! with types!" is not a good way to approach it. It is an entirely different language.
Yeah, it values composition over inheritance (mentioned in the official tutorials) and discourages you from using methods and other OOP concepts. It's very much a procedural language. Functional style is not preferred either; imho a good thing, since FP does not result in fast code.
Most of them, actually; it is only lacking in manpower.
It provides the productivity of having a GC around (no need to code around borrow-checker patterns), you can still go as low-level as you want, including writing GC-free code, and it compiles code very fast.
This is subjective, but I prefer Nim's Python-like scoping syntax, i.e. no semicolons or curly braces. Someone else in this thread posted a comment to the opposite effect! That said, I don't see a compelling class of projects to use it on instead of Rust.
I'll consider Nim next time I'm looking at starting a Python project. However, most of the time that happens when I'm making a web backend or doing something numerical, where Python has a library advantage.
Funny how personal preference with regard to syntax is so polarising - I'm coming from a primarily C# background, and while Nim looks interesting, I find the lack of semicolon line endings and the whitespace-based scopes really off-putting!
Syntactic indentation has been very polarizing since at least Python 0.9 (and maybe others) in the late 80s. Like ()s in Nim, Haskell also has a "both brackets or whitespace" kind of vibe.
But, yeah, choice lets different parts of code bases differ. No two ways around that. A ton of people had knee-jerk resistance to Python's lexical style but "got over it" eventually.
There really is a lot more to Nim than just this (its generic/template/macro metaprogramming, user-defined operators, GC options, speed, etc.). Many, many things can be done as libraries that would require direct compiler support in other languages. I would encourage you to give it a try.
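As one tiny illustration of the "libraries instead of compiler support" point, here's a user-defined control-flow construct via a template (my own toy example):

    template times(n: int, body: untyped) =
      for _ in 1..n:
        body

    3.times:
      echo "hi"   # prints "hi" three times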
> There really is a lot more to Nim than just this
I've actually just been skimming some tutorials and docs, and wow, you're not wrong! I'm impressed - Nim seems to be rammed with features, but also easy to get started with (as long as you ignore some of the more advanced stuff).
I've recently been trying to find time and motivation to start learning Rust, but I have to say that (even with horrible indentation syntax :p ) Nim looks really compelling too...
If a little CLI app to do something useful (instead of say, some shell script/batch file or something) is your entry point then you might try cligen [1]. Everyone is different, but from many and varied reports (e.g. in this very thread [2]), it is hugely more probable that you get up & running with some basic skills in like 30 minutes to 4 hours or so with Nim than you would with Rust. Nim really seems to "scale up gracefully" for most.
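To give a flavor, a minimal cligen program looks roughly like this (proc and flag names are my own example):

    import cligen

    proc greet(name = "world", times = 1) =
      ## Say hello. This doc comment becomes the --help text.
      for _ in 1..times:
        echo "Hello, ", name, "!"

    dispatch(greet)   # derives --name, --times and --help from the signature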
Well, cligen is pretty mature/featureful, less work to use and well maintained and is the most common recommendation in Nim. If you happen to like more verbosely specified CLI interfaces like Python's argparse there are at least two pretty close to the Python argparse -- therapist [1] and nim-argparse [2]. There is much more to any programming language than its stdlib. { If you find these insufficiently "drop-in compatible", well, you can write your own port of argparse.py that is closer. :-) That is probably a good exercise. }
Excellent language! I'm using it for scientific computing and like it a lot. There are a few, but useful and good, libraries: Arraymancer, ggplotnim, alea, and stats. There's also great Python integration via the nimpy library.
I also have many things I don't like in every language I use, but I still use them because they're still useful. If we're talking about syntax, for example, for me it is Clojure > Python > Elixir >> Ruby, but I know lots of people who think Ruby is the most beautiful language on earth.
BTW, if we are talking about things that really get on my nerves, it's the fact that Golang doesn't compile when there are unused imports in the module, so I can't simply comment out a random line and try to compile again. But I know people who like this because their editors are configured to remove the import automatically. OK I guess, but I still find this behavior annoying (I would much prefer it to be a compiler warning).
I wish IDEs/text editors enabled code to be rendered and edited in different ways: braces, do/end, whitespace. That could very well be a good way to resolve these disagreements.
Python's whitespace means I can't just mess up the layout and have my formatter fix it. JS and others with {} have way too many symbols flying around for easy parsing.
That's why I really like Crystal! Do/end, low level, elegant. I wish the devs would focus more on wasm and other features that would put it more in the spotlight.
I worked in Python for a while, and once you have a production bug because the indentation shifted around on one line when you moved some code around, you start to get really cautious and nervous every time you cut-and-paste. I much prefer C++ and Java now, where I can just paste and reformat without fear.
(I still use Python, and I'm still going to try Nim some day. Syntactic whitespace is a pain, and, in my opinion, a bad design decision, but it isn't everything.)
Nim has a great feature set and will be a good fit for Python or Ruby teams wanting a more performant language. At the same time, it's more expressive than Golang and will fill a niche Golang couldn't meet.
I like that we now have Golang, Rust, Swift, Nim, and Kotlin. They're all doing new things and excelling at their own niches.
I already use golang for work, and now I'm just doing a little toy project in nim. What I like most about nim is that it has generics and produces small binaries. I'm creating a graphical program with an HTTP server running in the background, and mixing asynchttp with UI threads is a bit of a pain in the posterior. Granted, I'm new to the language; I'm considering using channels, maybe that will make it easier to share state between threads. At least compared to golang, nim hasn't been a smooth experience yet when it comes to threads.
Anyways, I'm just voicing my personal gripe with whitespace-sensitive syntax. Nim is overall a good language to use.
> I'm creating a graphical program with an HTTP server running in the background, and mixing asynchttp with UI threads is a bit of a pain in the posterior. Granted, I'm new to the language; I'm considering using channels, maybe that will make it easier to share state between threads.
Ideally you shouldn't need to use UI threads and instead integrate the async loop into the UI's event loop or vice versa. Happy to advise more if you can let me know which UI framework you are using :)
This being said, I am hoping to implement better support for using `spawn` and channels with async, i.e. the ability to await the result of `spawn` or a channel. I think that will make a lot of use cases much easier.
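For the channels route mentioned above, the plain threads-plus-channels pattern (no async involved) looks like this; a minimal sketch, compiled with --threads:on:

    var chan: Channel[string]

    proc worker() {.thread.} =
      chan.send("state computed on a background thread")

    chan.open()
    var t: Thread[void]
    createThread(t, worker)
    echo chan.recv()   # blocks until the worker sends
    joinThread(t)
    chan.close()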
Best of all, they also provided the much-needed punch in the Java and .NET design rooms, forcing them to revisit not having had a proper AOT toolchain available since version 1.0.
> the lack of explicit start/end markers for code blocks is a very unfortunate choice.
> It was a bold choice by Python, but in my experience it turned out to be a poor choice
Could you explain why? I've heavily used languages with explicit block delimiters as well as Python, in collaborative environments, and the significant whitespace in Python does not really cause me any trouble. There is some slight overhead of making sure of the indentation when copy-pasting, but a decent text editor will make fixing things simple.
As for the theoretical mixing of or trade-offs between tabs and spaces - that's just not a practical concern that exists in my experience.
Of course that's a matter of taste, but the whole thing just seems like a minor detail to me - especially when we have the option of code autoformatting now.
Am I missing something here that makes other people's experiences much worse?
Just to give a bit of context, I've been programming for almost 30 years, using many different languages. I've worked on many things, but of particular relevance I was part of a small team for several years which amongst other things had a 100kloc Python project that I did significant work on. I continue to use Python for all sorts of smaller projects, and for the most part enjoy it a lot. I've worked on large C++ code bases for years as well, and many of the usual suspects.
To take the most subjective part first: I just find implicit blocks much harder to parse than explicit start/stop symbols or keywords. My eyes and brain just seem to have an easier time identifying explicit blocks. It's like writing sentences that end not with a period but with, say, just three spaces.
Yes it's a bit more verbose, but I find it really helps readability for me. This of course might very well be the way my brain works, or the way my brain learned to work as a result of my first programming experiences, hence being subjective.
A bit more objectively, in my experience it seems easier for myself and others to make mistakes at the end of implicit blocks, either having a line indented that shouldn't be or vice versa. It is my experience that with explicit blocks those faults tend to stick out like a sore thumb.
I even have an add-on for my current IDE which, amongst other issues, complains loudly about that. If the language had had implicit blocks it couldn't really do that to the same degree.
I also find explicit blocks easier to use with tools, especially when diffing. When using a language with explicit blocks, I can enable the ignore-whitespace feature of the diff tool, and if, say, an outer "if" was added, you get about two lines that changed. With implicit blocks all the lines of the block change, so you have to scan through them to make sure no "real" changes were made. I tended to spend a lot longer on non-trivial Python merges compared to, say, C++ due to this.
I've also had the misfortune to have to edit Python code using only simple tools, without any Python-magic or similar. It's a royal pain if you have to copy/paste code around.
But yeah, I accept it mostly boils down to preference. I mean clearly there are some more objective metrics, but how you weigh those will be subjective so, yeah...
Thanks a lot for explaining your perspective. It's interesting to me, since all the things you mention are not part of my experience at all, despite working with Python a lot. Even diffing, as you said, is easy because of the "ignore whitespace" you mention. Anyway, it's good for me to get a reminder that I can't so easily generalize from myself. :)
One of my favourite things about working with Vala¹ was being able to use Genie² with the off-side rule syntax where it felt appropriate. Some chunks of code feel better to me when structured in certain ways.
I often find myself wondering why other languages haven't implemented support for an alternative syntax. I suspect the answer is that simply no one cares enough to put the work in, but perhaps there is a better reason.
Yeah, but having 2 ways is a cop-out, IMO - if I'm writing code in a language, or reading someone else's for that matter, there is an expectation of it being idiomatic.
Araq: as soon as I hear “business logic” I stop listening
He felt compelled to make these comments about a conference talk from Facebook, who are arguably running the largest web application in the world, regardless of what you think about Facebook. Such a lack of humility does not instill confidence in project leadership.
Well, offer up your language of choice, and I'm sure we could work together to find some online comments to assassinate the creators' character.
Perhaps instead of trying to drag someone in a public forum you should've reached out to him and brought it to his attention. Gitter is an IRC-esque platform, it's not exactly designed for deep academic discussions and thoughtful prose, it's a chatroom.
Also, since you've seen fit to bring up Facebook's project leadership, perhaps you should look at their MANY failings. Cambridge Analytica is only the most notorious; there are thousands of cases of injustice done to users. For a brand operating behind a mission of "connecting the world," they sure are disconnected from serving some of their users.
Facebook runs an impressive ship, sure. They are worthy of much less of my respect than Araq. I've written <1000 lines of Nim, so I'm no expert, but anybody can see that it's his pride and joy. I applaud him for remaining human.
That would be a great argument, except I didn't start out with the intention of assassinating anyone's character.
The above thread is literally the 3rd search result for "Karax state management". You know, a very reasonable search query when someone's trying to determine whether a frontend framework is ready for production use. This does not bode well.
> Well, offer up your language of choice, and I'm sure we could work together to find some online comments to assassinate the creators' character.
Yeah, okay, let's play that game: you asked for it. Let's try to search for "Reagent state management", also the foremost frontend framework for a relatively obscure language (Clojurescript). I guarantee you will not find Rich Hickey making an ass of himself on the first page of results. I'm fairly certain you won't find it in the results at all, because he is more mature than that.
Let's try again for Scala.js: "Slinky state management". Again, no sign of Martin Odersky acting like an immature jackass anywhere in the search results.
Words matter. First impressions matter. If you don't realize that I'm afraid you are in need of growing up yourself.
I'm on mobile, but I'm certain Odersky was brutalized for "Pimp my library" or Pimp my Classes or something like that some years back.
I can appreciate your perspective a little more from your clarifying it. I stand by my original point that this is something NOT to be handled in public.
You're certainly right that it'd be hard to find Rich Hickey making an ass out of himself. I can't see myself arguing that anyhow. I guess I don't really think Araq made an ass out of himself in making that comment, especially with the context of the conversation. I think there is a lot of pretentious attitudes in front-end circles, and it's certainly very irritating to listen to people talk so highly of these things as if they are pristine towers of brilliance when it's JUST a webpage.
You can certainly get stateful components with Karax. Perhaps I am missing something but Karax itself doesn't need any special support for them.
In the code for NimForum I've laid out each component as a stateful component, one example is the `threadlist` module: https://github.com/nim-lang/nimforum/blob/master/src/fronten.... You can see the `State` type defined there and all components follow this convention.
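For anyone reading along, that convention looks roughly like this (an illustrative sketch, not the actual NimForum code; build with the JS backend, e.g. `nim js app.nim`):

    include karax / prelude

    type State = ref object
      count: int

    var state = State(count: 0)

    proc render(): VNode =
      result = buildHtml(tdiv):
        button:
          text "Increment"
          proc onclick(ev: Event; n: VNode) =
            inc state.count
        text kstring("Count: " & $state.count)

    setRenderer render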
I just watched that Facebook F8 talk Araq gave up on quickly. It is a very high-level, software-engineering-management sort of talk, which actually becomes pretty clear pretty quickly. While at FB they may be engineers, at many other companies they could almost be MBAs. I take Araq's comments as simply a lack of interest in discussion at that level, and the comments themselves as just an explanation to his conversation partner there, hardly worth making a big deal over.
In any event, even if you radically disagree with Araq's personal communication style, I think you should still give Nim a try. Many people do not like Linus Torvalds' communication style and still use Linux with great joy...
[1] https://nim-lang.org/docs/tut1.html