- One's knowledge of Python carries over really well to Nim. Thinking in Pythonic ways, a small number of Nim constructs gets you a long way, without gotchas that block you. For example, you can easily integrate Numpy in minutes.
- Statically typed, fast compilation, readable code, good module import system, binary executable that you can move around easily without installing anything more at distributed sites, and a built-in test and integration framework.
- An efficient web server comes along with it, and it supports a practical and useful template system for small-team web applications. Concurrency: async dispatch/coroutines give you back-end options to scale.
- Nim's Postgres database client driver is glitch-free, easy to use, and it turned out to be a real workhorse.
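The async option above can be sketched with the stdlib's asyncdispatch; the proc name here is made up for illustration:

```nim
import asyncdispatch

# sleepAsync yields to the dispatcher instead of blocking the thread,
# so other futures can make progress while we wait.
proc delayedGreeting(ms: int): Future[string] {.async.} =
  await sleepAsync(ms)
  return "hello"

echo waitFor delayedGreeting(10)   # prints "hello" after ~10 ms
```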
Arrays? As pleasantly nimble as Python arrays, to say the least. Pointers are a lengthy discussion, but suffice it to say that pointers are smartly handled to avoid their pitfalls at runtime (while integrating with external C if you really need to).
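A taste of that array ergonomics, using Nim's built-in seq:

```nim
import algorithm

# seq is Nim's growable array; the @ prefix builds one from an array literal.
var xs = @[3, 1, 2]
xs.add(4)       # append, like Python's list.append
xs.sort()       # in-place sort from the stdlib's algorithm module
doAssert xs == @[1, 2, 3, 4]
```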
Time and again as I went through using Nim, what really stuck with me was that the designers had thought these practical matters through very well, and even in its current pre-1.0 version the language is remarkably complete and robust for practical programmers.
I am a fan of underdog languages - J, picolisp, shen, xtlang (extempore). [3,4,5,6]
What little I’ve done is a pleasure. I like some of Racket’s post-Scheme language features better, but Chicken has a lot of Get Stuff Done libraries (eggs), and compiling a single executable is pretty killer.
Racket will bundle up an executable pretty well too, but it’s hard to compete with Scheme -> C -> statically linked executable for some things.
So this Chicken code:
 "CONS Should Not CONS Its Arguments, Part II: Cheney on the M.T.A." http://home.pipeline.com/~hbaker1/CheneyMTA.pdf
 "CONS Should not CONS its Arguments, or, a Lazy Alloc is a Smart Alloc" http://home.pipeline.com/~hbaker1/LazyAlloc.html
I base this not on any specific understanding, but on needing to compile something for Windows every few years and feeling now that the hoops to jump are fewer.
Last time I checked though they hadn't quite nailed their memory-management model and things were still in a state of flux.
And at this point, having experience with ATS + Rust, the fact that the memory-management model is still wonky is kind of disappointing.
Recently I used Nim for the first time on an official project at my job (at a university). Instead of doing a simulation with Python+Numpy, I decided to do it with Nim and just plot the results with matplotlib. The whole experience was very pleasant.
Speaking of interoperability with Python, there is a great Nim library called Nimpy, which makes it possible to use Nim as a more pleasant Cython: you can keep writing Python and just use Nim for the intensive/slow stuff.
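A minimal sketch of that workflow (the module and proc names are made up; assumes `nimble install nimpy`):

```nim
# mathops.nim -- build a Python extension module with:
#   nim c --app:lib --out:mathops.so mathops.nim
import nimpy

# exportpy makes the proc callable from Python as mathops.fastSum
proc fastSum(xs: seq[float]): float {.exportpy.} =
  for x in xs:
    result += x
```

From Python it's then just `import mathops; mathops.fastSum([1.0, 2.0, 3.0])`.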
It's particularly great for systems programming tasks that require high performance and zero dependencies. For example, Status is currently writing an Ethereum 2.0 sharding client for resource-restricted devices in Nim.
Nim is also awesome if you've got an existing C or C++ codebase, the interop that Nim offers is one of the best (if not /the/ best), especially when it comes to C++ interop. As far as I'm concerned no other language can interoperate with C++ libraries as well as Nim can.
1 - https://github.com/status-im/nimbus
Create a std::vector<Foo> in Nim. Create some Foo objects and append them. Pass the vector to C++, create some Foo objects there and append them as well.
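A sketch of how that challenge might look on the Nim side (compile with the C++ backend, `nim cpp`; the wrapper type and proc names are ours, not from any library):

```nim
type
  Foo = object
    id: cint
  # Map a Nim generic type onto std::vector<T>
  CppVector[T] {.importcpp: "std::vector", header: "<vector>".} = object

proc pushBack[T](v: var CppVector[T], x: T) {.importcpp: "#.push_back(#)".}
proc size[T](v: CppVector[T]): csize_t {.importcpp: "#.size()".}

var v: CppVector[Foo]          # constructed as a real std::vector<Foo>
v.pushBack(Foo(id: 1))
v.pushBack(Foo(id: 2))
# `v` can now be handed to importcpp'd C++ functions, which may
# push_back more Foos on their side.
doAssert v.size.int == 2
```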
- the js backend doesn't have source maps (kind of a big deal to me)
- some error messages are head-scratchers (I seem to remember the error for appending to an immutable array not being clear)
- docs could use love (e.g. seeing more examples of macros in action)
- devel (their nightly compiler) can be rough (e.g. I found the "strings cannot be nil" cutover a bit rocky -- my own damn fault; I can't go back to 0.18 after being on 0.18.1)
- the big one I think, however, is adoption. I keep hearing "i'll just use rust or go". That's legit as they're also awesome.
Nim's stdlib is massive (too big?) and there are tonnes of high-quality packages out there. You won't be left thinking... well, crap, looks like I need to roll this redis layer myself.
EDIT: Formatting. How does it even work?
They aren't merged into the upstream compiler yet because I wasn't sure if I wanted to refactor the jsgen with them, but otherwise they are almost there: I use them in a personal project on a forked branch.
I really really think "underdog" is the best way to describe Nim because of this.
It has fewer modules than Python.
- No Nim v1.0 yet, despite this we do our best to create a deprecation path for everything that's possible.
- No big company like Google/Apple supporting the language.
- Community is smaller than that of Go/Rust.
It's looking pretty promising though. Especially if you are a fan of python's syntax.
You can mix and match manual memory management and GC-managed types in the same codebase.
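A minimal sketch of mixing the two in one file:

```nim
type Node = object
  value: int

# GC-managed: `ref` objects live on the traced heap and are freed
# automatically when unreachable.
var managed: ref Node
new(managed)
managed.value = 1

# Manual: `ptr` plus alloc0/dealloc bypasses the GC entirely.
var raw: ptr Node = cast[ptr Node](alloc0(sizeof(Node)))
raw.value = 2
doAssert managed.value + raw.value == 3
dealloc(raw)   # our responsibility; the GC won't touch it
```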
A couple of bonus facts for you:
- they've got an effects tracking system where you can have the compiler track (and whistleblow!) which functions are pure or not
- their multi-phase compiler allows you to read in source code at build time (from files or external programs!)
- their macro system is typesafe as it operates at the AST level
- the guy who created it will always tell you how he feels
- again with their macro system... there are FP libs, pattern matching libs, and OO libs that can "literally" transform the language to fit your preference
- and one more just for you: they don't support tail call optimization (ducks)
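The effects tracking and build-time evaluation from that list can be sketched like this (`double` is a made-up example proc):

```nim
# `func` is shorthand for a proc with {.noSideEffect.}: the compiler
# rejects the body if it does I/O or touches global state.
func double(x: int): int =
  # an `echo x` here would fail with "'double' can have side effects"
  x * 2

# The multi-phase compiler evaluates ordinary code at build time;
# staticRead similarly pulls whole files in during compilation.
const answer = double(21)   # computed while compiling

doAssert answer == 42
```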
What does this mean?
I have just seen a ... competing language team spending developer time on purging the code of what they call ableism. Apparently it's now offensive to talk of a sanity check or to facetiously refer to OCD in a comment.
At least we may hope the Nim team lacks manpower for such idiocy.
All I found on Google was a gist of someone saying "sanity check" should be avoided ("health check", too), and some issues and pull requests in projects that were not languages.
One of the latter is clearly trolling to test Linus Torvalds' resolve to be polite, complaining about "ableist/saneist" terms, including "silly", on several of Linus' repositories.
You can tell it is a troll because it is just copying/pasting the exact same complaint, only changing the name of the project. It doesn't even bother to change the list of alleged problematic words and their counts, so for example it claims that pesconvert has 144 occurrences of "sanity check" when it actually has 0. In fact every single claim in that one is wrong. The only word from the complaint actually in pesconvert is "stupid", which occurs one time, not the six times claimed. The second sign it is a troll is that it comes from a GitHub account created just before the complaints were posted.
> Thank you so much, it's so much more inclusive now. My rabbi will be pleased.
This galloping madness is beginning to scare the shit out of me.
>We realise it's a troll, but we had an internal discussion and we decided that we wanted to remove these anyway. We're not being terrorised into change just because a (bad) troll appeared, they just happened to bring attention to a real issue.
Either way, the decision is ... whichever derogative may not yet be blacklisted, sorry, interdicted.
It has generics, tuples, tagged and untagged unions, it even has C++20 concepts.
I can see why it'd be interesting to someone with no C/C++ knowledge to get into systems programming though.
So the whole expression just means "a public constant string (automatically inferred type) named `hand`".
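In other words, something like this (the value is made up):

```nim
# `*` is the export marker; the type (string) is inferred from the literal.
const hand* = "royal flush"

doAssert hand == "royal flush"
```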
1 - https://nim-lang.org/docs/gc.html
I read Rust is basically a safer drop-in replacement for C.
Pros: Nim is about as easy to program as Python, has the same speed as C, a working FFI, and all the usual bells and whistles of modern "batteries included" languages, like a package manager with lots of packages. It is garbage collected, which is good -- about this, some will disagree, of course. It has nice high-level constructs and doesn't attempt to reinvent OOP or anything like that; the language is fairly straightforward. It also has a powerful macro programming facility, though it is cumbersome to use.
Cons: The community is too small and so far has not attracted CS people or many professionals, and there is the usual bulk of abandoned or undocumented packages you get with these kinds of languages (more on that below). It has a few controversial syntax choices (e.g. identifier case rules) and also a number of semantic misfeatures that ease compatibility with C. It has support for native threads but no green threads like Go, and consequently also no green-thread -> OS scheduling, which would be ideal. (You'd like the language to do some flow and dependency analysis and parallelize to green threads automatically, which are then mapped to OS threads, but AFAIK only a few experimental languages can do that so far.) Its garbage collector is not optimal and not as performant as Go's, I believe. It uses whitespace for blocks, like Python.
Overall, Nim is a pretty good general-purpose programming language.
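As a taste of the macro facility mentioned above (the macro name is made up): macros receive the AST of their arguments, so this one captures an expression's source text at compile time and pairs it with the runtime value.

```nim
import macros

macro withSource(e: untyped): string =
  let code = newLit(e.repr)   # the expression's source text, as a literal
  quote do:
    `code` & " = " & $(`e`)

doAssert withSource(2 + 3) == "2 + 3 = 5"
```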
Rust, Go, Nim, Elixir, Julia, Crystal, etc. do not have GUI frameworks that are ready for prime-time use in production, except maybe for a few interfaces to web apps. Their native libraries (like Nimx for Nim, duit for Go, Conrod for Rust) are unfinished, limited, or simply too impractical, and bindings to Qt and wxWidgets are either undocumented, incomplete, or suffer from weird license restrictions (like a Go Qt binding I've taken a look at, whose name I forget). Some of the libraries also create monstrously large executables.
For command-line tools you can use any of them, just like thousands of other languages. For web programming you can also use them, but then there are also Common Lisp, Racket and plenty of other languages good for that. For desktop applications with a modern GUI, on the other hand, you will be too limited with any of these languages and will constantly chase incomplete bindings or try to figure out how the bindings work. (Most docs for bindings simply assume that you've used the respective library a thousand times in C or C++, in which case you could, frankly speaking, probably save yourself the trouble and do it in C or C++ anyway.)
For this reason, I've decided to use Lazarus for my GUIs. Qt with C++ or Python is also a good choice. I also use Racket, but its native GUI is too limited, and still have hopes for Go.
The interpreter currently relies on refcounting for simplicity and to provide deterministic latency behaviour. It's (currently) a simple s-expression interpreter, partially because (at least at first) I want to use it to interoperate with another Scheme system by sending s-expressions back and forth, and that's the lowest-latency option. I've got various ideas in the areas of debugging and typing that I'll try to implement, and if I succeed on that path and start wanting to use it as my main Scheme system then I'll certainly move on to compiling to bytecode or using a JIT.
I just started a month ago, and have to nail down licensing with my employer, when that's settled I'll publish the code.
Chez is very mature and probably the fastest Scheme available. It's so good that the Racket team is currently converting Racket to Chez as the backend language and compiler.
PS. consider it to be a Scheme really optimised for doing work with Qt. It appears that binding Qt well into another language is difficult (Python being about the only one where that was done successfully?), so I'm taking the approach of working "close to the metal" (C++), do things that are better done in C++ (like subclassing widgets) there, then making interfaces to scm as I go (i.e. make it so easy to make interfaces that this is a reasonable approach to do). That way I don't have to do all the work of binding the entirety of Qt, I can use qtcreator as I see fit, I can decide on a usage basis how to interface to a widget (modal in the Scheme view but not modal in Qt's view?, when does it need destruction in that case?). I guess the knowledge or abstractions coming out from this might be portable to other implementations, though.
Also again, I also do have code in another implementation (Gambit) that will have to stay there for the time being; communicating with the GUI via sexprs will of course be an indirection, but given that web programming could work the same way (the Scheme implementation on the server communicating with the one in the browser) it looks like the right approach to try for me. That will also mean that the approach will work with any Scheme implementation as the server (Chez, Chicken, whatever).
1. no internal drag&drop from control to control in a frame or from frame to frame, like from an editor snip to a listbox, or from a listbox item to a text field or canvas
2. text% and editor-canvas% are too slow for some applications, esp. for displaying lots of data fast or styling snips
3. text% does not allow associating arbitrary data with ranges (strange enough, list-box% has this)
4. text% uses a nonstandard format, neither RTF, XML, HTML, rich text is not easily drag&droppable or copy&pastable to other applications in a platform-compliant way without writing your own converter
5. no images in list-boxes, and generally speaking no advanced custom "grid" control (e.g. also no editable fields in list-boxes or similar table features)
6. no images in menu items
7. no toolbar, you have to make one on your own and it will not be platform-compliant (macOS)
8. no docking manager or other advanced user configuration controls (we could implement these easily if we had frame-internal drag&drop, but we don't, so we can't)
9. no built-in input validation for text fields, like limiting one to integers, floats, dates, you have to do that on your own
10. it appears that some icons are not properly installed by Racket's deployment functions even if you specify them in the #:aux argument of create-embedding-executable
11. no access to tray icon
12. no way to obtain system colors from the system color scheme for custom controls, so you cannot create theme compliant custom controls
13. clipboard operations are limited (unless this has changed since I checked last time), meaning e.g. you cannot easily implement a "receiver" for some mime type data
14. related to the previous one, only whole frames can receive drag&drop objects and you basically just get a file path
Those are all the points off the top of my head. For me, only 1, 3, 4, and 5 are problematic, and 1 is a show-stopper. 3 is also important, since implementing it on your own can lead to a vast range of problems (you'd have to constantly keep a data structure in sync with the snips in the editor).
Why is this a con?
>Some observers objected to Go's C-like block structure with braces, preferring the use of spaces for indentation, in the style of Python or Haskell. However, we have had extensive experience tracking down build and test failures caused by cross-language builds where a Python snippet embedded in another language, for instance through a SWIG invocation, is subtly and invisibly broken by a change in the indentation of the surrounding code. Our position is therefore that, although spaces for indentation is nice for small programs, it doesn't scale well, and the bigger and more heterogeneous the code base, the more trouble it can cause. It is better to forgo convenience for safety and dependability, so Go has brace-bounded blocks.
(Well, given it's Rob Pike and looking at some of his work, this time it may well be close to the truth... ;))
Make a fast Rails in Crystal and a large percentage of the Ruby community will jump in. In Python you can't do that; the community is fragmented. And you already have Julia...
I’m still not sure what it is or why anyone would use it. It looks extremely complicated and verbose.
But that was before 1.0. Updating the example and applying some more tweaks, it’s down to 145: https://github.com/tormol/tiny-rust-executable
actually turns out to be important in embedded work
I wish it would gain more traction.
I like the pythonic syntax and easy 'fast code'.
If that doesn't bother you, Nim is really neat.
So the question is: why do folks completely avoid a language for a single relatively bland syntactic feature? Is there some cost I'm not aware of, or is it just stylistic/aesthetic?
Personal preference isn't a good enough reason?
I don't like whitespace-sensitive languages because I've seen what happens in Python when somebody accidentally adds a couple of lines formatted with spaces into a file formatted with tabs. I've seen git and svn mangle tabs. Long blocks are harder to track. Refactoring functions and nested ifs is much harder to keep track of. If you somehow lose all of the formatting in a block or a file, it's much more difficult to recreate the code if the only block delimiters are whitespace.
Essentially, white space delimiters are just one more thing that can go wrong and ruin my day. I try to keep those to a minimum. That said, Nim is my new go to for short scripts. I wouldn't write anything large in it for the reasons mentioned above.
Out of your list, the only one that seems like a real problem is recreating blocks if the code lost all formatting.
If a language either disallows tabs entirely or will refuse to run/compile code that mixes tabs and spaces in the same source file, you obviously can't get errors related to mixing tabs and spaces.
Whitespace is intended for human readability, with spaces and tabs not having any inherently contradictory meaning. In a whitespace-sensitive language, you have to set your text editor to make those invisible characters visible so you can be certain to use only the correct invisible character, then employ multiple such characters, based on the necessary level of indentation, to do the work of a single pair of braces.
"Format your code as you would have done anyway but just leave out the curly braces".
It reduces rather than increases the number of things I have to think about.
Curly braces make this a non-issue, and they're visible. I don't want to depend on non-visible characters for behavior, but it's only a personal preference.
You have to do that in any language. Ever worked on C/C++ files where the indentation is different from your settings? I see only two choices: either you temporarily adapt your settings, or you just cringe your way through.
The third alternative (use your own settings anyway), is just lazy and mean.
> I don't want to depend on non-visible characters […]
There's an easy solution. First, either forbid or mandate tabs for indentation. Second, forbid trailing whitespace. That way all spaces will be visible.
I'm not aware of ever having to do any of these things. I'm not even sure what you mean by "configure". Every editor I've installed has always done the right thing out of the box and every contributor who isn't completely incompetent has done the right thing naturally.
Compared to my experience in curly-brace languages, where indentation holy wars abound and it's actually painful to read code in a brace style you're not used to -- I have more respect for the wisdom embedded in Python and PEP8 daily.
Joking aside and as silly as it is to talk about "objective aesthetics" - surely you can see an argument for "less clutter == better" - as much as you've trained yourself to not see the braces, they add nothing that indentation doesn't already provide other than visual noise.
Objective aesthetics? As far as I'm concerned, the tabs vs. spaces debate has basically proven that they don't exist for programming languages... (I'm rabidly pro-tabs, by the way). Maybe some of it is "trained myself not to see the braces", but it looks wrong without them.
All that aside, you just moved the argument from "My way is faster/less work" to "My way looks better", which is somewhat objective -> subjective.
Or you set it to replace one with the other, and not bother you.
How so? It's a one time change to a setting in your editor, vs thousands and thousands of keystrokes.
No extra work is less work than even a little.
I also think there's a way to write F# that doesn't have significant whitespace, but uses a lot more keywords. Verbose syntax, I think that's called. I almost never see examples written that way, though.
Out of curiosity, why? What’s wrong with using whitespace to organize things?
But yes, we do have sponsorship now. I don't consider it at the same level as the likes of Google/Mozilla though. If it wasn't for those companies I doubt Rust/Go would be alive today. You've gotta give us some points for our persistence :)
On the other hand, a language like Nim is starting from scratch. They don't have a specific target audience. There is so much competition in that crowded space that languages which are roughly on par in features and use similar paradigms are fighting for the attention of the geeks who would actually take the time to look, rather than it being bestowed upon them by the people already providing their tools.
Languages are not general purpose in the truest sense. They have their own little ecosystems where they're expected to be used and their proponents are often bubbled in that ecosystem. It's easier to migrate to a new language in your ecosystem than to move to a new ecosystem. Someone just hoping to solve a specific task will pick the tool that has an established history of being practical in that domain and won't take risks with new languages.
Personally, I've looked at Nim and find it interesting, but not novel enough that I feel I need to use it. What are the killer features that only Nim provides and you feel you can't do without them after using it?
The first reference to the FQDN ends with "-" before the TLD, apparently inadvertently.
Here is an example from my book that uses both: https://github.com/dom96/nim-in-action-code/blob/master/Chap...
Alternatively, there's an OpenMP for loop you can use via `||`:
for i in 0||1000:
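A complete, minimal version of that loop; `||` comes from the system module and annotates the loop with `#pragma omp parallel for`, so it needs OpenMP flags to actually parallelize (without them it compiles and runs serially):

```nim
# nim c --passC:"-fopenmp" --passL:"-fopenmp" par.nim
var squares = newSeq[int](1001)
for i in 0 || 1000:        # bounds are inclusive, like `..`
  squares[i] = i * i
doAssert squares[1000] == 1_000_000
```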
Yeah, we need language research to keep devising new features and more efficient ways of programming, but this is different.
I wish the world would get behind a couple well thought out languages that cover most programming needs (functional, systems/bare metal, scripting) and stick with those.
I've seen some ridiculous and unsustainable stacks at some shops because everyone got to pick their favorite language. Then the morale of subsequent hires is in the toilet because there's such a cognitive load to learn all these little crap languages.
I feel like some of these languages come about because someone needed to do some task and didn't understand, or take the time to learn, how to do it in an existing language. Among the latest versions of the mainstream languages, there is no programming paradigm you cannot use.
And where these obscure languages REALLY fall down is tooling. Got debugger support for this? Got perftool support? No, of course you don't.
With a small number of languages, work can be put into serious tooling, and fixing compiler bugs, rather than a few devs spread thin trying to keep up with the bugs in their hobby language.
Uh, that's basically been the case for... 20 years or so? I mean I dislike Java and C like the plague, but they're nonetheless the de facto enterprise standards at the moment for performance-critical work and... everything else.
Just look at the TIOBE Index, for example. I'll admit that this is a pretty skewed statistic, but it's nonetheless pretty representative of the basic idea behind it.
There is, of course, lots of services/software in other languages -- and often with very good reasons (take Erlang, for example, for layer <4 routing), but that's basically because every other language was insufficient for the use case they had. Or because they wanted to deliver the software today, and not next week (I'm looking at you, PHP).
As an aside: <5% is still a staggering amount of code, so please don't feel marginalized, everyone ;)
It seems like a community needs huge incentives to avoid churn and fragmentation eg (a) strong backwards compatibility commitments, (b) big backing company to do the endless boring stuff, (c) strong benevolent dictator, ...
The same is true for languages except there’s no outer “scope”, except for platforms like iOS that might dictate toolchains or a company where CTO might make such choices.
But trying to avoid fragmentation among hackers seems like barking up the wrong tree.
Why do you say that? The whole point of Julia is to create a language that's similar to Python in ease of development but is natively fast. Do you think it fails at this?
In the general case that can't really be true, because the Lua interpreter is written in C.
With regard to GC languages in general, if you spend a lot of time working around the GC by doing things like object pooling, which is really just reinventing manual memory allocation, you can get close to a non-GC language in terms of performance.
GC languages are obviously fine for plenty of use cases, and for some code snippets they can even be faster, but there is no way to make a GC free: there's going to be some overhead no matter what you do.
LuaJIT can be faster than C for some code. Just like C can be faster than someone's naive hand coded assembly.
That doesn't change the fact that in the general case C is still faster, and there are classes of critical high performance code that have to be written in C (or Assembly, Rust, or even Fortran). Sometimes, manual memory management is necessary to get acceptable performance (also determinism is occasionally required).
All else being equal, GC is always going to be slower than non-GC because a GC introduces unavoidable overhead.
I've worked in this space btw and I've never seen any evidence that LuaJIT is actually faster than C for anything outside of very specific micro-benchmarks.
The vast majority of benchmarks I've seen are down to LuaJIT performing specific optimizations out of the box that the C compiler used in the comparison can perform but doesn't.
In particular, the last time I looked at LuaJIT vs C++ benchmarks, the C++ compiler flags weren't set to allow the use of SIMD instructions, while LuaJIT uses them by default.
There was another recent example I saw where LuaJIT was calling C functions faster than C in a benchmark. Then someone pointed out what the LuaJIT interpreter was actually doing, and how to implement the same speed up in C.
Java people made the same arguments years ago: "Java is just as fast or faster than C++". You'll notice that after 20 years of comparisons, no one who writes high performance code for a living makes that claim.
It's true though. https://lmax-exchange.github.io/disruptor/files/Disruptor-1....
How many AAA game engines are written in Java?
> I wish the world would get behind a couple well thought out languages that cover most programming needs (functional, systems/bare metal, scripting) and stick with those.
> With a small number of languages, work can be put into serious tooling, and fixing compiler bugs, rather than a few devs spread thin trying to keep up with the bugs in their hobby language.
That assumption probably doesn't hold. If the authors of Nim weren't working on Nim, there's no guarantee they'd go work on tooling for some other language. Furthermore, who would decide which small group of languages people can work on?
But it does hurt in many cases; there is only so much programmer time, and if people invested their time in fewer projects, those projects would advance quicker.
I am as guilty of this as probably most people here, having written frameworks, ORMs, parsers, libs, languages, game engines, etc. instead of helping out on existing projects. I know I did it because I thought I could do better than what was there; usually that was false, though sometimes, at least I thought, it was true. For the bigger ones (frameworks, Linux distros, languages) it was always false, so I would say it does hurt; I wasted time and the world did not improve. Only I improved, as I enjoyed it and learned, but I would have done as well if I had helped an existing project.
Another point to that is that a lot of people, especially people who solely use computers to make money to survive, really do not like choice in my experience. A lot of my colleagues ask me what to use and hate the fact that when they Google that there is a choice. They do not want choice, they want to use what is the de facto standard for the particular case in every case. With choice and fast change, the de facto standard isn’t there and even more experienced people experience the same angst beginners have when learning frontend web dev. It causes a lot of (probably underestimated) stress in the workplace.
This was before the obscure language C was invented.
And as far as debugging is concerned, debuggers are overused. Reading “Programmers at work” you see that 1. They used the equivalent of printf for debugging 2. Mostly they had not read TAOCP 3. They did not like C++.
Why enjoy anything after all?
Bottom line: If you're not using Prolog you are almost certainly wasting your time.
Almost all PLs you're likely to be acquainted with can be thought of as syntactic sugar over Lambda Abstraction. Prolog, from this POV, is syntactic sugar over Predicate Logic. It's actually a fantastically simple language, both in syntax and in its underlying operation, which could be summarized as Logical Unification with chronological backtracking.
I have been working with Prolog for only about two months but I am already way more productive. Typically a piece of code that might have been five pages of Python will amount to a page or less of Prolog. I hasten to point out that 1/5 the code means 1/5 the bugs, but it is also much more difficult to introduce bugs into Prolog code, so the total ratio of bugs per unit of functionality is much lower. Further, Prolog code typically operates in several "directions", e.g. the Prolog append relation (not function) will append two lists to make a third, but can also be used to extract all prefixes (or suffixes) of a list, etc.; a Prolog Sudoku program will solve, validate, and generate puzzles from one problem description. So you get more functionality with less code and many fewer bugs. It's also very easy to debug Prolog code when you do encounter errors. I'm spending fewer hours typing in code, fewer hours debugging, and I'm still more productive than I was. Looking back, I estimate that as much as half of my career was wasted effort due to not using Prolog.
I'm implementing a sort of compiler in Prolog and I am impressed with the amount of work I haven't had to do. I'm beginning to suspect that most high-level languages are actually a dead-end. For efficient, provably-correct software generated by an efficient development process, I think we should be using Prolog with automatic machine-code generators.
Last but not least, Prolog is old. It's older than many of the folks reading this. Almost everything has been done, years ago, often by researchers in universities. Symbolic math? Differentiation? Partial evaluation? Code generators? Hardware modeling? Reactive extensions? Constraint propagation? Done and done. You probably can't name something that hasn't been explored yet.
Anyway, perf tools are only a must-have when performance becomes a problem, step-through debuggers are way overrated, there are many features that no amount of libraries or tools will give you, and ecosystem quality matters about as much as size.
IMO, the goal of languages isn't to provide many features, but to provide constraints, so that programmers can stick to a set of rules that their peers can understand and agree upon. The most powerful language is the machine code for whatever CPU you're using, because there are no constraints, you have all of the power of the processor.
I both agree and disagree with this.
I love the creativity and imagination that goes into a language like Nim. But at one time I thought the same about Python! Sometimes obscure little languages become important.
On the other hand, the tooling statement is dead on. At the point I find out it doesn't have a step-through debugger I'm just reading the docs for fun and then moving on.
It is a social problem, not a technical one.
#line 42 "actual_source.lang"
Sounds pretty productive to me.
In all seriousness, I would love to write this myself... but I physically cannot pour my heart into any more Nim projects. We need more people to help us out!
The only way I could solve this and make Nim a viable language for these projects is to spend my spare time tinkering, and that's unlikely to happen.
The situation is particularly problematic for something like gRPC where you really have to know Nim well to create a good tool.
As cool as Nim is, I've gotten to a stage in my career where I just want tools that work. That's the attraction of Go these days (though Rust is looking nearly as good now) -- most libraries already exist, and you can focus on the task at hand, instead of being forced to invent a bunch of wheels first.
Edit: OK, there's a library called "bigints" - it's not clear how "natural" the resulting code will be, but I might experiment.
Being stack based and having some nice compile time evaluation features should make it very performant.
Assume I'm not a complete numpty, but are there directions for:
... anywhere? I'm not at all familiar with retrieving and installing libraries from git, nor with Nim and how/where to install code. All assistance gratefully received. Pointers to existing detailed instructions are also very welcome. I can investigate all this myself by trial and error, but if someone's done it before it saves me the time and effort of making all the mistakes again.
then `nimble install https://github.com/status-im/nim-stint`
Unfortunately I didn't have time to focus on documentation but the easiest way to get started with Stint is to check the tests: https://github.com/status-im/nim-stint/tree/master/tests.
The casts are there to check binary representation compatibility in the tests and are not needed otherwise.
Alternatively you can check:
- https://github.com/FedeOmoto/nim-gmp, a GMP wrapper (very low-level/C-like and not updated since 2015)
- https://github.com/status-im/nim-decimal, an arbitrary-precision floating-point wrapper for mpdecimal (used by Python). Unfortunately it's very low-level/C-like at the moment.
"Der Mensch ist doch ein Augentier -- schöne Dinge wünsch ich mir."
Humans are creatures of the eye -- beautiful things are what I wish for.
No. CoffeeScript, TypeScript, and the like don't really count; they are relatively thin layers. Several options are in the works: implementations of Python and .NET (Blazor), and languages like Rust or Nim.
The crucial point is that it needs to have a compelling framework like vuejs or react. Then the toolchain needs to be superb and result in small enough webassembly packages.
No project is currently achieving this, to my knowledge. But the race is heating up!
OCaml, or its cousin ReasonML, is well worth looking into. Compile to JS with BuckleScript. Compelling frameworks are bucklescript-tea for an Elm-like experience or ReasonReact as a layer over React. Both frameworks are excellent.
It may still be in the race...
Why is being thin a disadvantage?
But those advantages may not be good enough to justify leaving things like React or Vue.js (or similar) behind, or gluing the new paradigms onto those frameworks. Rust, for example, has a React-like framework for use in WebAssembly.
Its purpose is to become a better C++ for game development (high performance, simplicity).
That being said, I've got huge respect for the guy. I asked him about Nim in one of his streams and his reply was very courteous and reasonable.
You write about the 'next great thing', but I was merely talking about 'a better C++ for game development', which narrows it down to a very specific use case. And I don't see what is wrong with someone who has had some success building games also looking at the tools he uses for building them.
Besides that, looking at the few examples I have seen, I didn't find the language elegant; it seems to have been inspired by other modern languages like Go (e.g., no parentheses around if conditions) while explicitly rejecting other ideas like the GC.
That's ... like the most important part?!?!??
It means "programmers who agree with Jonathan Blow and his opinions on what good programming is."
JAI doesn't implement abstractions (like objects or a GC) that might get in your way. JAI supports gradual refactoring: you can change your mind about the way memory is laid out ("SOA" vs. "AOS", i.e. struct-of-arrays vs. array-of-structs; delegates) and still use the same code. Want to shoot yourself in the foot with uninitialized variables? Fine!
In general, if you know what you are doing, JAI will allow it. JAI is similar to C in spirit, but C is a rather poor implementation in comparison.
For instance, C++ has operator overloading, which when misused leads to horrible, unreadable code. Still, the feature can be useful in some cases (a vector/matrix library comes to mind). Java, on the other hand, avoids this feature because it doesn't trust programmers with it.
As for JAI, it mostly means Jonathan Blow and his peers. I mean, he's obviously making a language for himself and whoever he hires. There's no point in catering to a different, wider audience. (The reason there's no point is because he doesn't need a whole ecosystem to support his language. He's trying to make it worth the effort even if he was the sole user.)
Minus: yet another proprietary package manager, bypassing the operating system's software management subsystem (operational maintainability)
Plus: finally a language which compiles to binary executable machine code.
Plus: transpiles to multiple "backend" languages (but transpilers incur a performance penalty).
Plus: can link with shared object libraries, thus having instantaneous integration choices with an enormous corpus of existing software.
Plus: can turn off garbage collection, which is ideal for command line programs designed to be chained with the shell pipe mechanism.
False. Nimble installs packages only in the current user's home.
It's one of the few package managers that does not encourage the dreadful "sudo pip/npm/... install"
To be honest, now that I'm thinking about it again, I think you're totally right.
1 - https://github.com/nim-lang/nimble/issues/80
For every platform nim supports, native operating system package(s), delivering into /opt/nim (and system-wide configuration into /etc/opt/nim) should be provided. This is not some whim, it's a formal system engineering specification. See LSB FHS, /opt, /etc/opt and /var/opt sections, or the illumos filesystem(5) manual page for the same. (Why aren't you as a third party supposed to deliver software into /usr as you intend in that Github issue?)
If you do not do it this way, it will impede adoption, because it's one thing for a developer to play around in a language and another for a system engineer to make it operational: system engineers' time is usually in extremely short supply, and if they have to package your language for you, they could easily just move on to writing software in a different language that they can mass-provision natively, and Nim will stay an underdog. Operational maintainability is the end goal; it means mass deployment without extra code or software like Docker, using only native subsystems. Sooner or later things must go into production.
Never bypass OS packaging. Unless you are an OS vendor, never deliver anywhere into /usr, not even /usr/local.
> native operating system package(s), delivering into /opt/nim (and system-wide configuration into /etc/opt/nim) should be provided
No, native package managers should install binaries in /usr/[s]bin and libraries under /usr
> Never bypass OS packaging. Unless you are an OS vendor, never deliver anywhere into /usr, not even /usr/local.
It is very unfortunate for me that it seems this way to you, but as someone doing precisely this kind of engineering for almost three decades, I can assure you that this is not the case.
"native package managers should install binaries in /usr/[s]bin and libraries under /usr"
Unless one is an operating system vendor (like Joyent, redhat, SuSE, Debian, Canonical, CentOS, ...) third party and unbundled applications must never be installed into /usr: by doing so, one risks destroying or damaging systems in production in the case where the vendor decides to deliver their own version of the same software, and the vendor's upgrade overwrites one's own software and configuration. Long story short: don't deliver your unbundled software into someone else's space. /usr belongs to an OS vendor and to that vendor alone (vendor in this context also includes volunteer operating system projects one is not a part of).
/opt, /etc/opt and /var/opt exist for a reason. Refer to the filesystem(5) manual page on illumos and the LSB FHS specification, sections 3.13., 3.7.4. and 5.12.:
section 3.13.2., Requirements, explicitly states:
No other package files may exist outside the /opt, /var/opt, and /etc/opt hierarchies except for those package files that must reside in specific locations within the filesystem tree in order to function properly. For example, device lock files must be placed in /var/lock and devices must be located in /dev.
> Unless one is an operating system vendor (like Joyent, redhat, SuSE, Debian, Canonical, CentOS, ...) third party and unbundled applications must never be installed into /usr
That's what I wrote: "native package managers should install binaries in /usr/[s]bin and libraries under /usr"
"native" as in: provided by the distribution. Not third party.
https://github.com/nim-lang/nimble/issues/80 is about using /opt/nimble and you wrote "If you implement that, you instantaneously take away the ability to deploy applications through OS's automated deployment", hence the confusion.
It was a misunderstanding then: I thought native meant the packaging format native to the OS, not packages which come with the OS (bundled software).
Issue 80 from what I understood is about nimble, a packaging format proprietary to one programming language, providing system wide installation capability; this would indeed preclude using native provisioning technologies because those use the OS's native packaging format and hence know nothing of "nimble".
Resisting the creation of operating system packages, even for dependencies, is silly, because doing so is not hard at all: all it requires is reading some documentation, and the knowledge gained can be reused over and over again for the rest of one's career. Learning different OS packaging systems is an investment which pays hefty dividends.
It's also very common in security-sensitive environments to give blanket approvals to specific distributions and prohibit cowboy deployments.
(I'm not sure why you are replying to my post)
In contrast to e.g. npm, Nimble is superfast.
That's entirely by design.
Distributions exist to provide a set of packages that are well tested, and work reliably, together.
And then guarantee that such set will stay the same and receive timely security updates for 3 or 5 or more years so that people can reliably use it in production.
In the real world, one gets an existing RPM .spec file, edits it for the required source one is about to build, and then runs:
rpmbuild --clean -ba software.spec
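For the record, such a .spec file is a short declarative recipe; a minimal skeleton of the kind one would adapt (all names, versions, and paths here are placeholders, not an actual Nim package):

```spec
Name:           nim
Version:        0.19.0
Release:        1%{?dist}
Summary:        Nim compiler and tools
License:        MIT
Source0:        %{name}-%{version}.tar.gz
Prefix:         /opt/%{name}

%description
The Nim toolchain, delivered under /opt/nim per the LSB FHS
rules for add-on (unbundled) software.

%prep
%setup -q

%build
sh build.sh

%install
mkdir -p %{buildroot}/opt/%{name}
cp -r bin lib %{buildroot}/opt/%{name}/

%files
/opt/%{name}
```

Edit the metadata and the %build/%install steps for the software at hand, run rpmbuild, and the result deploys through the OS's native provisioning.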
Say no to hacking with Docker and `make install`; do formal system engineering instead.