I wrote a web framework (http://kemalcr.com/) and it's been in production without any problems for more than a year on a large-scale app. You can check my slides for some interesting graphs (http://www.slideshare.net/sdogruyol/kemal-rubyconfbrasil-201...)
Also, be sure to check out http://crystalshards.xyz/ to discover Crystal projects (shards).
What I've found is that Crystal is an immensely practical language. Maybe that stems from the compiler being written in Crystal. The stdlib has so many things in it that make everyday work in the language a joy. For example, JSON/YAML/DB.mapping, which help you map objects between JSON, YAML and the DB (they're also really fast). Including a standard, performant HTTP stack in the stdlib is a breath of fresh air. The performance of evented IO in general is really nice. The concurrency story (channels + fibers) is simple, and doesn't show itself when not required (looking at you, nodejs). Most of all, the type system gets out of your way and usually only shows itself when you're making a mistake.
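For a flavor of what that mapping looks like, here's a minimal sketch using the JSON.mapping macro roughly as it existed at the time (the class name and payload are made up for illustration):

```crystal
require "json"

# JSON.mapping generates the initializer, accessors,
# and the (de)serialization code for the listed fields.
class User
  JSON.mapping(
    name: String,
    age: Int32
  )
end

user = User.from_json(%({"name": "Ary", "age": 30}))
puts user.name    # Ary
puts user.to_json # round-trips back to JSON
```

The YAML and DB variants follow the same pattern, which is a big part of why everyday serialization work feels so low-friction.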
The question is, how easy is it to make syntax-driven tools? One thing that Ruby never did quite as well as Smalltalk was having a low cost of entry for creating language-parsing tools in the language itself. (At one time, there were multiple online study groups trying to parse Ruby, working for over a year. With Smalltalk, you can start hand-coding a top-down parser in the morning and be mostly done in the afternoon.)
You can actually require the lexer and parser portions of the compiler in crystal code, and get an AST from a file relatively easily. The crystal compiler then has tools for easily walking the AST.
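A hedged sketch of what that looks like — the exact require path and visitor API may differ between compiler versions, and "app.cr" is a hypothetical file:

```crystal
# Pull in the compiler's own lexer/parser.
require "compiler/crystal/syntax"

source = File.read("app.cr")
ast = Crystal::Parser.parse(source)

# The compiler ships a Visitor base class for walking the AST.
# This toy visitor counts method definitions.
class MethodCounter < Crystal::Visitor
  getter count = 0

  def visit(node : Crystal::Def)
    @count += 1
    true # keep descending into children
  end

  def visit(node : Crystal::ASTNode)
    true
  end
end

counter = MethodCounter.new
ast.accept(counter)
puts "methods defined: #{counter.count}"
```

Starting from a working lexer and parser like this is exactly the low cost of entry for syntax tools the parent comment is asking about.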
Interesting language, looks especially interesting for rapidly throwing things together. Add a solid API to a window system toolkit and you've got a logical successor to Visual Basic.
Write an editor, cross compiler for some other language, and a shell in it. That will tell you if it is a good "systems" language.
Write a distributed service, a web application, and a database in it. That will tell you if it is a good "web" language.
Write a physics simulator, a responsive AI, and a real time user interaction model in it. That will tell you if it is a good "game" language.
Write a convolution, a linear regression, and a monte carlo simulation in it. That will tell you if it is a good "modeling" language.
On my Christmas wishlist is a language that I can write a good 3D CAD package in. The current choice is C++, but that can be really clumsy in many ways. Such a language would have a strong linear algebra library and a boolean solids package. Nobody is working in that problem space from a language perspective, as far as I can tell.
CPU Raytracer https://github.com/l3kn/raytracer
CRSFML multimedia/game library
I can see that becoming problematic for simulations (lots of particles, for example) and games especially. At some point you're going to want pass-by-reference and in-place mutations for better performance (ideally hidden behind some form of encapsulation of course). Is there no way around this?
Any embedded developers want to add to or dispute anything on that list? It's a recent draft of a reference I'm working on to answer this common question.
I think crystal already does this well, especially in release mode when llvm gets to optimise the program as a whole. This also seems like something that can be improved without any breaking changes, which is nice.
> the cost (esp time/memory) of language primitives must be intuitive (ideally deterministic)
I'm less sure of this, maybe having dynamic dispatch would be an issue here? I think that release optimisations would make the performance of any code pretty variable depending on what optimisations are applied. Asterite probably knows more about this than me.
> be able to size/manipulate variables at the bit level
Crystal supports bitwise operators for doing bit twiddling, and obviously LLVM optimises that very well. As for sizing values at the bit level, unfortunately only the standard Int8, Int16, Int32, Int64 and their unsigned varieties are available for integers in Crystal. There also isn't a great deal of fine control over padding and placement in structs, apart from padding on/off. This could probably be easily improved.
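As a small sketch, the usual C-style flag idioms carry over directly (the READ/WRITE/EXEC flag names here are invented for the example):

```crystal
READ  = 0b001_u8
WRITE = 0b010_u8
EXEC  = 0b100_u8

flags = 0_u8
flags |= READ | WRITE        # set bits
flags &= ~EXEC               # clear a bit
puts (flags & WRITE) != 0    # test a bit
puts flags.to_s(2)           # binary representation
```

The sized integer types (UInt8 here) map straight onto machine words, so this compiles down to the same instructions you'd get from C.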
> support pointers
Crystal supports both pass-by-reference and pass-by-value structures (class and struct), and all pointer operations available in C should be easily doable in Crystal.
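A minimal sketch of the raw pointer API — in everyday code you'd usually reach for Slice or the struct/class distinction instead, but the low-level operations are there:

```crystal
# Take the address of a local and mutate through it.
x = 42
ptr = pointerof(x)
ptr.value = 43
puts x # now 43

# Heap-allocate a raw buffer, index into it.
buffer = Pointer(UInt8).malloc(16)
buffer[0] = 255_u8
puts buffer[0]

# C-style pointer arithmetic.
second = buffer + 1
second.value = 7_u8
puts buffer[1]
```

pointerof, Pointer(T).malloc, indexing and pointer arithmetic cover essentially everything you'd do with pointers in C.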
> have anywhere from tiny to no runtime
Crystal has quite a big runtime by default, but fortunately it's really easy to turn it off and import only the bits you want using --prelude=empty, which imports only the bare minimum. You could then build your own standard library without a GC, or implement a GC module (for example, one which just uses malloc), which would allow you to reuse String, Array, Hash, etc.
> a good C FFI
> support inline assembly
Crystal has this too, used here: https://github.com/crystal-lang/crystal/blob/master/src/fibe... although this too could be improved.
Overall I think Crystal has promise and could be used for embedded programming without too much hassle, although it's not a high priority for the core team. I know someone built a small kernel which just output "hello world" on x86 qemu.
Why re-write what has already been done? Stand on the shoulders of giants.
Because we can do better. For example, look at Mir GLAS, written in D.
Good to see they aren't reinventing the wheel, and openly expressing inspiration from Numpy is also a nice touch.
that's just one, some googling will help you find others
This language definitely deserves your attention. It's fast, it's easy on the eyes, and it produces native code. A real workhorse language.
Want Go's speed but hate Go's `if err != nil` verbosity? Check out Crystal.
A unicode core, useful core packages (MIME, ECC/RSA, Templates, etc...), and easy parallelism are so nice coming from 90's scripting land.
Oh, and I love Go's err bubbling (keeps libs from dropping important messages) after years of Exception catching.
You can read more here: https://crystal-lang.org/docs/guides/concurrency.html
A relatively new programming language should really use all cores; ideally it should automatically distribute its green threads across the available cores with its own scheduler, and have synchronization primitives like channels work across them.
If the language is powerful enough in the expressiveness it offers for building libraries that provide the same features as builtin types, that is also achievable.
This is the path being blazed by the languages more expressive than Go.
node.js may have a language and standard library that isn't nearly as nice as Python, but it has a built-in event loop. This sidesteps an entire universe of problems. Unfortunately node still has another, similar problem: Does the library use callbacks? Promises? Generator-based coroutines? async/await? If it doesn't use the same style as what your codebase currently uses, you're going to spend a lot of time converting between continuation styles.
Languages like Go and Erlang simply don't have any problems like this.
Now, one might argue that leaving it up to libraries allows greater exploration of the problem space and that blessing a concurrency method stifles innovation, and that might be right. But having a consistent concurrency story built right in to the language is a large advantage.
The contrast to these are C++ systems, where nearly every application and framework has its own event loop integration (and/or coroutine implementations, thread models, futures, ...). This means integration between various libraries gets super hard and code reuse isn't that common.
The only language I can think of which isn't super opinionated but now gets halfway-standardized APIs is C#, due to the standardization on Task<T> APIs and async/await.
It will be pretty interesting to see what the future brings for Rust and Swift, as these are not opinionated either.
It's a great language and the limits of its concurrency model aren't something that will really be apparent to you if you haven't learned Erlang/Elixir. For most people, it provides a concurrency model that's a cut above everything else out there.
The main differences between Go and the BEAM languages (Erlang/Elixir) are cooperative vs. preemptive scheduling, shared memory vs. immutable message passing, general garbage collection vs. a heap per process, and lastly a general runtime vs. isolation.
Cooperative scheduling means that the scheduler has to have control relinquished back to it (such as on I/O events), while preemptive scheduling will take control back. Cooperative scheduling means you can get better end-to-end performance on a benchmark, but you run the risk of code monopolizing the processor. Preemptive scheduling allows consistent performance for all operations in the runtime without letting anything overtake it. That's one of the ways that it's possible to reliably run a database within the runtime alongside the rest of your code on BEAM.
Shared memory with pointers is pretty standard and definitely provides some performance perks, especially when dealing with large data structures. The flip side means that native clustering doesn't work. With BEAM languages that lack shared memory and rely on message passing, you can just as easily call a function on a different server in another data center as you can a function in your current heap space. This makes it possible to smoothly distribute everything without having to worry about updating shared memory on a specific machine or having the state get out of sync. The channels model helps to avoid this, but by even including shared memory in the language you make the trade off of losing natural clustering for distribution.
With shared memory comes garbage collection and GC pauses, although the Go team has done great work optimizing this. With BEAM, every new process (the equivalent of a goroutine) gets its own heap, meaning it can be cleaned up independently without pausing the entire system. This also makes hot deployments possible, so doing things like deploying an update to a codebase with millions of active websocket connections can be done without triggering millions of simultaneous re-connections.
The general runtime vs. isolation difference means that a goroutine that blows up can crash the entire system if it's not properly handled at the point of the problem. When writing Go code you find yourself writing a line of error handling for every line of functionality. With BEAM isolation, processes are kicked off with IDs, and processes are so inexpensive that the standard method is to create two: one as a supervisor and one as the worker. If the process ever crashes for some reason, the supervisor just restarts it immediately. This creates a granular level of isolation and reliability. There is a library for Go that I remember seeing that seems to create a supervisor pattern for reliability, though (http://www.jerf.org/iri/post/2930).
Go will win benchmarks because of the choices the language made, but the benefits for long-term runtime, reliability, distribution and performance consistency in the face of bad actors will be in favor of Erlang/Elixir.
That said, the steps that Go took toward implementing the closest thing to Erlang-like concurrency makes it the winner by far among the non-BEAM languages.
That path was the default prior to Go. Go's built-in concurrency, with the predecessor work by Rob Pike and others, was "the breath of fresh air" when it popularized its particular approach to concurrency several years ago.
> If the language is powerful enough in the expressiveness
...it can create problems when programming "in the large." The challenge with programming languages has never been "expressiveness" in isolation. Few languages are more expressive than Lisp, and that's been around for a very long time. The problems have always had to do with various tradeoffs with performance and programmer effort in different contexts.
Last time this came up on HN I shrugged it off but after playing around with it a bit as a former Ruby programmer I'm in love.
Super simple to compile a little binary and it's blazing fast when I run it. I think I'll be donating some money.
If there is any stress test worth noticing, it's this: 98.2% of the code is written in Crystal.
I use it because iterating quickly and being super productive means I can do more myself and keep the team small. And it's worked brilliantly.
So throw this into the mix and my world just got a whole lot better. It's like these folk designed this for my project.
Good luck guys!
Seriously, though, even the antiquated entertainment devices on planes these days make flying with kids so much easier than it used to be.
When you say "not safety related in any way", do you mean that your code runs on auxiliary systems, like the entertainment platform?
All up, it's nine processes talking to each other over ZeroMQ.
Most are Ruby; the camera and IMU are C++. Because I'm using a simple messaging system, I can throw other languages in. Only one part uses any significant CPU, and maybe Crystal can help with that. It should be easy to port over.
If you have code paths that set a particular variable to different types, it promotes the type of the variable to a union. I'm not convinced that's a good idea, but then I don't usually use dynamically typed languages; perhaps this is the most natural way to do static typing for people who come from languages like Ruby.
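A small sketch of how that union promotion plays out in practice (the values here are arbitrary):

```crystal
# Different branches assign different types, so the variable's
# type becomes the union (Int32 | String).
a = if rand < 0.5
      1
    else
      "one"
    end

# The compiler forces you to narrow the union before calling
# type-specific methods; is_a? restricts the type per branch.
if a.is_a?(Int32)
  puts a + 1    # a is Int32 here
else
  puts a.upcase # a is String here
end
```

Without the is_a? check, calling `upcase` on `a` would be a compile-time error, which is how the type system "shows itself when you're making a mistake."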
The problem you end up getting is basically the dual of the 'expression problem'. You get easy type expandability with class augmentation under these semantics, but any client of the type is subsequently bound to deal with any augmentation of the type. (I.e., if in the method defining 'a' you insert an elsif branch of type Bool between the consequent and the alternative, you've just extended the type signature of 'a' to Int32 | String | Bool. Assuming all consumers of 'a' need exhaustive handling, you've opened your class and extended its functionality at the expense of code which had previously satisfied the dependencies to consume 'a' safely.)
Crystal still remains a real interesting project to follow. I haven't examined the Crystal type system too carefully, but the concept of Parent+ as virtual types is an interesting concept that is certainly worth exploring.
The same seems true of mathematics. At least in the parts that get commonly taught, we trend toward a common representation.
In programming, it seemed like we might move toward a core "C-like" syntax with various derivatives, for a while.
But now we seem to be moving toward a world where everyone wants to create their own domain-specific language.
I can't help but long for a common tongue. One which allows developers in different domains to understand at least the functionality, if not the motivation without having to learn a new language.
Secondly, personally speaking I've never had much difficulty understanding programming languages I'm unfamiliar with provided their syntax is sufficiently similar to those I use. For example, I've never much worked in Ruby but glancing at Crystal there's nothing really confusing about its syntax. Yes, we see new languages popping up here almost weekly, but we don't see new paradigms: usually most of the stuff here is procedural and on occasion it's functional. Furthermore, having seen the remarkable benefits the introduction of different paradigms has had for the community at large (think about a world without LISP for a moment) I for one believe we should encourage the development of new and strange ideas; most of them will fail, but every so often we might get a new world.
Oddly enough, I've just been thinking the opposite. I've been looking for a compiled language to learn, to add to my basic tinkering knowledge of Python.
From my shortlist of Crystal, Go and Nim, I opted to look at Nim first, as it has the most Python like syntax. I've actually found that to be a hindrance. Nim's close enough to Python that it fooled me into thinking it would be a doddle to learn, but just different enough that I constantly run into compilation errors from assuming [yes, I know!] that basic language constructs will be written the same in Nim as they are in Python and then falling foul of some subtle difference.
I'm actually considering moving my sights to Crystal or Go now, in the belief that they'll be different enough from Python for my brain to jolt out of 'muscle memory' mode and pay more attention to taking in the 'new stuff'.
If you know C++ and Python, you can pick up most mainstream languages. If you learn a more functional language afterwards, you'll basically be able to pick up anything.
Foo.bar(args) is preferable to bar(args) to me.
Crystal was started like 5 years ago.
> "the trend in human languages seems to be towards a single common tongue"
One of the clear goals of the creator was to make it as similar to Ruby as possible; the creator was striving for some kind of "common tongue", no doubt. It's not some strange, eccentric new language.
The proliferation of new languages is a symptom of the growth of programming as an occupation (paid or otherwise). The larger the pool of potential developers, the more it makes sense to develop languages that each excel at a specific niche.
IOW, the number of active users that a language needs to create a self-sustaining community is mostly an absolute threshold. If the pool of developers is small, then a language that appeals to only 1% of developers may not become self-sustaining. If the pool is large, a language that appeals to only 1% of developers may have 100,000 users or more, which is more than enough to create a vibrant ecosystem and adequate tooling to remain relevant.
That doesn't mean that a crunch won't eventually happen (possibly precipitated by a slowing growth rate) where the industry largely settles on a handful of common languages, but I bet that even then the niches will continue to be populated by special-purpose tools, even if their hype has long-since evaporated by then.
Besides, even if one language ends up being the only one needed, I'd be pretty confident that it's way too early in the study of programming languages, compilers and runtimes to stop experimenting and pick some language to settle down on.
Of course it's much easier to learn a new programming language, than it is to learn a new natural language. You can't pick up German over the weekend, until we figure out the technology to help us do that.
I think the distinction with human language is that programming languages don't just describe what is happening, they describe what they themselves are doing. The description is "complected" with the action being described, so changes in one may (and often do) affect the other. I often find this philosophically fascinating...
Another one is that programming languages have more in common with mechanisms than natural languages do.
You could also say that we do have a few lingua franca programming languages which are also very similar to each other, to the point that someone fluent in one can easily read another.
Also, what do you think are the odds that a mathematician can understand the intricacies of a fluid engineering paper (in a natural language) and vice versa?
One more thing, maths notation is ridiculously diverse and overloaded and is in no way entirely uniform -- or even close. This is a problem when the disparity exists within a branch of maths, but an advantage when it's between very different areas, such as Analysis and Combinatorics.
Me too. That's why I'm building – you guessed it – another language ;-)
Seriously though - what's your take on the subject?
There's no silver bullet.
For that matter, Crystal vs. Elixir.
Kind of wishing all these projects just had made up names.
I find that once a tech becomes somewhat popular and you use specific words to search, technical pages tend to come earlier than the original meaning.
Those chefs searching with the same terms would suddenly see their results overtaken by voodoo pages.
First google result: http://terraria.gamepedia.com/Crystal_Shard
But yeah, it works for now.
But yeah a more unique name would have been nice... or even just crystal spelled like "Krystl" or something.
>ruby crystal gem programming -gemstones
It's not bad. It's fine. Not the pejorative, dismissive sort of "fine", but fine. But it's not Ruby. Ruby is slow in large part because Ruby does hard things. Some of them aren't really advisable, from a perf perspective--but they are from a human perspective, and Ruby preferring humans over computers is a huge feature for me.
I wish I could upvote your comment more than once.
Crystal is nice but I've found simple dynamic things to be a challenge. I haven't looked in a few months but I wasn't able to do this
a = []
a << 1
a << "two"
Same with hashes. This broke some things for me and ended up feeling like I wanted to program in Ruby but there were all these gotchas.
If I have to break the mental expectations I have for writing something to get the performance needed, then I'm fine to switch gears and write Java or C in order to get it.
I don't think that's the case. Crystal takes its syntax from ruby because it's easy to write, easy to read, and familiar, it doesn't have to be the same as ruby in semantics. In fact that's impossible because it's statically typed (which I personally prefer to ruby).
If you're going to resort to Java or C for speed, why not use Crystal? Crystal beats Java and C on expressivity of type system, expressivity of code (less boilerplate) and performance per unit effort (at least on common tasks). So the question is, why Java or C? They both certainly have larger and more mature ecosystems, and if you have compute-heavy workloads they'll probably win out on performance, but why do you instantly make the leap from Crystal != Ruby to "might as well use Java/C"?
The hardest part of learning crystal for rubyists is unlearning their ruby habits when working on crystal. Once you're over that hill you might find crystal a wonderful language in its own right.
Nothing against Crystal, I think it's great but it would almost have been better coming to Crystal from something other than Ruby.
It's not a bad language. (I would rather see it than Go in just about any situation.) But I don't see a positive value prop, for me, as I don't particularly value syntax (I can write anything; even if syntax was a big pain point for me, this is why I use IntelliJ and ReSharper/ReSharper C++) and I do value ecosystem and tooling. Maybe if it was JVM-targeted or CLR-targeted, to piggyback off those ecosystems, but even then the value prop is muddy.
Maybe Crystal gets there some day, but it's not there yet.
a = [] of Int32 | String
a << 1
a << "two"
`include`, `extend`, and `prepend` are used a lot in Ruby because the very mutable nature of class definitions allows for powerful metaprogramming without having to do a lot of work--and all are, and can't not be, runtime-evaluated. I regularly build objects in Ruby out of dynamically generated mixins reliant on runtime behavior. As do most people. Consider ActiveRecord (which I don't care for, but it's a good example). You create a class that inherits from ActiveRecord::Base--and, at runtime, it queries the database and builds a set of methods for the class based on the schema of its table. Or something like dry-struct, which evaluates, at runtime, a set of attribute specifications and dynamically creates the API for accessing it (as well as validators, etc., which can be programmatically devised).
And that's not even taking into account a plugin architecture that is functionally "require this file" and being able to interpret it at runtime; Exor, linked above, is a lot of pretty finicky work to simulate a worse version of that in C#. Or the general usefulness and value of monkey-patching in a lot of contexts. Oh, and I can drop `binding.pry` into any spot in my code and have a REPL in which I can define methods on existing classes, inline, and build functionality that I can then dump out into my editor. Compiled languages that do that are, er, rare. (JRebel and hot reloading isn't as nice as just doing it.)
Ruby isn't the perfect tool. But when I want to write a little code to do a lot and don't have many other concerns, that's why I go to it. That's why something like Chef uses it instead of the programming mistake of 2014-2016 (using YAML for everything). Because nothing code-related expresses intent better than code itself, and Ruby is built around making that easy for me when I can think about classes and modules as things that exist rather than immutable blueprints off in the ether.
Like, take a look at this. I wrote it for a game engine in C#. It allows for dynamic loading of modules or static packaging on platforms that require it. This is what I would consider a low-end thing for a Ruby application that needed extensibility--just something you do out of hand. Crystal can't do that. Maybe it will in the future, but it's not even a 1.0 feature. So Crystal might be a challenge to a Golang-ish thing, but it's not a challenge to a Ruby or a JVM/CLR thing on that front.
 - https://github.com/eropple/Exor
Plugins have a long tradition already, starting with the old dlopen and LoadLibrary. Even in C++ you can (mostly) get it done; just look at Qt.
A class in Ruby is just an object; it's an instance of Class. Mutating classes to achieve your goals is commonplace and valuable. It's the closest thing to brain candy as a meaningful, good Lisp I've used since college. (Clojure is not my thing. And obviously I know that Ruby has a lot of Smalltalk-ish stuff in it rather than Lisp, but I never used Smalltalk...)
You can fake a lot in a compiled language. I have done this. But you had better be getting significant wins for doing it, and something like Crystal does not offer them over something like Ruby unless you're doing something that doing it in Ruby was a bad idea in the first place. (If you wrote something in Ruby where you actually cared about perf, for example, you picked wrong.) Right now, writing Rust plugins and using them from Ruby is a more Ruby experience than Crystal is. Which isn't a shot at the language, I have to stress--just that it's not Ruby, it's something very different.
The people who wrote Rails thought they did - it defines tons of code at runtime. We can't say that they needed to or not. Nobody needs a programming language abstraction. But it worked for them.
Looks like they took the Fiber concept of ruby and made it into something like goroutines, with channels handling.
Can't comment on anything more than that right now. Have yet to dive very deep into it.
Nil tends to be used in languages where the concept of nothing is represented by some sort of container. In lisp, it's simply the empty list (since all data lives in lists in lisp). In Ruby, it's a singleton object that inherits from the same base class as everything else. In both cases, the language is structured such that every container can easily be asked the question "Are you nil?" (as opposed to false or 0).
That's not true. Every modern Lisp implementation has many primitive data structures: vectors, hash tables, etc.
Nil: Ruby, Go, Oberon, Modula-2, Swift, various Pascal dialects, Lisp?.
Then, of course, there's Python with None.
For finding an analogue for the typical null use case in Rust, one would use the `None` variant of the `Option` enum.
None of nil, null, none or undef refers to (void*)NULL; they are usually a special symbol in the global symbol table.
Only some languages use null/nil as NULL/0 internally.
Matz (ruby) also borrowed nil from Lisp. (He was also inspired by Smalltalk.)
Many languages use "null" in similar ways. I guess it's a matter of particular syntactic conventions.
The nil-()-false correspondence is only specific to the Lisp dialects which are close to the original tree. In Scheme, an empty list is indeed (), and this object is also the terminating atom of a list. However, the object isn't a symbol at all, let alone the symbol nil. Moreover, the object isn't a Boolean false; Scheme has #f for that.
The reason being that it is OOP. I eschew OOP (particularly for biz software which is my day job) but have to admit OOP still is a pretty good fit for UI and video game programming. I tried some reactive FP game programming and I just didn't make as much progress as I would have liked.
I tried googling for some roguelike games programmed in Crystal and didn't find any yet (it's difficult because the name Crystal collides with lots of things, particularly RPG-like games)...
Also, even though I spent years writing ruby, I don't have problem switching to other languages.
- Slack chan, IRC, mailing list
- package manager (Hex)
- build tool (Mix)
- web framework (plug & Phoenix)
- db framework (ecto)
- books from major publishers (manning, pragprog, etc.)
- ancillary teaching (ElixirSips, LearnElixir, Elixir Fountain, etc.)
- helpful blogposts and stack overflow answers
All these things add up to lower barriers to entry when trying a new language. Elixir has them in spades. It's really easy to pick up and learn the syntax and the concepts surrounding the language (BEAM, OTP, etc.).
Even though Crystal is closer to Ruby in syntax and Ruby can be valid Crystal -- Elixir still feels easier, because there is less work to be done with learning everything else.
Elixir has so many superstars working on/with it.
The output binary is bigger than a dynamically linked one, and it needs some dev libraries installed before it will compile.
Requiring the user to install a separate bunch of dependencies adds a huge amount of distribution friction vis-a-vis just shipping a stand-alone application bundle, especially in this day of app stores. Being able to avoid that is very important, IMO.
Probably anyone reading this isn't going to be daunted by having to install some extra stuff, but for the average user it's a real pain.
Does anyone have any experience using this with websockets? How does it perform?
But it lacks a modular interface for modules and libraries that does not import hidden/global variables. They should look at NodeJS for inspiration.
I don't like all the 'end' statements though.
Ok. Bit of a bummer though.
Creating a ruby extension in crystal also works and there are a number of libraries which attempt to allow you to do this (too many in fact). They do the hard work of getting crystal to play nice with the ruby extension api for you.
We'll probably build it into a more stable shard eventually, but at least we know it's doable
Go has full parallelism support. Crystal is WIP.
Go has Windows support. Crystal is WIP.
Crystal and Go performance are very similar.
Go has been backed by Google and its community since 2009.
Crystal has been backed by Manas.tech and its community since 2012.
Go has full featured IDEs like https://www.jetbrains.com/go/
I hope to have Crystal IDE support after 1.0 :-)
"Since the collector does not require pointers to be tagged, it does not attempt to ensure that all inaccessible storage is reclaimed. However, in our experience, it is typically more successful at reclaiming unused memory than most C programs using explicit deallocation. Unlike manually introduced leaks, the amount of unreclaimed memory typically stays bounded."
Having used the language for a year, I've never really had an issue with the GC, other than with really large (think 100 MB+), infrequent, single allocations which get "stuck" and don't get returned to the OS afterward. This is a byproduct of the GC being unable to move objects to compact the heap. The situation can usually be rectified by loading a large file as a stream instead of loading it all into memory. Crystal has excellent tools for working with streams (see the IO module), so it's not that big of a deal.
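The streaming approach looks something like this — a hedged sketch where "huge.log" stands in for whatever large file you'd otherwise slurp into one allocation:

```crystal
# Process the file line by line: each iteration allocates only
# a small line buffer, never one giant contiguous blob.
total_bytes = 0
File.open("huge.log") do |io|
  io.each_line do |line|
    total_bytes += line.bytesize
  end
end
puts total_bytes
```

Compare with `File.read("huge.log")`, which forces the single large allocation the GC then can't compact away.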
How do you know this to be true? You don't - it's just an opinion. And that's why we have different syntaxes - because they're driven by different opinions. If you could prove that one syntax was more effective than all the others - for every purpose that people needed - then you could ask why everyone wasn't doing it that one way.
Write me a dtor that works perfectly and in all situations without allowing for users to break out of a scope without triggering them.
Thanks in advance.
Crystal's type system is both expressive and safe compared to most languages. It has generics, tagged type unions and dynamic dispatch.
It has a built in spec runner and testing suite which are highly usable and proven on the compiler.
Most of all, it is already concurrent. Crystal has lightweight fibers implemented in assembly for x86, ARM and ARM64. It has channels to communicate between fibers. All IO is evented by default, and the HTTP server handles over 100,000 requests per second per core.
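A minimal fiber-plus-channel sketch (the values are arbitrary; exact Channel API details may vary slightly by version):

```crystal
channel = Channel(Int32).new

# spawn starts a lightweight fiber; the channel carries
# values back to the main fiber.
spawn do
  5.times { |i| channel.send(i * i) }
  channel.close
end

# receive? returns nil once the channel is closed,
# ending the loop cleanly.
while value = channel.receive?
  puts value
end
```

This is the whole concurrency surface most programs need: no callbacks, no colored functions, just fibers and channels.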
It's obvious to me that you don't know what you're talking about.
I haven't looked closely enough, the concurrency looks decent but it's not as integrated as in certain other modern langs (e.g. Go, Erlang or certain other FP languages, even certain ones riding on JS)
It is sad to see something this outdated starting up now...