So why would I pick Crystal over Scala or F#? Crystal used to be targeted very specifically at Ruby fans; if it's diverging further from Ruby and becoming just another modern typed language, what's its USP compared to the established languages in that space?
I was looking at https://www.techempower.com/benchmarks/#section=data-r11&hw=... today for unrelated reasons. Crystal does better than I'd expected, but less well than Scala (Spray being the comparison I'm interested in since that's what I use); F# doesn't seem to have been tested.
> We want the compiler to understand what we mean without having to specify types everywhere.
If they're looking for people to complain about having to specify types on Array and Hash declarations, I'll complain... It's seriously one of the things about Crystal that kept me away. I'd guess some other Rubyists are turned off by specifying types too.
Specifying types everywhere basically takes Ruby-like syntax and makes it look like Scala or Rust. Why bother with a fully typed Crystal then when similar alternatives have greater support and other advantages?
If we observe how we humans communicate, we see a lot of guessing. Human languages are all ambiguous, and their great efficiency depends on (intuitive) guesses -- inexpensive guesses. We make guesses as we listen/read, but we do not commit to our guesses; rather, we modify them as we accumulate further information.
Now imagine if we forbade ourselves from modifying our guesses -- you'd see what a forbidding task understanding natural language becomes ...
Compilation is essentially a guess at meaning with no room for modification -- neither self-modification nor post-hoc modification. So it is doomed either to inefficiency or to intractable difficulties.
I believe the right way forward for programming is multi-stage compilation -- a meta-compilation stage followed by static compilation. Currently, "compilation" refers only to the latter. The meta-compilation produces a human-readable intermediate result -- not much different from our current code in C or Java -- and it will be just as easy to get feedback from as C or Java is today. The meta-compilation makes many guesses, often adventurous ones, but the coder can easily tell whether it is on track or has guessed wrong. When the coder detects a wrong guess, he simply modifies his source code -- slightly changes the style or writes the code in a different way (not much different from human communication, when we say "what I meant was ..."); or, in a few specific or difficult cases, the coder may opt to modify the intermediate code directly (C or Java or any currently popular language) -- not much different from natural language when we fall back to more precise wording, as we do when writing laws and contracts.
I have been exploring this idea with MyDef -- a meta-programming system -- and have become more convinced that it is the more sensible way forward (rather than trying to make breakthroughs in the mature field of programming languages).
Ruby's syntax is one of the worst parts of Ruby. [0]
>We love C's efficiency for running code.
Many other languages have efficient native code compilers, too. I don't know why people assume this is only a C thing.
>We want the best of both worlds.
>We want the compiler to understand what we mean without having to specify types everywhere.
This is good. I much prefer dynamic languages to static ones. Though I don't know if Crystal supports the best parts of dynamic languages like runtime program modification AKA live coding.
>We want full OOP.
Single paradigm languages feel very restrictive to me. I like languages that support many paradigms because different parts of programs call for different programming techniques. OOP in particular is a paradigm that I feel forced to use more often than I feel it's the right technique for the task at hand.
> Many other languages have efficient native code compilers, too. I don't know why people assume this is only a C thing.
I think it's just become very common to use C as the reference point for showing how fast other programming languages are, and everyone keeps it that way.
> Ruby's syntax is one of the worst parts of Ruby. [0]
For people who implement the language, yes, it's incredibly hard to parse (and that blog post doesn't even mention newer ambiguities like keyword arguments vs. hashes)
Yet, for people who use Ruby, it's very convenient and easy to read.
I write Ruby for money, and I find it difficult to read and write in certain circumstances. Sometimes Ruby thinks the curly braces mean a hash, sometimes Ruby thinks they mean a lambda. The ambiguities in the syntax (wow, it's just like natural languages!) really drive me crazy.
After learning a variety of programming languages, I think homoiconic syntax is the only type I actually like.
People used to find Perl convenient and easy to read too.
> Similarly, distinguishing puts ([1]).map {|x| x+1} from puts([1]).map {|x| x+1} is handled by setting flags after skipping whitespace, and later checking these flags to emit an LPAREN or an LPAREN_ARG token. As well as semantic whitespace, flags are used for blocks, class definitions and method names too.
Significant whitespace, except to separate tokens, is unusual. Significant whitespace that is used for something other than delineating blocks is extremely unusual.
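The quoted `puts` case can be reproduced directly; a small sketch of how one space flips the parse:

```ruby
# With a space before the parenthesis, ([1]).map { ... } is evaluated
# first and its result [2] becomes the argument to puts:
puts ([1]).map { |x| x + 1 }   # prints 2

# Without the space, puts([1]) is called first; it prints 1 and
# returns nil, so the chained .map raises NoMethodError:
# puts([1]).map { |x| x + 1 }
```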
The question is not whether the language has a native code compiler, as that alone rarely has a profound effect on a language's runtime speed. Some native-code languages are slower than VM-based and interpreted languages. And native code != native code: Objective-C and Swift both have native code compilers, but they are slower than C and C++ on the same platforms for various reasons, such as how their runtimes work and, in Swift's case, the organization of its LLVM code prior to final compilation.
>Ruby's syntax is one of the worst parts of Ruby. [0]
Your reference is about parsing the syntax. I am certain the authors mean they want Ruby syntax from the coder's perspective, not from the parser's perspective.
Nearly all the syntax ambiguities I've had in Ruby were resolved by either adjusting some whitespace or adding parens, and neither make the code any more difficult to read. In my experience Ruby is easy enough for humans to parse (really nice in 95% of cases, weird in 4%, WTF IS HAPPENING in 1%).
That's not really the context for the reference. Also beauty is in the eye of the beholder. The Crystal authors aren't really debating the goodness of Ruby syntax... That they think it's good is kind of the precondition to Crystal existing.
> We love Ruby's efficiency for writing code.
> We love C's efficiency for running code.
> We want the best of both worlds.
> We want the compiler to understand what we mean without having to specify types everywhere.
> We want full OOP.