Crystal Language (crystal-lang.org)
254 points by necrodome 838 days ago | 171 comments

I don't understand why Ruby's syntax is seen as so elegant. It's ambiguous and a nightmare to parse.


Lots of people find it elegant in actual use because of the same features that make it hard to parse; this is much the same thing that is true of natural languages.

Elegant to program with. Not write a parser for.

An ambiguous syntax is not elegant to program with. Is this a variable reference? Is it a method call? I dunno!

Boring armchair language criticism. I have programmed professionally in Ruby for 5 years and it has never been a problem. You may as well complain that the ability to import an identifier leads to confusion about which identifier from which namespace you are calling when you reference one. I have 99 gripes about Ruby, but this ain't one.

I just started working with Ruby on Rails a few weeks ago and I think the so-called "elegant" syntax is the reason why I went with it. That and the fact that Rails is a joy to work with.

Among people who like Ruby, this is seen as a feature, not a bug. It makes refactorings easier when you change something from a local variable to a method call and don't have to search for all the places you would have to add parentheses in a language like Java. Granted, you could argue that your IDE should take care of that for you with automatic refactorings, but to a Rubyist this is a crutch and also something that forces her to use a particular IDE instead of her favorite editor.

> Is this a variable reference? Is it a method call? I dunno!

Neither. It's a message sent to an object. The object decides how to respond to that message.

Caring about the implementation of the message receiver breaks information hiding. If you have to distinguish between .message and .message(), you already know more than you need to know to interact with that object.
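A quick Ruby sketch of that point (the Greeter class below is made up for illustration): with or without parentheses, both forms are the same message send, and the receiver decides how to respond.

```ruby
class Greeter
  def message
    "hello"
  end
end

g = Greeter.new
# Both lines send the :message message to g; the parentheses change nothing.
a = g.message
b = g.message()
puts a == b  # prints "true"
```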

Every language is confusing until you understand the mental model (even if you disagree with the utility of that model). If you come to Ruby thinking in Java/C++/C terms, you'll be unhappy, because the mental model is very different.

For Ruby, the book to read is The Well-Grounded Rubyist, which makes the language quite obvious.

I've been programming in Ruby for years, and the inherent ambiguity, proven by the complexity of the parser, isn't made better by telling me I'm not thinking right.

Variables and methods are treated differently. I can't pass arguments to a variable, I can't even write foo(), but if it were a method I could. I can't treat variables like any other object because they have different rules applied.

> I can't pass arguments to a variable

I'm confused -- if you're knowingly invoking a method, why are you unsure whether it's a method or a variable?

I programmed for years in Ruby too. I just accepted a whole bunch of stuff as mysterious and shrugged. It didn't mean the language was ambiguous, I just didn't have any idea of what the actual model was.

Actually, you're wrong. The syntactic ambiguity between local variable references and implicit self method calls is not accurately resolved by thinking of it as message sends, because it's only a message send if it is a self method call, not if it is a local variable reference. This is a real syntactic ambiguity in Ruby.
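That ambiguity is easy to demonstrate (the class and method names below are invented for illustration): Ruby's parser decides a bare name is a local variable the moment it sees an assignment to it, even an assignment that never executes.

```ruby
class Demo
  def foo
    "method"
  end

  def without_assignment
    foo  # no local variable in scope, so this parses as a self method call
  end

  def with_assignment
    foo = "local" if false  # never runs, but foo is now a local variable
    foo                     # a local variable read: yields nil, not "method"
  end
end

d = Demo.new
p d.without_assignment  # prints "method"
p d.with_assignment     # prints nil
```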

The book I pointed to also discusses the concept of self. It's not really that ambiguous.

Wait - are we talking C/C++ here? where a function call, a variable declaration, an expression cast all look identical?

You are probably talking about C++, not C, unless those look identical to you:

    int foo;

The cast can equally be written int(foo); or (int)(foo); the parentheses are allowed and optional. The method can be declared (int) foo(int(x), int(y)); which can also be an invocation, etc.

int(foo) and (int) foo(int(x), int(y)) are not legal syntax in C. However, they are in C++ [0], which is my whole point.

[0] your method declaration has a small problem: it apparently cannot have parentheses around the returned type.

I wouldn't really say they are optional. The two do different things. (int) foo (or (int)(foo)) is a C-style cast. int(foo) isn't a cast; instead it invokes the int constructor with the argument foo.

No, C/C++ have bad syntax too, imo.

Have you written Ruby often and still felt like this? I'm asking because I'm newer to Ruby and this does come up a fair bit, but I was assuming that with time the ambiguity would disappear.

I have written Ruby for many years now. The ambiguity never disappears, and the delight at how "expressive" the syntax is quickly turns to loathing. YMMV, of course.

Yeah, the Ruby-like syntax of Elixir made me stop looking into it further. I remember reading an article (or was it a chapter in a book?) which basically said something like "ok, these parentheses are optional, but only for this case. For other cases, parentheses are required." Which is a shame, since the tooling built for it is the most amazing I've ever seen.

I know syntax shouldn't matter, but personally I think consistency is very important.

I think syntax matters a lot - if it incurs development costs through inconsistencies, for example.

I'm going to a meetup in a couple of days where Elixir and Phoenix will be presented, I think I'll keep your comment in mind and maybe ask about inconsistencies in the syntax, as I wasn't aware of this. What I know of the syntax relation to Ruby is that it is only superficial - the semantics are very different.

I don't understand what parsing has to do with elegance in this context. There are some 'best practices' but I don't see the different syntax support as a bug. I feel that it's a trademark of a modern language.

Let's take rust as a counter example. IMHO this is ugly:

    fn foo() {...}

To make it elegant I should be able to remove the parentheses because there's no argument inside:

    fn foo {...}

But this will come up with an error: "expected one of `(` or `<`, found `{` ...", which is fine. It's easy to parse, but it is not elegant IMHO.

Elegance != easy parsing.

It's not just about making parsing easy (which is usually motivated by the promise of better tooling), it's about making costs explicit. `foo.bar` with no trailing parentheses is always a field access in Rust, and a field access is basically the fastest thing one can do: take a known offset from a memory location. In comparison, `foo.bar()` has a relatively tremendous cost, depending on various factors. It's for this same reason that Rust requires you to write `(foo.bar)()` if you want to call a function stored in a struct.

This isn't a tradeoff worth making in all languages. But it is a tradeoff, and not a rejection of elegance.

`foo.bar` is always a call in Crystal. If `bar` is just an accessor, then the method will be inlined and there is no extra cost compared to an explicit field access.

I don't think it matters much that it's hard to parse: very few people are going to write a parser for the language compared to the number who will write programs in it. The same goes for everything else in a language: prefer a fast compiler to an elegant but slow one.

Ruby's syntax is very short and concise if you compare it to other languages. To know if "x" is a variable or a method you just need to see if "x" was assigned a value in the method, that's all.

Try to build a class with it and you will understand.

I've built many a class in Ruby.

> I've built many a class in Ruby.

I see.....

Anyway, this is elegant to me:

    sum ||= (1..100000).to_a.inject(:+)

Maybe you have a different aesthetic sense than the rest of us. That's perfectly fine; we don't have to like the same things.

But since you questioned, now I'm curious and I would like to ask you: what language in your opinion has the most elegant syntax? Also, if you have 2 extra minutes free, would you mind coding that same one-liner above in that language so we could compare the two?

In Python it's x = sum(range(100000)). You might say that's cheating because there is a specific sum function, but having smart builtins covering common programming cases is exactly what an elegant language offers.

I wouldn't say it's cheating but without using that function maybe we could see more of the language syntax itself. That way we are just seeing method calling.

In Crystal we also have sum: http://crystal-lang.org/api/

Having migrated to Clojure I prefer:

    (reduce + (range 1 100001))

but you can make yours a little nicer by relying on a Range being enumerable, hence:

    (1..100000).inject(:+)

I prefer J:

    +/ >: i. 100000

You could take out the >: but then the range is 0-indexed and not 1-based:

    +/ i. 100000

Now that is incredibly terse. "/" is the J reducing function? Or is it not comparable?

It is J's 'insert' function. It inserts the verb on the left between the items on the right. I've seen it called 'apply' for J too, and I think 'reduce' as defined in other functional languages would be correct; it's just that J calls it 'insert'.

> what language in your opinion has the most elegant syntax?

The Lisp family, Scheme in particular.

    (define sum (reduce + 0 (iota 100000 1)))

Crystal looks like a neat language, and it's fun to see how many entrants there are in the modern renaissance of scripting languages that compile to native code. I wish there was some more documentation about it though... for example, does the bullet point "Never have to specify the type of a variable or method argument" from the home page imply gradual typing as per Dart and TypeScript, or is it something else entirely?

It has global type inference, there's nothing dynamic in the language. You can specify type restrictions to allow overloading methods, for example, or doing multiple dispatch. But in the general case you don't specify types (except for generic type arguments) and the compiler figures out everything.

I'm curious how possible it is in practice to write large software that relies exclusively on global type inference. Does the compiler itself go without ever explicitly specifying a type?

Yes, you can browse the source code if you want, check it out and compile the compiler. It has between 30k and 60k lines of code (because you have to consider it also includes the standard library) and on a Macbook Pro 2015 it takes less than 10 seconds to compile (in non-release mode).

We'll have to wait until we get a project that's larger than the compiler, but we believe there's still room for performance improvements.

It seems to have a fair slew of type-system features, so how does it manage to compile so fast, compared to for instance the Rust compiler, which doesn't do global type inference?

The current slowness of the Rust compiler is largely due to putting more developer attention into improving the runtime speed of fully-optimized binaries. Hence, regardless of which mode it's in, the compiler does a lot of unnecessary work that a debug-mode binary doesn't need or benefit from. There's also simply a ton of low-hanging fruit lying around: we're only two weeks out from 1.0 and the nightly compiler is already reportedly 30% faster for code compiling in the wild. There's another 10% compilation speed reduction coming from work around optimizing linking. There's also work towards parallel codegen coming along nicely (https://github.com/rust-lang/rust/pull/26018) which reports a 30% reduction in compilation time. In the longer run there are plans for incremental compilation which should drastically reduce the amount of work that gets unnecessarily repeated with each compilation cycle (https://github.com/rust-lang/rfcs/pull/594). In the even longer run there are plans to fully parallelize the compiler phases and also to pre-optimize code before it gets to LLVM in order to reduce the sheer quantity of IR that we shovel into it.

TL;DR: Crystal manages much faster compilation than Rust because Rust's compiler developers may have deprioritized optimizing the compiler for a bit too long. :)

Thanks for linking to those RFCs. The planned or in-progress work for Rust compilation speed is very exciting. I can't wait to reap the benefits!

It's because we spent some time thinking about the algorithms and optimizing them, and whenever we make changes to the compiler we make sure the times remain pretty much the same (it's hard because the compiler's size grows, so the times inevitably grow, at least for the compiler). And from time to time we profile and optimize further; we like speed.

I believe with time Rust can achieve similar performance, maybe even better because they will be able to do incremental compilation (maybe, I'm not sure how they will do that with parameterized types).

There's also the fact that most things in Crystal are lazy: if you don't invoke a method there are no type checks to be done for it. This means that you only pay for what you use (in terms of compile speed), but also that what you don't use doesn't end up in the resulting executable.

> most of the things in Crystal are lazy: if you don't invoke a method there are no type checks to be done for it

Does this mean that if a library doesn't have complete test coverage compile-time errors could be discovered only by clients?

Almost. You at least need to exercise the code at compile time. For example you could write dummy usage tests like this:

    typeof(MyClass.new.some_method(1, 2, 3))

That basically says "the above compiles and has some type", so you don't have to test what that method really does, just that it compiles.

I don't think it's a big problem, though. In Ruby it's the same: unless you exercise your code you don't know if it works. Now, think of a classically compiled language like C or C++: if it compiles, does that mean it works? I doubt you'd release your code without at least a few tests (or a small test app) to try it.

GHC does global type inference, and has very sophisticated types.

My point was specifically in regard to the feasibility of omitting the types in all cases. I see types often in Haskell code, likely because the community regards that as a best practice (in contrast to Crystal).

Best practice and it's required to make certain more advanced type features work.

It's the same with Crystal. Their generics require type annotations.

In Haskell types are used a lot as documentation and to make static guarantees about the program logic, not just runtime safety. If you're only worrying about illegal operations you practically don't need signatures.

...and is a slow compiler compared to for example Go :-(

OCaml has separate compilation, a REPL, and it's very fast. None of those things is incompatible with global type inference (although particular type systems can be).

Doesn't OCaml require everything to be defined before used though?

That's a problem with Cabal, not GHC. They're working on it.

What algorithm are you using? How do you deal with subtyping and parametricity?

The algorithm isn't very easy to explain in a few words. We don't use any well-known algorithm. It has some bits of the cartesian-product algorithm, but just tiny bits.

There's some info here:

http://crystal-lang.org/2013/09/23/type-inference-part-1.htm... http://crystal-lang.org/2014/04/27/type-inference-rules.html

About subtyping and parametricity, I'm not sure what those are, but the compiler just keeps track of all the types and forms unions, except in one case, which is this one (a union of references with a base type): http://crystal-lang.org/docs/syntax_and_semantics/virtual_an...

If you are using Agesen-style CPA, then it is more of a global analysis (type recovery) than type inference. I would be a bit worried about scaling, but if you've put 50kloc through it, maybe you've figured it out? I'm working on my own type-less type inference system that I'll share more about soon.

Do you have a (more or less) formal description of the type system?

They have a couple posts on their inference algorithm, but I confess I haven't read them.

I'm looking for something beautiful like Ruby but fast like Go. Do you think Crystal fits this bill?

Also, are there packages/libs/gems for Crystal? What are they called? What do I google for?

One of the major reasons why I dumped Go is that it's just too verbose and makes me write too much boilerplate code. I want to sort a collection and I have to write the same algorithm every single time for every single type. It's just boring and my time could be better spent elsewhere.
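For contrast, here is a sketch (in Ruby, with made-up data) of the kind of generic one-liner that sidesteps that per-type boilerplate:

```ruby
# Enumerable#sort_by works for any element type, given a comparable key,
# so no sorting algorithm has to be rewritten per type.
people = [{ name: "bob", age: 30 }, { name: "ann", age: 25 }]
by_age = people.sort_by { |person| person[:age] }
p by_age.map { |person| person[:name] }  # prints ["ann", "bob"]
```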

I appreciate the feedback HN!

I think Crystal could fit this bill, yes: it has Ruby syntax and the concurrency model will be something like Go (spawn and channels, but we are still working on this). And the code ends up without much repetition, as you have generics, macros, very few type annotations, and things like sort, map, select, inject, etc.

The community likes to call a library a "shard", so I think we'll eventually end up using that name :-)

You can see a list of shards here: http://crystalshards.herokuapp.com/

For now that just lists GitHub repositories. Right now there's a very basic package manager that fetches repositories from GitHub, but it's too basic. Someone is working on a much better package manager ( https://github.com/ysbaddaden/shards ) that we'll probably incorporate in the future.

Nim perhaps? There's very little boilerplate but it is statically typed. Someone mentioned infra-ruby, which is worth checking out. Lastly, crystal-lang is pretty awesome. You could use this opportunity to create what is missing in the ecosystem and make it open source. This will encourage others to chime in, accelerating development for everyone.

You definitely should at least look into Julia. I would say it is more elegant than any other language I've seen while being faster than pretty much anything except for straight C / C++ / D / Rust (native systems languages).

It should be easy to beat Go in performance and match or exceed Ruby in elegance and simplicity.

Its Achilles heel(s) right now, though, are multi-threading and JIT compilation times (which should be alleviated once caching is implemented).

It sounds really close to what you want. Its package management is even based on GitHub. Want to get a package to write out image files? Pkg.add("Image"). Update all your libraries? Pkg.update()

Over the next 5 years I think it might eat into a lot of the territory of scripting languages.

Also the generic programming focus means you should never have to write anything more than a new comparison for your new types.

Wow, are people really that offended by someone suggesting Julia? It is a bullseye answer to the question.

Julia's 1-based array indexing is counter-intuitive and a turn off.

I've seen people say this, but really 1-based indexing should be extremely trivial compared to all the other complexities of writing a worthwhile program of anything more than a one-liner.

But Matlab users love it! oh wait..

You're looking for Scala. The "good parts".

Still looking for them ;)

You're sounding like a broken record by now, Cedric.

http://elixir-lang.org may be even faster than Go. Web apps respond in microseconds.

Elixir and Erlang are not faster than Go.

The ability to respond in microseconds is a latency issue, not a raw speed issue. Erlang and Elixir can do that because of the way the core systems are architected, but the underlying VM and language constructs are quite a bit slower than anything that Go has to offer.

For maximum throughput out of a given chunk of hardware you'd want Go.

Erlang / Elixir and the associated VM and ecosystem have their own strengths, reliability for instance, with some C code thrown in for heavy lifting if required.

If we are talking about maximum throughput, then you want C++ and in my tests Java has been better than Go, .NET probably is better than Go as well - because when speaking of maximum throughput RAM becomes the biggest bottleneck. And as soon as you're talking about platforms with a non-optional GC, then we start speaking about the performance and predictability of that GC. Go doesn't bring improvements for managing memory access patterns over other GC-enabled platforms, like Java or .NET and compared to those, its garbage collector story has been much, much worse, albeit improving.

But we're not talking about maximum throughput above all else. We're talking about the differing throughputs of high-level languages, which Golang is, but C++ is obviously not.

That's not entirely true; not everything is garbage collected. With some care, you can get a whole lot of things allocated on the stack. Tooling can also help you figure out if things escape to the heap.

You can also control the size and number of objects your program creates, which impacts the performance of GC. Sure you can't pick a GC from a stack of GCs, but you do have many things in your hand to control how memory will behave.

Will beautiful like Python and faster than Go do? Then Nim - http://nim-lang.org - will do just fine. Anecdotal benchmarks like this one https://github.com/kostya/benchmarks will show you that Nim comes out at the top (so does Crystal for some, FWIW).

As always, take benchmarks with a grain of salt: https://github.com/kostya/benchmarks/pull/26

Or this discussion (which is two months old, so unsure what exactly has changed since. Looking at commits, not much?) http://www.reddit.com/r/programming/comments/30y9mk/go_vs_ru...

Or this one, though it goes off track a bit: http://www.reddit.com/r/rust/comments/38s1n3/why_is_rust_muc...

> I'm looking for something beautiful like Ruby but fast like Go.

Have you tried InfraRuby? InfraRuby is a compiler and runtime for statically typed Ruby: http://infraruby.com/blog/why-infraruby

What makes InfraRuby better than Crystal (or vice versa)?

InfraRuby code runs in Ruby interpreters.

Cool. I assume that makes interop with Ruby easier?

Yes! Ruby language features supported by InfraRuby have the same behavior as Ruby interpreters.

InfraRuby projects are created with rake tasks to run your code with either the InfraRuby runtime or Ruby interpreters. You could implement a file format or protocol in InfraRuby, and use that code in a Ruby project too. You could use Ruby interpreters for development and the InfraRuby runtime for production.

And if you decide that InfraRuby is not for you then you can take your code with you!

CrystalGems would really be a boon!

From your description, it sounds like Crystal is exactly what you want. But yeah, I don't know if it has a library ecosystem yet, or anything like that.

I'm a Go user right now, but I really want to ditch it because I'm dismayed by the way the Go language is managed and the deafness of the Go team. Been looking at D, Nim, and Crystal.

- D is neat but I'm not interested at all in unsafe stuff and don't want to have to debug programs or 3rd-party libs that rely on that; I want a safe language.

- Nim looks really good, although some features like (foo_bar = FooBar) are just disgusting

- While Crystal is new and libs are non-existent, it feels like a good candidate for the long run. I hope it will have the same concurrency capabilities as Go. Good luck with the language.

Quick fix, in nim:

  foo_bar = fooBar
  Foo_bar = FooBar
  foo_bar != FooBar

That is, it's case-sensitive only in the first character of the identifier.

I've mentioned elsewhere that this one bugged me at first, but in practice it simply means that you get to use the language as if your preferred convention (whether snake or camel) is the "official" one, even when calling other libraries. Nothing more and nothing less. Cases where you need both styles but need them to be different things tend to be non-existent or a code smell.

I am assuming your requirements are statically typed with GC, no separate VM, and lightweight runtime. One language I have been watching with a keen eye lately is Pony[1] though the docs are not quite complete.

1 - http://ponylang.org/

D has a memory safe subset (@safe). You could argue that Rust 3rd party libs rely on unsafe blocks.

The person you're replying to wasn't even talking about Rust. You'd also be vastly overestimating the amount of Rust libs that need unsafe code. For example, Rust's most mature web framework, the one powering crates.io, doesn't use unsafe code at all: https://github.com/iron/iron

Crates.io uses Conduit, not iron.

Ah, so it does. :) Fortunately Conduit doesn't use any unsafe either.

Have you looked at the new generation languages on the JVM such as Kotlin or Ceylon?

What about Rust though?

If you don't like unsafe stuff, perhaps you should be a bit wary about Nim as well. The attitude seems to be that unsafe things aren't that big of a deal as long as you have decent tools to debug them.

Well if you have decent tools to debug them, why should he be wary?


The null pointer analysis is great; I hope this stuff pollinates some of the more mainstream languages.

I don't see a reason for null to exist in a new language in 2015.

How do you model a "zero or one" relationship without null?

Maybe your answer is "with Optional" (or Option, or Maybe). We just choose to use union types and have "Nil | T" (Nil or T) be the same as "Option(T)" in other languages.

Yes, Maybe (Optional) is the way to go. The difference is, with nil you basically make every type optional allowing it to have nil as a value.

Well, in Crystal Nil is a separate type that can be combined with others. But, say, a String is always a String, it doesn't implicitly have the Nil type. Same goes with every other type.

Maybe you are thinking of Java/C#, where reference types can also be null, but this is not true in Crystal. It's also in a way similar (but not quite) to Swift, where optional types are different than types that can't be null.

It's a nice approach, I like it. There is a difference between Option[T] and (T | Nil) that's worth mentioning, however.

Option[T]'s are composable. For example, let's say we have a "get" method to get the value for a given key, whose type looks like:

    get :: String -> (T | Nil)

If we were using Option[T]'s, it would look like:

    get :: String -> Option[T]
So let's say we have a map, and want to lookup a key (syntax is made-up):

    let m: Map[String,(Int | Nil)] = make_some_map()
    let result: (Int | Nil) = m.get("some-key")

If result is nil, was the value of the key nil, or was the key not in the map?

With Option[T]:

    let m: Map[String,Option[Int]] = make_some_map()
    let result: Option[Option[Int]] = m.get("some-key")

Here result will either be None, in which case the key wasn't in the map, or Some(None), which means the value of the key was None.

So there is an observable and potentially useful difference between (T | Nil) and Option[T].
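Ruby's Hash shows the same ambiguity in miniature, for what it's worth: `[]` returns nil both for an absent key and for a nil value, so an extra query or a sentinel default is needed to tell the cases apart (the keys below are made up):

```ruby
m = { "a" => nil }

# Both lookups return nil, so the nil-as-union result is ambiguous:
present_but_nil = m["a"]  # key present, value is nil
absent          = m["b"]  # key absent

# Recovering the distinction requires asking the map a second question,
# or using fetch with a sentinel default:
puts m.key?("a")             # prints "true"
puts m.key?("b")             # prints "false"
puts m.fetch("b", :missing)  # prints "missing"
```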

Do you have a technical write-up of how Crystal does that? Or otherwise some links/papers that explain the principle in a language accessible for someone who is basically self-taught and lacks a formal CompSci education?

Sounds exactly like union types in Ceylon (http://ceylon-lang.org/documentation/1.1/spec/html/introduct...), which is an interesting approach.

I find the distinction important as it allows one to establish a contract without falling into defensive programming. Null pointer analysis is great, I admit, but how would it help to write a library function without checking explicitly whether its parameter is nil?

It's ONE way to go. There are others.

Sum types supported in the language such as in Ceylon is another one.

And yet another one is safe dereferencing operators (?.) such as in Kotlin.

And to be honest, (T | Nil) is pretty difficult to distinguish from (Maybe a = a + 1). Complaints feel difficult to motivate to me anyway.

They both accomplish the same thing: no unexpected null. The exact mechanism was never the interesting part. Maybe being monadic does offer some other benefits but the killer advantage is the responsible handling of nil. Crystal doesn't seem to have a foundation in monadic programming so the type unions seem like a reasonable approach there.

The advantage of the monadic approach is that it's easy to abstract over, because it's just an ordinary type in the language. So e.g. in scala I can call the same "sequence" method on a List[Option[Int]] as I do on a List[Future[Int]] or a List[ErrorMessage \/ Int]].

Unions seem like more of a language-level feature, so I'm not sure you could abstract over them in the same way.

I totally agree. Ridding yourself of nils via the Maybe monad offers incredible abstraction potential, but would feel out of place in Crystal's Rubyishness without deeper thought into bringing other monads into play too.

Nonetheless, I am thrilled that we are seeing more and more languages that don't have implicit nullability on any type.

I re-read my initial comment and can see some ambiguity in my phrasing "Maybe being monadic ...". That wasn't me saying "maybe it is true that being monadic..." but rather "the fact that Maybe is monadic..."

That just depends upon whatever type abstraction capabilities Crystal grows. It probably won't be getting higher-kinded nominal types so generalizing over Monad is sort of dead in the water anyway.

> How do you model a "zero or one" relationship without null?

With something that leaves "Nullpointer analysis" obsolete/useless.

A tiny observation here: we don't do "null pointer analysis". We do this kind of analysis for every type: if you invoke a method on a type and that method doesn't exist, you get a compile error. In the case of a type union, all types must respond to that method. "Nil | T" is just one particular type union, but you can also have "Int32 | String", etc.

Thanks, that clears things up. I assumed that Nullpointer analysis was a special case.

To be replaced with an analysis that finds unmatched cases (like None when you assumed Maybe)?

Why would a constructor be confused with a type constructor? I guess I just don't have any experience with this particular approach to "sums". Untagged variants?

I love that we're getting new languages lately, but almost all of them seem to ignore the significant new requirement of our age: parallelism & concurrency.

Specifically, you need lightweight processes and a no-shared-memory architecture.

While the number of cores on a machine remains relatively low, the number of machines in a system is going up.

Erlang got this right and built a lot of infrastructure around it (OTP), and while you can't replicate that infrastructure quickly, you can get the fundamentals right.

Simply getting this wrong rules out a lot of languages from consideration (because why learn a new language that is going to be obsolete, or only chosen by people who don't understand how to build systems?)

It's a lot easier to get this in when you're new and can make major changes to the language. Once you start to solidify, such changes would break things - this is why Go's fake concurrency is a tragedy and a huge missed opportunity.

I have to disagree with you on this.

A language should be expressive enough that it can easily support correct concurrent programming, and readily expose runtimes (OTP is a great example) that have support for concurrency and distributed programming.

However, patterns like message passing instead of shared memory are not a panacea. Deadlocks and non-deterministic behavior can still happen in such systems if programmers do something stupid.

Enforcing these patterns at the language level smells of the old "let's make a language that doesn't allow programmers to write bugs!" trap.

> Specifically, you need lightweight processes and no-shared-memory architecture.

No you don't. I mean, that's one way to do it, but it's not the only way, and it's often not an option if you want real performance. CUDA (a language specifically designed for SIMT parallelism) supports shared memory for good reasons; the threads are also not lightweight in that way (though they rely on memory and branch coherence).

It can be argued that shared memory should be explicit-when-needed rather than everywhere-by-default though.

It can also be argued the other way, of course. Do you like eating vegetables or ice cream? It's like garbage collection, some people love it because it frees them from worrying about allocating and freeing memory. Others hate it for the same reason.

You are right, it wouldn't be wise to create a new language without having concurrency in mind.

Starting with the latest version there are spawn (lightweight processes) and channels for communication, similar to Go (although it's in a very experimental stage right now). We aren't sure immutability everywhere is the solution for everything; some algorithms and programs are much more efficient with mutable data.

I think Go is quite popular and has amazing concurrency support. It's true, you have to make sure to communicate using channels and not via shared memory, but if you follow that rule then you won't have problems, and you can get pretty efficient programs.

>> I love that we're getting new languages lately, but almost all of them seem to be ignore the significant new requirement of our age: parallelism & concurrency.

Yes. I expect this to be the norm in future languages. App developers shouldn't have to worry about these kinds of things, and we can just focus on making apps. Parallelism and concurrency should just work out of the box.

I largely agree with what you are saying, but doesn't it only apply to languages that will be used in back end systems? If you are just making a phone app or something, it doesn't seem like parallelism is nearly as important.

Phones are getting more and more cores. It won't be long before languages for mobile need to solve the same issues - parallelism and concurrency.

http://elixir-lang.org - Ruby-like syntax on the Erlang VM: massively scalable, really fast, and fault tolerant.

Just curious, why Crystal? It looks just like Ruby. What problem(s) are you addressing with Crystal?

We like the way Ruby lets you quickly prototype things, but its performance isn't very good (it's just good) and it also lacks static type checks (for example "undefined method '...' for Nil" is a very common runtime error).
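The error mentioned is easy to reproduce in plain Ruby, where a lookup that misses returns nil and the mistake only surfaces later, at the call site; in Crystal the same expression is rejected at compile time because its type is the union String | Nil. A minimal Ruby sketch:

```ruby
# A hash lookup that misses returns nil; the error only appears at
# runtime, when a method is finally called on the unexpected nil.
users = { "alice" => "Alice" }

error = nil
begin
  users["bob"].upcase        # users["bob"] is nil
rescue NoMethodError => e
  error = e
end
puts error.message           # "undefined method ... for nil" (wording varies by Ruby version)
```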

So, we are trying to create a language with all the nice aspects of Ruby but with static checks and better performance. Of course that comes at a price: no dynamic aspects (no eval, no instance_eval, no methods or classes created at runtime, no methods redefined at runtime, etc.). But we try to compensate those with macros and compile-time reflection.

One thing that is definitely not one of our goals is to replace Ruby: every tool has its place.

People talk about how Crystal's performance is better because it's statically compiled and removes Ruby's dynamic features, but I'm not sure static compilation is the best way to achieve performance, and I don't think the dynamic features need to damage performance.

For example, JRuby+Truffle runs Crystal's own sample programs around twice as fast as Crystal does, without static compilation, and while still supporting all the metaprogramming and dynamic features of Ruby such as monkey patching, send, method_missing, set_trace_func, ObjectSpace etc.


However Crystal does start faster, and I'm sure it has lower overhead.
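Three of the dynamic features listed above, sketched in plain Ruby (these are exactly the behaviors an ahead-of-time compiler has trouble keeping):

```ruby
# Monkey patching, send, and method_missing in plain Ruby.
class String
  def shout                # monkey patch: reopen a core class at runtime
    upcase + "!"
  end
end

class Proxy
  def method_missing(name, *args)   # intercept calls to unknown methods
    "intercepted #{name}"
  end

  def respond_to_missing?(name, include_private = false)
    true                   # keep respond_to? consistent with the above
  end
end

a = "hi".shout             # => "HI!"
b = "hi".send(:shout)      # dynamic dispatch by method name
c = Proxy.new.whatever     # handled by method_missing
puts [a, b, c].inspect
```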

Did you compile the Crystal program with the --release flag? That turns on optimizations. On our machines we see that Crystal is faster in this benchmark: it always takes about 1.6s, while JRuby+Truffle reaches 2.38s at most.

No, I didn't, sorry!

I don't see that option documented anywhere except the changelog, and one passing reference in the docs that says it sets the release flag, but neither say it has any effect on optimisation.

Yeah, we need to document this better, sorry!

Get used to this, watching people run benchmarks after having forgotten to compile with optimizations is basically a meme in the Rust community. :)

I don't think it's a big deal, benchmarks are just toy programs. Once you learn about the `--release` flag you never forget it for production-ready code.

C has -O3, why isn't it the default? Because it takes a lot more time to compile. So I think no optimizations by default is the best choice. And I think Rust should do the same, you will be compiling more things in non-release mode than in release mode.

Compiling without optimizations is indeed the default in Rust, but we have seen such an unbelievably large number of people not realize that a `--release` option exists that there have been several debates regarding whether or not this should be changed. Instead we've stepped up our documentation, we've made the package manager tell you which mode it's compiling in, and we've made Cargo put the finished binaries in either a clearly-named `debug` or `release` directory... and yet still they crash upon our walls, waving benchmarks where Rust is seemingly three times slower than Ruby.

It may seem obvious to you and me that compilers have optimization levels, but consider how many people for whom Java, with no optimization level flags, is their only exposure to a manually-invoked compiler.

Right, but if you want the static checks for safety, then even if you run on something like Truffle, you will lose a lot of dynamic behavior. Either that or your static checks won't be able to check as much, and will give you weaker guarantees.

I recommend adding something about static type checking to the list at the top of Crystal's homepage. I saw "never have to specify the type of...", and believed it was another dynamically typed language.

I actually didn't know it had static type checking until I had closed the tab, and glanced at this comment you posted here. (I know, I didn't read very far in the linked page.)

That aside, it looks neat! :)

You are right, it's far from obvious after reading that list. I updated it. Thanks!

I came across this a few months ago and ignored it for similar reasons: the homepage doesn't talk about the fact that nil is a type not a value (which is AWESOME, btw). It wasn't until a coworker encouraged me to dig into the docs that I realised Crystal is actually pretty sweet.

I recommend stating something like "No unexpected nils at runtime" and for those curious, a link for where to read more. Not having nil is basically the most important feature in most new languages I take the time to play with. Now that I know Crystal treats nil responsibly, I'm really keen to start playing with it.

Macros are something I really miss in Ruby. But static classes strike me as a bit of a limitation. E.g. a Crystal REPL can't create classes, if I understand it correctly.

It's a compiled language.

A statically-typed compiled language.

The important thing is that it’s statically typed. All languages are compiled nowadays, except Bash. Ruby, Python and the likes all use VMs, and the code you write is compiled to VM bytecode before being executed.
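CRuby even exposes this compilation step from within the language (an MRI-specific sketch using the `RubyVM::InstructionSequence` API):

```ruby
# MRI ("CRuby") compiles Ruby source to YARV bytecode before running it;
# RubyVM::InstructionSequence exposes that compilation step directly.
iseq = RubyVM::InstructionSequence.compile("1 + 2")
puts iseq.disasm   # lists YARV instructions such as putobject / opt_plus
puts iseq.eval     # runs the compiled bytecode; prints 3
```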

Awesome! Did you consider synchronizing your syntax with Mirah / typed Ruby? Also, what's your opinion on the Vala/Genie toolchain?

How long did it take to become self hosting? I checked the first tag on GitHub and it was still implemented in Crystal. Did you originally implement it in C or another language?

It took about one year. We originally implemented the compiler in Ruby and because of its similarity with Crystal (or the other way around ;-)) porting it to Crystal was pretty easy in the end.

There's this tag: https://github.com/manastech/crystal/releases/tag/ruby

You can read about the bootstrapping moment here: http://crystal-lang.org/2013/11/14/good-bye-ruby-thursday.ht...

From what I understand in another post of theirs, Crystal used to be written in Ruby, before they successfully compiled it using Crystal.

I've been looking at the benchmarks https://github.com/kostya/benchmarks and am pleasantly surprised about the speed. It definitely smokes Ruby, but also is usually faster than Go. I know that all benchmarks are relative but Crystal seems a great language from a performance viewpoint.

Indeed, it's becoming quite promising (I'm already using Crystal for a few pet scripts which were too slow even for Rubinius). Where it's most lacking at the moment is GC - it uses the stop-the-world, off-the-shelf Boehm GC, which is OK but not exactly great for memory-heavy tasks.

IMHO, starting a new language with Boehm GC is a very bad design choice. It means that one just allocates memory and does not care about managing it. Even worse, it prevents linking any library that manages its own memory, e.g. a second thread running Lua+C, because Boehm GC scans the complete memory, not only the memory the language has to manage.

You basically need 3 types of memory: first for the objects in your language, second for foreign lightweight objects where you only know a pointer, and third for foreign heavyweight objects that get their memory from your GC.

> it uses stop-world

A fully concurrent GC is impossible if variables are mutable. No matter how cleverly your GC delays the problem, there will come a point where it has to stop all threads to collect the edge cases. This creates the GC dilemma, because currently only the number of cores and the amount of memory keep getting cheaper, while single-core performance has stayed roughly the same for nearly 15 years.

In the beginning Crystal didn't free memory. We needed a GC and Boehm was a super easy way to get that. It worked out of the box with very little effort.

Eventually we can write our own GC or use another one. It's only a matter of time. But right now there are more important things, we think: finishing the language rules, stabilizing things, fixing bugs, completing the standard library and writing documentation.

Nothing is set in stone in a language, things can always evolve and improve.

> A fully concurrent GC is impossible

That's a common misconception; of course GC write barriers for mutation can be made atomic. But it comes at a significant synchronization cost, and a fully shared world like C's does not seem like a good design decision anyway in a high-level language like Crystal.

Which is why I'd be more in favour of refcounting, and letting the user make the choice: a simple stop-the-world mark-and-sweep GC is alright for tasks that can afford the higher memory usage and pauses (one gains good throughput), while RC is good for low latency and low memory usage (at the cost of lower throughput and higher cache pollution).

Regarding multi-core threading, Crystal has next to none. All modern GC design decisions depend on how exactly multi-core threading will eventually be implemented, hence fixing the GC is not a priority, but RC could be readily useful.

Wow! Ruby was the first programming language I learnt and I still love it. Crystal is going to add features I've always missed: static typing and compilation. Thank you guys, it will be awesome!

You missed compilation?

He probably meant the results of compilation: superfast binaries ;)

In that case, what he missed was AOT optimization :)

Many Lisps have had compilation to binaries. But having a binary doesn't imply being faster than an interpreted language.

Theoretically, a JIT optimization can end up with faster code than an AOT optimizer can produce, thanks to runtime profiling.

In fact, I'd be interested in seeing benchmarks, which Crystal is probably not ready to share, but may have conducted: https://github.com/manastech/crystal/blob/master/samples/fan...

Edit: this[0] seems to indicate that the optimizations are in the same ballpark as Go and D, which is really impressive.

[0]: https://github.com/kostya/benchmarks

> Edit: this[0] seems to indicate that the optimizations are in the same ballpark as Go and D, which is really impressive.

not sure about D, but Go is known to have very little in the way of optimisation, that's a big factor in its speedy compilation.

At first look, I like it. It would be great if we could build a Ruby-to-Crystal bridge so all the Ruby goodness could run on top of it without requiring programmers to learn a whole new language. No GIL, faster than Ruby but syntactically close enough: the dream.

The current channels implementation uses fibers, i.e. everything runs on a single thread and only the stack is switched. There is no GIL.

If you want true multithreading, simply use pthreads like you would in C. Just make sure to mark the GC roots of new thread stacks (the GC is thread aware). But this also comes with a lot of headaches, as you're now responsible for synchronizing everything and managing mutexes by hand.

I made a collection of Crystal Language web/Git information pages. https://github.com/nob-suz/crystal/wiki/1.-Read-First-%28gen...

See wiki pages 1 to 9. They should be helpful for your first try of the Crystal language.

Wow, I started designing a language called Crystal in the 90s, which a) looks similar to this, and b) I'm pretty sure I even bought the domain crystal-lang.org at one point, but I never did anything with it! I had a very similar logo too :)

Python is to Nim as Ruby is to Crystal.

More like Crystal could become to Ruby what RPython is to Python.

That makes no sense. RPython is not a general-purpose development language, it's a strict subset of Python for writing VMs. From what I understand Crystal aims to be a general-purpose statically-typed native-compiled language with a syntax similar to Ruby in much the way Nim aims to be a general-purpose statically-typed native-compiled language with a syntax similar to Python.

Neither Nim nor Crystal exists solely to write VMs and auto-implement JITs, and programs in neither are expected to run unmodified on a VM for the inspiration language.

Now someone convince Matz to reimplement Ruby in Crystal, add optional typing to CRuby, and you have won big time.

Actually, that's already happening[1]. Also note that there is a library for "ruby contracts[2]" (which I didn't try yet, but I'm very eager to do so) which could give the same result as a statically typed language.

[1] https://www.omniref.com/blog/blog/2014/11/17/matz-at-rubycon...

[2] http://egonschiele.github.io/contracts.ruby/.

Except for the parts where you've spent years reimplementing an implementation-defined language and lost the existing C API, so almost any C extension that doesn't use Fiddle (or similar FFI mechanisms) is incompatible.

Now I just can't wait till Rails or something similar gets ported to Crystal.

Please don't be inspired by Ruby, please... aaaah god dammit!
