Crystal 0.26.0 released (crystal-lang.org)
102 points by parvenu74 on Aug 13, 2018 | 40 comments


Some grumpiness in quite a few comments. Pity. This is an awfully nice language to work with, and the toolchain is starting to grow up - the compiler is much speedier these days than it used to be. Also: a real grassroots effort unencumbered by megacorporation agendas (yes, I know, Manas is a comparatively small Argentinian company, and kudos to them for getting the ball rolling). Give these guys the hand they deserve. If Crystal keeps going the way it promises, parallelism will be there, and it will be a joy to handle.


Pro tip: when posting to HN, include a summary of "what it is"! It's often hard for readers to figure it out from a changelog.

From the docs:

"Crystal is a programming language with the following goals:

Have a syntax similar to Ruby (but compatibility with it is not a goal).

Be statically type-checked, but without having to specify the type of variables or method arguments.

Be able to call C code by writing bindings to it in Crystal.

Have compile-time evaluation and generation of code, to avoid boilerplate code.

Compile to efficient native code."

See: https://crystal-lang.org/docs/


I keep thinking it's Crystal Reports every time it shows up on HN.


Me too. Or whenever I see it mentioned in this Slack I'm in. It was very confusing last December, when one guy kept solving Advent of Code in Crystal, and doing so with speed.


You are one click away from the home page, where there is a nice explanation.


The homepage description dives right into “syntax” without ever explaining what the project is!


> Be statically type-checked, but without having to specify the type of variables or method arguments.

My experience with both Go (type annotations always) and Python (type annotations sometimes) has me pretty skeptical about this misfeature (perhaps someone with Crystal or OCaml experience can tell me why I'm wrong).

When implemented properly (i.e., like Go, not like Java), code with annotations is more readable (this isn't controversial), and adding them is a matter of keystrokes. Note that this isn't a static vs dynamic argument; Crystal is still a static language and you still have to deal with type errors, so you don't get the "get the happy path working at the expense of bugs in other paths" rapid-prototyping benefits that dynamic languages boast about; you just hit fewer keys (maybe this is Crystal's workaround for supporting multi-char scope delimiters!).

I'm guessing that in practice you have a mediocre editor that will show you the type annotations with some nonzero amount of effort, and when you're doing code review, you're just forced to jump around to figure out what the types are? Or depend on docstrings that grow stale? These seem like a lot of costs, not to mention the effort from the dev team to support this "feature", all to save users a few keystrokes?


In case it's not clear from the above: Crystal supports type annotations, they are enforced by the compiler, and it's not uncommon to use them in practice.
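
For instance, a minimal sketch (the `double` method here is made up for illustration):

    def double(x : Int32) : Int32
      x * 2
    end

    double(21) # => 42
    # double("21") # won't compile: no overload matches 'double' with type String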

However, there are some cases where the interface for an object is obvious but hasn't been named formally. For example, consider the following valid Crystal method:

    def json_vs_yaml(obj)
      puts obj.to_json
      puts obj.to_yaml
    end

In a more strictly typed language (that doesn't support intersection types), I couldn't do that without first defining some interface that includes `to_json` and `to_yaml` and then maybe convincing the compiler that my objects qualify for the interface (by extending their class definitions or casting at the call site), which is a lot of work to enable a trivial method whose purpose is already clear. In Crystal, the compiler just checks that whatever you're passing into the method has `to_json` and `to_yaml` defined and lets you get on with life.
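
For contrast, a rough sketch of what that ceremony looks like if you do name the interface in Crystal itself (the `JsonAndYaml` module and `Config` class are made up for illustration):

    module JsonAndYaml
      abstract def to_json : String
      abstract def to_yaml : String
    end

    class Config
      include JsonAndYaml

      # Including the module obliges us to implement both methods.
      def to_json : String
        %({"debug":true})
      end

      def to_yaml : String
        "debug: true"
      end
    end

    # Now the parameter can be annotated with the named interface:
    def json_vs_yaml(obj : JsonAndYaml)
      puts obj.to_json
      puts obj.to_yaml
    end

    json_vs_yaml(Config.new)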

In more complex cases, I almost always find myself including type annotations in method signatures (which will be enforced at compile time) anyway because it makes writing the method easier. And I certainly won't tell you you're wrong to prefer a language with more stringent requirements, especially if you have to work with other people's code a lot. But I do believe that there is a minority of cases where leaving off the type increases the clarity and conciseness of the code.

Another good case is removing redundancy on wrapper methods:

    def foo(x : Int32) : Bool
      x == 12
    end

    def add_and_foo(left, right)
      foo(left + right)
    end

Here, `foo` is fully specified and doesn't rely on type inference at all. `add_and_foo` automagically works on any types that return an Int32 when you add them. If I later update `foo` to operate on a different numeric type, `add_and_foo` still just does the right thing without needing any edits. Even though in practice `add_and_foo` as written is probably going to operate on other Int32s all the time, conceptually the method's parameters are of type "whatever foo takes" and its return type is "whatever foo returns", and letting the compiler enforce that via type inference communicates this intention more clearly than specifying those types concretely.


> In case it's not clear from the above: Crystal supports type annotations, they are enforced by the compiler, and it's not uncommon to use them in practice.

Yes, I considered that when I was writing my post. :)

> In a more strictly typed language (that doesn't support intersection types), I couldn't do that without first defining some interface that includes `to_json` and `to_yaml` and then maybe convincing the compiler that my objects qualify for the interface (by extending their class definitions or casting at the call site), which is a lot of work to enable a trivial method whose purpose is already clear. In Crystal, the compiler just checks that whatever you're passing into the method has `to_json` and `to_yaml` defined and lets you get on with life.

> Another good case is removing redundancy on wrapper methods:

I fully agree that there are cases where the relative number of keystrokes saved is considerable; however, these cases are quite rare, and keystrokes are much, much cheaper than readability. Also, structural subtyping (aka protocols aka 'static duck typing') allows the compiler to determine whether or not a type implements an interface without an explicit `implements` declaration.

In any case, I don't want to put too fine a point on it. In the worst case it's a relatively minor inconvenience and perhaps it adds more ergonomics than I'm giving it credit for.


No note of parallelism on the roadmap?

https://github.com/crystal-lang/crystal/wiki/Roadmap


iirc, it was removed/postponed to make reaching 1.0 more attainable


That just seems nuts in a world where AMD is putting out affordable 32 core chips.


I'm a big fan of shared memory parallelism from a performance standpoint, but I don't see any reason why v1.0 of a language must support this. Stable APIs enable a lot more than shared memory parallelism, which is, for the most part, a performance optimization. If you can delay the parallelism story by a few months to move the stability story up a year, why not?


I would call it much more than just a performance optimization.

A good parallelism story is fairly high on the list of anyone looking to move away from a GIL language like Python and, I believe, Ruby.

As I'm sure you're aware, parallelism isn't always easy to bolt on afterwards. Once a lot of your code has been written in violation of your (unwritten) memory model, you're never quite sure whether you've gotten rid of that last race condition.


Crystal's programming model is about concurrent actors that don't share memory. Nothing should break when they move to multiple threads/event loops; it just needs the scheduler to schedule multiple fibers onto different threads. It's a solved problem.


It's actually pretty low on the list. Ruby already has a production-ready implementation without a GIL: JRuby.

Normal workloads like serving web requests or processing background jobs aren't impacted by the GIL. You simply run one process per core, just like with NodeJS. Huge, huge sites successfully run on Ruby/Python: YouTube, GitHub, AirBnB, ZenDesk, Zapier, etc.

Larry Hastings has some talks online about the Python GILectomy. The major issues are around how to keep the VM simple and fast. IIUC the C extension issues are fairly minor in comparison because the GIL never guaranteed much anyway.


You're describing a performance optimization. Yes, if you write unoptimized code, it often takes some refactoring to support the optimization. Also, Python and Ruby are proof that languages can be tremendously successful at solving a huge array of problems without shared memory parallelism in v1.0. Besides, Crystal is presumably much faster than either of these languages with a single thread, so the relative advantage of (CPU) parallelization is much less.


There are very few cases where threading is essential, where you cannot work just with communicating processes. I cannot think of anything but FPS games (perhaps you can).

Crystal already has fibers, eventloop and CSP-style channels. See https://crystal-lang.org/docs/guides/concurrency.html
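
For a taste of what that looks like today, a minimal sketch along the lines of that guide:

    # Fibers communicate over channels; today the scheduler
    # multiplexes them onto a single thread.
    channel = Channel(Int32).new

    10.times do |i|
      spawn do
        channel.send(i * i)
      end
    end

    10.times do
      puts channel.receive
    end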

Not all languages have to be good at everything. The IEEE 2018 top programming languages list puts Python at the top; how's its parallelism story these days?


Do you have a reference for this? As far as I can tell, core devs are still aiming to ship parallelism w/ 1.0: https://news.ycombinator.com/item?id=13213619. But Windows support might be held back: https://dev.to/dougeverly/comment/lob.


My head shakes with confusion


Why? If they held off on releasing new versions because of one big blocking feature (concurrency), even though there are other things to fix and improve in the meantime, they might lose people from the community. While your language is in beta you want to keep polishing it as it develops, and keep making it easier for your users.

I see no reason not to release these changes to the world just because concurrency isn't fixed yet:

> This release is focused on polishing APIs, bug fixing the compiler, keep working on windows support and some intermediate language changes for future releases. There were 119 commits since 0.25.1 by 24 contributors.


> just because concurrency isn't fixed yet

Concurrency works great in Crystal. It's "parallelism" that they don't have yet.


No one's saying don't release what's good. They're just saying parallelism is worth delaying the big 1.0 for.


Ah, I understood it as "concurrency (actually parallelism, thanks devmunchies) is delayed until the big 1.0"


Erm, "Add multithreading support" is the very first item?


Multithreading and a GIL give you concurrency, not parallelism. E.g. you can spawn many threads in MRI, but they'll still be waiting on each other. Hell, I even implemented[0] Go-style CSP channels and coroutines atop Ruby's Thread and Queue (which uses Thread), fully aware it would not magically give me parallelism.

[0]: https://gitlab.com/lloeki/normandy


That's all fine and dandy but irrelevant since Crystal doesn't have a GIL (or, indeed, an interpreter). Threading does imply parallelism in this context.


Indeed, and there seems to have been some work done to great effect[0]. Unfortunately the corresponding branch is quite stale[1].

Anyway, my comment was more about the fact that the "Concurrency / Add multithreading support" section of the roadmap was quite unclear about parallelism. Mentioning MRI's GIL was merely an example of how any kind of thread interlocking can turn things into non- (or very limited) parallelism, which makes patagonia's question quite legitimate.

[0]: https://twitter.com/sdogruyol/status/833369972919382019

[1]: https://github.com/crystal-lang/crystal/tree/thread-support


> In 0.25.0 we tried to improve the inferred type of unions of sibling types. Although it worked well for the compiler itself, some codebases out there exhibited some combinatorial explosion of unions due to this change.

What is a union of sibling types? Why would improving type inference require unions to be _generated_? I skimmed the PR referenced in the changelog, but it assumes domain knowledge I don't have (I'm only nominally familiar with Crystal).


I'm not a Crystal developer but I think this is accurate:

The way Crystal's type inference works, if you have a variable that can sometimes be assigned a Foo and sometimes a Bar, then that variable will have the inferred ("generated") union type (Foo | Bar). This usually works great, but the compiler runs into performance issues if these unions become very large (more accurately, it happens when the number of distinct such unions that get passed to a particular method is large, for which a large possible union space is a precondition).
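
A quick sketch of that inference (`typeof` reports the compile-time type):

    # x is sometimes an Int32 and sometimes a String,
    # so its inferred type is the union (Int32 | String)
    x = rand < 0.5 ? 1 : "one"
    puts typeof(x) # => (Int32 | String)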

In practice, this tends to happen in class hierarchies with many descendants of a single root class. As a somewhat kludgey workaround, the compiler will simply forget about the union in these cases and use the parent class instead. For example, a method that looks like it should take a (Bat | Cat | Dolphin | Dog | KoalaBear | Hog) will actually just take a Mammal. This can be bad if Mammal doesn't implement all the common capabilities of those specific subtypes. For example, adding an Echidna subclass that doesn't implement the birth_live_young method can cause calls to `cat_or_dog.birth_live_young` to fail even in places where type inference "should" be able to determine that `cat_or_dog` will never hold an Echidna. However, it's deemed a necessary evil unless/until the compiler can be reworked to handle such cases without exploding compile times.
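
A rough sketch of that fallback behavior, using the made-up hierarchy from above:

    abstract class Mammal; end

    class Cat < Mammal
      def birth_live_young; end
    end

    class Dog < Mammal
      def birth_live_young; end
    end

    class Echidna < Mammal; end # lays eggs; no birth_live_young

    # The union (Cat | Dog) gets widened to the parent type Mammal,
    # so the compiler considers every subclass, including Echidna,
    # and this call fails to compile:
    cat_or_dog = rand < 0.5 ? Cat.new : Dog.new
    cat_or_dog.birth_live_young # undefined method 'birth_live_young' for Echidna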


One possibility is to generate a "protocol" (a class implementing the intersection of the methods of all the types in the union) and fall back to that instead.


That makes some sense, but I don't understand why the explosion would be combinatorial and not linear.


If a method can handle n different types at runtime, then there are 2^n - 1 possible union types that can be passed to the method: A, B, C becomes A, B, C, (A | B), (A | C), (B | C), (A | B | C), and that's without taking into account generics and nested type hierarchies. Not all of these unions will necessarily come up organically in code, but the class hierarchy of AST nodes used within the compiler was large enough that it was causing issues in real life.


Correct me if I'm wrong, but you don't need to generate every combination to determine whether or not a given union type can be passed into a method, right? If the parameter is of type `A | B` and the function takes `A | B | C`, you don't need to generate the whole space to determine that `A | B` satisfies `A | B | C`. Also, for v1.0 of the language, it doesn't seem like it would be a big deal to put the onus on developers to recast their `A | B` to an `A | B | C`.


It's not just about validating types. If B and C are implemented sufficiently differently, the compiler output for a function that takes (A|B) will be different than for the same function that takes (A|C), and both of those outputs can end up coexisting in the final executable. Unfortunately I don't know enough about the compiler internals to go into further detail.

As for manual recasting, sure, it's possible, but it's not a great tradeoff. Either you do type inference correctly and have developers crashing the compiler unless they recast their variables in seemingly arbitrary ways, or you cheat on inference and have developers getting "undefined method" errors on classes they didn't think they were using until they learn the rules. Ultimately it was decided that the latter was less common and easier to deal with in practice (just add a PlacentalMammal subclass to your class hierarchy that includes Cat and Dog but not Echidna). It's not a decision anyone is thrilled with but I think it's reasonable.


I guess "larger binaries at v1 but correct" and "slightly less ergonomic at v1 but correct" both seem like pretty good tradeoffs to my mind, but at this point we seem to be pretty deep into speculation.


People interested in Crystal might also be interested in Inko.

It's Ruby-ish but less compatible: https://gitlab.com/inko-lang/inko

> "Safe and concurrent object-oriented programming, without the headaches. Inko is a gradually-typed, safe, object-oriented programming language for writing concurrent programs. By using lightweight isolated processes, data race conditions can not occur. The syntax is easy to learn and remember, and thanks to its error handling model you will never have to worry about unexpected runtime errors."


[flagged]


Geez, man. This isn’t high school. Perhaps Yorick has thick skin about it, but maybe he doesn’t. Either way, what good does poking fun at it on an Internet forum do? How insensitive are you? I don’t even need to get started on how an individual deleting a production database is significantly more of a company failing than any sort of individual mistake.


Is there some context I'm missing here?

Edit: Ok I figured it out. Apparently the author of this language Inko once made a mistake and deleted a database. Big deal.

https://thenewstack.io/junior-dev-deleted-production-databas...


Great reply. Also, it's very clear Yorick is a smart guy.

And who doesn't fuck up?



