The Crystal Programming Language (github.com)
306 points by sndean on Dec 19, 2016 | 193 comments

I've been using and contributing to Crystal since 2015 and I can easily say that it has really great potential. It's simple, fast, and productive from day 1.

I wrote a web framework (http://kemalcr.com/) and it has been in production without any problems for more than a year on a large-scale app. You can check my slides for some interesting graphs (http://www.slideshare.net/sdogruyol/kemal-rubyconfbrasil-201...)

Also, be sure to check out http://crystalshards.xyz/ to discover Crystal projects (shards).

What a surprise to see Crystal at the top of HN again! I've been using and contributing to Crystal since 2015 too, and I've come to really love the language and community. A big thank you to the whole community for being so welcoming!

What I've felt with Crystal is that I find it an immensely practical language. Maybe that stems from the compiler being written in Crystal. The stdlib has so many things in it that make everyday work in the language a joy. For example JSON/YAML/DB.mapping, which help you map objects to and from JSON, YAML, and the DB (they're also really fast). Including a standard, performant HTTP stack in the stdlib is a breath of fresh air. The performance of evented IO in general is really nice. The concurrency story (channels + fibers) is simple, and doesn't show itself when not required (looking at you, Node.js). Most of all, the type system gets out of your way and usually only shows itself when you're making a mistake.
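
To illustrate, here's a minimal sketch of the `JSON.mapping` macro as it existed around the 0.20-era stdlib (the `Point` class is a made-up example):

```crystal
require "json"

# JSON.mapping generates typed properties, Point.from_json,
# and #to_json from this one declaration.
class Point
  JSON.mapping(
    x: Int32,
    y: Int32
  )
end

point = Point.from_json(%({"x": 1, "y": 2}))
point.x       # => 1
point.to_json # => %({"x":1,"y":2})
```

The same declarative shape works for `YAML.mapping` and `DB.mapping`, which is what makes moving objects between formats so uniform.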

> What I've felt with crystal is that I find it an immensely practical language. Maybe that stems from the compiler being written in crystal.

The question is, how easy is it to make syntax-driven tools? One thing that Ruby never did quite as well as Smalltalk was having a low cost of entry for creating language-parsing tools in the language itself. (At one time, there were multiple online study groups, each with multiple people, trying to parse Ruby and working at it for over a year. With Smalltalk, you can start hand-coding a top-down parser in the morning and be mostly done by the afternoon.)

> Most of all, the type system gets out of your way and usually only shows itself when you're making a mistake.

Sounds ideal!

> The question is, how easy is it to make syntax-driven tools?

You can actually require the lexer and parser portions of the compiler from Crystal code and get an AST from a file relatively easily. The compiler also has tools for easily walking the AST.
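
Something along these lines should work; note the require path and node classes come from the compiler source of the time and may have moved since:

```crystal
# Assumes the compiler's source tree is on the require path.
require "compiler/crystal/syntax"

source = <<-SRC
def add(a, b)
  a + b
end
SRC

# Parser.parse returns the root ASTNode for the source,
# which visitors from the compiler can then walk.
ast = Crystal::Parser.parse(source)
puts ast.to_s
```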

I was just talking with a coworker about Crystal and didn't know of any web frameworks for it. I also didn't know about Crystal 'shards'. Thanks for linking these.

Very cool project. I'm looking forward to making something with it. The performance comparisons to Sinatra and Rails are impressive.

Go's PASCAL. :-)

Interesting language, looks especially interesting for rapidly throwing things together. Add a solid API to a window system toolkit and you've got a logical successor to Visual Basic.

Write an editor, cross compiler for some other language, and a shell in it. That will tell you if it is a good "systems" language.

Write a distributed service, a web application, and a database in it. That will tell you if it is a good "web" language.

Write a physics simulator, a responsive AI, and a real time user interaction model in it. That will tell you if it is a good "game" language.

Write a convolution, a linear regression, and a monte carlo simulation in it. That will tell you if it is a good "modeling" language.

On my Christmas wishlist is a language that I can write a good 3D CAD package in. The current choice is C++, but that can be really clumsy in many ways. Such a language would have a strong linear algebra library and a boolean solids package. Nobody is working in that problem space from a language perspective, as far as I can tell.

There are some interesting projects for these:

CPU Raytracer https://github.com/l3kn/raytracer

CRSFML multimedia/game library https://github.com/BlaXpirit/crsfml

> Structs in Crystal are always passed by copy, so modifying them can be problematic. For example, my_struct.x = 7 is fine but array_of_structs[2].x = 5 will not work. To work around this, copy the whole struct, modify it, then write it back. Better yet, avoid the need to modify structs (work with them like immutable objects).

I can see that becoming problematic for simulations (lots of particles, for example) and games especially. At some point you're going to want pass-by-reference and in-place mutations for better performance (ideally hidden behind some form of encapsulation of course). Is there no way around this?

Crystal has both classes and structs. Instances of classes are passed by reference, while those of structs are passed by value.
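
Concretely, the copy-modify-write-back pattern from the quote looks something like this (the `Vec2` struct is a made-up example):

```crystal
struct Vec2
  property x, y

  def initialize(@x : Float64, @y : Float64)
  end
end

vecs = [Vec2.new(1.0, 2.0), Vec2.new(3.0, 4.0)]

# vecs[1].x = 5.0 would mutate a temporary copy, so instead:
v = vecs[1]   # copy the struct out of the array
v.x = 5.0     # modify the copy
vecs[1] = v   # write it back

vecs[1].x # => 5.0
```

Declaring `Vec2` as a `class` instead would make `vecs[1].x = 5.0` work directly, at the cost of heap allocation.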

What would it need to be a good embedded system or distributed system language?

For embedded, I'd start with: language primitives mapping easily onto efficient machine primitives; the cost (esp. time/memory) of those primitives being intuitive, ideally deterministic; the ability to size and manipulate variables at the bit level; probably pointer support; anywhere from a tiny runtime to none; probably a good C FFI, given existing stuff will be C-based; and ideally inline assembly for MCU/CPU-specific work, if not C-based.

Any embedded developers want to add to or dispute anything on that list? It's a recent draft of a reference I'm working on to answer this common question.

I think that is a good start. I would add interrupt routines, arbitrary memory manipulation on a bit boundary, and tight code.

> language primitives easily producing efficient primitives of the machine

I think crystal already does this well, especially in release mode when llvm gets to optimise the program as a whole. This also seems like something that can be improved without any breaking changes, which is nice.

> the cost (esp time/memory) of language primitives must be intuitive (ideally deterministic)

I'm less sure of this, maybe having dynamic dispatch would be an issue here? I think that release optimisations would make the performance of any code pretty variable depending on what optimisations are applied. Asterite probably knows more about this than me.

> be able to size/manipulate variables at the bit level

Crystal supports bitwise operators for doing bit twiddling, and obviously llvm optimises that very well. As for sizing values at the bit level, unfortunately only the standard Int8, Int16, Int32, Int64 and their unsigned varieties are available for integers in crystal. There also isn't a great deal of fine control over padding and placement in structs, apart from padding on/off. This could probably be easily improved.
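
A small sketch of the bit twiddling that is available today:

```crystal
flags = 0b0000_0001_u8   # UInt8 keeps the value at exactly 8 bits

flags |= 1_u8 << 3       # set bit 3
flags &= ~(1_u8 << 0)    # clear bit 0

flags                      # => 8 (0b0000_1000)
(flags & (1_u8 << 3)) != 0 # => true, bit 3 is set
```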

> support pointers

Crystal supports both pass-by-reference and pass-by-value structures (class and struct), and all pointer operations available in C should be easily doable in Crystal.

> have anywhere from tiny to no runtime

Crystal has quite a big runtime by default, but fortunately it's really easy to turn off and import only the bits you want using --prelude=empty, which imports only the bare minimum [0]. You could then build your own standard library without a GC, or implement a GC module (for example [1], which just uses malloc), which would let you reuse String, Array, Hash, etc.

> a good C FFI

Crystal already has this: `lib` declarations let you bind C functions directly from Crystal code, with no glue code required.

> support inline assembly

Crystal has this too, used here: https://github.com/crystal-lang/crystal/blob/master/src/fibe... although this too could be improved.

Overall I think Crystal has promise and could be used for embedded programming without too much hassle, although it's not a high priority for the core team. I know someone built a small kernel that just printed "hello world" on x86 qemu.

[0] https://github.com/crystal-lang/crystal/blob/master/src/empt... [1] https://github.com/crystal-lang/crystal/blob/master/src/gc/n...

A Ruby-like language that's already that close to embedded needs? Wow, I'm impressed. I might toy with it sometime in the future.

Linear algebra is already a solved problem: there's no need to rewrite the mountains of established C/C++/Fortran code that have been optimized for linear algebra over the last 40 years.

Why re-write what has already been done? Stand on the shoulders of giants.

> Why re-write what has already been done?

Because we can do better. For example, look at Mir GLAS, written in D.


At first glance, it looks like the way they're optimizing the BLAS/LAPACK implementations is by making them CPU-architecture specific, the same game that Intel MKL plays. That's probably why they reach the same performance as MKL.

Good to see they aren't reinventing the wheel, and openly expressing inspiration from Numpy is also a nice touch.

1000s of kudos for Mir and Dlang. My language of choice.

Because 10k lines of D is easier to maintain than 20 MB of an assembler/Fortran mix. Plus you get all the LLVM niceties.

It removes compilation issues and keeps everything in one language. Plus, when you do it, you find bugs. We've found a bunch in LAPACK while implementing gonum (godoc.org/github.com/gonum/lapack)

My understanding is you can call C code from Crystal bindings. It doesn't seem like you need to reinvent the wheel :-)

You already have bindings to BLAS in https://github.com/mverzilli/crystalla, though I'm not certain how mature and full-fledged they are.

You can do the same from any programming language, so why bother creating new things?

Not sure if you'll see this, but Python has some nice CAD packages already.


That's just one; some googling will help you find others.

Thanks for the link! I will check it out.

Last year's discussion for those interested: https://news.ycombinator.com/item?id=10014178

This language definitely deserves your attention. It's fast, it's easy on the eyes, and it produces native code. A real workhorse language.

Want Go's speed but hate Go's `if err != nil` verbosity? Check out Crystal.

I am very happy with Go, but I love having Crystal and Rust around to keep up innovation for modern, compiled languages. Looks easy enough to jump between the three (though Crystal isn't quite ready on the lib front).

A Unicode core, useful core packages (MIME, ECC/RSA, templates, etc.), and easy parallelism are so nice coming from '90s scripting land.

Oh, and I love Go's err bubbling (it keeps libs from dropping important messages) after years of exception catching.

Does it even have built in concurrency?

Crystal has an excellent concurrency story built into the stdlib, on by default. All IO is evented, with events fed into the fiber scheduler. The nice thing is that the concurrency and evented IO are built using just the stdlib, in Crystal itself, so it's readable and shows that Crystal is quite powerful.

You can read more here: https://crystal-lang.org/docs/guides/concurrency.html
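
A small sketch of the spawn/channel model described in that guide:

```crystal
channel = Channel(Int32).new

# Each spawn starts a lightweight fiber; the unbuffered channel's
# send/receive pair is what yields control between fibers.
3.times do |i|
  spawn { channel.send(i * 2) }
end

results = Array(Int32).new(3) { channel.receive }
puts results.sum # => 6
```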

Lack of parallelism will cause problems similar to what Python faces due to its global interpreter lock, killing performance. It's (almost) 2017; parallelism should be available in any new programming language.

Parallelism isn't easy, so the core team is taking the time to get it done correctly and released before 1.0. Crystal doesn't have anything like the GIL, and the compiler already enforces some simple rules, like type flow not being applied to instance variables, because they could be changed on another thread. You can't expect the world from any new project straight away.

Yes, it will be available before 1.0. It's on the roadmap.

Parallelism is available to any language through fork().

But its concurrency uses green threads only, with no proper multicore support? That's such a pity. Lack of built-in parallel programming facilities is a big problem in many otherwise nice languages; many Scheme implementations have this limitation too.

A relatively new programming language should really use all cores; ideally it should distribute the green threads across the available cores with its own scheduler automatically, and have synchronization primitives like channels work across them.

Yes, this is what Crystal 1.0 will look like. Parallelism is on the roadmap.

Go's built in concurrency is oversold.

If the language is powerful enough in the expressiveness it offers for building libraries that provide the same features as built-in types, the same thing is achievable.

This is the path being taken by the languages more expressive than Go.

I don't agree. Having an event loop built in to the core of the language is a major advantage. Look at Python and async io: Twisted has been around for over 15 years now. asyncore even longer. Tornado exists because apparently Twisted and asyncore weren't good enough. I'm not even sure I know how many incompatible "coroutine" libraries are built on top of generators.

node.js may have a language and standard library that isn't nearly as nice as Python, but it has a built-in event loop. This sidesteps an entire universe of problems. Unfortunately node still has another, similar problem: Does the library use callbacks? Promises? Generator-based coroutines? async/await? If it doesn't use the same style as what your codebase currently uses, you're going to spend a lot of time converting between continuation styles.

Languages like Go and Erlang simply don't have any problems like this.

Now, one might argue that leaving it up to libraries allows greater exploration of the problem space and that blessing a concurrency method stifles innovation, and that might be right. But having a consistent concurrency story built right in to the language is a large advantage.

Totally agree. An inbuilt story for concurrency helps create a solid, mostly unified ecosystem around the language, especially around anything that is IO related.

The contrast to this is C++ systems, where nearly every application and framework has its own event loop integration (and/or coroutine implementations, thread models, futures, ...). This means integration between various libraries gets super hard and code reuse isn't that big.

The only language I can think of which isn't super opinionated but now gets halfway-standardized APIs is C#, due to the standardization on Task<T> APIs and async/await.

It will be pretty interesting to see what the future brings for Rust and Swift, as these are not opinionated either.

I was getting ready to post the exact same thing. The built in event loop is major. Go did a lot of modelling from Erlang, but made tradeoffs that kept it closer to a C-style programming model.

It's a great language and the limits of its concurrency model aren't something that will really be apparent to you if you haven't learned Erlang/Elixir. For most people, it provides a concurrency model that's a cut above everything else out there.

The main differences between Go and the BEAM languages (Erlang/Elixir) are cooperative vs. preemptive scheduling, shared memory vs. immutable message passing, global garbage collection vs. a heap per process, and lastly a general runtime vs. isolation.

Cooperative scheduling means that the scheduler has to have control relinquished back to it (such as on I/O events), while preemptive scheduling takes control back. Cooperative scheduling can give better end-to-end performance on a benchmark, but you run the risk of code monopolizing the processor. Preemptive scheduling allows consistent performance for all operations in the runtime without letting anything overtake it. That's one of the ways it's possible to reliably run a database within the runtime alongside the rest of your code on BEAM.

Shared memory with pointers is pretty standard and definitely provides some performance perks, especially when dealing with large data structures. The flip side means that native clustering doesn't work. With BEAM languages that lack shared memory and rely on message passing, you can just as easily call a function on a different server in another data center as you can a function in your current heap space. This makes it possible to smoothly distribute everything without having to worry about updating shared memory on a specific machine or having the state get out of sync. The channels model helps to avoid this, but by even including shared memory in the language you make the trade off of losing natural clustering for distribution.

With shared memory comes garbage collection and GC pauses, although the Go team has done great work optimizing them. With BEAM, every new process (the equivalent of a goroutine) gets its own heap, meaning it can be independently cleaned up without pausing the entire system. This also makes hot deployments possible, so deploying an update to a codebase with millions of active websocket connections can be done without triggering millions of simultaneous re-connections.

General runtime vs. isolation means that a goroutine that blows up can crash the entire system if it's not properly handled at the point of the problem. When writing Go code, you find yourself writing a line of error handling for every line of functionality. With BEAM isolation, processes are kicked off with ids, and processes are so inexpensive that the standard method is to create two: one as a supervisor and one as the worker. If the worker ever crashes for some reason, the supervisor just restarts it immediately. This creates a granular level of isolation and reliability. There is a library for Go that I remember seeing that creates a supervisor pattern for reliability, though (http://www.jerf.org/iri/post/2930).

Go will win benchmarks because of the choices the language made, but the benefits for long-term runtime, reliability, distribution, and performance consistency in the face of bad actors will be in favor of Erlang/Elixir.

That said, the steps that Go took toward implementing the closest thing to Erlang-like concurrency makes it the winner by far among the non-BEAM languages.

If you want both a built-in event loop and built-in async/await, you might want to give Dart a try.

> This is the path being taken by the languages more expressive than Go.

That path was the default prior to Go. Go's built-in concurrency, with the predecessor work by Rob Pike and others, was "the breath of fresh air" when it popularized its particular approach to concurrency several years ago.

> If the language is powerful enough in the expressiveness

...it can create problems when programming "in the large." The challenge with programming languages has never been "expressiveness" in isolation. Few languages are more expressive than Lisp, and that's been around for a very long time. The problems have always had to do with various tradeoffs with performance and programmer effort in different contexts.

How is the debugging story?

Unfortunately it's a compiled language, so you can't drop into a pry REPL like in Ruby. It does emit debug symbols, and both gdb and lldb work with it; I just haven't gotten around to working out whether they're useful for more than printing backtraces on segfault. Puts debugging works great, though!

What if you also targeted a virtual machine with a Ruby meta-level? Some care would need to be taken to ensure that the two were equivalent, but this should be commensurate with ensuring JIT and bytecode execution in some existing environments is equivalent.

I just played around with it. Wow!

Last time this came up on HN I shrugged it off but after playing around with it a bit as a former Ruby programmer I'm in love.

Super simple to compile a little binary and it's blazing fast when I run it. I think I'll be donating some money.

Bonus: running `crystal init app <appname>` dumps out an opinionated starter project layout for you, complete with a travis.yml, test specs, and a README. Maybe it's the old Rubyist in me that digs a little convention, but this is certainly a refreshing departure from JavaScript madness.

You also have 'crystal tool format' for some more opinionating.

Love it! Go-flavored Ruby.

"The compiler is written in Crystal."

If there is any stress test worth noticing it's this. 98.2% of the code is written in Crystal.

I write aviation software (i.e. running real time in aircraft) - not safety related in any way - and I use Ruby.

I use it because iterating quickly and being super productive means I can do more myself and keep the team small. And it's worked brilliantly.

So throw this into the mix and my world just got a whole lot better. It's like these folk designed this for my project.

Good luck guys!

please don't switch to crystal until it's more developed. I don't want my plane crashing because of a language bug.

The day a plane you're flying on relies on my software to not crash it is the day I'm not letting my family out of an underground bunker.

Oh my god, it's worse. I think this guy is writing the in-flight entertainment system. If this goes down we are all in for a very bad flight!

I don't know what we'd do on those two hour flights to Orlando if the dozens of children had to entertain themselves!

Seriously, though, even the antiquated entertainment devices on planes these days make flying with kids so much easier than it used to be.

Sounds really interesting! Have you written anywhere about how you got involved in that? I'd be curious to know how one sells into the airline industry.

When you say "not safety related in any way", do you mean that your code runs on auxiliary systems, like the entertainment platform?

It's software to tell the pilot where to go (who in turn uses the autopilot quite often), and software to point the camera in the right direction and trigger cameras (pointing at the ground).

All up, it's 9 processes talking to each other over ZeroMQ.

Most are Ruby; the camera and IMU are C++. Because I'm using a simple messaging system, I can throw other languages in. Only one part uses any significant CPU, and maybe Crystal can help with that. It should be easy to port over.

That sounds cool.

Do you have a website?

It's probably some kind of comms or navigational software, not the autopilot.

Huh, this is an interesting feature: https://crystal-lang.org/docs/syntax_and_semantics/union_typ...

If you have code paths that set a particular variable to different types, it promotes the type of the variable to a union. I'm not convinced that's a good idea, but then I don't usually use dynamically typed languages; perhaps this is the most natural way to do static typing for people who come from languages like Ruby.

In practice, it accomplishes two things. It is the tool used to catch null pointer exceptions at the compile stage and it is what allows you to write duck typed interfaces.
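
For example, a sketch of the flow-typing this enables (`describe` is a hypothetical method):

```crystal
# The Int32 | String union in the signature lets callers pass
# either type; is_a? checks narrow the union inside each branch.
def describe(value : Int32 | String)
  if value.is_a?(String)
    "string: #{value.upcase}" # value is String here
  else
    "number: #{value + 1}"    # value is Int32 here
  end
end

describe(1)     # => "number: 2"
describe("one") # => "string: ONE"
```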

RE: NPEs, I have a feeling that'll be abused just like void* and interface{}. People will just start throwing Nil around everywhere as a catch-all. You need a stronger construct like Option<T>, Maybe, language-level nullable types, or a mechanism for the caller to handle it dynamically (a la unwind-protect/conditions).

The problem you end up getting is basically the dual of the 'expression problem'. These semantics give you easy type expandability through class augmentation, but any client of the type is subsequently bound to deal with any augmentation of it. (I.e., if in the method defining class 'a' you insert an elsif between the consequent and the alternative with type Bool, you have just extended the type signature of 'a' to Int32 | String | Bool.) Assuming all consumers of 'a' need exhaustive handling, you've opened your class and extended its functionality at the expense of code which had previously satisfied the dependencies to consume 'a' safely.

Crystal still remains a real interesting project to follow. I haven't examined the Crystal type system too carefully, but the concept of Parent+ as virtual types is an interesting concept that is certainly worth exploring.

Because Crystal's nil type can have methods on it, it already feels like an Option<T> with `#try`. We've had a lot of discussion about nil handling but haven't come up with a better solution yet. Simple patterns like `return/raise unless foo` can protect the rest of a method body from nil without adding indentation.
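
A sketch of both patterns (the `greet` and `shout` methods are made up):

```crystal
def greet(name : String?) # String? is shorthand for String | Nil
  # The guard narrows name to String for the rest of the body.
  return "hello, stranger" unless name
  "hello, #{name.upcase}"
end

greet(nil)   # => "hello, stranger"
greet("ana") # => "hello, ANA"

# And #try as a poor man's Option#map: the block runs only when non-nil.
def shout(name : String?)
  name.try &.upcase
end

shout("ana") # => "ANA"
shout(nil)   # => nil
```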

Seems like there's a new language popping up every few weeks. This seems strange when the trend in human languages seems to be towards a single common tongue, or at least a common lingua franca.

The same seems true of mathematics. At least in the parts that get commonly taught, we trend toward a common representation.

In programming, it seemed like we might move toward a core "C-like" syntax with various derivatives, for a while.

But now we seem to be moving toward a world where everyone wants to create their own domain-specific language.

I can't help but long for a common tongue. One which allows developers in different domains to understand at least the functionality, if not the motivation without having to learn a new language.

I couldn't disagree more strongly. DSLs are at the heart of efficient programming because they encourage separation of concerns; that's why the Unix shell is so powerful, fundamentally it's a collection of different programming languages with well-defined interfaces for communication.

Secondly, personally speaking I've never had much difficulty understanding programming languages I'm unfamiliar with provided their syntax is sufficiently similar to those I use. For example, I've never much worked in Ruby but glancing at Crystal there's nothing really confusing about its syntax. Yes, we see new languages popping up here almost weekly, but we don't see new paradigms: usually most of the stuff here is procedural and on occasion it's functional. Furthermore, having seen the remarkable benefits the introduction of different paradigms has had for the community at large (think about a world without LISP for a moment) I for one believe we should encourage the development of new and strange ideas; most of them will fail, but every so often we might get a new world.

>>I've never had much difficulty understanding programming languages I'm unfamiliar with provided their syntax is sufficiently similar to those I use...

Oddly enough, I've just been thinking the opposite. I've been looking for a compiled language to learn, to add to my basic tinkering knowledge of Python.

From my shortlist of Crystal, Go and Nim, I opted to look at Nim first, as it has the most Python like syntax. I've actually found that to be a hindrance. Nim's close enough to Python that it fooled me into thinking it would be a doddle to learn, but just different enough that I constantly run into compilation errors from assuming [yes, I know!] that basic language constructs will be written the same in Nim as they are in Python and then falling foul of some subtle difference.

I'm actually considering moving my sights to Crystal or Go now, in the belief that they'll be different enough from Python for my brain to jolt out of 'muscle memory' mode and pay more attention to taking in the 'new stuff'.

Just learn C or C++. There are way more materials for learning them, and it's certainly different from python. You'll learn things that you would never learn while writing Python, which is the goal.

If you know C++ and Python, you can pick up most mainstream languages. If you learn a more functional language afterwards, you'll basically be able to pick up anything.

Then rather check out Pony, which has a better type system with guaranteed absence of deadlocks, is modern and currently the fastest native language, and is much easier to write than C++, Go, Nim, Erlang, or Ruby. Only Elixir with its macros is easier, but it's ~10x slower.

I used to love DSLs. But recently I've become a fan of code I can look at and follow with my eyes. Debugging a nil error deep down in some nested macro magic isn't really as fun as it used to be (looking at you ActiveModel validations).

Foo.bar(args) is preferable to bar(args) to me.

> "Seems like there's a new language popping up every few weeks"

Crystal was started like 5 years ago.

> "the trend in human languages seems to be towards a single common tongue"

One of the clear goals of the creator was to make it as similar to Ruby as possible; the creator was striving for some kind of "common tongue", no doubt. It's not some strange, eccentric new language.

Natural languages and programming languages are not good analogues. The switching cost to learn a new programming language is trivial compared to that of learning a new spoken tongue.

The proliferation of new languages is a symptom of the growth of programming as an occupation (paid or otherwise). The larger the pool of potential developers, the more it makes sense to develop languages that each excel at a specific niche.

IOW, the number of active users that a language needs to create a self-sustaining community is mostly an absolute threshold. If the pool of developers is small, then a language that appeals to only 1% of developers may not become self-sustaining. If the pool is large, a language that appeals to only 1% of developers may have 100,000 users or more, which is more than enough to create a vibrant ecosystem and adequate tooling to remain relevant.

That doesn't mean that a crunch won't eventually happen (possibly precipitated by a slowing growth rate) where the industry largely settles on a handful of common languages, but I bet that even then the niches will continue to be populated by special-purpose tools, even if their hype has long-since evaporated by then.

Why hasn't your toolbox evolved to a single hammer?

Besides, even if one language ends up being the only one needed, I'd be pretty confident that it's way too early in the study of programming languages, compilers, and runtimes to stop experimenting and pick some language to settle down with.

It does seem like English is becoming more and more dominant. There are cafes in Berlin now where staff speak no German as the city becomes more and more international, and one of the daily papers recently decided to do a regular English-language version. As far as the expat community here goes, it's probably more important that you speak English than German (ideally both, of course). I am curious to see how this plays out over the next 10-20 years, although I can't see English taking over fully.

Of course it's much easier to learn a new programming language, than it is to learn a new natural language. You can't pick up German over the weekend, until we figure out the technology to help us do that.

Your thesis on maths is incorrect. Indeed, the way it is taught uses a single language (just like introductions to compsci use C-like languages). But professionals in different areas of maths use very different languages and tools. Results in algebraic topology may be totally incomprehensible to people in probability theory (and vice versa), not because they're too difficult, but simply because they are not familiar with the language.

The line between languages is very blurry. Almost all languages support meta programming techniques like macros, code generation, decorators, annotations, etc. Advanced libraries are essentially new languages themselves, built upon the shoulders of their hosting language.

I think the distinction with human language is that programming languages don't just describe what is happening, they describe what they themselves are doing. The description is "complected" with the action being described, so changes in one may (and often do) affect the other. I often find this philosophically fascinating...

One possible reason that this is not the case is that natural languages and math have been around for a lot longer.

Another one is that programming languages have more in common with mechanisms than natural languages.

You could also say that we do have a few lingua franca programming languages which are also very similar to each other, to the point that someone fluent in one can easily read another.

Also, what do you think are the odds that a mathematician can understand the intricacies of a fluid engineering paper (in a natural language) and vice versa?

Another point is that a lingua franca is a common tongue for communication between people with different mother-tongues: not a replacement for native languages. For example in East & Central Africa Swahili is used for cross-communication, but individual ethnic groups maintain their cultural identities by holding onto their own languages. In this sense, I don't think you could easily find a developer who would struggle to understand anything (simple enough) written in Python, even though they don't use it on a day-to-day basis, so I think languages like Ruby/Python are already sufficient lingua francas for today's developers.

One more thing, maths notation is ridiculously diverse and overloaded and is in no way entirely uniform -- or even close. This is a problem when the disparity exists within a branch of maths, but an advantage when it's between very different areas, such as Analysis and Combinatorics.

> I can't help but long for a common tongue.

Me too. That's why I'm building – you guessed it – another language ;-)

relevant xkcd: https://xkcd.com/927/

Seriously though - what's your take on the subject?

Programming languages are often not standards in any official capacity. They are created by and for a community that is expected to learn on a constant basis. IMO the proliferation of new programming languages is a good thing, potentially creating new ways of approaching problems, or at least iterating on existing proven languages.

Sometimes the 15th standard finally wins. And if not, hey, what's the harm of yet another? They're fun to make!

There are always trade-offs. Sometimes you care about performance (C++) and are willing to sacrifice useful features that are costly at runtime. Sometimes you care about individual developer happiness (Python) and you're willing to sacrifice performance.

There's no silver bullet.

Hey, Crystal is the silver bullet! :) It's extremely performant (it uses LLVM and its optimisations to generate a binary) and it makes developers happy (even more so than Python, imho).

There are no silver bullets but do we need a complete arsenal of copper ones? I think many of the new languages would be better off as libraries for the ones that already exist.

The problem is, if you need an arsenal of bullets, C++, for all its messiness, fits the bill.

There is neither professional prestige nor financial benefit associated with trying to bootstrap a new spoken language.

Conlanging gets somewhat close in that regard.

Actually, similar to the way you can find DSL's galore at all the big tech companies, in my experience people in specialized fields or in close knit groups evolve their own jargon to speed up and increase the specificity of communication.

Diversity has its benefits. Too much of it has its bad things too.

Try searching the web to see if/how it works with Gems. Go on, I dare you... 'ruby crystal gem', 'crystal gem', ...

For that matter, crystal vs elixir.

Kind of wishing all these projects just had made up names.

Reminds me of that time I was trying to debug a TDD problem with Cucumber and Chef. Searching for prior discussion, all I got was pages and pages of salad.

Talk about real chefs...

I find that once a tech becomes somewhat popular and you use specific words to search, technical pages tend to come earlier than the original meaning.

Those chefs that search with same terms would suddenly get their results overtaken by voodoo pages.

As a Steven Universe fan, I got quite a chuckle out of this.

Crystal uses something it calls shards. The tooling is extremely basic though, and is really waiting for someone to write a more full-featured package manager.

It looks like the point still stands.

First google result: http://terraria.gamepedia.com/Crystal_Shard

Just add "programming" to the end of the search and it won't give you any gemstones or cartoons.

What exactly do you find missing with shards? It's simple, it's decentralized, and it works great. Most of all, it's a standard.

Let's just say that I don't believe decentralized dependencies scale well. Imagine a popular, big application with hundreds of dependencies (think a mature Rails application): that would mean either hundreds of simultaneous connections to GitHub or extremely slow serial dependency resolution. It would reach a state where GitHub tells us to go fuck ourselves pretty fast.

But yeah, it works for now.

GitHub has had this type of problem before, and it seems that they handle it pretty well. I think as long as a single repository doesn't become a massive hotspot, GitHub doesn't really mind. I'm pretty sure that Go checks out dependencies directly from GitHub for every application, so the precedent is there. Shards also caches the repositories globally, which is nice.

you have to do "crystal lang" in your searches.

But yeah a more unique name would have been nice... or even just crystal spelled like "Krystl" or something.

Try this one:

>ruby crystal gem programming -gemstones

I wonder how Ruby will compete with this. Crystal basically seems like Ruby 3.0 to me (major improvement is speed, btw Ruby is currently on 2.*).

Pretty easily. Crystal looks like Ruby, but it doesn't feel like Ruby or work like Ruby. It seems fine, in a very Golang "it works for projects where you need to fling a blob of binary somewhere" sort of way, but it's not remotely as dynamic-as-a-language as Ruby (and that dynamism is precisely why I use Ruby), nor is it dynamic-as-a-platform--something like CLR assembly loading or Java classloading to get a statically-typed, reflective representation that gives me something similar. And you might not need that, and it's totally cool. But Ruby enables a lot of things Crystal, as designed, seems unable to.

It's not bad. It's fine. Not the pejorative, dismissive sort of "fine", but fine. But it's not Ruby. Ruby is slow in large part because Ruby does hard things. Some of them aren't really advisable, from a perf perspective--but they are from a human perspective, and Ruby preferring humans over computers is a huge feature for me.

You sum up my finding on Crystal really nicely.

I wish I could upvote your comment more than once.

Crystal is nice but I've found simple dynamic things to be a challenge. I haven't looked in a few months but I wasn't able to do this

  a = []
  a << 1
  a << "two"

Same with hashes. This broke some things for me, and it ended up feeling like I wanted to program in Ruby but there were all these gotchas.

If I have to break the mental expectations I have for writing something to get the performance needed, then I'm fine to switch gears and write Java or C in order to get it.

You seem to make the assumption that crystal's purpose is to be for ruby developers, and as soon as it deviates from ruby, you've switched gears and might as well use Java or C.

I don't think that's the case. Crystal takes its syntax from ruby because it's easy to write, easy to read, and familiar, it doesn't have to be the same as ruby in semantics. In fact that's impossible because it's statically typed (which I personally prefer to ruby).

If you're going to resort to Java or C for speed, why not use Crystal? Crystal beats Java and C on expressivity of the type system, expressivity of code (less boilerplate), and performance per unit effort (at least on common tasks). So the question is, why Java or C? They both certainly have larger and more mature ecosystems, and if you have compute-heavy workloads they'll probably win out on performance, but why do you instantly make the leap from crystal != ruby to "might as well use Java/C"?

Actually, I'll expand on that and say that Crystal isn't "static typing for rubyists"; it's more like "the best bits of Ruby applied to static typing". There's a big difference, in that the bits of Ruby that Crystal chooses to take may not be the parts that are most familiar to rubyists. That it is so similar to Ruby is a huge compliment to Ruby, in that there was so much the team wanted to take.

The hardest part of learning crystal for rubyists is unlearning their ruby habits when working on crystal. Once you're over that hill you might find crystal a wonderful language in its own right.

I agree and that's my original point, in order to use Crystal I need to unlearn things. I guess I'm lazy enough that the performance benefit isn't a big enough benefit for me to trade off the flexibility of Ruby.

Nothing against Crystal, I think it's great but it would almost have been better coming to Crystal from something other than Ruby.

You replied downstream of me, but: being non-Ruby in terms of semantics is a pretty big minus for me, to be honest. If it looks like Ruby, I expect it to act like Ruby and the cognitive load of dealing with such similar syntax without having similar semantics is more than I want to deal with without getting a big win. And Crystal is not a big win to me; my choices aren't Java or C, my choices are Kotlin-or-Scala, C#, and C++ rather than C. And the next step for me would be Rust, which I think does a good job of taking lessons from Ruby rather than imitating it.

It's not a bad language. (I would rather see it than Go in just about any situation.) But I don't see a positive value prop, for me, as I don't particularly value syntax (I can write anything; even if syntax was a big pain point for me, this is why I use IntelliJ and ReSharper/ReSharper C++) and I do value ecosystem and tooling. Maybe if it was JVM-targeted or CLR-targeted, to piggyback off those ecosystems, but even then the value prop is muddy.

More mature, more libraries, more supported.

Maybe Crystal gets there some day, but it's not there yet.

  a = [] of Int32|String
  a << 1
  a << "two"
works, however. So while it isn't exactly the same, it is pretty close.

That's the point. It's close, but different enough that I've got to context switch to get the benefits of Crystal. So I wouldn't use it as a replacement for Ruby, since it's a downgrade. And since I'm relegating it to areas where I need its benefits (which I and others perceive to be primarily performance, if you're already using Ruby), I'd just as easily switch to C or Java, or Rust for that matter.

> that dynamism is precisely why I use Ruby

Could you elaborate? I've never felt like I really needed a dynamic language but that might be because among dynamic languages I only know Python and a little Javascript. I still use them, but I don't feel like I really take advantage of the fact they are dynamic.

I don't need a dynamic language. But for many tasks, I want one. For others, I have languages that are, to be honest, more expressive than Crystal: C++ or C# or Kotlin or Scala. I would not attack Ruby problems in those languages, nor would I in Crystal. Don't get me wrong: it's more expressive than Go, so that crowd should totally pile on Crystal and make it better, because I'd rather have to deal with Crystal; while I don't think it's anything special, I don't think it's bad either, and to me that's a big step up. But they won't, so, whatever. Anyway, moving on.

`include`, `extend`, and `prepend` are used a lot in Ruby because the very mutable nature of class definitions allows for powerful metaprogramming without having to do a lot of work--and all are, and can't not be, runtime-evaluated. I regularly build objects in Ruby out of dynamically generated mixins reliant on runtime behavior. As do most people. Consider ActiveRecord (which I don't care for, but it's a good example). You create a class that inherits from ActiveRecord::Base--and, at runtime, it queries the database and builds a set of methods for the class based on the schema of its table. Or something like dry-struct, which evaluates, at runtime, a set of attribute specifications and dynamically creates the API for accessing it (as well as validators, etc., which can be programmatically devised).
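
A minimal Ruby sketch of that runtime-mixin pattern; the `Record` class, `build_accessor_module` helper, and field names here are made up for illustration, standing in for what libraries like ActiveRecord or dry-struct do from a schema:

```ruby
# Build a mixin at runtime from data that is only known at runtime
# (a plain hash here, standing in for e.g. a DB schema).
def build_accessor_module(fields)
  Module.new do
    fields.each do |name, default|
      # Reader falls back to the default until a value is written.
      define_method(name) { instance_variable_get("@#{name}") || default }
      define_method("#{name}=") { |v| instance_variable_set("@#{name}", v) }
    end
  end
end

# A hypothetical class that gets its API mixed in at runtime.
class Record; end
Record.include(build_accessor_module(title: "untitled", pages: 0))

r = Record.new
puts r.title  # => "untitled"
r.pages = 42
puts r.pages  # => 42
```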

And that's not even taking into account a plugin architecture that is functionally "require this file" and being able to interpret it at runtime; Exor, linked above, is a lot of pretty finicky work to simulate a worse version of that in C#. Or the general usefulness and value of monkey-patching in a lot of contexts. Oh, and I can drop `binding.pry` into any spot in my code and have a REPL in which I can define methods on existing classes, inline, and build functionality that I can then dump out into my editor. Compiled languages that do that are, er, rare. (JRebel and hot reloading isn't as nice as just doing it.)

Ruby isn't the perfect tool. But when I want to write a little code to do a lot and don't have many other concerns, that's why I go to it. That's why something like Chef uses it instead of the programming mistake of 2014-2016 (using YAML for everything). Because nothing code-related expresses intent better than code itself, and Ruby is built around making that easy for me when I can think about classes and modules as things that exist rather than immutable blueprints off in the ether.

What if it had a really powerful debugger?

Having a "powerful debugger" is orthogonal to "can I dynamically define classes at runtime, when that makes sense to solve my problems?" and "can I load a library that's annotated to make for a simple extension system?".

Like, take a look at this[1]. I wrote it for a game engine in C#. It allows for dynamic loading of modules or static packaging on platforms that require it. This is what I would consider a low-end thing for a Ruby application that needed extensibility--just something you do out of hand. Crystal can't do that. Maybe it will in the future, but it's not even a 1.0 feature. So Crystal might be a challenge to a Golang-ish thing, but it's not a challenge to a Ruby or a JVM/CLR thing on that front.

[1] - https://github.com/eropple/Exor

Do you really need to define code at runtime? Why not at compile time? Need to pull in templates and macros made by someone, but not the code that uses them?

Plugins have a long tradition already, starting with the old dlopen and LoadLibrary. Even in C++ you can (mostly) get it done; just look at Qt.

Well, for one, Crystal doesn't support PIC, so you can't write dynamic code in the first place. For two: yeah, I very regularly do need to define code at runtime. Consider ActiveRecord (or, like, all of Rails, but ActiveRecord specifically), which hits up your database for its schema at runtime to build its API. Consider that it's totally normal in Ruby to use an `if` statement in your class definition; for example, I will replace entire implementations of functionality if I know I'm running in JRuby rather than MRI, without having to add some weird thunking layer. Consider that it's normal to not only `extend` other people's classes, but to write `self.extended` hooks in your own classes to respond to it.

A class in Ruby is just an object. It is not a Class, it inherits from Class. Mutating them to achieve your goals is commonplace and valuable. It's the closest thing to brain candy as a meaningful, good Lisp I've used since college. (Clojure is not my thing. And obviously I know that Ruby has a lot of Smalltalk-ish stuff in it rather than Lisp, but I never used Smalltalk...)
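
That "a class body is just code" point can be shown directly; `FastMath` and `double` are hypothetical names for illustration:

```ruby
# A class definition is ordinary executable code: `self` inside the
# body is the class object itself, and normal control flow applies.
class FastMath
  if RUBY_ENGINE == "jruby"
    # hypothetical engine-specific implementation
    def double(x)
      x * 2
    end
  else
    def double(x)
      x + x
    end
  end
end

# The class itself is just an object whose class is Class.
puts FastMath.instance_of?(Class)  # => true
puts FastMath.new.double(21)       # => 42
```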

You can fake a lot in a compiled language. I have done this. But you had better be getting significant wins for doing it, and something like Crystal does not offer them over something like Ruby unless you're doing something that doing it in Ruby was a bad idea in the first place. (If you wrote something in Ruby where you actually cared about perf, for example, you picked wrong.) Right now, writing Rust plugins and using them from Ruby is a more Ruby experience than Crystal is. Which isn't a shot at the language, I have to stress--just that it's not Ruby, it's something very different.

> Do you really need to define code at runtime?

The people who wrote Rails thought they did - it defines tons of code at runtime. We can't say that they needed to or not. Nobody needs a programming language abstraction. But it worked for them.

An interesting thing: https://github.com/crystal-lang/crystal/blob/master/src/conc...

Looks like they took Ruby's Fiber concept and made it into something like goroutines, with channel handling.

You can take a look at Crystal for Rubyists (http://www.crystalforrubyists.com/book/chapter-08.html) for more info on Channels and Fibers.
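
As a point of reference, Ruby's own Fiber (the concept the comment above says Crystal extended with channels) is a manually scheduled coroutine; a minimal sketch:

```ruby
# A Fiber runs until it yields, then resumes exactly where it left
# off on the next `resume` call.
counter = Fiber.new do
  n = 0
  loop do
    Fiber.yield n
    n += 1
  end
end

values = 3.times.map { counter.resume }
p values  # => [0, 1, 2]
```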

Actually, I'm now a Golang developer, so there are no big surprises here ;)

I'm by no means a very experienced programmer, but what was the motivation behind choosing `nil` over `null`? Seems like, from my inexperienced perspective, it was just designed to add unneeded idiosyncratic syntax/semantics to distinguish it from Ruby and/or C.

Can't comment on anything more than that right now. Have yet to dive very deep into it.

Generally, you see NULL when the concept of nothing is represented with the byte value of nothing or 0 (e.g. C).

Nil tends to be used in languages where the concept of nothing is represented by some sort of container. In lisp, it's simply the empty list (since all data lives in lists in lisp). In Ruby, it's a singleton object that inherits from the same base class as everything else. In both cases, the language is structured such that every container can easily be asked the question "Are you nil?" (as opposed to false or 0).
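
In Ruby this is directly observable: nil is a real object, the singleton instance of NilClass, and every object can answer the "are you nil?" question.

```ruby
# nil is a real object that responds to messages, and NilClass
# ultimately shares the Object ancestry with everything else.
puts nil.class           # => NilClass
puts nil.is_a?(Object)   # => true
puts nil.to_a.inspect    # => []

# Every object can be asked "are you nil?"
puts 0.nil?              # => false
puts false.nil?          # => false
puts nil.nil?            # => true
```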

Thank you for that explanation! I had never thought about it in that way.

>since all data lives in lists in lisp

That's not true. Every modern Lisp implementation has many primitive data structures: vectors, hash tables, etc.

The Lisp 1.5 [1965] manual describes arrays in section 4.4.

So I didn't even need to say modern! Thanks!

There are probably as many languages using "nil" as there are ones that call it "null".

Nil: Ruby, Go, Oberon, Modula-2, Swift, various Pascal dialects, Lisp?.

Null: C/C++, Java, Rust, JavaScript, Ada.

Then, of course, there's Python with None.

Rust deliberately doesn't have anything in the language called "null" (there's `std::ptr::null()`, but that's for C interop). The only thing that fits the bill here is the unit type/empty tuple, `()`, which is sometimes called "nil".

I know, but it's "std::ptr::null", not "std::ptr::nil", which was the point of this discussion. (Clearly it has a concept of null pointers, even if they're used in a more limited manner than other languages.)

But you can't use `std::ptr::null()` in the same way that you use null in the other languages mentioned there. It's literally just there for C interop, and it's called that because that's the terminology that C uses.

For finding an analogue for the typical null use case in Rust, one would use the `None` variant of the `Option` enum.

Nitpick: `std::ptr::null()` is not just there for C interop. It's also useful when writing pure Rust unsafe code.

Can confirm, used null all the time in Rust. The raw pointers are important native Rust constructs, although they do skew to supporting C idioms as much as possible (e.g. `* const T` and `* mut T` is basically a worthless distinction within Rust itself).

So in other words, std::ptr::null is Rust's version of null for situations where you would actually use null in C, except in general Rust discourages trying to use it like it's C. Seems reasonable.

For what it's worth, Rust has both Null (for unsafe null pointers) and None (wrapping a type that may or may not exist at any given point). None is used far, far more frequently in my experience than null in rust though.

And Perl, with undef vs (). undef is a special symbol like null, and () is the empty list, like nil.

Neither nil, null, none nor undef refers to (void*)NULL; they are usually a special symbol in the global symbol table. Only some languages use null/nil as NULL/0 internally.

Huh, didn't know it was split that way. I'd seen a couple examples in C/Java/Rust/JS/Java and assumed the vast majority of them used 'null'. Thanks for edifying!


Nil: Lua

Null: PHP

put SQL on the "null" team as well

Depending on the sql variant, null could be more like nil.

If it varies from NULL then it's not SQL-compliant: https://en.wikipedia.org/wiki/SQL_compliance

Crystal started out as being more or less "compiled Ruby" (it's changed a bit since I first came across it). As such that means it uses "nil" just like Ruby does. And Ruby, being heavily inspired by Smalltalk, took "nil" from Smalltalk. You'd probably have to ask Alan Kay to get to the bottom of this.

Re: Smalltalk, Alan Kay borrowed nil from Lisp.

Matz (ruby) also borrowed nil from Lisp. (He was also inspired by Smalltalk.)

Ruby uses 'nil'.

Not a new terminology. Lisp uses "nil", which is '(), the empty list. Objective-C uses nil as a null pointer to which messages may be sent. Probably others too that I can't remember now.

Many languages use "null" in similar ways. I guess it's a matter of particular syntactic conventions.

The value, under conventional evaluation, of the expression '() is (), which is the empty list. The '() expression itself is the object (quote ()), which is anything but nil.

The nil-()-false correspondence is only specific to the Lisp dialects which are close to the original tree. In Scheme, an empty list is indeed (), and this object is also the terminating atom of a list. However, the object isn't a symbol at all, let alone the symbol nil. Moreover, the object isn't a Boolean false; Scheme has #f for that.

Wow. Genuinely fast: benchmarks usually somewhere between C++ and Java, orders of magnitude better than Ruby. https://github.com/kostya/benchmarks/blob/master/README.md

Try something that is more its weight, like Rust.

I thought about using Crystal for some roguelike game I wanted to program (but alas with a new baby do not have the time for anymore).

The reason being that it is OOP. I eschew OOP (particularly for biz software which is my day job) but have to admit OOP still is a pretty good fit for UI and video game programming. I tried some reactive FP game programming and I just didn't make as much progress as I would have liked.

I tried googling for some roguelike games programmed in Crystal and didn't find any yet (it is difficult because "crystal" clashes with lots of other results, particularly RPG-style games)...

I am really heavily invested in Ruby; I've been using it for years, 10+. I should be a natural target group for this, yet while I appreciate all the features, I really feel that something like Elixir is better suited for the future.

Also, even though I've spent years writing Ruby, I don't have a problem switching to other languages.

Can you give details why you "feel" that way about Elixir?

I feel the same way about Elixir here's why:

- docs

- testing

- Slack chan, IRC, mailing list

- package manager (Hex)

- build tool (Mix)

- web framework (plug & Phoenix)

- db framework (ecto)

- books from major publishers (manning, pragprog, etc.)

- ElixirConf

- ancillary teaching (ElixirSips, LearnElixir, Elixir Fountain, etc.)

- helpful blogposts and stack overflow answers

All these things sum up to lower barriers of entry when trying a new language. Elixir has it in spades. It's really easy to pick up and learn the syntax and the underpinnings of the language (BEAM, OTP, etc.)

Even though Crystal is closer to Ruby in syntax and Ruby can be valid Crystal -- Elixir still feels easier, because there is less work to be done with learning everything else.

Without taking anything from Crystal, because there is nothing wrong with Crystal...

Elixir has so many superstars working on/with it.

Definitely. The future is in multi-processor architectures, something Elixir is naturally built for. Plus, even though it nominally looks like Ruby, it is a very different beast: functional and elegant.

I'm sorry if I'm missing this, but what's the distribution story? Does it produce stand-alone binaries? In other words, can you just distribute the generated binary without requiring the user to install a runtime package or a bunch of libraries or whatever?

It produces standalone binaries. I'm not certain the cross compilation story is in place yet though.

To expand on that, it produces dynamically linked binaries so you do need libevent, libpcre, libgc etc. when you want to run crystal binaries. We hope to get the number of runtime dependencies down in the future.

But also, you can try static linking :-)


The output binary is bigger than a dynamically linked one, and it needs some dev libraries installed before compiling.

Sure, it's going to be bigger. Not a problem, as long as it works. :-)

Requiring the user to install a separate bunch of dependencies adds a huge amount of distribution friction vis-a-vis just shipping a stand-alone application bundle, especially in this day of app stores. Being able to avoid that is very important, IMO.

Probably anyone reading this isn't going to be daunted by having to install some extra stuff, but for the average user it's a real pain.

Sounds like it has most of the same goals as Swift. I wonder how they compare?

They are both LLVM languages. But I remember reading that Swift was just barely slower because of the reference counting or something. Obviously I'm not a reliable source for performance metrics.

I find ruby plenty fast enough for sites in the 1k/min requests range, but I have serious problems getting it to run fast enough with websockets. Even 40 concurrent websockets connections through ruby is enough to bring down my hosts.

Does anyone have any experience using this with websockets? How does it perform?

Crystal has websockets built into the standard library, and as with almost everything built in crystal it uses evented io. Here's a benchmark: http://serdardogruyol.com/benchmarking-and-scaling-websocket...

It looks very nice, I like that you can have beautiful syntax and performance at the same time.

But it lacks a modular interface for libraries that doesn't import hidden/global variables. They should look at NodeJS for inspiration.

Ruby is quite a large language with a large ecosystem of libraries, but I have never seen or heard of major issues with module name clashes. Libraries tend to have quite unique names and stick to their modules. Crystal works pretty much the same, so I personally don't think a change is needed.

Looks good.

I don't like all the 'end' statements though.

I remember thinking that when I first learned ruby. It reminded me of truckers talking over a CB radio. Over.

"there's no point in trying to suggest changing the language to an indentation-based one"

Ok. Bit of a bummer though.

Very nice. Something to keep an eye on.

Can I create a shared library with it? Can it be used to write a ruby extension?

The answer is technically yes, but you probably don't want to. Crystal comes with a managed runtime and GC, which are particularly unsuited for use in a shared library. However, you can export C functions (it's a hidden feature) and they do work.

Creating a Ruby extension in Crystal also works, and there are a number of libraries that attempt to let you do this (too many, in fact). They do the hard work of getting Crystal to play nice with the Ruby extension API for you.

See https://gist.github.com/spalladino/10c829db3191a89a8ba73bb00...

We'll probably build it into a more stable shard eventually, but at least we know it's doable

What are the issues or features that crystal needs to solve before 1.0?

One would be parallelism. Not sure if the maintainers want to target that before 1.0, though.

We do, and we will. We are working on it :-)

How does this compare with Go? tech and non tech things?

The first difference is syntax: Go uses braces, while Crystal's syntax is Ruby-based.

Go has full parallelism support. Crystal is WIP.

Go has Windows support. Crystal is WIP.

Crystal and Go performance are very similar.

Go has been backed by Google and its community since 2009.

Crystal has been backed by Manas.tech and its community since 2012.

Go has full featured IDEs like https://www.jetbrains.com/go/

I hope to have Crystal IDE support after 1.0 :-)

So I'm happy with Go as long as Crystal gets more features

Thoughts on the conservative garbage collector? How often is it a problem?

Crystal uses libgc, from the readme: [0]

"Since the collector does not require pointers to be tagged, it does not attempt to ensure that all inaccessible storage is reclaimed. However, in our experience, it is typically more successful at reclaiming unused memory than most C programs using explicit deallocation. Unlike manually introduced leaks, the amount of unreclaimed memory typically stays bounded."

Having used the language for a year, I've never really had an issue with the GC other than with really large (think 100MB+), infrequent, single allocations, which get "stuck" and don't get returned to the OS afterwards. This is a byproduct of the GC being unable to move objects to compact the heap. The situation can usually be rectified by loading a large file as a stream instead of loading it all into memory. Crystal has excellent tools for working with streams (see the IO module), however, so it's not that big of a deal.
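
The streaming idea translates directly to Ruby as well; a minimal sketch, using a temp file as a stand-in for a genuinely large one:

```ruby
require "tempfile"

# A small temp file standing in for a genuinely large one.
file = Tempfile.new("big")
file.write((1..1000).map { |i| "line #{i}\n" }.join)
file.rewind

# Iterating line by line keeps memory bounded: the whole file is
# never held in memory at once, unlike File.read.
count = 0
file.each_line { count += 1 }
puts count  # => 1000
```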

Why do languages differ syntactically? I understand different platforms, runtimes, optimisation needs, etc. But why syntax? C is sufficient for most logical implementations; at most we would need a hierarchy of implementations, but why parallel, indecipherable syntaxes?

> C is sufficient for most logical implementations

How do you know this to be true? You don't - it's just an opinion. And that's why we have different syntaxes - because they're driven by different opinions. If you could prove that one syntax was more effective than all the others - for every purpose that people needed - then you could ask why everyone wasn't doing it that one way.

> C is sufficient for most logical implementations

Write me a dtor that works perfectly in all situations, without allowing users to break out of a scope without triggering it.

Thanks in advance.


No, actually Crystal.

Floats defaulting to Float64 instead of Float32 seems wasteful.

Untyped number literals is the issue with the second-highest number of +1 reactions.


Not strong enough of a package. Add some features to sell it. Safety, design features, testing support, concurrency and more...

I'm sorry what?

Crystal's type system is both expressive and safe compared to most languages. It has generics, tagged type unions and dynamic dispatch.

It has a built in spec runner and testing suite which are highly usable and proven on the compiler.

Most of all, it is already concurrent. Crystal has lightweight fibers implemented in assembly for x86, arm and arm64. It has channels to communicate between fibers. All io is evented by default, where the http server manages over 100,000 requests per second per core.

It's obvious to me that you don't know what you're talking about.

You must not have taken a very close look, because the concurrency story is already great (and will one day turn into a great parallelism story), the built-in testing library has a very useful set of assertions and an RSpec-like BDD syntax, and safety far exceeds Ruby -- the compiler knows the type of all variables and won't ever let you make an "undefined method foo for Nil class" type of error.

It is a library, not a part of the language. I can have a great testing library in any language. By language I mean actual language, keywords, syntactic constructs. Formal logic descriptions even.

I haven't looked closely enough; the concurrency looks decent, but it's not as integrated as in certain other modern languages (e.g. Go, Erlang, certain other FP languages, even some riding on JS).

It is sad to see something this outdated starting up now...
