Hi everyone, I'm the creator of Claro. Thank you for taking the time to take a look into the language! It's been very satisfying and encouraging seeing so many largely positive reactions, it's been a 3 year labor of love (and learning) so I appreciate the feedback.
I'll do my best to respond to as many comments here as I can get to.
Seems like an interesting language with a lot of new ideas, especially the declarative concurrency approach. I am very disappointed by the decision to go with UTF-16 for strings though [0] and strongly urge the author to reconsider. UTF-16 is the worst of all worlds: it is inefficient, it is endianness-dependent, it is still variable-width like UTF-8 for code points outside the basic multilingual plane (like emojis!), and it adds an O(n) penalty to processing most text from the internet. It also means that you need a whole new fundamentally different data type outside of char for encoding byte sequences.
Please, for modern software UTF-8 everywhere is the way to go!
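To make the "still variable-width" point concrete, here's a small Java sketch (Java being the relevant JVM reference point here) showing that an emoji outside the Basic Multilingual Plane occupies two 16-bit UTF-16 code units:

```java
import java.nio.charset.StandardCharsets;

// Demonstrates that UTF-16 is still variable-width: code points outside
// the Basic Multilingual Plane take two 16-bit code units (a surrogate pair).
public class Utf16Demo {
    public static void main(String[] args) {
        String s = "\uD83D\uDE00"; // U+1F600 GRINNING FACE, as a surrogate pair
        System.out.println(s.length());                      // 2 UTF-16 code units
        System.out.println(s.codePointCount(0, s.length())); // 1 actual character
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 bytes in UTF-8
    }
}
```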
Understandable, though JVM strings can also use the UTF-8 charset under the hood. In fact, if you initialize a Kotlin string from a byte array, it'll default to assuming the UTF-8 charset [0]. (Kotlin chars are still 16-bit code units though, and it is true that you can't use the native JVM char type if you do this. Personally I think that's still acceptable.)
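As a plain-Java sketch of that boundary-decoding idea (the Kotlin behavior described above is analogous): UTF-8 bytes get decoded into the JVM's internal string representation on the way in, and re-encoded on the way out.

```java
import java.nio.charset.StandardCharsets;

// Sketch: a JVM language can speak UTF-8 at its boundaries even though
// String is UTF-16 code units internally -- decode on the way in,
// encode on the way out.
public class Utf8Boundary {
    public static void main(String[] args) {
        byte[] fromNetwork = {(byte) 0xC3, (byte) 0xA9}; // UTF-8 for 'é' (U+00E9)
        String decoded = new String(fromNetwork, StandardCharsets.UTF_8);
        System.out.println(decoded);                    // é
        byte[] roundTrip = decoded.getBytes(StandardCharsets.UTF_8);
        System.out.println(roundTrip.length);           // 2
    }
}
```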
> Please, for modern software UTF-8 everywhere is the way to go!
I think the real answer is to have a String interface and then allow for different implementations. There are times when it should be 8-bit ASCII, or UCS-4; UCS-2 is reasonable too. It's about trade-offs and how much constant width is worth. Just as you can choose among different list and set implementations for their algorithmic profiles, you should be able to choose string types. Something like UTF8#"Hello World" or EBCDIC#"IBM".
Inserting, appending etc, a character that cannot fit into a given string should throw an exception.
I don’t think deadlock freedom is possible without some very serious limitations. Not even share-nothing, message-passing-only concurrency is safe from deadlocks.
From glancing over their guide, it seems like it achieves this by essentially requiring you to manually specify a state machine that it can verify won't cycle, by composing together "graph functions" that return futures (which are not allowed to block on the results of any futures, directly or transitively). If I understand correctly, combining this with only allowing immutable data to be shared (like you described) makes it possible to parallelize graph function calls that don't depend on each other's results.
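That shape can be sketched with plain Java CompletableFutures (the node names here are hypothetical and purely illustrative of the DAG structure; Claro enforces the no-blocking rule statically, rather than by convention as below):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical DAG: nodeA and nodeB have no mutual dependency, so they can
// run in parallel; nodeC combines their results via composition rather than
// blocking inside the node.
public class GraphSketch {
    static CompletableFuture<Integer> nodeA() {
        return CompletableFuture.supplyAsync(() -> 2);
    }
    static CompletableFuture<Integer> nodeB() {
        return CompletableFuture.supplyAsync(() -> 3);
    }
    static CompletableFuture<Integer> nodeC(CompletableFuture<Integer> a,
                                            CompletableFuture<Integer> b) {
        return a.thenCombine(b, Integer::sum); // no .get()/.join() inside a node
    }
    public static void main(String[] args) {
        // Only the top-level caller ever blocks, at the very end.
        System.out.println(nodeC(nodeA(), nodeB()).join()); // 5
    }
}
```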
Well, that’s true. Though then it is pretty similar to the “build tool” problem, which is actually an interesting parallel :D Nonetheless, I do think that big limitations at most places, with optional, explicit escape hatches is a good pattern.
Definitely a clear similarity to the problems that incremental build tools (e.g. Bazel) are solving! At the moment, Claro doesn't allow any "escape hatches", but the logic there has really been to start with a strong/safe foundation, and then afterwards evaluate whether escape hatches are necessary, and if so what they should look like.
At the moment, with specific regard to Claro's concurrency story, I'd say it's unlikely that any escape hatches for writing obviously thread-unsafe code will ever be explicitly made available to user code.
It's worth taking a look at this section of the docs to see Claro's current stance on the StdLib itself being given the unique right to "bless" certain mutable data structures to be shared between threads after they've been explicitly verified to be thread-safe. https://docs.clarolang.com/guaranteed_data_race_free/guarant...
Yeah, I didn't mean to imply that immutability was needed for avoiding deadlocks; I just thought it was interesting that it "composes" with their deadlock avoidance to provide the follow-up "optimally scheduled" and "scalable by default" features.
I mean, that’s how it works in Java as well (other threads keep running), but you are not much helped if your “critical business logic” deadlocked and doesn’t do its job.
Though I guess you can try restarting it with Erlang.
Thanks for your interest in this! Claro's Graph Procedures are certainly something I'm very proud of. I think they provide a powerful concurrency abstraction that limits the spread of concurrency-related complexity in a way that I find very enjoyable to work with.
Hey! I wrote Claro so I can speak to this :). There are certainly some constraints to get these properties.
To eliminate deadlocking, Claro tracks any explicit "blocking" on the completion of a future<T> and requires the containing procedure (and any dependents) to be explicitly annotated as "blocking". These blocking procedures then aren't allowed to be called within a Graph procedure, OR to be scheduled on the Executor directly (in the form of a lambda, for example).
Data races are prevented by simply making it impossible for two threads to share a reference to mutable data. The most obvious consequence here is that mutable data can't be passed into a Graph Procedure, and none of its internal nodes can evaluate to a mutable data type. Less obvious is that lambdas cannot "capture" mutable data. If they were allowed to capture mutable data, then they would effectively become stateful "objects" themselves in a way that the type system doesn't track (a `function<int -> int>` has the same signature regardless of what it captures) so this would effectively be a backdoor to allowing mutable data to leak between threads.
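A small Java sketch of why captures are invisible to the type system (illustrative only; the names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Two lambdas with the *same* type, one of which smuggles in mutable state.
// The IntUnaryOperator signature doesn't reveal the capture, so a type system
// that tracks only signatures can't see the shared mutable list inside.
public class CaptureDemo {
    public static void main(String[] args) {
        IntUnaryOperator pure = x -> x + 1;

        List<Integer> seen = new ArrayList<>(); // captured mutable state
        IntUnaryOperator stateful = x -> { seen.add(x); return x + 1; };

        // Both look identical to the type system...
        System.out.println(pure.applyAsInt(1));     // 2
        System.out.println(stateful.applyAsInt(1)); // 2
        // ...but handing `stateful` to another thread would share `seen` unsafely.
        System.out.println(seen); // [1]
    }
}
```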
I went reading to see if this was some evolution of Pharo. It is not.
I paged through about 20 of the pages. Have we reached a point where “new computer language” means variation on a theme, same pigs, different lipstick? Nothing negative meant. It all looks decently thought out and pretty otherwise conventional. Kind of like yet-another-marvel-movie. It’s different, but it isn’t really.
I miss the days where “new language” meant different ideas and models. Like when it was Scheme and Forth and C++ and Pascal and Ada and ML and Fortran and Smalltalk and Lisp and Eiffel and Beta and Prolog and Self. Those were much more varied days.
The universe of good - or even seemingly good - ideas may or may not be infinite, but it probably thins out as you get further away from the centre. It's unsurprising that the field would coalesce around certain demonstrably great ideas, and that revolutionary game-changers would start becoming harder and harder to find.
FWIW, even in your list you see lots of variations on themes rather than unique ideas/models. Scheme and Lisp are closely linked. Pascal and Ada and Fortran are all conceptually similar, and there's a good amount of cross-pollination between them and Beta and Eiffel and C++. Self is an evolution of Smalltalk. And so on.
I think it's a case of deciding to focus the innovation budget somewhere else. Claro on the surface level looks a lot like Go, and that's probably intentional. Crafting a unique syntax, making sure it's efficiently parseable, and making it nice to use is a whole lot of effort, and the solutions we have today in this space are basically fine.
Claro seems to focus its efforts elsewhere - the main innovation seems to be its graph-based multithreaded work scheduling which is a novel concept, and has major implications for program organization, just not syntax.
Thanks, this is exactly how I'm thinking about this! And actually, beyond just having a certain "innovation budget" (which is true) I also just personally want a language that's aggressively simple (a subjective measure). I'm trying hard to make something that will fit in my head in the end.
I dunno, @mdaniel's comment points to a potential gamechanger in the evolution of rust-inspired languages. I'm not a rust expert, but fine-grained `mut` seems like it could reduce the performance cost of safety, or at least make it much easier to avoid paying that cost unnecessarily.
Big motions in language theory should probably be expected to diminish in frequency as the field matures. Not every new language needs to do that. Just about every language developer has their own pet language, often more than one. It's a good thing. It's how the field advances.
I would say, generally speaking, that most new languages these days do not solve the big problems of writing software. For the most part, language-level ideas that change the big picture of how one interacts with the system are few and far between, and these ideas have mostly already been discovered in older languages. New languages usually even ignore these. For example, Rust's lack of a REPL from the start. And languages like Rust are spending a lot of time reinventing the wheel and stalling midway through when it comes to library development.
A missing REPL in Rust is actually a great example of why languages ignore (some) ideas that other languages have.
A programming language is defined as much by what it chooses not to include as by what it chooses to include. Functional languages choose to not include mutation and statements, but we don't complain about them "ignoring" those features, it's a very intentional decision to drop them. Golang looked at exceptions and decided to not include them. Not every language should have every feature, both from a practical perspective (how would you maintain such a language?) and an ergonomic one (how would you work in such a language?).
Rust's lack of a REPL wasn't just them ignoring an important innovation, it was them choosing to not implement a feature that they know their primary audience won't use and that doesn't do much to help in the compiler-centric development flow that they're working on supporting. Rust doesn't have a REPL because REPL-oriented development doesn't really suit Rust, not because they forgot that REPLs are a thing.
> Rust's lack of a REPL wasn't just them ignoring an important innovation, it was them choosing to not implement a feature that they know their primary audience won't use
Rust moving away from its more ML-leaning beginnings is a downside, to me at least.
Sure, but it was a conscious choice to do so, and Rust is actually a terrible example of a language that doesn't provide anything new.
Rust managed to normalize higher order functions in the systems space, provide a compelling answer for high performance memory safety, and mount the first successful assault on the C/C++ duopoly since C++ was conceived. That's a pretty impressive track record for disruption, and discounting it because it doesn't have enormous groundbreaking ideas and it doesn't have a REPL is pretty short-sighted.
Rust succeeded in pushing back the Overton window of programming languages because it didn't try to innovate in places that didn't matter. A more radical language might be more aesthetically appealing to someone like you but won't change the experiences of any significant number of developers in their day to day work the way Rust has.
Interaction between lifetimes and a REPL would probably diminish any advantages a REPL has to offer. A REPL is easy to do in GC languages; that's a trade-off Rust made by choosing not to use GC. The next best thing you can use is tests, which are part of Rust's tooling.
It's just become too expensive to develop a new language based on a novel model if you also want serious usage, since the expectations for a production-level system have skyrocketed. We still have lots of interesting new languages in development (especially in academia), but those will never see wide adoption outside their own papers.
I think there's a difference in culture between panic and exceptions. Panics are usually reserved for violated invariants, programmer errors. Exceptions are used for normal errors.
"What are exceptions used for" is similar to tabs versus spaces. Many languages just "leave that part to the user". Which I think is wrong. Or at least, inefficient. Different people will make different assumptions and then we will have a clash of opinions.
Well I definitely disagree on that. I fall more on the side of “use them for regular errors too” because IMO anything that reduces boilerplate is a good thing. But I also don’t like “magic” so it’s still a bit in tension in my head
It's all about definitions. For me, the difference between exceptions and panics is the former is recoverable. You can write a language with panic but no catch to ensure that panic crashes the whole program every time it is invoked.
let x = if foo then bar else panic("unhandled case !foo") end
Panic allows you to enumerate cases without implementing them, which can be very helpful in development. As I build out functionality, my code crashes exactly where I need to implement the new case, but I can implement the initial happy path all the way to completion.
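A rough Java analogue of that workflow (the Payment example is hypothetical), where unimplemented branches deliberately crash:

```java
// Sketch of "crash exactly at the unimplemented case": implement the happy
// path end-to-end and leave the other branches as deliberate panics.
public class PanicDemo {
    enum Payment { CARD, WIRE, CRYPTO }

    static String process(Payment p) {
        switch (p) {
            case CARD: return "charged card";               // happy path, done
            case WIRE: throw new UnsupportedOperationException("TODO: wire");
            default:   throw new UnsupportedOperationException("unhandled case: " + p);
        }
    }

    public static void main(String[] args) {
        System.out.println(process(Payment.CARD)); // charged card
        // process(Payment.WIRE) would crash right where the work remains.
    }
}
```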
It's either "fearless concurrency for highly scalable applications" or panic.
Your threads will panic. They will throw exceptions. You will end up dividing by zero at 3 AM on Christmas.
If your own answer to exceptions is "yeah, the program just crashes and burns", you're doing everything wrong. It took Go and Rust over a decade to realise that and add poor man's excuses for handling panic. But exceptions are inevitable, and your language must provide good ways of dealing with them. See Erlang's approach for how to do it right.
There's some really cool ideas in this, and I love how you're straight into the documentation with examples. Great job.
I raised my eyebrows at this part though where the type constraints part was being explained:
> Coming from an Object-Oriented background, you may be tempted to compare Contracts to "Interfaces", but you'll find that while they may be used to a similar effect, they are not the same thing. The intention of an "Interface" is to encode subtyping relationships between types, whereas Claro has absolutely no notion of subtyping. [1]
IME with e.g. C#, interfaces are exactly for this kind of thing? Abstract classes and classes are more used for polymorphism?
But I admit it's been a while since I worked with C# and I may be remembering this stuff wrong.
C#'s interface is kind of like Rust's trait, except that the default passing convention (by interface) is similar to Box<dyn Trait>, whilst passing a struct that implements the interface via a generic argument is identical in form to Accept<T>(T value) where T : ISomeInterface.
Arguably, today the line is somewhat blurred because interfaces can have default member implementations (which are used to extend interfaces without making it a breaking change) but generally you are correct. An abstract class and subclasses that inherit from it signifies what the type is while the interfaces a class or struct implements signify what the type does.
Very interesting and I love how much thought the author has put into the build system.
I don't like the separation of interface and implementation at the file level. I feel like having to type the same thing twice, header-file-style, really slows down development iteration speed.
I can understand where you're coming from with the split between API and impl. It's certainly something I'm keeping on my radar. Personally I care more about readability than writability so it's not a big issue for me, but I can understand that others would land differently on this point.
One reason that I'm holding out on this (for now) is that I'm actually excited about how this one decision makes it quite easy to express some quite complex patterns using "Build Time Metaprogramming": https://docs.clarolang.com/metaprogramming/code_reuse/reusin....
The whole section is really more of an exploration than it is any sort of strong statement that this is what all Claro code should look like. But it's all fundamentally enabled by this "physical" separation between API and impl.
This is very interesting, but I don't see any section in the docs about JVM interop. I'm curious what effort is needed to use a library written in Java (and potentially other JVM languages) from Claro.
Great question! This is actually something that will take more work to pull off "right" - but it's definitely planned.
For now, Java code can call into Claro code in a fairly straightforward way. The main hurdles are:
- the namespacing/naming of the Claro code you're calling into is funky because Claro doesn't use Java's "package" namespacing system
- manually constructing non-primitive data to pass to Claro procedures is currently very annoying and, more importantly, unsafe (you could break Claro's type system rules)
In the other direction, Claro's only (current) mechanism for calling into Java directly is restricted to the stdlib's implementation. For example the deque Module[1] exports an `opaque newtype mut Deque<E>` that is actually just a `java.util.ArrayDeque<E>` underneath[2]. The reason this isn't exposed to Claro programs outside the stdlib (yet) is because:
- the type systems are very dissimilar and would need mapping
- Claro doesn't use Exceptions, so you'd have to ensure that any call into Java code that can throw manually catches and models an error return value[3]
All this said, it's very possible that in the future these limitations can be addressed!
If the author doesn't read this thread, maybe you can open an issue with this topic. I think other people would be interested in this, so maybe the author prioritises it.
Interesting, although it seems to currently be missing a license. While Bazel and I are for sure not friends, I found this funny: https://github.com/JasonSteving99/claro-lang/blob/v0.1.495/W... I guess it's similar to having a Maven build under Nix, but my relationship with Bazel is why I got a chuckle out of the russian-doll setup.
I'm extremely impressed with what they were able to accomplish. The language seems elegantly designed, with strong opinions in interesting places. I'm really looking forward to seeing how it develops in the years to come.
Our goal is NOT a general-purpose turing-complete language like this one is, but we do some amazing lock-free, DAG concurrency things to achieve the processing wins.
I'm all for a hobby projects, this looks good. No disparagement on the effort.
But I think the community would be better served if all this brain power could get behind a smaller set of languages. We have hundreds of languages now, and most of them are never going to get enough market share to see real adoption.
i don't think this one qualifies as a functional language, since variables are mutable by default
on the second point yes, on the one hand there are too many languages and too few will see large adoption
on the other hand, competition and activity will drive the popular languages to continue innovating and not stagnate, look at C# continuously adding functional features, and java too to a lesser degree
while many of those new small languages will never see wide use, their existence is pushing the tide
I just want to echo your point that even though any individual new language is very unlikely to ever see significant use, I would still find deep satisfaction in some of the valuable ideas contained within finding their place in a more mainstream language :).
> As Claro is still firmly in development, it has only been tested on macOS. You may run into trouble running it on another OS as there are some known portability issues building the Claro compiler from source (currently the only supported way to consume the compiler).
Pretty interesting! signa11, are you the author of the language? If so, is it fair to say the “closest competitors” for this language would be Kotlin and Scala? If so, what do you see as the pros/cons of Claro vs. these two languages?
I would have guessed it took inspiration from rust with the `mut` keyword, and then told rust to hold its beer since mut is not transitive so `int[][]` would require `mut mut` to be able to write to int[0][1] if I understand TFM correctly : https://docs.clarolang.com/static_typing/builtin_colls/built...
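If I'm reading that right, a rough Java analogue of per-level mutability (illustrative only, not Claro semantics) is an unmodifiable outer list holding independently mutable inner lists:

```java
import java.util.ArrayList;
import java.util.List;

// Rough analogue of per-level mutability: the outer list is unmodifiable,
// but the inner list it holds remains independently mutable.
public class NestedMutDemo {
    public static void main(String[] args) {
        List<Integer> inner = new ArrayList<>(List.of(1, 2));
        List<List<Integer>> outer = List.of(inner); // outer level: immutable

        inner.set(1, 42);                 // OK: inner level is still mutable
        System.out.println(outer);        // [[1, 42]]
        try {
            outer.add(new ArrayList<>()); // outer level rejects mutation
        } catch (UnsupportedOperationException e) {
            System.out.println("outer is immutable");
        }
    }
}
```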
Interesting concepts, but dependency on Bazel is meh :(
Also, there are some passionate statements about other languages:
> As it's currently defined, there's nothing requiring the two arguments to actually have the same type. In this trivial example, that may be fine, but if I were to actually want to ensure that two arguments both implement an interface and they both actually have the same type, then I'm out of luck - there's no way to statically encode this constraint in Java!
This is simply not true:
static <T extends Stringify> void prettyPrintPair(T x, T y)
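Expanded into a runnable sketch (Stringify here is a hypothetical stand-in for the interface in the docs' example):

```java
// Hypothetical interface standing in for the one in the docs' example.
interface Stringify { String stringify(); }

public class PairDemo {
    // Both arguments are constrained by the single type parameter T.
    static <T extends Stringify> String prettyPrintPair(T x, T y) {
        return "(" + x.stringify() + ", " + y.stringify() + ")";
    }

    static class Point implements Stringify {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        public String stringify() { return x + "," + y; }
    }

    public static void main(String[] args) {
        System.out.println(prettyPrintPair(new Point(1, 2), new Point(3, 4)));
        // prettyPrintPair(new Point(1, 2), "hi"); // would not compile:
        // no T satisfying `T extends Stringify` fits both Point and String.
    }
}
```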
I personally really dislike when people talk about any kind of language feature with strong negativity, while providing their solution as the superior one. There is a high chance that they are wrong, since different features in the same area have different benefits, and usually no one thing is “better” than another, whatever this means.
well, i don't see any explicit license information, but the source is available on their github repository
what does that legally mean? if a project on github shares the source publicly but doesn't state a license, is there some de facto or default license that applies?