> I've never used Kotlin or Ceylon myself, but over the past few months I have had a chance to look at some of what they did, and I couldn't believe how many choices that they had made were very similar to things that we had done.
I would hope that someone who decides to create a language would spend some time studying a lot of languages, especially those that came out in the past ten years.
Additional posts state similar things:
> I've never gotten to work in either Smalltalk or Lisp
and same for Haskell or OCaml.
This makes me worried that Ecstasy is being created in a vacuum with little knowledge of the current and past state of the art.
Yes, I would hope the same thing.
Before designing Ecstasy, we spent a period of time evaluating existing languages (and language runtimes) for the specific use case that we had in mind (related to serverless cloud infrastructure and applications, but with some very specific goals).
So why didn't we look at Kotlin? Well, probably because I had the wrong idea of what it was, and once you get a wrong idea in your head, it is hard to displace. Patrick Linskey was the developer who, after looking at the Ecstasy design at the time, suggested that we look at Kotlin. (He and his team use Kotlin a fair bit in their work.)
I had read some stuff years back that Gavin wrote about Kotlin, but had not thought to look at the language itself. I was under the impression that it was dormant. (I've since gotten in touch with Gavin ... I think it's pretty cool that we had similar concepts as he did.)
In retrospect, the language I wish I had spent time looking at is Elixir (not necessarily to use it, but because some of the things it is attempting to address are the same ones we are tackling).
The challenge is that there are many creative people building many creative things, and we only have so many hours in the day to read and learn and use these things! I read a few hundred pages of technical topics every day, and I still can't keep up with even 1% of what is happening in our industry.
Also, over the years that I was designing the concepts that eventually formed Ecstasy, I spent a great deal of time talking with friends in the PL and language-runtime arena like Guy Steele, Rich Hickey, Doug Lea, James Strachan, various members of the Sun/Oracle Java team, Gil Tene (Azul pauseless GC), and so on. Many of the topics that I was investigating were ones suggested by these folks.
But in general, like many people who enjoy this kind of thing, I know what I like when I see it, and I collect ideas and thoughts over time. If you're a fan of poetry, I'd suggest Rilke's "For the Sake of a Single Poem", to understand how the first word of Ecstasy came to exist.
Yes. The fairly rigorous requirements from "serverless cloud" are what originally drove us to design a new runtime model, and that new runtime model then drove us to design a new language. The language itself, though, is general purpose, and -- as an interesting result of the original use case -- extremely embeddable, i.e. relying entirely on resource injection for its connection to the "real" world.
That embeddability has an interesting devolved use case: an Ecstasy app can be executed as a normal OS process, with the full set of unrestricted OS services made available to the app via injection. This use case is equivalent to how a language like Java or C# works today.
In other words, there are new use cases supported that weren't possible before, but additionally, the traditional use cases will continue to be well supported.
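To make the injection idea concrete, here is a conceptual sketch in Python (not Ecstasy; all names here are hypothetical illustrations, not Ecstasy APIs) of an app that has no ambient access to the OS, and only sees what its host injects:

```python
# Conceptual sketch (Python, not Ecstasy): the app has no ambient access to
# the OS; every capability must be handed to it ("injected") by its host.

class Console:
    """The only way the app can produce output."""
    def __init__(self, writer):
        self._writer = writer

    def print(self, text):
        self._writer(text)

def app_main(console):
    # The app sees only what was injected; it cannot reach the OS directly.
    console.print("hello from a sandboxed app")

# "Devolved" use case: the host is a plain OS process, so it injects the
# real, unrestricted services. Here we inject a capturing console instead.
lines = []
app_main(Console(lines.append))
```

A cloud host could just as easily inject restricted or virtualized services, without changing a line of the app.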
> "... a need to write hybrid applications that run close to physical things down here on the ground. Will XTC be extensible to this use-case, with some backend deployed on top of conventional on-prem architecture?"
Like other general purpose languages, there is no requirement for a "back end" whatsoever.
On the other hand, to manage complex environments, a managed back end service is extremely useful, and we are (as the article alluded to) building one of those -- using Ecstasy, and specifically to manage Ecstasy deployments -- as part of our business (xqiz.it), and it will be available in different forms (e.g. as a hosted PaaS, and with both OSS and commercial options for non-hosted/self-hosting options). As with other languages, there are likely to be plenty of options in this area.
That's all still a couple of _years_ out, though.
I'm really curious to hear what you saw was missing in Kotlin for serverless cloud that made you go "We need a new language".
I'm trying to be objective here, but after having read all the blog entries and the documentation of the language, I see hardly anything that XTC brings that doesn't already exist in Kotlin and Ceylon, and even less that makes XTC more appropriate for serverless cloud computing than those languages.
I also think a language priding itself on being designed for serverless computing is going to miss out on a huge chunk of cloud computing: the "serverful" part, which is essential and complementary.
I should start by explaining that when we say "serverless", we're not talking about "lambdas" or something like that. When we talk about serverless apps, we're talking about full-blown, scale-out, stateful applications with their own database(s) and so on.
To answer your question, Ecstasy is _not_ really a new language, just like coming from C++, Java wasn't really a new language -- it was just a significant (and positive) revision that developers could easily grok. Similarly, coming from Java or C#, Ecstasy should be viewed as a language revision. But why revise at all?
That's a great question.
There are two significant reasons that drove us in this direction. First, the runtime model (Java, C#, Kotlin, etc.) was the wrong runtime for what we were setting out to do. It has virtually unlimited surface area (hey! I found a JAR file at some special place on the file system, so yeah, its API is automatically available to you if you ask for it!), no closure over its type system, and unlimited access to its environment -- the OS, the file system, the network, and so on. This may seem minor, but for our goals, an application had to be truly sandboxable, and its environment 100% injectable.
(To wit: Security cannot be _added_ to a system. A system is either entirely secure by design, or it is insecure. Security is not a "manager".)
Gene saw your question and wanted me to explain how that surface area (anything outside of the application) is unseeable, unknowable, and undiscoverable by design -- even when using Ecstasy's quite powerful reflection capabilities. If you look at testMaskReveal() in the reflect.x test -- https://github.com/xtclang/xvm/blob/master/xsrc/tests/manual... -- it might give a glimpse into the thinking behind the design, which is that an object _reference_ can be injected into a container such that its type is _masked_.
Conceptually, a reference in Ecstasy is a combination of a pointer to an object (i.e. the object's identity), and a separate pointer to its type. This is fundamentally different from C++/Java/C#, in which a reference is a pointer to an object, which in turn begins its structure with a pointer to its own type (e.g. a "v-table"). You can see some of this discussed in the documentation on the maskAs<Masked>() method of Ref (i.e. an interface for any reference), found here: https://github.com/xtclang/xvm/blob/master/xsrc/system/Ref.x
To make a long story short, injected resources are automatically masked to their injection (interface) type, and even using reflection, nothing beyond that mask can be made visible. The only ability to interact with the outside world is injected, and those injected references are limited to the surface area of the injection type, which is dictated by the injector. Nothing in the type system points to (or can point to) anything outside of the type system; there are no "intrinsic" or "primitive" types, and no secret native things hidden inside an implementation (trying to hide inside some private field or method).
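A conceptual sketch may help here, in Python rather than Ecstasy (the class and method names below are invented for illustration; the real mechanism is Ref's maskAs). The key idea is that the reference carries its own type pointer, separate from the object, so the injector can narrow it -- and reflection over the reference sees only the mask:

```python
# Conceptual sketch (Python, not Ecstasy) of a "masked" reference: the
# reference pairs an object identity with its *own* type pointer, so the
# injector can narrow the type. Reflection sees only the mask.

class MaskedRef:
    def __init__(self, target, exposed_methods):
        self._target = target                        # object identity (hidden)
        self._exposed = frozenset(exposed_methods)   # the reference's own type

    def methods(self):
        """Reflection: only the masked surface area is discoverable."""
        return sorted(self._exposed)

    def invoke(self, name, *args):
        if name not in self._exposed:
            raise AttributeError(f"{name} is not part of the masked type")
        return getattr(self._target, name)(*args)

class RealFileStore:
    def read(self, path):
        return f"contents of {path}"
    def delete(self, path):                          # dangerous capability!
        return f"deleted {path}"

# The injector masks the resource down to a read-only interface; nothing the
# container does -- including reflection -- can reach delete().
ref = MaskedRef(RealFileStore(), exposed_methods={"read"})
```

The Python version can obviously be defeated (Python has no closed type system); the point of the sketch is only the shape of the design: the mask lives in the reference, dictated by the injector, not in the object.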
(At this point, I'll stop going down the path of this specific discussion, just to make sure that I haven't made the topic even more confusing. Follow-on questions are welcome.)
Second, we spec'd that an application had to be truly manageable. An application runtime that allows the application to "run out of memory" or "create too many threads" or "burn too much CPU" is not manageable.
To accomplish this, we designed the runtime as a hierarchically subdivided (n-ary tree structure) software container model, with each container associated with a type system (a fixed set of modules) with transitive closure. Within a container are any number of services (including the container itself, which is a service), each of which represents a Turing machine with its own dedicated memory. Services are conceptually asynchronous vis-a-vis other services, and can only exchange (through "the service boundary") immutable data and service references. Calls through a service boundary are potentially async; one can use a "future" (aka a "promise") to manage that, or (by default) the async/await mode is automatic. The result is akin to a message-based or actor-based model for achieving threading -- yet without even having a "thread" object.
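The service-boundary idea can be sketched in Python (not Ecstasy -- in Ecstasy the futures and serialization of calls are automatic, whereas here they are simulated explicitly with an executor):

```python
# Conceptual sketch (Python, not Ecstasy) of a service: one logical thread of
# execution owning private mutable state. Callers cross the service boundary
# only with immutable data, and a call returns a future.

from concurrent.futures import ThreadPoolExecutor

class CounterService:
    def __init__(self):
        self._count = 0                                      # never shared
        self._executor = ThreadPoolExecutor(max_workers=1)   # serializes calls

    def add(self, n):
        # Crossing the boundary: only the immutable int `n` passes through;
        # the caller gets a future, not direct access to our state.
        return self._executor.submit(self._add, n)

    def _add(self, n):
        # Runs on the service's own logical thread -- no locks needed,
        # because no other thread ever touches self._count.
        self._count += n
        return self._count

svc = CounterService()
futures = [svc.add(1) for _ in range(100)]
total = futures[-1].result()   # "await" by resolving the future
```

Because all mutable state is bounded within the service and calls are serialized, the body of `_add` needs no synchronization at all -- which is the property that lets the runtime generate native code as if there were no concurrency.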
One result of this design is that GC is done at the service level, not at the runtime level. (A service is equivalent to an Erlang process, if I understand the Erlang architecture correctly.) Service concurrency (threading and synchronization) can be managed by the runtime; it can be profile-driven, at runtime, much the same way that Java's Hotspot compiler can choose what methods to inline. Native code can be generated as if there is no concurrency, because all mutable state is bounded within each service, so no two threads will ever be writing to the same memory area.
(If you have worked on assembly level optimizations, cache optimizations are the biggest possible low-level optimizations for most programs today. Going to main memory today is about as _relatively_ slow as doing a dynamic memory allocation was back in the 90s -- and it was considered verboten! Avoiding cache flushes and critical sections and CAS operations that are side effects of memory model concerns is a _huge_ win!)
Lastly, the recursive software container model has some significant benefits, such as (i) being able to safely load and execute _untrusted_ code, (ii) being able to safely "delete" an entire software container at runtime (or even an entire sub-tree of software containers), with almost no cost of resource reclamation, (iii) being able to reload an entire software container and transition it into a running state, almost instantly.
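A minimal sketch of benefit (ii), again in Python rather than Ecstasy (the names are hypothetical): because containers form an n-ary tree, "deleting" one is just unlinking it from its parent, and the entire sub-tree becomes unreachable at once:

```python
# Conceptual sketch (Python, not Ecstasy): containers form an n-ary tree, so
# deleting a container discards its whole sub-tree in one step, with no
# per-object cleanup.

class Container:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def descendants(self):
        out = list(self.children)
        for child in self.children:
            out.extend(child.descendants())
        return out

    def delete(self):
        # Unlinking from the parent *is* the teardown: the sub-tree is no
        # longer reachable, so it all becomes garbage together.
        if self.parent is not None:
            self.parent.children.remove(self)

root = Container("root")
tenant = Container("tenant-a", parent=root)
Container("worker-1", parent=tenant)
Container("worker-2", parent=tenant)

tenant.delete()   # root no longer sees tenant-a or its workers
```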
> When learning a programming language; it takes several years [for you] to discover the deficiencies
A valid response. Also a sad commentary on our impoverished and siloed/balkanized professional literature and communities around programming language design and implementation.
This is an awfully high number. Anyone care to give their best guess as to what the actual number is for companies in general?
The numbers that managing directors have shared with me are much higher than that, and the only "official" number I've seen (years back) was also slightly higher (96%? 98%?). But when I tried to find the article so that I could cite it, I failed, so I made an unsubstantiated claim with an arbitrarily more conservative number.
Please keep in mind that, at the time, I worked at Oracle, which is at least 23% responsible for taking that 95%+ of the IT budget. I carefully documented that here: https://www.quora.com/What-would-Oracle-database-cost-for-a-...
(Please don't ask me to defend the 23% number; I just made it up to be humorous.)
I love the name, and I love the concepts talked about, but you didn't give me enough to convert me into a believer now.
Hi Tim - I think that's a good thing at this stage. We aren't ready to have a large number of developers actually programming _with_ Ecstasy today. What we're looking for is feedback, ideas for improvement, criticism (constructive, please), and -- for those engineers who love this kind of project -- contributors.
What I'm personally worried about is having someone try to use it, as if it's as mature as a production-quality language, and walk away upset that they wasted their valuable time on something that is still being developed. (I value my own time, and I try to consciously and conscientiously value the time of others; that seems only fair.)
So we're still at a stage where most of the industry should ignore the language, except perhaps to steal any ideas that they think would help in their own day-to-day jobs. Each week that passes, the language matures slightly, and at some point (with a production-ready runtime that we haven't yet built), we will have a language that is worthy of people's time.
As someone else pointed out, there is a Hello World example at https://xtclang.blogspot.com/2019/08/hello-world.html ... but the automation of getting started just isn't there yet. In other words, having to download and configure a specific IDE to run a "Hello World" is not a great way to introduce a language to someone interested in just kicking the tires and checking things out.
We do have someone who has volunteered to simplify this particular "getting started" process, but that project isn't going to get going for another month or so (because like many of the people helping us, he has a day job).
If you're not google, name your garbage something without 2 million existing google results.
Thoughtful critique is fine, but there's no need to be an asshole ("hey look", "shitting the bed", "your garbage") and it breaks the site guidelines badly: https://news.ycombinator.com/newsguidelines.html. Would you mind reviewing those and sticking to them when posting here? We've unfortunately had to warn you not to attack other users or post flamebait before, as well.
For disclosure's sake, I once got to hang out with Cameron at a conference many years ago and can tell you that he's exactly the sort of user HN needs more of: someone with vast experience in the computing world and countless great insights and stories.
Honestly, I don't know what could change this trend. Apparently, software authors care more about having a catchy name over an easily searchable one.
Whoever named this language is a moron.
That would be this moron.
I would have chosen a different name, but they were all taken.
So, I think it's a pretty bad name, but OTOH, Google & co. are smart enough these days.