
The Concurnas Programming Language - blopeur
https://concurnas.com/
======
smithza
Never seen anything like this before in a programming language (VHDL/Verilog
supports this sort of stuff of course):

    
    
      onchange(x){ doSomething()}//on change of x, perform an action
      every(x){ doSomething(x)}//as above but including initial value
      await(x; x > 2)//pause execution until the condition is met
      trans{//a transaction acting upon two references
        a -= 10
        b += 10
      }
      z <= a+b//shorthand for every
      z <- a+b//shorthand for onchange
    

Is this novel to abstract languages? Would something like this work in a
language without a VM like C++?

I imagine needing registrations of these conditionals in the VM to watch and
execute.

~~~
dkersten
There are some languages that do this, especially reactive languages, and there
are libraries for other languages that work in a similar way, for example
Rx[1] or Clojure's Javelin[2]. You could build an Rx-like system that works
somewhat like that in C++, or you could use coroutines or a task-based system
like Intel TBB to implement something close to this. It of course wouldn't be a
first-class thing like in Concurnas, but I reckon you could implement the basic
idea.
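
The basic mechanism really is just callback registration, no VM required: keep a list of watchers per value and fire them on assignment. A rough sketch in Python (names like `Ref`, `on_change`, and `every` are made up for illustration, not Concurnas or Rx API):

```python
class Ref:
    """A watchable value: callbacks registered here fire on each write."""

    def __init__(self, value):
        self._value = value
        self._watchers = []

    def on_change(self, callback):
        # like onchange(x){ ... }: fires only on subsequent changes
        self._watchers.append(callback)

    def every(self, callback):
        # like every(x){ ... }: fires for the initial value too
        callback(self._value)
        self.on_change(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:
            self._value = new
            for cb in self._watchers:
                cb(new)

x = Ref(0)
seen = []
x.on_change(seen.append)
x.value = 1
x.value = 2
# seen == [1, 2]
```

A `z <= a+b` style shorthand would then just be sugar for registering a recomputation of `z` on every write to `a` or `b`.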

As far as languages go, the ideas you showed look similar to how Esterel[3]
works, or perhaps Occam[4].

[1] [http://reactivex.io/](http://reactivex.io/)

[2] [https://github.com/hoplon/javelin](https://github.com/hoplon/javelin)

[3]
[https://en.wikipedia.org/wiki/Esterel](https://en.wikipedia.org/wiki/Esterel)

[4]
[https://en.wikipedia.org/wiki/Occam_(programming_language)](https://en.wikipedia.org/wiki/Occam_\(programming_language\))

~~~
fulafel
Also
[https://en.wikipedia.org/wiki/Dataflow_programming](https://en.wikipedia.org/wiki/Dataflow_programming)
has a list of 30+ languages.

------
ajkjk
I love some of the ideas here, and I love that the landing page isn't
maddeningly scant on details.

I have to register the fact that it drives me crazy to not have spaces before
and after comment //s

------
notacoward
This looks _really_ interesting, but I don't see much information about error
handling. Since that's what differentiates good distributed programming from
either pretending errors don't happen or making the code 3x as messy to handle
them, that seems like a pretty significant omission.

~~~
jtatton
Hi, thanks for raising this. As far as single-machine computation is concerned,
Concurnas supports unchecked exceptions. Exceptions occurring within the
context of isolates are either handled by an (overridable) error handler, which
defaults to writing to stderr, or, more commonly, are set on the ref returned
from the isolate execution. When that ref is accessed, the exception is thrown
at the accessor(s) and can be handled as appropriate.
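
For illustration, this "exception set on the ref, thrown at the access site" behavior is the same pattern futures use in most languages; Python's `concurrent.futures` shows the idea (an analogy, not Concurnas's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def worker():
    # an error inside the isolate-like task
    raise ValueError("boom")

with ThreadPoolExecutor(max_workers=1) as pool:
    ref = pool.submit(worker)  # exception is captured, not raised here

try:
    ref.result()               # rethrown only when the ref is accessed
except ValueError as e:
    print("handled at access site:", e)
```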

As far as distributed computing is concerned, here the error landscape is much
larger than with single machine computation (network disconnections etc). The
distributed computing component of Concurnas allows one to define custom error
handlers which can deal with errors occurring specifically from distributed
computing, and there are a couple of pre-defined error handlers which can be
used to assist here (fail on first incident and retry up to x times):
[https://concurnas.com/docs/distComp.html#common-error-handling-techniques](https://concurnas.com/docs/distComp.html#common-error-handling-techniques)

The documentation about distributed computing (and distributed error handling
in particular) could be better written. I will look to make that a priority
now that you've raised this point.

~~~
notacoward
That sounds great. Thanks!

------
brudgers
A tutorial: [https://jaxenter.com/introducing-new-jvm-lanaguage-concurnas-167915.html](https://jaxenter.com/introducing-new-jvm-lanaguage-concurnas-167915.html)

------
trishume
This looks like a nice language with lots of cool features. I especially like
the attempts at GPU and off-heap integration.

However I feel like at least the landing page dodges a lot of hard questions.
For example the biggest reason anyone should want to write a concurrent
program over a synchronous single-threaded one is performance, so talking
about how your features are implemented efficiently is important. Specific
examples:

Having my isolates semantically copy all their inputs sure might avoid race
conditions, but I hope that there's some fancy copy-on-write and sub-slicing
stuff going on so that if I spawn 10 threads to process 10 parts of my array I
don't make 10 copies of the whole array. For GPU support I want to

The off-heap support mentions "all objects are quickly serializable and
deserializable to and from this format by default". Anyone can claim
"quickly", I want to know exactly how efficient this serialization is. Does it
use zero-copy casting with some patching? Is it just a memcpy to the GC heap?
Does each type get code-generated serializers/deserializers or maybe it's done
with reflection? Whether this feature is useful entirely depends on exactly
how fast this serialization is.

How does the remote code execution transmit code? Do I need to make sure that
my work units are big enough that every remote JVM can warm up and JIT
transmitted bytecode? Is there caching so I don't transmit the standard
library every time?

~~~
jtatton
Hi, thanks for raising this. I will add more details around benchmarking and
the points you have raised to the documentation. Some answers for now:

- One may use the `shared` keyword to override Concurnas's default copying
semantics for mutable data. In this way one may spawn 10 isolates operating on
an array without copying it 10 times. Of course this may introduce
non-deterministic behavior if they overlap in how data in that array is
processed, so care must be applied. A class may also be marked as shared, again
suppressing copying; this is handy if one is importing already thread-safe Java
code.

- In terms of off-heap support, all classes are augmented with
runtime-generated serializers and deserializers
([https://concurnas.com/docs/offHeap.html#serialization-of-objects](https://concurnas.com/docs/offHeap.html#serialization-of-objects)).
Off-heap memory support is an area where I'd like to improve Concurnas some
more. I think some form of value type support would be beneficial here, the
goal being to make it possible to implement an efficient RDBMS or NoSQL system
in Concurnas itself.

- For remote computing, yes, there is caching of code to avoid retransmission
(the documentation could be better regarding this; I will update the following
in due course:
[https://concurnas.com/docs/distComp.html#request-dependencies](https://concurnas.com/docs/distComp.html#request-dependencies)).

- The point about work unit size for remote computing is interesting; I wonder
if there is a framework/general method for solving this sort of problem (it
feels like a solved problem).
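
The copy-by-default vs `shared` distinction can be pictured in plain Python, with `deepcopy` standing in for Concurnas's copy-on-spawn semantics (the `spawn_*` names are illustrative, not real API):

```python
import copy
import threading

data = [0] * 4

def spawn_isolate(task, arg):
    """Default semantics: the spawned task gets its own deep copy of the input."""
    local = copy.deepcopy(arg)
    t = threading.Thread(target=task, args=(local,))
    t.start()
    return t

def spawn_shared(task, arg):
    """`shared`-style semantics: no copy; races become the caller's problem."""
    t = threading.Thread(target=task, args=(arg,))
    t.start()
    return t

def mutate(xs):
    xs[0] = 99

spawn_isolate(mutate, data).join()
assert data[0] == 0    # the copy protected the original
spawn_shared(mutate, data).join()
assert data[0] == 99   # shared mutation is visible to the spawner
```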

~~~
krapht
This isn't really easy to solve. I'm only (a little) familiar with Chapel,
which I think is the most widely used high-level HPC language; how data is
distributed needs a lot of thought and is specified by the programmer through
the language constructs "locale" and "distribution".

The other hard problem an HPC language needs to handle is error handling: what
happens when a processor goes down, when data isn't received in a timely
fashion, and so on. How a data processing job might get rerouted to a different
server, and how intermediate calculations might get checkpointed so that the
loss of one result doesn't mean redoing the whole computation.

------
_bxg1
> designed for building reliable, scalable, high performance

So tired of leading blurbs like this

~~~
dkersten
At least they kinda explain it in the "Concurnas Axioms" section, a little
further down.

------
zmmmmm
One of the most interesting aspects to me is the native GPU support. I haven't
seen too many attempts at integrating GPU constructs as first class citizens
in high level languages, especially ones that target optimising data transfer
and pipelining etc. I am quite curious if anybody has used it and whether it
stacks up or not. The fact that it compiles to cross-vendor OpenCL is
especially appealing, if it works.

------
CyberDildonics
When I see things like this I can't help but think that WAY too much of the
solution to these problems is being hoisted into the language. What this
accomplishes could be done with libraries, and declaring a new language all but
guarantees it won't be used, because it imposes the enormous cost of learning a
completely new language instead of learning the ins and outs of what a library
actually does.

~~~
notacoward
But _can_ it all be done in a library? Low-level constructs like futures and
coroutines don't ensure that code is free of races, deadlocks, starvation,
etc. They make things like memory leaks _easier_ to introduce. So you have to
add static analysis, or limit their use to higher-level constructs that
practically amount to their own language. It's easy to do _awful_
concurrent/distributed programming in a library. Doing _good_
concurrent/distributed programming requires so much back-and-forth with the
language that a new language doesn't actually seem like all that big a step.

~~~
CyberDildonics
> Doing good concurrent/distributed programming requires so much back-and-
> forth with the language

You are making a lot of claims and assumptions here. A much better way is to
think about what you actually are trying to accomplish. Futures and coroutines
aren't going to save anyone either.

Futures are a simplified tool for the larger problem of executing
asynchronously and synchronizing the results. The generalization is that you
need to figure out what data to package together for each unit of execution so
that each unit can run without dependencies. The dependencies between different
units then have to be worked out so each can run once it has what it needs.
Futures cover only the case of having a single thing to execute or, at best, a
single chain that doesn't split or converge.

Coroutines are a way to hide the fact that you need to keep some state in
between executing, just like recursive functions hide the fact that you are
using the call stack as a stack data structure.
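
To make that concrete: a generator's local variables persist across suspensions, and the desugared version is an object carrying that state explicitly (Python used as a neutral example; the names are mine):

```python
def counter_gen(start):
    """Coroutine-style: `n` lives on in the suspended frame between calls."""
    n = start
    while True:
        yield n
        n += 1

class CounterExplicit:
    """Same behavior with the hidden state made into an explicit field."""

    def __init__(self, start):
        self.n = start

    def next(self):
        value = self.n
        self.n += 1
        return value

g = counter_gen(10)
c = CounterExplicit(10)
assert [next(g), next(g)] == [c.next(), c.next()] == [10, 11]
```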

Confronting the dependencies and IO of each unit of execution as well as
understanding where you need to keep state and where+how you will synchronize
that data created asynchronously is what gets you concurrency and parallelism
that makes sense as it scales. This does not require a different language, it
just requires an architecture that does the tricky stuff for you and maps
directly to what actually needs to happen.
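
The "units run when they have what they need" architecture is, concretely, a dependency-ordered task graph. A minimal single-threaded sketch of the idea (illustrative only; a real system would run the ready set in parallel):

```python
def run_task_graph(tasks, deps):
    """tasks: name -> fn(*results_of_deps); deps: name -> list of dep names.
    Runs each unit once all of its inputs exist, feeding those inputs in."""
    results = {}
    remaining = set(tasks)
    while remaining:
        # a unit is ready when every dependency has produced a result
        ready = [t for t in remaining
                 if all(d in results for d in deps.get(t, []))]
        if not ready:
            raise ValueError("cycle or missing dependency")
        for t in ready:
            inputs = [results[d] for d in deps.get(t, [])]
            results[t] = tasks[t](*inputs)
            remaining.discard(t)
    return results

# Two independent units, then a join that depends on both.
out = run_task_graph(
    {"a": lambda: 2, "b": lambda: 3, "join": lambda x, y: x + y},
    {"join": ["a", "b"]},
)
assert out["join"] == 5
```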

~~~
notacoward
> just like recursive functions hide the fact that you are using the call
> stack as a stack data structure.

...and waddayaknow, every language people actually use has that built in.
Self-refutation at its finest.

> where+how you will synchronize that data

Using what primitives? Oh yeah, more library cruft that would have been more
concise and less error-prone than if the language gave direct support.

> an architecture that does the tricky stuff for you

What's the difference between tying yourself to a framework (what the context
makes clear you're really talking about) that solves the tricky problems for
you vs. a language that does? As soon as you're relying on somebody else to
solve the tricky problems, you're letting them define the language in which
you express your own logic - even if you don't think of it as such. And this
is where I'd generalize Greenspun's tenth rule to say that any ad-hoc
informally-specified implicit language is almost surely inferior to a real
language designed for the same purpose.

~~~
CyberDildonics
> and waddayaknow, every language people actually use has that built in

The point was never that people don't use coroutines or recursion. It is that
neither solves the fundamental problems; both are closer to syntactic sugar,
something people can reach for instead of doing what they want directly.

> Using what primitives? Oh yeah, more library cruft that would have been more
> concise and less error-prone than if the language gave direct support.

If you synchronize using data structures, that would mean you aren't using
primitives.
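
For example, a producer and consumer synchronized entirely through a thread-safe queue, with no explicit locks or condition variables in user code (Python's `queue.Queue` standing in for whatever concurrent data structure you'd actually use):

```python
import threading
import queue

q = queue.Queue()          # the data structure IS the synchronization

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)            # sentinel marking end of stream

def consumer(out):
    while (item := q.get()) is not None:
        out.append(item)

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start()
t1.join(); t2.join()
assert out == [0, 1, 2, 3, 4]
```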

> Oh yeah, more library cruft that would have been more concise and less
> error-prone than if the language gave direct support.

Why would that be true? Lock-free hash maps and queues can fit in single-file,
zero-dependency headers. You think that's cruft? What about an entire language
to solve something you can do without tying yourself to a bare-bones ecosystem
that no one knows?

> As soon as you're relying on somebody else to solve the tricky problems,
> you're letting them define the language in which you express your own logic

Meaning you want to not use a different language and solve all your
concurrency yourself? The whole idea is to not keep solving concurrency in ad
hoc ways. Also guess what, the moment you step in to a new language, you are
going to be solving a _TON_ of old problems yourself.

I am not sure why you are so upset and invested in this. You seem to be
arguing against yourself with respect to your conclusions and the flimsy
rationalizations for them. These aren't actually problems that take a huge
number of lines to solve; they just have to be done well. Throwing out
everything from syntax highlighting, completion, and debugging to IDE support
and libraries written in the same language is always a painful experience.
Including a few files for data structures and task graphs is much less so.

------
edem
Why?

