Hacker News
Alchemist – A non-deterministic programming language based on chemical reactions (esolangs.org)
113 points by afpx 4 months ago | 27 comments



If we have the rule

    2H + O -> H2O
and the universe consists of 3 H and 2 O atoms, then the rule is applicable. After applying the rule our universe will contain 1 H2O, 1 H and 1 O atoms. At this point the rule would not be applicable anymore.
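The multiset semantics described above can be sketched in a few lines of Python (a sketch only; the function names and the `Counter` encoding are mine, not from the Alchemist implementation — and note this handles only positive coefficients, not the `0Eve` case):

```python
from collections import Counter

def applicable(universe, lhs):
    """A rule's left-hand side is satisfied when the universe holds
    at least the required count of every atom (coefficients > 0)."""
    return all(universe[atom] >= n for atom, n in lhs.items())

def apply_rule(universe, lhs, rhs):
    """Consume the left-hand-side atoms and produce the right-hand side."""
    result = universe.copy()
    result.subtract(lhs)
    result.update(rhs)
    return +result  # unary plus drops zero-count entries

# 2H + O -> H2O in a universe of 3 H and 2 O:
universe = Counter({"H": 3, "O": 2})
lhs, rhs = {"H": 2, "O": 1}, {"H2O": 1}
assert applicable(universe, lhs)
universe = apply_rule(universe, lhs, rhs)
assert universe == Counter({"H2O": 1, "H": 1, "O": 1})
assert not applicable(universe, lhs)  # the rule can no longer fire
```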

If we have the rule

    Alice + Bob + 0Eve -> AliceBob
and the universe contains 1 atom each of Alice, Bob, and Eve, then that rule is not applicable, because it requires the universe to contain no atoms of type Eve.


Oh man, that is a disappointing definition. The fact that 2H + O can fire with 3 H but 0H + O can't fire with 1 H is extremely misleading and syntactically inconsistent: the coefficient 2 means "at least 2" while the coefficient 0 means "exactly 0". I am not certain how it'd be best to define a rule that predicates on the absence of a token, but that syntax is terrible.

Also, assuming this language is meant to be chemistry-minded, I'm sad that there is no support for the grouped sub-quantities that chemical notation tends to use... any chemist would probably be confused that O2 isn't considered equivalent to 2O.


> I am not certain how it'd be best to define a rule that predicates on an absence of a token but that syntax is terrible.

That would be pretty confusing. Instead, transitions could always be allowed to happen when the left-hand side is satisfied, and the "don't transition if there's an Eve" semantics -- when necessary/desired -- could be made explicit by adding guard assertions to each nondeterministic transition.

For example, if you want 1 Alice and 1 Bob to transition to 1 AliceBob regardless of Eves, then you write:

    Alice + Bob -> AliceBob
but if you want to insist there are no Eves for this rule to hold, then you write:

    assert(0Eve);
    Alice + Bob -> AliceBob
in which case there must be 0 Eves in order for the AliceBob transition to happen.
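The guarded-transition idea can be sketched as a small step function (a sketch under my own encoding: `forbidden` plays the role of the `assert(0Eve)` guard; none of these names come from Alchemist):

```python
from collections import Counter

def guarded_step(universe, lhs, rhs, forbidden=()):
    """Fire the rule only if the left-hand side is satisfied AND none of
    the 'forbidden' atom types (the assert(0X) guards) are present."""
    if any(universe[atom] > 0 for atom in forbidden):
        return None  # guard failed
    if any(universe[atom] < n for atom, n in lhs.items()):
        return None  # left-hand side unsatisfied
    result = universe.copy()
    result.subtract(lhs)
    result.update(rhs)
    return +result

u = Counter({"Alice": 1, "Bob": 1, "Eve": 1})
# The unguarded rule fires despite the Eve:
assert guarded_step(u, {"Alice": 1, "Bob": 1}, {"AliceBob": 1}) is not None
# The assert(0Eve) version is blocked:
assert guarded_step(u, {"Alice": 1, "Bob": 1}, {"AliceBob": 1},
                    forbidden=("Eve",)) is None
```

This keeps the left-hand side's meaning uniform ("at least n of each atom") and moves the absence check into a separate, explicit construct.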

These sorts of nondeterministic transitions with optional guards are expressible in dynamic logic; see A3 + A8 in [0].

[0] https://en.wikipedia.org/wiki/Dynamic_logic_(modal_logic)


I think a set of two equations is enough:

    Eve + AliceBob → Alice + Bob + Eve
    Alice + Bob → AliceBob
I understand that, with non-deterministic behaviour and 10 Alices, 10 Bobs, and 10 Eves, I can end up in a state like (Alice, Bob, Eve, AliceBob) = (1, 1, 10, 9), but if we stick to the evaluation order defined by the language, then the two rules above satisfy the condition.


> but if we stick to the evaluation order defined by the language

Is this true? According to the linked page:

"The program will continually pick rules at random from the set of applicable rules, until there are no such rules."

In your encoding, we could end up transitioning from state {3Eve, 1Alice, 1Bob} to state {3Eve, 1AliceBob}. I agree that, in the next time step, we might transition back to {3Eve, 1Alice, 1Bob}. But there are two problems with this formulation:

1. We often (in many domains always) really care about the intermediate behaviors of the model, not just the final state. Which means that an observation of the trace:

    {3Eve, 1Alice, 1Bob} ~> {3Eve, 1AliceBob} ~> {3Eve, 1Alice, 1Bob}
is not necessarily equivalent to simply staying in state {3Eve, 1Alice, 1Bob}. This is especially true in physical systems (including chemical reactions).

2. We might end up back in the "right" state, but only for those two rules! In particular, we might not actually end up back in state {3Eve, 1Alice, 1Bob} after all if there are other transition rules. For example:

    Eve + AliceBob → Alice + Bob + Eve
    Alice + Bob → AliceBob
    AliceBob + Eve → WEIRDNESS
With some probability (which the docs don't specify) we might now observe the following transition:

    {3Eve, 1Alice, 1Bob} ~> {3Eve, 1AliceBob} ~> {2Eve, WEIRDNESS}
with no way to get back to {3Eve, 1Alice, 1Bob} :-(
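Point 2 can be checked mechanically with a small reachability search over the three rules (a sketch; the multiset encoding and function names are mine):

```python
from collections import Counter

RULES = [
    ({"Eve": 1, "AliceBob": 1}, {"Alice": 1, "Bob": 1, "Eve": 1}),
    ({"Alice": 1, "Bob": 1}, {"AliceBob": 1}),
    ({"AliceBob": 1, "Eve": 1}, {"WEIRDNESS": 1}),
]

def step(state, lhs, rhs):
    """Apply one rule to a multiset state, or return None if it can't fire."""
    if any(state[a] < n for a, n in lhs.items()):
        return None
    nxt = state.copy()
    nxt.subtract(lhs)
    nxt.update(rhs)
    return +nxt

def reachable(start):
    """Enumerate every state reachable from 'start' under any rule order."""
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        for lhs, rhs in RULES:
            nxt = step(state, lhs, rhs)
            if nxt is not None:
                frontier.append(nxt)
    return seen

states = reachable(Counter({"Eve": 3, "Alice": 1, "Bob": 1}))
trap = Counter({"Eve": 2, "WEIRDNESS": 1})
assert frozenset(trap.items()) in states                 # the bad state is reachable
assert all(step(trap, l, r) is None for l, r in RULES)   # and it is a dead end
```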

One additional problem: this encoding is not equivalent to the guarded version if transitions can have side-effects (e.g., if transitions consume a global implicit "fuel" or some other abstract representation of cost).


After poking around the site, I can't see any specific reference—which makes me suspect it was an independent discovery of a cool idea—but this is very similar to a deliberately-constrained version of the Join Calculus https://www.microsoft.com/en-us/research/wp-content/uploads/... which uses a similar chemical-ish model of computation. There's an implementation of the Join Calculus in an OCaml-like language called (of course) JoCaml. Where Alchemist would write

    2H + O -> H2O
JoCaml would write

    def h() & h() & o() = h2o();;
One of the big differences is that JoCaml allows you to embed payloads inside your atoms, e.g.

    def a(x) & b(b) = c(if b then x else 0);;
will consume an a with an integer payload and a b with a boolean payload, and produce a c with an integer payload. It also lets you include computations as reactions happen, e.g.

    def a(x) = ( Printf.printf "Reacting!\n"; b(x + 1) );;
which lets you use this "chemical soup" model as the basis for concurrent programming.

There is a very detailed tutorial (which appears to have sadly half-broken formatting) which explains JoCaml in more detail here: https://sites.google.com/site/winitzki/tutorial-on-join-calc...


Yeah, Stochastic Pi-calculus and chemical reaction network simulators do this too. They precisely define the probabilities of reactions based on the current amounts of the different molecules and their reaction rates. For instance, I used a tool called Beta Workbench (BetaWB) and hand-written simulators for modeling systems like this. It's a really cool computational model, I think.


Adding relative weights to each rule would be a simple extension to the grammar. If only integer weights were permitted, it could even be done with a simple preprocessor.
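The integer-weight preprocessor idea amounts to rule duplication (a sketch; the rule strings and function name are mine):

```python
def expand_weights(weighted_rules):
    """Preprocess integer-weighted rules into a plain rule list by
    duplication, so uniform random choice reproduces the weights."""
    return [rule for rule, weight in weighted_rules for _ in range(weight)]

pool = expand_weights([("H -> O", 3), ("H -> 2O", 1)])
assert pool.count("H -> O") == 3 and pool.count("H -> 2O") == 1
# A uniform pick from 'pool' now selects "H -> O" three times as often.
```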


Grammar, yes, but sampling reactions with appropriate rates makes choosing the next reaction a little harder than arbitrary selection.

The time to each possible event must be drawn from the exponential distribution (https://en.wikipedia.org/wiki/Exponential_distribution), and the event with the smallest waiting time fires next.
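That scheme (the "first-reaction" variant of Gillespie's stochastic simulation algorithm) can be sketched as follows; the function name and the example propensities are mine:

```python
import random

def next_reaction(propensities, rng=random):
    """First-reaction method: draw an exponential waiting time for every
    reaction with positive propensity; the smallest waiting time fires."""
    candidates = [(rng.expovariate(a), i)
                  for i, a in enumerate(propensities) if a > 0]
    if not candidates:
        return None, float("inf")  # nothing can fire
    wait, index = min(candidates)
    return index, wait

rng = random.Random(1)
index, wait = next_reaction([2.0, 0.0, 5.0], rng)
assert index in (0, 2) and wait > 0  # the zero-propensity reaction never fires
```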


I believe this language is based on the "chemical programming" model as described in http://pop-art.inrialpes.fr/~fradet/PDFs/RULE04.pdf and https://www.researchgate.net/publication/225418037_Higher-Or....

I hadn't thought of the connection to the join calculus - that's pretty cool!


Here's the author's github with the reference implementation.

https://github.com/bforte/Alchemist


For those interested in the subject, the field that formalises the computation of chemical-reaction-like grammars is called P systems (or Membrane Computing) [1], which are known to be super-Turing devices [2].

Well, given the resistance to accepting the practicality of super-Turing devices, a simpler model exists: MP systems [3], with simplified semantics, practical for biological applications.

MP systems are proven to be Turing-equivalent, and there is a nice paper [4,5] (shameless promotion) that shows how to convert each type of "chemical reaction equation" into basic, sequential "assembly" (a set of instructions for register machines).

In [5], a compiler and a simulator are implemented, but the source code is not yet available (I will eventually release it).

[1]: http://ppage.psystems.eu/

[2]: https://en.m.wikipedia.org/wiki/Hypercomputation

[3]: http://www.scholarpedia.org/article/Metabolic_P_systems

[4]: https://link.springer.com/chapter/10.1007/978-3-319-28475-0_...

[5]: https://arxiv.org/abs/1505.02420


Please explain to me how these systems can solve the Halting problem. Everything I've found on "hypercomputation" has been either philosophical, or computability theorists playing with halting oracles for fun. I don't know of anything that seriously claims hypercomputation is /physically realisable/.


I am very sorry, but I cannot: I am not an expert in super-Turing P systems.

(I can answer questions on MP systems, however.)

On the other hand, there are two papers that can answer your questions:

1. Bio-steps beyond Turing [1], from Calude and Pǎun.

2. Membrane system models for super-Turing paradigms [2], from Gheorghe and Stannett.

From [2], referring to [1], there is this excerpt that might interest you:

> Calude and Pǎun (2004) introduced the first super-Turing model of computation rooted in biology rather than physics, and it is this work that we extend in this paper; their model also uses accelerated computation to solve the Halting Problem (and others), based this time on the observation that reaction rates are essentially proportional to molecular concentrations, and hence inversely proportional to the volume of the containing compartment when the number of molecules remains constant.

[1]: https://researchspace.auckland.ac.nz/bitstream/handle/2292/3...

[2]: https://www.researchgate.net/profile/Mike_Stannett2/publicat...


You left out the part where the paper states that it's still an open problem whether hypercomputation is physically realisable.

If some device can solve the Halting Problem, then it can solve every single unsolved mathematical conjecture that can be encoded as a Turing machine that halts if the conjecture is true (so about all of them?). This is an insanely bold claim, so please forgive the scepticism.


> This is an insanely bold claim, so please forgive the scepticism.

I am interested in the concept of hyper-computation and I would like it to be physically constructible, but at the present moment I also see it as an intellectual construct only.

Thus, I am as sceptical as you. :)

If, someday, super-Turing devices are built, I imagine they will (first) come as analog devices (e.g. Siegelmann's Analog Neural Networks [1,2]), not as biological ones.

But this discussion is slightly off-topic, I guess.

[1]: https://binds.cs.umass.edu/anna_cp.html

[2]: https://www.researchgate.net/profile/Arthur_Younger2/publica...


It was not my intention to argue that the entire concept will never fly (I'm nowhere near qualified for that), nor that it shouldn't be researched. It's just that the post by bollu that you responded to specifically asked whether this is physically possible, and the paper you linked quite clearly answers "we have no idea yet".

Personally, I wouldn't actually be all that surprised if mathematics and computer science ended up solved some day by a system of chemical membranes, since, to put it extremely crudely, it was developed by systems of chemical membranes in the first place ;) That would raise some serious philosophical questions, though.


I don't understand the meaning of nondeterminism in this context. Definition of a nondeterministic language from the linked wiki:

> Languages with significant operations (such as execution order) that are predominantly nondeterministic; the same answer cannot always be expected in the same circumstance.

What are the significant operations in this language that cannot always be expected to produce the same result?


If you have two rules like this:

    H -> O
    H -> 2O
then the interpreter will pick one of them at random. It is possible to write entirely deterministic programs, but some programs are non-deterministic.
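A minimal interpreter loop in that style (a sketch; the `run` function and dict encoding are mine, not Alchemist's implementation) makes the nondeterminism visible: starting from a single H, different random seeds yield different final universes.

```python
import random

def run(universe, rules, rng):
    """Repeatedly pick an applicable rule at random until none applies."""
    while True:
        applicable = [(lhs, rhs) for lhs, rhs in rules
                      if all(universe.get(a, 0) >= n for a, n in lhs.items())]
        if not applicable:
            return universe
        lhs, rhs = rng.choice(applicable)
        for a, n in lhs.items():       # consume the left-hand side
            universe[a] -= n
            if universe[a] == 0:
                del universe[a]
        for a, n in rhs.items():       # produce the right-hand side
            universe[a] = universe.get(a, 0) + n

RULES = [({"H": 1}, {"O": 1}),   # H -> O
         ({"H": 1}, {"O": 2})]   # H -> 2O
outcomes = {run({"H": 1}, RULES, random.Random(s))["O"] for s in range(50)}
assert outcomes == {1, 2}  # both answers occur across runs
```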


From the linked page:

> "The program will continually pick rules at random from the set of applicable rules, until there are no such rules."


Found an interesting "esoteric" probabilistic language called Entropy: http://andrew.wang-hoyer.com/experiments/entropy/


Nice. If you combined the '->' operator with a copy element 'I' and a terminator element 'O', you could manipulate the syntax with category equations (pure implementation here: https://github.com/kummahiih/python-category-equations and, if you like to think in terms of cats and balls, a usage example here: https://github.com/kummahiih/python-domain-equations ).


I mean that the notion here is of the type "connect this source to this sink": Source -> Sink. That kind of setting is called a category in mathematics, but the notation is a bit clumsy; it can be made clearer by introducing those copy 'I' and terminate 'O' symbols into the mix. The Python library above does that, and it could perhaps be plugged into this syntax easily.



Thanks for the pointer to esolangs.org. Didn't know such a place existed on the internet.

Alchemist looks fun, but only by looking over the Truth Machine page did I understand exactly what it is about.

https://esolangs.org/wiki/Truth-machine


This is good. A proof that the current computing model isn't the only possible model of computation.

Thanks for sharing your work.


Some feedback, so that others will think clearly before commenting: I saw a lot of comments asking for proof of how your project is better, and whether it is computationally correct.

Here is my suggestion. Let me know whether it makes sense or whether I misunderstood the project. I tried my best to explain it; please don't hesitate to correct my understanding.

The wiki page could open with an animated video or images that compare the typical computing model with yours:

1. What is current computing about? Binary: input -> (processing) -> output; then input + (neural processing) + output; then quantum.

2. How does your computing model differ from that? The way I see it, we can build and solve problems in a different paradigm.


[flagged]


Please don't post unsubstantive comments here.



