The difficulty I have with his manifesto is that he proposes many ideas that sound quite similar to existing ones (e.g. the graph reduction model of computing, dataflow execution models, dataflow languages), but there isn't a direct comparison: I can't tell if this is a remix of older ideas (nothing wrong with that necessarily), or if there is some additional idea of his that would make the language a major advance over past work. It's just difficult to know what to make of it.
He has additional ideas - the problem is they're mostly nonsense. For example, if you look at the page where he tries to explain what he means by program lattices and lattice based computing, he gets the definition of a lattice wrong! (despite actually linking to a correct definition!)
A lattice is a partial ordering with a single unique "top element" (least upper bound) and a single unique "bottom element" (greatest lower bound). Every node in the lattice is reachable by following some directed path from the single least upper bound, and there is a directed path from every node in the lattice to the single least lower bound.
A lattice is a poset in which binary joins and meets exist. What you've defined is a bounded poset, not a lattice. In general a lattice need not have a top element or a bottom element, although all finite lattices are complete and thus do have both. The second sentence is also redundant, because in a bounded lattice every element x satisfies x ≤ ⊤, so why talk about paths?
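To make the distinction concrete, here is a small brute-force check (the poset and its element names are made up for illustration): a six-element poset with a top and a bottom, i.e. a bounded poset, in which the join of two elements fails to exist, so it is not a lattice.

```python
# Hypothetical bounded poset: bot, a, b, c, d, top, where a and b both
# sit below c and d. It has a top and a bottom, but a and b have TWO
# incomparable minimal upper bounds (c and d), so the join a v b does
# not exist -- a bounded poset that is not a lattice.
elements = ["bot", "a", "b", "c", "d", "top"]
covers = {("bot", "a"), ("bot", "b"),
          ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"),
          ("c", "top"), ("d", "top")}

def leq(x, y):
    """x <= y in the reflexive-transitive closure of the cover relation."""
    if x == y:
        return True
    return any(leq(z, y) for (w, z) in covers if w == x)

def join(x, y):
    """Least upper bound of x and y, or None if it does not exist."""
    uppers = [u for u in elements if leq(x, u) and leq(y, u)]
    least = [u for u in uppers if all(leq(u, v) for v in uppers)]
    return least[0] if least else None

# Bounded: every element sits between bot and top ...
assert all(leq("bot", x) and leq(x, "top") for x in elements)
# ... yet not a lattice: a and b have no least upper bound.
print(join("a", "b"))  # None
print(join("c", "d"))  # top
```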
Cyaegha: Since we're talking about graphs generated from source code, they will always be finite. These are not abstract infinite mathematical objects. All finite lattices are posets, but not all finite posets are lattices, so I stand by the usage. And "Program lattice" sounds a heck of a lot better than "program bounded poset" :-)
The second sentence is intended to be redundant; it gives further explanation of the implications of the first. (This doc was never intended as a tight mathematical definition, only as a quick introduction to the concepts, with links to the strict mathematical definitions.)
The reality is that the lattice (or bounded poset) formulation is only needed to understand that the specific language constraints described ensure that the dependency graph of any valid program can be statically determined to be a partial ordering. The compiler doesn't even need to build a DAG internally; it just needs to enforce those constraints, and as a result programs can be parallelized in any manner consistent with the partial ordering of data dependencies.
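The "parallelize in any manner consistent with the partial ordering" idea can be sketched concretely: once the data dependencies form a DAG, a scheduler can run each topological "wave" of mutually independent nodes concurrently. A minimal illustration, assuming a made-up dependency graph and node names:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data-dependency DAG: each node lists what it depends on.
deps = {
    "input": [],
    "parse": ["input"],
    "stats": ["parse"],
    "render": ["parse"],
    "report": ["stats", "render"],
}

def waves(deps):
    """Group nodes into waves: every node in a wave depends only on
    nodes from earlier waves, so each wave can run fully in parallel."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = [n for n in deps if n not in done
                 and all(d in done for d in deps[n])]
        if not ready:
            raise ValueError("cycle detected - not a valid dependency DAG")
        order.append(ready)
        done.update(ready)
    return order

results = {}
def run(node):
    # Stand-in for real work: just record the node's inputs.
    results[node] = sorted(deps[node])

with ThreadPoolExecutor() as pool:
    for wave in waves(deps):
        list(pool.map(run, wave))  # nodes within a wave run concurrently
```

Here "stats" and "render" land in the same wave and may execute in either order or simultaneously; any such schedule is consistent with the partial order.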
All finite lattices are posets, but not all finite posets are lattices, so I stand by the usage.
Yes obviously, but nowhere is it clear that you actually have a lattice of any kind. Specifically, I see no reason why pruning DAGs of unused computations would yield a lattice, and this is a result you seem to rely on. "Program lattice" does indeed sound nicer than "program bounded poset", but I would be more wary about misusing terminology than about having a nice name! :)
The least upper bound can be thought of as a virtual node consisting of the set of static constants (and the set of all inputs) fed into the rest of the lattice. The greatest lower bound is a virtual node consisting of the set of final output values that are "used" (written to an output file or database). If an intermediate value is computed but is not referenced anywhere, the data dependency DAG has a local lower bound node that is not connected to the greatest lower bound since the value is not written anywhere. If these unused nodes are pruned away, the result is a lattice.
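As a sketch of the construction being described here (the graph and node names are hypothetical): add a virtual source node feeding the constants and inputs, collapse the used outputs into a virtual sink node, and prune any intermediate value with no path to the sink.

```python
# Hypothetical data-dependency DAG: edges point from a value to the
# computations that consume it.
edges = {
    "const_k": ["mid1"],
    "input_x": ["mid1", "mid2"],
    "mid1": ["out_a"],
    "mid2": ["unused"],   # computed but never written anywhere
    "unused": [],
    "out_a": [],
}
outputs = {"out_a"}  # values actually written to a file or database

def reachable(start, graph):
    """All nodes reachable from start by following edges forward."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

bounded = {n: list(succ) for n, succ in edges.items()}
# Virtual greatest lower bound: one sink collecting the used outputs.
for out in outputs:
    bounded[out] = ["SINK"]
bounded["SINK"] = []
# Virtual least upper bound: one source feeding all constants and inputs.
bounded["SOURCE"] = ["const_k", "input_x"]

# Prune nodes with no path to the sink (the unused intermediate values).
keep = {n for n in bounded if "SINK" in reachable(n, bounded)}
pruned = {n: [s for s in bounded[n] if s in keep]
          for n in bounded if n in keep}
print(sorted(pruned))  # mid2 and unused have been pruned away
```

The result is a DAG with a unique source and a unique sink, i.e. its reachability order is a bounded poset; whether it is also a lattice is exactly the point under dispute.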
For that construction to work, these 'virtual' nodes need to exist in the poset. Your poset is just the vertex set of your DAG ordered by reachability, so they don't exist. I think your intuition is good, but the mathematics just doesn't add up. Perhaps you should look into things like trace monoids, pomsets and Petri nets?
Please understand that I'm not trying to be mean here. You mention you've been applying for grants to work on this, but if you don't explain the mathematics adequately or demonstrate how it differs from existing formalisms then I imagine people are just going to reject you instantly. I would suggest seriously trying to formalise some of this stuff in a proof assistant like Coq. That'll quickly tell you if your theory works or not.