He has additional ideas - the problem is they're mostly nonsense. For example, if you look at the page where he tries to explain what he means by program lattices and lattice-based computing, he gets the definition of a lattice wrong! (despite actually linking to a correct definition!)
A lattice is a partial ordering with a single unique "top element" (least upper bound) and a single unique "bottom element" (greatest lower bound). Every node in the lattice is reachable by following some directed path from the single least upper bound, and there is a directed path from every node in the lattice to the single greatest lower bound.
A lattice is a poset in which binary joins and meets exist. What you've defined is a bounded poset, not a lattice. In general a lattice need not have either a top element or a bottom element, although all finite lattices are complete and thus do have top and bottom elements. The second sentence is also redundant, because for any element x of such a lattice, x ≤ ⊤ by definition, so why talk about paths?
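To make the distinction concrete, here is a minimal sketch (my own illustration, not from the thread) of a finite bounded poset that is not a lattice: it has a unique top and bottom, but two elements a and b whose minimal upper bounds c and d are incomparable, so the join a ∨ b does not exist.

```python
from collections import defaultdict  # not needed here, stdlib only

# A finite bounded poset that is NOT a lattice: bot and top exist,
# but {a, b} has two incomparable minimal upper bounds (c and d),
# so the join a v b does not exist.
ELEMS = {"bot", "a", "b", "c", "d", "top"}
COVERS = {("bot", "a"), ("bot", "b"),
          ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"),
          ("c", "top"), ("d", "top")}

def leq_closure(elems, covers):
    """Reflexive-transitive closure of the covering relation."""
    leq = {(x, x) for x in elems} | set(covers)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(leq):
            for (y2, z) in list(leq):
                if y == y2 and (x, z) not in leq:
                    leq.add((x, z))
                    changed = True
    return leq

LEQ = leq_closure(ELEMS, COVERS)

def join(x, y):
    """Least upper bound of x and y, or None if it does not exist."""
    uppers = {u for u in ELEMS if (x, u) in LEQ and (y, u) in LEQ}
    least = [u for u in uppers if all((u, v) in LEQ for v in uppers)]
    return least[0] if least else None

print(join("a", "b"))    # None: no least upper bound, so not a lattice
print(join("c", "d"))    # top
```

This is exactly why "finite and bounded" is not enough: boundedness gives you ⊤ and ⊥, but lattice-hood is a statement about every pair of elements.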
Cyaegha: Since we're talking about graphs generated from source code, they will always be finite. These are not abstract infinite mathematical objects. All finite lattices are posets, but not all finite posets are lattices, so I stand by the usage. And "Program lattice" sounds a heck of a lot better than "program bounded poset" :-)
The second sentence is intended to be redundant; it gives further explanation of the implications of the first sentence. (This doc was never intended to be a tight mathematical definition, only to give a quick introduction to the concepts, with links to the strict mathematical definitions.)
The reality is that the lattice or bounded poset formulation is only needed to understand that the specific language constraints described ensure that the dependency graph of any valid program can be statically determined to be a partial ordering. The compiler doesn't even need to build a DAG internally, it just needs to ensure the specific language constraints described are in place, and as a result, programs in the language can be parallelized in any manner consistent with the partial ordering of data dependencies.
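The scheduling claim above can be sketched in a few lines (this is my own toy illustration, not Flow's actual compiler): once the data-dependency graph is known to be acyclic, any schedule consistent with the partial order is valid, e.g. grouping nodes into "waves" whose members have no dependencies on one another and so can run in parallel.

```python
# Group the nodes of a dependency graph into parallelizable waves.
# deps maps each node to the collection of nodes it depends on.
def parallel_waves(deps):
    remaining = dict(deps)
    done = set()
    waves = []
    while remaining:
        # a node is ready once everything it depends on has run
        ready = [n for n, ds in remaining.items() if set(ds) <= done]
        if not ready:
            raise ValueError("cycle detected: not a partial order")
        waves.append(sorted(ready))
        done.update(ready)
        for n in ready:
            del remaining[n]
    return waves

# a and b depend only on inputs; c needs a and b; d needs a; e needs c and d
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["a"], "e": ["c", "d"]}
print(parallel_waves(deps))  # [['a', 'b'], ['c', 'd'], ['e']]
```

Note that the poset structure is all this needs; at no point does the scheduler ask for joins or meets.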
All finite lattices are posets, but not all finite posets are lattices, so I stand by the usage.
Yes obviously, but nowhere is it clear that you actually have a lattice of any kind. Specifically, I see no reason why pruning DAGs of unused computations would yield a lattice, and this is a result you seem to rely on. Program lattice does indeed sound nicer than program bounded poset, but I would be more wary about misusing terminology than having a nice name! :)
The least upper bound can be thought of as a virtual node consisting of the set of static constants (and the set of all inputs) fed into the rest of the lattice. The greatest lower bound is a virtual node consisting of the set of final output values that are "used" (written to an output file or database). If an intermediate value is computed but is not referenced anywhere, the data dependency DAG has a local lower bound node that is not connected to the greatest lower bound since the value is not written anywhere. If these unused nodes are pruned away, the result is a lattice.
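Here is a sketch of the construction described above (the TOP/BOTTOM node names are my own; whether the pruned result is actually a lattice is the point in dispute below): add a virtual top node feeding all inputs, a virtual bottom node fed by all written outputs, then prune every node with no path down to the bottom.

```python
from collections import defaultdict

def prune_to_bounded(edges, inputs, written_outputs):
    """edges: set of (src, dst) data-dependency pairs."""
    g = set(edges)
    g |= {("TOP", i) for i in inputs}              # virtual top node
    g |= {(o, "BOTTOM") for o in written_outputs}  # virtual bottom node
    # find all nodes that can reach BOTTOM by walking edges backwards
    preds = defaultdict(set)
    for s, d in g:
        preds[d].add(s)
    useful, stack = {"BOTTOM"}, ["BOTTOM"]
    while stack:
        n = stack.pop()
        for p in preds[n]:
            if p not in useful:
                useful.add(p)
                stack.append(p)
    # prune edges touching nodes whose values are never written anywhere
    return {(s, d) for (s, d) in g if s in useful and d in useful}

edges = {("x", "sum"), ("y", "sum"), ("x", "unused")}
pruned = prune_to_bounded(edges, inputs={"x", "y"}, written_outputs={"sum"})
print(("x", "unused") in pruned)  # False: the dead computation is pruned
```

This yields a graph with a unique source and sink; the open question raised in the reply below is whether that bounded poset has the joins and meets a lattice requires.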
For that construction to work, these 'virtual' nodes need to exist in the poset. Your poset is just the vertex set of your DAG ordered by reachability, so they don't exist. I think your intuition is good, but the mathematics just doesn't add up. Perhaps you should look into things like trace monoids, pomsets and Petri nets?
Please understand that I'm not trying to be mean here. You mention you've been applying for grants to work on this, but if you don't explain the mathematics adequately or demonstrate how it differs from existing formalisms then I imagine people are just going to reject you instantly. I would suggest seriously trying to formalise some of this stuff in a proof assistant like Coq. That'll quickly tell you if your theory works or not.
Maybe I'm wrong, but it seems to me that most of this guy's ideas are either a bit half-baked or simply flat out wrong.
Consider the argument for his proposed language being Turing complete on the flow manifesto page. He claims that this follows from the Böhm-Jacopini theorem. However, the Böhm-Jacopini theorem requires that variable assignment be allowed in the subprograms (which does not appear to be the case in flow), and without this the theorem does not hold, as this paper by Kozen demonstrates: www.cs.cornell.edu/~kozen/papers/bohmjacopini.pdf. Then he goes on to argue that order duality implies the existence of a selection operator in flow, which doesn't seem to make any sense.
I think it's telling that he has been working on this since 2010, yet has no decent proofs of his claims, nor any working implementation.
Variable assignment is not strictly necessary; what is necessary is that one loop iteration can modify the initial state of variables in the next loop iteration. This is the second alternative presented in Flow to directly setting the current value of a variable (specifically, recurrence relations, e.g. x' = x + 1, which are syntactic sugar that permits a form of "variable assignment", but only to the value a variable will hold in the next loop iteration). So the Flow Manifesto doesn't cover the hole in the Böhm-Jacopini theorem specifically, but that doesn't mean this computational model is not Turing complete.
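The recurrence-relation style can be sketched as follows (a toy encoding of my own, not actual Flow syntax): each iteration computes an entirely new state from the previous one, so "x' = x + 1" is a next-state binding rather than an in-place assignment.

```python
def run_loop(step, state, cond):
    """Iterate a pure next-state function while cond holds."""
    while cond(state):
        state = step(state)  # fresh state each iteration; nothing mutated
    return state

# x' = x + 1, acc' = acc + x  -- loop while x < 5
step = lambda s: {"x": s["x"] + 1, "acc": s["acc"] + s["x"]}
final = run_loop(step, {"x": 0, "acc": 0}, lambda s: s["x"] < 5)
print(final)  # {'x': 5, 'acc': 10}
```

Because the step function only reads the current state and writes the next one, the data dependencies between iterations remain explicit, which is the property the rest of the thread cares about.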
As far as proofs or working implementation, sometimes life gets in the way of progress :-)