It's about the use of interaction nets, which give an optimal evaluation strategy for the lambda calculus. I'm not an expert on it, but from my understanding they allow extensive sharing of computation across different instances of an enumerative search.
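To give a flavor of what "sharing" means here, this is a toy sketch of graph (DAG) reduction, not a real interaction-net runtime: when two references point at the same node, the expensive reduction fires once and both references see the result. (All names here, `Node`, `slow_double`, the `WORK` counter, are made up for illustration.)

```python
class Node:
    def __init__(self, op, *args):
        self.op = op        # 'lit', 'add', or 'slow_double'
        self.args = args
        self.value = None   # memoized result: each node is reduced once
        self.done = False

WORK = 0  # counts how many times the "expensive" rule fires

def reduce(n):
    global WORK
    if n.done:
        return n.value
    if n.op == 'lit':
        n.value = n.args[0]
    elif n.op == 'add':
        n.value = reduce(n.args[0]) + reduce(n.args[1])
    elif n.op == 'slow_double':
        WORK += 1           # pretend this step is costly
        n.value = 2 * reduce(n.args[0])
    n.done = True
    return n.value

shared = Node('slow_double', Node('lit', 21))
tree = Node('add', shared, shared)   # the same node referenced twice
print(reduce(tree), WORK)            # prints: 84 1
```

In a plain tree-shaped evaluator the `slow_double` step would run twice; here it runs once because both arguments of `add` are literally the same node. Interaction nets push this much further, sharing reductions even inside lambda bodies, which is where the "optimal evaluation" result comes from.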
Parallelism of the computation is another big selling point, except modern hardware design is not well suited for the calculus. The author of the video recently tried to get the system to work well on GPUs and ran into issues with thread divergence. I think their current plan is to build some sort of cluster of Mac Minis, due to the strong performance of the CPUs on that platform.
If this computation paradigm advances far enough and shows enough promise, I would expect to see companies start prototyping processors tailor-made for interaction nets.