
The way I imagine this working (please correct me if wrong) is that there is quite a bit of speculative computation, and that various results get thrown away. Given that, what happens if you measure speed in terms of results per watt rather than per second? Does the per-watt value tend towards a limit, even as the per-second value increases? If so, will the re-introduction of determinism (via some new mechanism) be necessary to improve efficiency down the line?
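The distinction the question draws can be made concrete with a toy calculation (the numbers below are purely illustrative, not from this thread): if power consumption grows in proportion to throughput, results per second keeps rising while results per joule stays flat.

```python
# Two hypothetical chip generations (illustrative numbers only).
# Throughput quadruples, but so does power draw, so the
# energy efficiency (results per joule) does not improve.
generations = [
    {"name": "gen1", "results_per_s": 1e6, "watts": 10},
    {"name": "gen2", "results_per_s": 4e6, "watts": 40},
]

for g in generations:
    # results/s divided by watts (= joules/s) gives results per joule
    g["results_per_joule"] = g["results_per_s"] / g["watts"]
    print(g["name"], g["results_per_joule"])
```

This prints the same efficiency figure (100000.0 results per joule) for both generations: the per-second value improved fourfold while the per-watt value hit a limit, which is exactly the scenario the question asks about.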

I have not encountered any need for throwing away results from speculative computation as you describe it. An example of non-deterministic programming is the swarm model (ants). You can, for example, write the text editor of a word processor this way, as described in http://www.vpri.org/pdf/tr2011004_steps11.pdf
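A minimal sketch of the idea (this is my illustration, not code from the STEPS report): a swarm of independent "ant" agents each claims units of work from a shared text. The scheduling order is non-deterministic, but because each unit of work is independent and idempotent, every run converges to the same result, and nothing is speculatively computed and then discarded.

```python
import random

def swarm_tag_long_words(text, num_ants=4, min_len=7):
    """Toy swarm: ants independently tag long words in a shared text.

    The task order and the assignment of tasks to ants are arbitrary,
    yet the final result is always the same -- no speculation, no
    thrown-away work.
    """
    words = text.split()
    tags = [None] * len(words)           # shared result slots, one per word
    tasks = list(range(len(words)))
    random.shuffle(tasks)                # non-deterministic scheduling order

    # Deal the shuffled tasks out to the ants; each ant works alone.
    ants = [tasks[i::num_ants] for i in range(num_ants)]
    for ant in ants:
        for i in ant:
            tags[i] = len(words[i]) >= min_len   # idempotent unit of work
    return [w.upper() if t else w for w, t in zip(words, tags)]

print(swarm_tag_long_words(
    "nondeterministic scheduling still yields deterministic results"))
```

Running this repeatedly shuffles the schedule differently each time, but the output is identical on every run, which is the property that lets such a program run without speculative computation.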

Our SiliconSqueak processor is at the FPGA stage right now; we will make the first ASIC version this year. At this time I have no hard numbers on the performance per watt or price/performance of the ASIC. The energy use of the FPGA per result is an order of magnitude below that of an Intel x86 core, and we expect the ASIC to be much better.

Regardless of the efficiency of the code, I see no need to re-introduce determinism down the line.
