Hacker News

I'm a bit confused. The architecture is just abstracted away. Why does a programmer care if it's optical or DNA or whatever? See, for a less extreme example, the tools that people use to develop for a single x86 vs CUDA vs massive clusters vs huge FPGA clusters vs ARM vs etc etc. I honestly think that revolutionary architecture will just lead to some new libraries and dev tools which get kludged onto existing dev systems. But I welcome more information because I know I could be horribly wrong.
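The point about abstraction can be sketched concretely. Below is a hypothetical illustration (the function and backend names are invented, not a real library): the caller writes the same high-level code regardless of what hardware implements the operation, which is roughly how tools paper over x86 vs CUDA vs ARM today.

```python
# Hypothetical sketch: a high-level API that hides the underlying architecture.
# Backend names here are illustrative stand-ins, not a real library.

def dot(a, b, backend="cpu"):
    # Dispatch to whichever implementation matches the hardware;
    # the caller's code is identical either way.
    impls = {
        "cpu": lambda a, b: sum(x * y for x, y in zip(a, b)),
        # "cuda": ...   a GPU kernel (or an optical/DNA backend) would
        #               slot in here, unchanged from the caller's view.
    }
    return impls[backend](a, b)

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```

A genuinely new architecture would most likely show up as another entry in that dispatch table, plus some new tuning knobs, rather than as a whole new programming model.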

I think you're on the right track: software layers itself as much as possible. New architectures and capabilities will mostly just change the concepts and tools exposed by the glue between the layers.

Unless the new thing isn't Turing-complete and can't be emulated by a Turing-complete system, it will be abstracted away at first, simply so we have an environment to start building in and can use it without reinventing every single wheel we have.

What does Turing-completeness have to do with it?

Turing-completeness means that the path of least resistance is to create a compatibility layer.
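That claim can be made concrete with a toy example (the machine and its opcodes below are entirely made up): because the existing host is Turing-complete, it can simply interpret the new machine's instructions, so the cheapest way to adopt new hardware is to write such an interpreter or translation layer rather than rebuild the software stack.

```python
# Minimal sketch of a "compatibility layer": an ordinary Python program
# interpreting a made-up machine's instruction set. Opcodes are invented.

def run(program, acc=0):
    # Each instruction is (opcode, operand); the Turing-complete host
    # executes the foreign program one instruction at a time.
    for op, arg in program:
        if op == "SET":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

# The same "novel" program runs entirely on the existing host.
result = run([("SET", 2), ("MUL", 10), ("ADD", 1)])  # 21
```

Real compatibility layers (emulators, CUDA-to-CPU shims, binary translators) are vastly more elaborate, but the structural idea is the same.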

Without a radical paradigm shift at that fundamental a level, a start-over just wouldn't happen.
