For contrast, one could look at a much more compact way to do this that -- with more foresight -- was used at Parc, not just for the future, but to deal gracefully with the many kinds of computers we designed and built there.
Elsewhere in this AMA I mentioned an example of this: a resurrected Smalltalk image from 1978 (off a disk pack that Xerox had thrown away) that was quite easy to bring back to life because it was already virtualized "for eternity".
This is another example of "trying to think about scaling" -- in this case temporally -- when building systems ....
The idea was that you could make a universal computer in software that would be smaller than almost any media made in it, so ...
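The "universal computer in software" idea can be sketched as a tiny interpreter: the machine itself is a few dozen lines, while the "media" written for it is bytecode that stays runnable for as long as anyone can re-implement the interpreter. This is a hypothetical toy for illustration, not the actual Smalltalk VM; the opcodes and `run` function are invented here.

```python
# A toy stack machine: the "machine" is tiny, while the "media"
# (programs and data) live as instructions that outlive any
# particular hardware. Hypothetical opcodes, for illustration only.

def run(code):
    """Interpret a list of (op, arg) instructions; return the stack."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "halt":
            break
        pc += 1
    return stack

# "Media" encoded for this machine decades ago would still run today,
# provided someone re-implements the ~20-line interpreter above.
program = [("push", 6), ("push", 7), ("mul", None), ("halt", None)]
print(run(program))  # [42]
```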
However, since the PC revolution, the mainstream seems to have taken the "data" path, for whatever technical or non-technical reasons.
How do you envision the "coming back" of the image path, either by bypassing the data path or by merging with it, in the not-so-faraway future?
This also obtains for "thinking" and it took a long time for humans to even imagine thinking processes that could be stronger than cultural ones.
We've only had them for a few hundred years (with a few interesting blips in the past), and they are most definitely not "mainstream".
Good ideas usually take a while to have and to develop -- so when the mainstream has a big enough disaster to make it think about change rather than more epicycles, it will still not allocate enough time for a really good change.
At Parc, the inventions that made it out pretty unscathed were the ones for which there was really no alternative and/or that no one was already doing: Ethernet, GUI, parts of the Internet, Laser Printer, etc.
The programming ideas on the other hand were -- I'll claim -- quite a bit better, but (a) most people thought they already knew how to program, and (b) Intel and Motorola thought they already knew how to design CPUs, and were not interested in making the 16-bit microcoded processors that would allow the much higher-level languages at Parc to run well in the 80s.
On the other hand, due to the exponential growth of software dependency, "bad ideas" in software development are getting harder and harder to remove, and the social cost of "green field" software innovation is also getting higher and higher.
How do we solve these issues in the coming future?
But e.g. the possibilities for "parametric" parallel computing solutions (via FPGAs and other configurable HW) have not even been scratched (too many people trying to do either nothing or just conventional stuff).
Some of the FPGA modules (like the BEE3) will slip into a Blades slot, etc.
Similarly, there is nothing to prevent new SW from being done in non-dependent ways (meaning the initial dependencies to hook up into the current world can be organized to be gradually removable, and the new stuff need not have the same kind of crippling dependencies).
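One way to read "gradually removable" dependencies is the familiar ports-and-adapters arrangement: the new system talks to the current world only through a narrow interface, so the legacy hookup can later be deleted without touching the core. A minimal sketch, with all names (`Storage`, `LegacyAdapter`, `NativeStore`) invented for illustration:

```python
# Sketch of removable dependencies: the core depends only on a
# narrow "port"; the legacy adapter can be deleted later without
# touching the core. All names here are hypothetical.
from abc import ABC, abstractmethod

class Storage(ABC):                # the port: all the core ever sees
    @abstractmethod
    def load(self, key: str) -> str: ...
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class LegacyAdapter(Storage):      # initial hookup into the current world
    def __init__(self):
        self._db = {}              # stand-in for some entrenched system
    def load(self, key):
        return self._db.get(key, "")
    def save(self, key, value):
        self._db[key] = value

class NativeStore(Storage):        # the eventual non-dependent replacement
    def __init__(self):
        self._data = {}
    def load(self, key):
        return self._data.get(key, "")
    def save(self, key, value):
        self._data[key] = value

def core_logic(store: Storage) -> str:
    store.save("greeting", "hello")
    return store.load("greeting")

# Swapping adapters removes the legacy dependency; core_logic is untouched.
assert core_logic(LegacyAdapter()) == core_logic(NativeStore()) == "hello"
```

The point is not the pattern itself but the planning: the dependency is designed from the start to be an organ that can be amputated, not a spine.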
Part of this is to admit to the box, but not accept that the box is inescapable.
It is the best online conversation I have ever experienced.
It also reminded me of inspiring conversations with Jerome Bruner at his New York City apartment 15 years ago. (I was working on a project with his wife's NYU social psychology group at the time.) As a Physics Ph.D. student, I never imagined I could become so interested in the Internet and education in the spirit of Licklider and Doug Engelbart.