What are you talking about? You pulled this out of nowhere.
Boltzmann machines are undirected architectures and have been around since the eighties, courtesy of Geoff Hinton and collaborators. Here is some recent work on the topic: http://www.cs.toronto.edu/~hinton/absps/dbm.pdf
More interconnected architectures are much slower to train and to run inference on, so we use restricted architectures to improve speed. (RBMs = restricted Boltzmann machines, which are a component of every current Netflix Prize top contender.)
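To make the speed point concrete, here is a minimal NumPy sketch of one contrastive-divergence (CD-1) update for an RBM. The sizes, learning rate, and toy data are made up for illustration; the thing to notice is that the bipartite restriction (no visible-visible or hidden-hidden edges) makes each sampling pass a single matrix multiply.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes, chosen only for illustration.
    n_visible, n_hidden = 784, 256
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, lr=0.1):
        """One CD-1 step on a batch of binary visible vectors.

        All hidden units are conditionally independent given the
        visibles (and vice versa), so each phase below is one matmul.
        """
        global W, b_v, b_h
        # Positive phase: sample hiddens given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # Gradient estimate: data correlations minus model correlations.
        batch = v0.shape[0]
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)

    # Toy usage on random binary "data".
    v0 = (rng.random((32, n_visible)) < 0.5).astype(float)
    cd1_update(v0)

In an unrestricted Boltzmann machine you lose that conditional independence, so exact inference needs slow iterative settling instead of two matrix multiplies.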
AI does need faster number crunching. Matrix multiplies are really slow; I can't train more than ~10K neurons on desktop hardware. Faster hardware has driven a lot of AI innovation.
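Rough back-of-envelope: a single dense layer of 10K neurons is already 10^8 weights (~400 MB in float32) and ~2*10^8 FLOPs per forward pass. A quick timing sketch, with sizes picked only to illustrate the scaling:

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000                                         # one dense layer of 10K neurons
    W = rng.standard_normal((n, n), dtype=np.float32)  # 1e8 weights ~ 400 MB
    x = rng.standard_normal(n, dtype=np.float32)

    start = time.perf_counter()
    for _ in range(100):
        y = W @ x                                      # ~2 * n**2 = 2e8 FLOPs per pass
    elapsed = (time.perf_counter() - start) / 100
    print(f"{elapsed * 1e3:.2f} ms per forward pass")

Since the cost grows quadratically with layer width, every constant-factor speedup in the matmul directly raises the ceiling on how big a network you can train.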