
Possible unconventional computing techniques of the future - diego898
http://nautil.us/issue/21/information/moores-law-is-about-to-get-weird
======
ChuckMcM
I was expecting a bit more from nautil.us. There are a lot of alternate
computation mechanisms but they don't lend themselves to the environments
where we put computers (you're not going to run a satellite on slime mold).
Building better biological diagnostics? Sure. But in the computer space where
Moore's Law is often cited, graphene or other carbon structures, or even
silicene seem more likely and 3D structures even more so.

------
aidenn0
A little rewording, and we get:

> Transistor computing is naturally parallel, aidenn0 says, with computations
> taking place simultaneously at every logic gate

------
gooseyard
Reminds me of another recent article, which I think I might also have found
here at HN:

[http://www.damninteresting.com/on-the-origin-of-circuits/](http://www.damninteresting.com/on-the-origin-of-circuits/)

------
farresito
Boy, I should stop reading HN. The more I read about the scientific
advancements that are going on in this field, the more I think about getting a
second degree in, say, biology, to try to work on biological computers or
stuff like that.

------
logicallee
in my opinion moore's law is already fine for another 15 years.

(15 years / 18 months = 10, so 10 iterations). moore's law is about transistor
count.

the reason it's fine to have up to 1024x as many transistors (2^10, since
moore's law is about doubling):

> "Moore's law" is the observation that, over the history of computing
> hardware, the number of transistors in a dense integrated circuit doubles
> approximately every two years.

oh, I see it says 2 years, not even 18 months as I'd thought.
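The doubling arithmetic above can be sketched as follows (a quick illustration; `transistor_growth` is a made-up helper, and it assumes a fixed doubling period with no physical limits):

```python
def transistor_growth(years, doubling_period_years):
    """Return the multiplier on transistor count after `years`,
    assuming one doubling per `doubling_period_years`."""
    doublings = years / doubling_period_years
    return 2 ** doublings

# 15 years at an 18-month (1.5-year) period: 10 doublings -> 1024x
print(transistor_growth(15, 1.5))   # 1024.0
# 15 years at the quoted 2-year period: 7.5 doublings -> only ~181x
print(transistor_growth(15, 2.0))
```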

anyway the reason it will continue just fine is that we are using one tiny
few-millimeter slice (plane) to print transistors onto whereas obviously by
2035 there will be multiple layers (3d). this is totally obvious since

[https://www.google.com/search?q=c+%2F+4+ghz](https://www.google.com/search?q=c+%2F+4+ghz)

and we're already at 7.49481145 centimeters travelled in a clock cycle.
etchings are at 14 nm already. a molecule of carbon nanotube has its width at
4 nm. carbon itself (the atom) only has an atomic radius of 0.14 nm, and
you're not going to be inserting your etchings into the quarks and protons of
atoms and hoping to do your calculation there.
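The centimeters-per-cycle figure cited here is just the search query above worked out, e.g.:

```python
# Distance light travels in one clock cycle at 4 GHz.
c = 299_792_458          # speed of light in vacuum, m/s
clock_hz = 4e9           # 4 GHz clock

distance_m = c / clock_hz
print(f"{distance_m * 100:.8f} cm per clock cycle")  # ~7.49481145 cm
```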

it's completely obvious that before very long we'll take those 8 centimeters
light travels per cycle @ 4 ghz and route them up through stacked layers
rather than zigzagging across a single plane, a single slice, when we can
stack thousands of them.

there's nothing wrong with moore's law except the fact that people are too
good at shrinking die size, so it'll be a while for them to think inside the
box. (outside the plane.)

~~~
nhaehnle
You're right that going 3D is a natural thing, but there appear to be pretty
serious manufacturing challenges. The most obvious one: chips _already_ take a
pretty long time to manufacture - apparently the "latency" of a typical fab is
on the order of weeks. If you double the number of layers, you double this
latency.

So the only way to get serious about 3D is to produce thin slices in parallel
and then put them together somehow. As a corollary, this means that we're most
likely never going to see a logical unit such as an entire core being distributed
across two layers of transistors.

This doesn't invalidate your point, of course. We're likely to see designs
where multiple dies, each with several cores, are stacked on top of each
other.

~~~
logicallee
(I don't know where you get off concluding "never" but whatever.)

but you're right, there are very (extremely) serious manufacturing challenges.
the fact that people were so good at shrinking die size is what has kept the
third dimension from being needed. I guarantee that if they couldn't keep
shrinking, they would have had to go 3D regardless of mfg challenges - it's
the only direction left to go. (other than adding more cores, which they have
also done.)

~~~
nhaehnle
Well, there is a "most likely" there ;)

But yes, I highly suspect that technology will never reach a point where
multi-layer cores make sense. Part of the reasoning is that even if you reduce
distances, you're still fighting with transistor switching speed, and
individual cores are already very small. I do believe that designs with cores
on one slice and L3 cache on another slice may happen, by the way. Another
part of the reasoning is that (even if manufacturing can largely avoid the
multiple-slice-then-glue method), the density of inter-layer connections will
likely be much lower than the wiring density on the lower metal interconnect
layers [0], which makes it unattractive to have closely tied logic spread
across different layers. There are theoretical academic papers talking about
the physical design of e.g. adders spread across a small number of layers, but
they're frankly not very impressive, and their assumptions about the
electrical properties of the inter-layer connections appear a bit on the
optimistic side (understandable because hey, you've got to publish
something!).

Overall, it's just a big headache, compared to the alternative of going 3D by
stacking relatively large logical components such as cores and cache blocks.

[0] To be fair, I haven't seen numbers on this. I don't know how familiar you
are with the typical stacks of interconnect metal, which consist of a very
small number of layers for the thinnest wires, and then additional layers of
increasingly (order of magnitude) larger wires for longer distance
connections. In a hypothetical multi-transistor-layer design, how do you
arrange those interconnect layers? To have a high density of connections
between the layers, you want to only use layers with thin wires between the
transistor layers, but then it's unclear how the longer distance wiring should
work.

Edit: Let me formulate my position as a more concrete prediction. There will
never be a major commercial microprocessor in whose design an automated tool
is used to decide the assignment of a significant fraction[1] of transistors
to their layer on an individual basis[2].

[1] Meaning a significant fraction of the random logic transistors; with
growing caches, the fraction of transistors which is placed by an automatic
tool is decreasing anyway.

[2] Meaning that the tool makes individual decisions about fundamental
building blocks such as NAND gates. It is slightly more conceivable that an
automatic tool is used to assign larger blocks (such as an FPU) to different
layers. I would still predict that this won't happen, but with lower
confidence.

~~~
sitkack
You should read "The Hazards of Prophecy" [http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists...](http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists_hazards_of_prophecy.pdf) by Arthur C. Clarke

------
jhallenworld
"A ternary lookahead algorithm does not yet exist, and other algorithms that
would be needed for ternary to be practical are also missing."

Really? I'm pretty sure carry select look-ahead would work for ternary.
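One way a carry-select scheme could look in base 3 - a rough Python sketch, not a hardware design, with invented function names. The key observation is that a base-3 digit sum (at most 2 + 2 + 1 = 5) still produces only a 0 or 1 carry, so each block can precompute results for both possible carry-ins and select between them:

```python
def ripple_block(a, b, carry_in):
    """Add two equal-length little-endian base-3 digit lists (digits 0..2)
    with a given carry-in; return (digits, carry_out)."""
    out, carry = [], carry_in
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s % 3)
        carry = s // 3        # in base 3 the carry is always 0 or 1
    return out, carry

def carry_select_add(a, b, block=2):
    """Carry-select: compute each block for carry-in 0 and carry-in 1
    in parallel (conceptually), then pick once the real carry arrives."""
    result, carry = [], 0
    for i in range(0, len(a), block):
        ab, bb = a[i:i + block], b[i:i + block]
        sum0 = ripple_block(ab, bb, 0)   # candidate if incoming carry is 0
        sum1 = ripple_block(ab, bb, 1)   # candidate if incoming carry is 1
        digits, carry = sum1 if carry else sum0
        result.extend(digits)
    return result, carry

# 121_3 + 212_3 (little-endian: 16 + 23 = 39 = 1110_3)
print(carry_select_add([1, 2, 1], [2, 1, 2]))  # ([0, 1, 1], 1)
```

This only shows that the carry structure carries over; whether it is competitive in an actual ternary technology is a separate question.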

~~~
DavidS89
Could you expand? :)

------
jerf
What a strange hook. "Moore's law is running out and we soon won't be able to
squeeze more performance out of transistors, so... here's a bunch of computing
technologies that may have a niche but stand no chance of having better
performance than transistors even when optimized." It's a fine article, but
the hook doesn't match it.

~~~
tacotime
I agree with you 95%. The 5% being the admittedly far-fetched and idyllic
notion that maybe one day we will design the mythical x86 bacterium, and
instead of needing meticulously crafted silicon wafers and logic gates, all we
need is cheap, simple hydrocarbons to grow the world's most elaborate
supercomputers. They wouldn't be faster, but more efficient and highly
parallel - like a more distributed version of a brain where bacterial (or
whatever) cells take the place of traditional neurons.

~~~
aetherson
While it's true that there's an unlikely but nice scenario in which it becomes
ultra-cheap to make biological processors, I take some issue with the idea of
them being "highly parallel."

What stops us from using more highly parallel chips right now is not the
expense of silicon, it's that it turns out that highly parallel chips just
aren't as useful as fast chips. Nothing about the ultra-cheap biological
computers scenario would change that. That would be a world of _omnipresent_
computing, not one of _parallel_ computing.

------
mwcampbell
I think I would prefer it if progress in hardware efficiency just stopped when
Moore's Law finally becomes invalid. Then hardware wouldn't become obsolete so
quickly. That would surely be good for the environment, and for the poor.

~~~
comboy
And once again we would learn how to write efficient software, yay.

I can't imagine this happening though (before another paradigm shift).

~~~
JoeAltmaier
Come on over to embedded software! It's all about efficiency, space, power,
bytes and bandwidth.

~~~
mrec
You have no idea how good that sounds. "Enterprise" programming is soul-
destroying, and the current vogue for mile-high node-based JS stacks looks
even worse.

I miss the good old days of futzing around in 68k assembler for fun and...
well, just fun, really.

