
Difficulties in Saving Moore’s Law - jonbaer
http://www.nextplatform.com/2016/06/24/alchemy-cant-save-moores-law/
======
boznz
No Mention of moving to 3D ?

I am assuming that having RAM and CPU (and other functions such as a GPU or
other hardware accelerators) on the same die on different levels (physically
close, i.e. nanometers or micrometers apart) would cut latency by orders of
magnitude.

3D is already done with NAND flash, so I am assuming it is heat that is the
problem.

Just my 10c

~~~
T-A
Average distance from points in a unit circle to center is 2/3; average
distance from points in a unit sphere to center is 3/4. Or if you prefer
squares and cubes, you are essentially replacing a sqrt(2) diagonal with a
sqrt(3) triagonal.

Even worse, surface area available for heat dissipation grows quadratically,
while volume and therefore heat generated grows cubically. That's why 3D
memory is a thing (most memory cells are not active at any given time), but
seriously 3D processors (trigate notwithstanding) are not.
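
For anyone who wants to check those constants, they fall straight out of
integrating over the radial density of a uniform point in a disk or ball, and
the square/cube figures are just Pythagoras; a quick check, written out in
LaTeX:

    % Radial density of a uniform point: 2r in a unit disk, 3r^2 in a unit ball.
    \bar{r}_{\mathrm{disk}} = \int_0^1 r \cdot 2r \, dr = \tfrac{2}{3},
    \qquad
    \bar{r}_{\mathrm{ball}} = \int_0^1 r \cdot 3r^2 \, dr = \tfrac{3}{4}
    % Longest straight path: a square's diagonal versus a cube's space diagonal.
    \sqrt{1^2 + 1^2} = \sqrt{2}
    \quad\longrightarrow\quad
    \sqrt{1^2 + 1^2 + 1^2} = \sqrt{3}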

~~~
Razengan
Well, biological brains are 3D processors, and have been a thing for billions
of years..

~~~
T-A
You might be surprised how little 3D they are. Look at the top-of-the-line
model: the human cortex is only about 3 mm thick and is composed of only six
layers of neurons. It may look like a big lump of meat, but that's because
it's folded to fit more area into the limited volume available in the skull.
If you unfold it, you get a very thin sheet with a whopping area of 2500 cm^2.

------
_yosefk
Interesting that a guy from Intel says cache coherence doesn't scale; I kinda
thought it scales pretty well, especially if actual data sharing is the
exceptional case that must be made to work correctly, and non-shared data is
the rule and must be made fast.
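
To make "non-shared data must be made fast, sharing is the expensive case"
concrete, here is a minimal sketch in C using pthreads; the two-thread setup,
iteration count and the 64-byte cache line size are assumptions for
illustration, not anything from the article. Two counters that land on the
same cache line make the coherence protocol bounce that line between cores on
every write, while padded per-thread counters generate no coherence traffic:

    /* Sketch: the cost of cross-core sharing under cache coherence.
       Two threads increment two counters; when both counters sit on the
       same 64-byte cache line, every write forces the line to migrate
       between cores. Padding each counter onto its own line removes that
       traffic. (Thread/iteration counts are arbitrary illustration values.) */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 100000000UL

    /* Two counters packed onto the same cache line. */
    static _Alignas(64) struct { volatile uint64_t a, b; } same_line;

    /* Two counters padded so each occupies its own 64-byte line. */
    static _Alignas(64) struct { volatile uint64_t v; char pad[56]; } padded[2];

    static void *bump_a(void *arg) {
        (void)arg;
        for (uint64_t i = 0; i < ITERS; i++) same_line.a++;
        return NULL;
    }
    static void *bump_b(void *arg) {
        (void)arg;
        for (uint64_t i = 0; i < ITERS; i++) same_line.b++;
        return NULL;
    }
    static void *bump_own(void *arg) {
        volatile uint64_t *p = arg;
        for (uint64_t i = 0; i < ITERS; i++) (*p)++;
        return NULL;
    }

    /* Run two threads and return the elapsed wall-clock time in seconds. */
    static double timed_run(void *(*f0)(void *), void *a0,
                            void *(*f1)(void *), void *a1) {
        pthread_t t0, t1;
        struct timespec s, e;
        clock_gettime(CLOCK_MONOTONIC, &s);
        pthread_create(&t0, NULL, f0, a0);
        pthread_create(&t1, NULL, f1, a1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        clock_gettime(CLOCK_MONOTONIC, &e);
        return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) * 1e-9;
    }

    int main(void) {
        printf("same cache line : %.2f s\n",
               timed_run(bump_a, NULL, bump_b, NULL));
        printf("separate lines  : %.2f s\n",
               timed_run(bump_own, (void *)&padded[0].v,
                         bump_own, (void *)&padded[1].v));
        return 0;
    }

Compile with something like gcc -O2 -pthread; on typical multicore hardware
the same-line case is several times slower even though the two threads never
touch each other's data, which is the kind of cost that goes away when
non-shared data really is kept non-shared.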

~~~
amelius
> especially if actual data sharing is the exceptional case

The problem is that cache coherence does not scale, _given_ (note emphasis)
that a certain amount of data is shared among CPUs.

You cannot circumvent the problem by choosing to not share much data between
cores, because then you have changed the problem!

~~~
_yosefk
What example workload cannot be implemented efficiently if your cores
communicate using coherent caches, but can be implemented efficiently if they
work in some other way?

------
Artlav
The real question is - can we really know that there is nothing better than
CMOS?

Imagine an old star undergoing collapse: the matter falls inward under its own
gravity, accelerating faster and faster. At some point the matter runs out of
room to compress; the neutrons slam into each other, and the whole mass of the
star, moving inward at a good fraction of c, tries to come to an abrupt stop.

The leisurely free fall turns into the violence of inertia. Then, either that
violence is enough, and the collapse continues towards a black hole, or it's
not enough, and all we get is a boring neutron star.

Will the economic inertia behind Moore's Law push it past each technology's
clang of limits all the way towards some sort of singularity, or will it one
day come to an abrupt halt against an immovable wall in the structure of the
universe?

The latter, obviously.

Would that wall be CMOS? We can't really know until we hit it.

~~~
jacquesm
There are quite a few options besides CMOS, all with desirable properties and
some with potential feature sizes smaller than silicon's. The problem is that
none of them are economical at scale, and that is where the trouble lies: it's
an economic issue, not so much a technical one.

Bismuth, gallium arsenide and other variations on that theme, as well as
various superconductors used for processing elements, have the potential to
surpass silicon on various parameters or have already done so. But none have
done so in a way that would allow massive adoption.

So for now the 'economic inertia', as you put it so eloquently, seems to be
exactly where the problem is: inertia is only useful to get you past a hump
when you're still moving forward; when you're already stuck it is a hindrance.

~~~
adrianN
Any new technology will have a really hard time competing economically with an
industrial process that has been refined for five decades. The question is
whether there are people willing to pay a hefty premium for the new technology
to finance its development.

~~~
jacquesm
It's a bit like the combustion engine. A completely different tech (such as
optical) may be a better bet.

------
Qantourisc
So time to write efficient code and start working hard on easy multi-
threading? Because if we hit "the wall", all consumers can do is buy the
fastest CPU with the most cores they can get, and then never again will they
be able to upgrade. Those resources are what we will have to work with.

~~~
Qwertious
I think when we actually hit The Wall, we'll keep on improving (at a slowed
pace) for a while afterwards, based on the not-so-low-hanging fruit we've
ignored up until then.

~~~
cbd1984
Frankly, I'm kind of interested in the prospect of specialized hardware coming
back.

The trend since the mid-1980s, when the Intel 80386 was introduced, has been
for specialized chips to be replaced due to commodity chips beating them
comprehensively in single-core speed; problems which once required specialized
hardware and core designs were either solved or obviated by massive
improvements in scalar hardware.

Now that this brute force method is starting to peter out, we might see a rise
of new, specialized designs once again, to solve specific problems which gain
substantial speedups from very specific kinds of parallelism. Systolic arrays
are one example. We're already doing something like this by pressing GPUs into
service as computational hardware, but it can go farther.
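
As a concrete illustration of the systolic idea, here is a toy,
cycle-by-cycle software model in C of an output-stationary systolic array
computing C = A * B; the 4x4 size, the skewed feeding schedule and the test
matrices are assumptions for the sketch, not a description of any particular
chip:

    /* Toy model of an output-stationary systolic array computing C = A * B.
       A values flow left-to-right, B values flow top-to-bottom, and each
       processing element (PE) keeps its own partial sum of C. */
    #include <stdio.h>

    #define N 4  /* array is N x N processing elements, matrices are N x N */

    int main(void) {
        int A[N][N], B[N][N], C[N][N] = {{0}};
        int a_reg[N][N] = {{0}}, b_reg[N][N] = {{0}};  /* value held in each PE */

        /* Fill A and B with small test values. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = i + j;
                B[i][j] = (i == j);  /* identity, so C should equal A */
            }

        /* Run enough cycles for the last skewed inputs to reach the far corner. */
        for (int t = 0; t <= 3 * (N - 1); t++) {
            int a_new[N][N], b_new[N][N];

            /* Shift: row i of A and column j of B enter the edges skewed by
               i and j cycles respectively, with zeros fed outside the window. */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) {
                    int ka = t - i, kb = t - j;
                    a_new[i][j] = (j == 0)
                        ? ((ka >= 0 && ka < N) ? A[i][ka] : 0)
                        : a_reg[i][j - 1];
                    b_new[i][j] = (i == 0)
                        ? ((kb >= 0 && kb < N) ? B[kb][j] : 0)
                        : b_reg[i - 1][j];
                }

            /* Compute: every PE does one multiply-accumulate per cycle. */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) {
                    a_reg[i][j] = a_new[i][j];
                    b_reg[i][j] = b_new[i][j];
                    C[i][j] += a_reg[i][j] * b_reg[i][j];
                }
        }

        /* Print the result; with B = identity it should reproduce A. */
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++)
                printf("%3d ", C[i][j]);
            printf("\n");
        }
        return 0;
    }

Every PE does one multiply-accumulate per cycle and only ever talks to its
immediate neighbours, which is exactly the locality that makes the structure
cheap to lay out in silicon.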

~~~
Inthenameofmine
This is definitely already starting to happen.

Some modern server CPUs have FPGAs in them. I was able to speed up blockchain
transaction signing and verification by two orders of magnitude by using the
built-in Intel FPGAs.

Mobile phone SoCs already have semi-specialized processors in them.
Microsoft's HoloLens has something they call an "HPU", a Holographic
Processing Unit. Google just made public that they have a TensorFlow ASIC.
Google's phone radar thingy has its own ASIC, if I'm not mistaken.

------
Sylphine
I have been wondering for a long time whether it would be possible to make a
computational device out of subatomic particles. But I guess, even if it is
possible, it would take hundreds of years.

------
claystu
I wonder if hardware constraints might actually improve software. Instead of
"what the hardware giveth, the software taketh away," the industry will
instead begin to say, "well, this is all the performance we've got, so let's
make the code half the size it is right now and eliminate 50% of the bugs."

------
narrator
The Singularitarians aren't going to be too happy about that. They've been
spending all their time getting ready for a Strong AI future built on the
inevitability of Moore's Law, and now it may never come.

Maybe biotech will lead the next tech revolution. I can see people doing all
sorts of unholy stuff with CRISPR. It's a technology, though, that I can't see
the regulatory agencies in the U.S. really supporting the development of. It's
almost too powerful a technology.

It's sort of like how not much has happened in the way of nuclear power design
lately. So much can go wrong with that technology that the regulatory agencies
never approve anything.

I wonder if we'll reach a point where all the socially acceptable technologies
have been developed and technologies like advanced nuclear or radical CRISPR
biotech that could take things to the next level are permanently forbidden
because they are too powerful and dangerous.

~~~
FeepingCreature
Speaking as a singularitarian: The only thing that has stopped is free scaling
of photolithography.

Our most advanced computing systems fill the equivalent volume of a large
snowflake. Scaling down is over; the future is scaling up and sideways. (Power
consumption, price...)

The only unit that ever _really_ mattered is amortized flops per
inflation-adjusted dollar.

