

Intel's "Tick Tock" Model for Innovation  - skmurphy
http://www.intel.com/technology/tick-tock/index.htm

======
mnemonicsloth
This is BS in terms of engineering, but it says a lot about the problem Intel
has to solve to stay in business.

According to Hennessy & Patterson, 90% of performance gains come from better
architecture. That's the tock. The other 10% come from clock speed [1], a
side effect of better fab, the tick. But you mostly want better fab to get
more transistors so you can build a better architecture.

So here's Intel's problem. They sink a huge amount of money (costs also
follow Moore's Law) into upgrading their fabs before they release a new chip.
Then they have nothing new to sell for a year or more while they fit the new
design to the new fab.

In the nineties they smoothed out demand by selling up-clocked versions of old
chips, training consumers to think that more MHz = more faster. That's less
effective now that clock speeds have stabilized, so they've been pitching
lower power consumption instead. It's not really the same, because "faster
CPU" in the 1990s really meant "it can run new software."

It will be interesting to see how this model will fare in the cloud era. If
most CPUs live in data centers, most purchasers will choose based on power
consumption and pay less attention to architecture.

[1] Until clock speed stopped changing. Clock speeds may even drop for a while
to make parallel engineering easier. cf.
<http://www.amazon.com/Computer-Architecture-Quantitative-Approach-3rd/dp/1558605967>

~~~
newfolder09
As a general rule of thumb, performance for a given program on any CPU is:
frequency, times instructions completed per cycle, divided by the number of
instructions executed (equivalently, execution time = instruction count *
cycles per instruction / frequency). The instruction count is largely fixed
by your ISA (i.e. whether the machine is CISC like the x86 or RISC like
MIPS/ARM). What happened in the Pentium 4 era was that the focus was almost
completely on the frequency part of the equation rather than on instructions
completed per cycle.
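
A quick sketch of that rule in Python (the CPI and clock figures below are
invented for illustration, not real Pentium numbers):

    # Iron law of CPU performance:
    #   execution_time = instruction_count * CPI / frequency
    # Higher clocks or more instructions per cycle (lower CPI) help;
    # a longer instruction stream hurts.
    def execution_time(instruction_count, cpi, frequency_hz):
        return instruction_count * cpi / frequency_hz

    n = 10**9  # hypothetical program: a billion dynamic instructions

    # Pentium 4-style tradeoff: high clock, worse CPI.
    print(execution_time(n, cpi=1.5, frequency_hz=3.0e9))  # 0.5 s

    # Pentium III-style: lower clock, better CPI.
    print(execution_time(n, cpi=0.9, frequency_hz=1.0e9))  # 0.9 s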

Intel's focus on new fabs is not just about higher clock speed - that's a
useful side benefit. The real reason is significantly lower cost per die. The
same wafer can now produce many more CPU dies (that are slightly faster),
increasing their profit per unit.

With regard to data centers, we are already seeing a move to power efficient
architectures (with the Core family of cpus) versus pure performance. However,
especially in the data center model, performance is still a critical metric
that probably is not going away any time soon.

~~~
mnemonicsloth
Datacenter operators pay attention to total cost of ownership per increment
of performance. They'll be very interested in faster chips, but only when,
for example, they can run an extra virtual instance on each server without
raising power and cooling costs.
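
For example, a toy rack-level calculation (every number here is invented):

    # Does a faster chip pay off under a fixed rack power budget?
    RACK_POWER_W = 10000

    def vms_per_rack(server_power_w, vms_per_server):
        servers = RACK_POWER_W // server_power_w
        return servers * vms_per_server

    print(vms_per_rack(400, 8))   # today's chip:            200 VMs
    print(vms_per_rack(400, 10))  # faster chip, same power: 250 VMs
    print(vms_per_rack(500, 10))  # faster but hotter:       200 VMs

The faster-but-hotter chip gains nothing, which is why power is the headline
number.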

On clock speeds you're mistaken. Yes, Intel went from 20 to 1000 MHz during
the nineties, and their _marketing_ was all about clocks. But during that
period, they also added 20-stage pipelines, out-of-order execution, 3 levels
of caching for instructions and data, branch prediction, and hyperthreading.
That's the substance of the Hennessy and Patterson claim: during the 10-year
up-clocking binge, 90% of performance gains still came from architecture.

I'm not sure I understand what you mean about cost per die. Can you elaborate?

~~~
newfolder09
So it wasn't clear to me initially which phase of CPU development you meant.
I agree with your statement regarding CPU development in the 90s. Looking at
the performance equation I gave earlier, the instructions-per-cycle term has
gone from several tens of cycles/instruction in the 486 timeframe to 2-3
INSTRUCTIONS/cycle by the Pentium III.

My point was that your statement about 90% of the gains being from
instructions/clock (i.e. architecture) was not always true. The Pentium 4 is
a prime example: its number of pipe stages was dramatically scaled up
(reducing instructions/clock) in order to increase frequency.

WRT cost/die: CPUs are created on circular silicon wafers.
<http://arstechnica.com/hardware/news/2008/09/moore.ars/2> Every move to a
smaller process node reduces the area of each CPU die. For a given CPU, this
means that more of them fit on each wafer, driving down the cost of each
unit.
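
A back-of-the-envelope sketch using the standard dies-per-wafer approximation
from Hennessy & Patterson (the die areas are hypothetical):

    import math

    # Dies per wafer ~= wafer area / die area, minus an edge-loss
    # term for the partial dies around the rim.
    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        r = wafer_diameter_mm / 2.0
        return int(math.pi * r ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm
                   / math.sqrt(2 * die_area_mm2))

    # A full node shrink roughly halves the area of the same design,
    # roughly doubling the dies per wafer.
    for area_mm2 in (200, 100):
        print(area_mm2, "mm^2:", dies_per_wafer(300, area_mm2),
              "dies per 300mm wafer")  # 306, then 640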

Of course, as you mentioned, by keeping die size constant, they get more
transistors per die, allowing them to cram more features onto a chip.
Lowering costs versus adding features is a tradeoff that every CPU design
team has to make.

------
fragmede
Given the number of people Intel employs, my guess is that Intel looks far
enough ahead that teams have two-year external deadlines. They've just
staggered those deadlines so the public sees either a 'tick' or a 'tock'
each year.

------
AngryParsley
Extrapolating feature size reductions every two years:

2010: 32nm, 2012: 22nm, 2014: 16nm, 2016: 11nm, 2018: Nanoelectronics/magic

Eight years away from the end of CMOS? That's mind-boggling, especially since
the ITRS roadmap puts 11nm feature size out in 2022.
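
In code, the extrapolation is just repeated division by sqrt(2), which halves
die area each step (the marketing node names are these values rounded):

    node_nm, year = 32.0, 2010
    while year <= 2018:
        print(year, "%.1fnm" % node_nm)
        node_nm /= 2 ** 0.5
        year += 2
    # 2010 32.0nm, 2012 22.6nm, 2014 16.0nm, 2016 11.3nm, 2018 8.0nm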

~~~
mnemonicsloth
Wait until you see what comes after.

Everyone thinks DNA is just genetic material, but researchers can already
build rigid 3D structures with it, or weave it into flat sheets and then
address the individual nucleotides like pixels.

The computers we use in 2025 will probably be self-assembled and might not be
electronic, but they'll keep getting faster.

<http://metamodern.com/2009/05/22/a-third-revolution-in-dna-nanotechnology/>

------
10ren
Why those nm numbers? Hmmm... each step shrinks the feature size by a factor
of sqrt(2), so density (transistors per unit area) roughly doubles every two
years. Slower than Moore's Law (doubling every 18 months).
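
A quick check of how far apart those two rates drift over a decade:

    # Density growth over 120 months at each doubling period:
    for period_months in (24, 18):
        growth = 2.0 ** (120.0 / period_months)
        print("%d-month doubling: %.0fx" % (period_months, growth))
    # 24-month doubling: 32x
    # 18-month doubling: 102x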

~~~
hga
So I thought, but according to Wikipedia, Moore wasn't that precise in his
prediction: initially, in 1965, it was every year, later every two years.
Without a citation, they say he insists he never said 18 months.

Also, it's a bit more complicated than just a doubling; again from Wikipedia,
the original formulation was "about the density of transistors at which the
cost per transistor is the lowest."
(<http://en.wikipedia.org/wiki/Moore%27s_law#Other_formulations_and_similar_laws>)

Intel is of course not quite playing that game, with speed and power being
somewhat more important than cost.

------
kristianp
I wonder why AMD doesn't use the 'tick-tock' model? It seems to work very well
for Intel.

~~~
reitzensteinm
They kind of do, e.g. 65nm was introduced with the Athlon X2 before they
came out with the Phenom, then the Phenom was shrunk to 45nm, etc. I've
always seen tick-tock as purely a marketing thing, although the one-year
goals may drive engineering somewhat.

Interestingly, Intel is considering doing tick-tock-tock for 32nm.

~~~
hga
Hmmm, I've never seen it as purely marketing; it seems to be very
engineering-oriented:

Tick: Take an existing, known-working, well-understood design and shrink it
to the new process, using that to work out the kinks in the process itself.

Tock: Now that the new process is well understood, get a new design working
on it.

Intel is probably in a better position to do this due to their very strong
emphasis on manufacturing and their ability to solidly plan their spending on
new fab lines and processes.

~~~
reitzensteinm
I meant the whole 'tick tock' branding, so to speak - it seems to be marketing
giving a name to something which engineering has been doing for sensible
reasons for quite some time.

~~~
hga
I'm sorry, but I got the impression from your saying "although the 1 year
goals may drive engineering somewhat" that the marketing was driving the
engineering and not the reverse.

I think that if you're doing something smart, something that shows how ...
solid? reliable? your company is, something that's easily understood by most
with an engineering background, _and something your competitor can't do_,
why not make some hay from it?

~~~
reitzensteinm
Ah, sorry, I meant that the only real change seemed to be that they drew a
line in the sand and said we'll do a tick or a tock each year. But even then,
it hasn't exactly happened that way (see
<http://en.wikipedia.org/wiki/Intel_Tick-Tock>), and they must have had goals
like that internally anyway.

I don't blame them for getting some marketing mileage out of it - just that I
don't really see it as a very important change in terms of engineering.
Although I'm just a casual observer of the industry so I may well be wrong
about that.

~~~
hga
Very interesting. Clearly it takes them more time to do a tick than a tock,
and that makes sense: the big unknown unknowns are in moving to a new
process. E.g. look at the reported fun with TSMC's 40nm process (iffy vias
and excessive variation in transistor construction), with ATI negotiating the
mess in two steps and nVidia failing to.

Once you've done the tick, a new microarchitecture should be relatively
easy, especially since you can simulate it ahead of time and, towards the
end, do so in light of what you've learned about the new process.

