
Intel Skylake sample breaks 6.5GHz mark on LN2 - DiabloD3
http://www.dvhardware.net/article62888.html
======
Xcelerate
I really miss the days of increasing clock speeds. I attended a conference on
molecular dynamics simulations two weeks ago (FOMMS), and more than one person
lamented the fact that long timescale (> millisecond) simulations are going to
be difficult to attain now that clock speeds have stalled. GPUs are great for
simulating _more_ atoms, but they are in fact worse at performing _longer_
simulations. If no more effort is put into making processors faster, then the
only ways forward are to either build special purpose machines (like Anton) or
to invent some clever mathematical techniques.

~~~
montecarl
Long-timescale atomic simulation of liquids is a really challenging problem.
Most well-developed accelerated dynamics methods only work well for solids,
where there is a well-defined timescale separation between atomic lattice
vibrations and reactive events. In liquids (e.g. protein folding) the
timescale separation is not as well defined. There seems to be complexity at
every level (vibrations, small rotations, dihedral angle changes, etc). Liquid
like systems have a very rough potential energy landscape with an enormous
number of shallow minima that must be explored to understand the dynamics.

I look forward to the clever algorithms that will be developed to enable
modeling liquid systems on experimental timescales. This has recently become
possible in some solid-state systems, but it will take some time and effort
before it happens for liquids. Then it will be much easier to do first-
principles studies of organic/bio chemistry.

~~~
j-pb
Maybe a stupid question, but isn't it necessary to run these long-timescale
simulations multiple times anyway? If so, one could evaluate them all at once
by introducing randomness (vibrations, small rotations, dihedral angle
changes, etc.) once an interesting point in time is reached, and then
simulating in parallel from then onward.
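A toy sketch of that branching idea, assuming a 1D overdamped Langevin particle on a double-well potential as a stand-in for a real molecular system (all names and parameters here are illustrative, not from any MD package): run one trajectory to a checkpoint, then continue many replicas from it, each with independent random noise.

```python
import random

def langevin_step(x, rng, dt=0.01, gamma=1.0, temp=0.5):
    """One Euler-Maruyama step of overdamped Langevin dynamics
    on the toy double-well potential U(x) = (x^2 - 1)^2."""
    force = -4.0 * x * (x * x - 1.0)  # -dU/dx
    noise = rng.gauss(0.0, (2.0 * temp * dt / gamma) ** 0.5)
    return x + force * dt / gamma + noise

def simulate(x0, n_steps, seed):
    """Integrate a single trajectory with its own RNG seed."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x = langevin_step(x, rng)
    return x

# Run one trajectory to an "interesting point in time" (a checkpoint)...
checkpoint = simulate(x0=-1.0, n_steps=1000, seed=0)

# ...then branch several statistically independent replicas from it,
# differing only in their random kicks (different seeds). In a real
# setup each replica would run on its own core or GPU.
replicas = [simulate(checkpoint, n_steps=5000, seed=s) for s in range(8)]

# Count how many replicas hopped to the other basin (x > 0):
crossed = sum(1 for x in replicas if x > 0)
print(f"{crossed}/8 replicas ended in the right-hand well")
```

This gives better statistics on rare events per unit of wall-clock time, but note it doesn't make any single trajectory longer, which is the limitation the parent comments are pointing at.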

------
loser777
For reference, here's what the current (CPU-Z validated) OC records look like:
[http://valid.canardpc.com/records.php](http://valid.canardpc.com/records.php)
(it's pretty safe to assume most of these used LN2 as well)

~~~
PebblesHD
How, under the current laws of physics, did someone manage to squeeze 8500MHz
out of a Celeron D????

~~~
dmishe
It's based on the Pentium 4 core, which was extremely overclockable, plus
it's a single core, which makes it easier to get stable enough for a
screenshot.

------
stephengillie
> _The CPU was overclocked from its default clockspeed of 4000MHz to 6531MHz
> without deactivating any cores or the Intel Hyper-Threading feature. The
> voltage was increased to 2.032V and the chip was chilled using LN2 cooling._

It's even crazier that they managed to do this without disabling any of the
cores. I wonder if the OS parked any of them. The blurb doesn't mention
whether they put the CPU under any load, or if the 6531MHz was achieved
idle.

Also, liquid nitrogen? Is this common in the overclocking community now?

~~~
bpicolo
In the extreme-overclocking community yeah. (There are tournaments for this
sort of thing afaik)

------
cmdrfred
I'm not a bare metal guy so excuse my ignorance, what is the bottleneck for
higher speed processors? Is it simply cooling?

~~~
johncolanduoni
It is a combination of things, of which cooling is one. A more fundamental and
challenging physical limit is increased loss due to higher frequency (which is
why chip voltage needs to be increased). It gets to the point where too much
signal loss occurs across "long" lines between parts of the chip. This is why
using plasmonics in ICs is an active field of research; the hope is to be able
to use optics for long distance interconnects inside the processor.

~~~
lscharen
The fact that we've reached a point where millimeter distances from one side
of a CPU die to the other are considered "long-distance communication" is
bemusing and awesome at the same time.

