

Intel Broadwell Architecture Preview: A Glimpse into Core M - ismavis
http://www.anandtech.com/show/8355/intel-broadwell-architecture-preview

======
jfoutz
14 nanometer process. I vaguely recall people saying that would be impossible
to achieve back when the original Pentium was on an 800 nm process.

very cool.

~~~
listic
Still, it looks like 5 nanometers is the end of it.
[http://en.wikipedia.org/wiki/5_nanometer](http://en.wikipedia.org/wiki/5_nanometer)

Not that it will necessarily be the absolute end of Moore's law, as hardware
manufacturers are trying alternative approaches to keep scaling up, e.g.
Samsung already sells its 850 Pro series SSDs
[http://www.amazon.com/s/ref=nb_sb_ss_c_0_6?url=search-
alias%...](http://www.amazon.com/s/ref=nb_sb_ss_c_0_6?url=search-
alias%3Dcomputers&field-keywords=850%20pro&sprefix=850+pr%2Caps%2C404) made
with its V-NAND memory
[http://www.samsung.com/global/business/semiconductor/html/pr...](http://www.samsung.com/global/business/semiconductor/html/product/flash-
solution/vnand/overview.html), which fell back to 40 nm from the 840 EVO's
19 nm while going 3D, and that seemed to improve both speed and reliability.
Now they have a bit more runway on their Moore's law curve, but still not much
in sight.

~~~
Igglyboo
I hear about this end to Moore's law all the time, but couldn't we just make
the CPUs physically larger?

I know this probably wouldn't work for a mobile device, but for
desktops/servers there's a ton of room for larger-dimension chips, right?

~~~
ethbro
(Disclaimer: I'm not an EE)

To be technical, Moore's law is about the number of transistors on an
integrated circuit.

So your point isn't too far off the 3D comments elsewhere.

Simply making the die bigger doesn't get you much: larger dies (without
additional redundancy) have lower yields (as you're more likely to have a
defect given a constant defect/area rate) and fewer can be stamped out of a
standard sized wafer.
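The yield argument above can be sketched with the standard Poisson defect
model (a textbook simplification; the defect density used here is an
illustrative assumption, not a real fab figure):

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies under a Poisson defect model:
    yield = exp(-D * A), where D is defect density and A is die area."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Assumed defect density of 0.1 defects/cm^2, purely for illustration.
small = poisson_yield(1.0, 0.1)
large = poisson_yield(4.0, 0.1)
print(f"1 cm^2 die yield: {small:.3f}")  # ~0.905
print(f"4 cm^2 die yield: {large:.3f}")  # ~0.670
```

Quadrupling the die area drops the fraction of good dies from roughly 90% to
roughly 67% in this toy model, on top of fitting fewer dies per wafer.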

However, if you carry that idea to its logical conclusion... we may turn from
shrinking the transistor to shrinking the packaging as the path of least
resistance. 3D transistors, chip stacking (aka PoP), and through-silicon vias
(aka vertical connectivity) all help get us more processing / area (while
remaining within fundamental thermal, manufacturing, etc. physical limits).

Again, this is a CS major with an architecture interest, so anyone please feel
free to correct me if I'm off-base.

~~~
XorNot
3D doesn't help very much. We're already at the limits of thermal capacity for
consumer chips. 3D just exacerbates that, since now you've lost a whole
dimension you can shunt heat through.

If you go 3D, you need a very large drop in heat dissipation to keep your
junction temperatures down.

------
bitL
Please Intel, release it in 2014! Don't make it slip into 2015! I really
really want a 10W NUC with Broadwell i7 as my main desktop machine!

~~~
higherpurpose
It seems Intel's misleading marketing is already working, even on the HN
crowd. The whole point of the "M" chip is to be low-power/low-performance. If
you want a "Core i7", get a version that actually has a lot of power behind
it, not a "10W i7". The 10W Core i7 won't be faster than a desktop Core i3
SNB. Heck, I bet it won't be more than 50 percent faster than the latest ARM
chip either.

~~~
bitL
I don't really care about super high performance - I need a small NUC that I
can cool passively, with reasonable performance for the typical business stuff
I do on the desktop. I understand the difference between mobile i5 and i7 is
almost none even on Haswell (both are dual-core with hyperthreading, unlike
the desktop versions). I already have a 17W Ivy Bridge i7-3517U in one of my
notebooks, so I know what I am talking about. I also can't wait for a 19W
Kaveri for another tiny desktop setup...

EDIT: i7 details

~~~
DuskStar
From Wikipedia, Intel also has quad-core + hyperthreading mobile CPUs, with
model numbers 4700 and up. The top mobile i7 is about equivalent to a 4770S
desktop chip, but with a bit lower TDP.

So don't lump all the mobile parts in with the ultrabook-optimised chips.

~~~
bitL
Sure, there is a 47W i7-4960HQ in the rMBP, which has significantly higher
power consumption than what I am looking for. If you look at the low-power U
versions of the i5 and i7, they seem almost identical (with the i7 having a
larger cache and a bit higher frequency; both are dual-core and have
hyperthreading). On the desktop, you can see the i5 is without HT, contrary to
the i7.

If Broadwell can bring a 20W quad-core i5/i7 - that would be even better!
Currently only the Bay Trail Pentium J can offer that.

------
justinph
AKA: The chip that will finally let you buy a retina Macbook Air.

~~~
wmf
If the GPU is fast enough to drive a retina display.

~~~
zanny
Even the "hd graphics" or low end Kaveri GPUs from AMD can drive 4k.

Hell, Adreno from Qualcomm can probably drive 4k.

Desktops, compositing, and light effects like transparency are nothing next to
video game or 3D model rendering. In practice, any GPU manufactured today is
enough for non-3D purposes, unless you try running three 4K monitors off the
numberless HD Graphics or something.

If you aren't getting 60 fps scrolling, it is usually due to the CPU being
low-powered and the application not using GL-accelerated drawing for the
scrolling.

~~~
mycookie
Also, just because the GPU can power 4K doesn't mean it will for a reasonably
small amount of power, such that the MBA can continue to pull a 12-hour
battery life.

~~~
FootballMuse
If you are plugged into a 4k monitor, you are probably reasonably close to a
power outlet.

~~~
lloeki
Theoretically, Thunderbolt is capable of being your power outlet.

~~~
estel
Yeah but if you're powering your 4K monitor from Thunderbolt, I doubt you'd
get that 12 hour battery life either :D

------
Zenst
One thing I'm looking forward to more than the power-saving advances is the
arbitrary-precision maths they are finally starting to add. Albeit just
integer add and multiply instructions for now, but a good start.

~~~
zwegner
Cool, I hadn't heard of this. I couldn't find much info on a quick search, so
here's Intel's reference:
[http://www.intel.com/content/dam/www/public/us/en/documents/...](http://www.intel.com/content/dam/www/public/us/en/documents/white-
papers/ia-large-integer-arithmetic-paper.pdf)

MULX was introduced on Haswell as part of the BMI instructions. Basically it
allows two explicit destinations (still an implied rdx source though).

ADCX/ADOX are pretty weird. They both operate basically the same as ADC, but
only read/write the carry or overflow flag respectively. The idea is to allow
two interleaved independent chains of adds, putting less pressure on the out-
of-order logic on chip. This seems kinda weird though: the renaming logic
would have to detect that the bits of the flags register that one chain of
adds reads aren't written by the other chain, so those instructions wouldn't
be hazards, and thus both chains can be executed in parallel. OTOH, IIRC there
are other instructions that only write certain bits of the flags, so I guess
the logic for this bit-level renaming of flags is already there.
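A rough Python illustration of the idea (not the microarchitecture itself):
each step of a multi-word addition consumes the carry produced by the previous
step, forming a serial chain like a run of ADC instructions. Two such
additions with no shared operands have fully independent carry chains, which
is exactly what ADCX (carry flag only) and ADOX (overflow flag only) let the
hardware interleave without false flag dependencies.

```python
MASK = (1 << 64) - 1  # one 64-bit limb

def multiword_add(a, b):
    """Add two equal-length little-endian lists of 64-bit limbs.
    Each iteration consumes the previous carry - a serial carry chain,
    analogous to a sequence of ADC instructions."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK)
        carry = s >> 64
    return out, carry

# Two independent additions: their carry chains never interact, so a CPU
# with separate carry/overflow flag chains can execute them in parallel.
r1, c1 = multiword_add([MASK, 1], [1, 0])  # low limb overflows into the next
r2, c2 = multiword_add([2, 3], [4, 5])
print(r1, c1)  # [0, 2] 0
print(r2, c2)  # [6, 8] 0
```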

------
hyperion2010
Sounds like Intel went after a lot of one-time optimization improvements for
these low-power chips, stuff that they aren't going to be able to call on
multiple times to improve performance. Maybe I'm wrong, but it seems like they
are really pulling out all the stops here.

------
nextos
I love ultra-low-power CPUs that are still able to drive a modern minimal
Linux system based mostly on CLI applications. I thought my Intel NUC was
already great, but this might take it to the next level with a completely
fanless design, yet cooler and very capable.

------
dschiptsov
Apple's new CPUs are ready. Macbooks will be ready for the holiday season.
Nothing to see here.

