
How the SoC Is Displacing the CPU - ingve
https://medium.com/@magicsilicon/how-the-soc-is-displacing-the-cpu-49bc7503edab
======
aaronbrethorst
This seems like a silly title. I interpret it as 'How the CPU, when packaged
with a bunch of other stuff, is displacing the CPU.'

Or am I missing something?

~~~
rwmj
You're not missing anything -- the article is dumb. Of course we're eventually
going to integrate everything, for the same reason that no one uses separate
interrupt controllers or separate Northbridges. We have too many transistors
we don't know what to do with.

Edit: I think the interesting question is what _won't_ be integrated? RAM and
Flash, because they use different processes. GPU? Intel and ARM integrate it,
but there is a large company selling discrete GPUs.

~~~
raverbashing
Also I would like to know how much die space is wasted on IBM PC backwards
compatibility crap in Intel systems (think A20, cascading interrupt
controllers, 8086 support, legacy boot, etc)

~~~
throwaway2048
An Intel 8088 processor had a 22 mm^2 die at a 3000 nm process node. A modern
x86 processor has a ~200 mm^2 die at a 14 nm node -- a die roughly 10x larger,
with features roughly 200x smaller.

Even if a modern x86 chip had to put an entire 8088 on die for legacy
compatibility (which it definitely would not need to), we are talking less
than 0.1% of the die space being taken up by legacy features.

The idea that x86 is somehow doomed to be inefficient because of legacy
baggage is quite simply massively overblown.
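The shrink arithmetic above can be sanity-checked with a quick back-of-the-envelope script. The die sizes and node figures are the ones quoted in the comment, not independently verified:

```python
# Figures as quoted in the comment above (assumptions, not measurements).
old_die_mm2 = 22.0      # Intel 8088 die area
old_node_nm = 3000.0    # 8088 process node
new_node_nm = 14.0      # modern process node
new_die_mm2 = 200.0     # modern x86 die area

# Area scales with the square of the feature-size ratio.
shrink = (old_node_nm / new_node_nm) ** 2    # ~45,900x more area-dense
shrunk_8088_mm2 = old_die_mm2 / shrink       # 8088 shrunk to 14 nm
fraction = shrunk_8088_mm2 / new_die_mm2     # share of a modern die

print(f"8088 shrunk to 14 nm: {shrunk_8088_mm2:.5f} mm^2")
print(f"Fraction of a 200 mm^2 die: {fraction:.6%}")
```

The result comes out well under the 0.1% upper bound claimed above.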

~~~
rwmj
While you're right that it's not a huge deal, architectural issues are not
solved by putting "an entire 8088 on die for legacy compatibility" unless
you want to run the old software at very low speed. Intel solved it with
lots of R&D. Their solution involves a layer that translates x86 instructions
to native micro-ops[1], plus many little hacks and improvements.

[1]
[http://www.realworldtech.com/nehalem/5/](http://www.realworldtech.com/nehalem/5/)
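A purely illustrative sketch of that translation layer: complex x86 instructions get "cracked" into simpler internal micro-ops. The table and op names here are made up for illustration -- real decoders are hardware, and Intel's actual micro-op encodings are not public:

```python
# Toy model of x86 -> micro-op cracking. Instruction strings and
# micro-op names are invented for illustration only.
UOP_TABLE = {
    "add eax, [mem]": ["load tmp <- [mem]", "add eax <- eax, tmp"],
    "add eax, ebx":   ["add eax <- eax, ebx"],
    "push eax":       ["store [esp-4] <- eax", "sub esp <- esp, 4"],
}

def crack(insn: str) -> list[str]:
    """Translate one x86 instruction into its micro-ops (toy model)."""
    return UOP_TABLE.get(insn, [insn])  # unknown ops pass through as-is

print(crack("add eax, [mem]"))  # a load-op splits into two micro-ops
print(crack("add eax, ebx"))    # a simple reg-reg op maps to one
```

The point is that the legacy-facing instruction set and the internal execution engine are decoupled, which is why "bolting an 8088 on die" was never the approach.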

------
sz4kerto
"Apple A9X SoC offers 64 bit desktop-class computing enabling a handheld
tablet to go toe-to-toe with a state-of-the-art laptop CPU from Intel"

Yes and no. Most of the innovation in top-end x86 CPUs goes towards multicore
and multi-socket scalability. My desktop is almost 10x faster than an A9X --
and nobody manufactures an ARM SoC that has 100 W thermal envelope but
delivers the same speed as a Xeon v3.

Intel manufactures several different classes of state-of-the-art laptop CPU.
Apple SoCs can compete with the ultra-low-voltage Intel parts, but not with
the high-end i7 HQ/MQ series.

~~~
rwmj
_nobody manufactures an ARM SoC that has 100 W thermal envelope but delivers
the same speed as a Xeon v3_

Look at Cavium ThunderX: 48 ARMv8 cores, very high performance, 95W.

~~~
e5f34f89
The key question is which workloads will actually benefit from many cores with
low single-thread performance. While there are some where such a node coupled
with a GPU compute cluster will probably be useful, it seems like the vast
majority of server deployments just want good single-thread performance with 2
or more hardware threads.

~~~
rwmj
Don't necessarily assume that all ARM cores will have low single-thread
performance. Especially with 64 bit server-class ARM SoCs, the performance can
be rather good.

------
dantillberg
For anyone else wondering what "SoC" means, here you go:
[https://en.wikipedia.org/wiki/System_on_a_chip](https://en.wikipedia.org/wiki/System_on_a_chip)

------
digi_owl
One should also note that the x86 platform has taken on some SoC-like
structures over the years, with AMD as a driving force.

For example, they lumped the memory controller onto the CPU die with the
Athlon 64, and then added a GPU on their A series.

~~~
mschuster91
Yeah, and the integrated GPUs, at least the entry-level ones, do suck. My Acer
netbook is barely able to play 360p video; 720p if you're lucky, and 1080p is
hopeless. My cheap smartphone can play 1080p without a single dropped frame.

~~~
monocasa
On neither of those platforms is the GPU doing the video decode. On your phone
there's a dedicated DSP. On your netbook the CPU is picking up the slack.
There's a lot of serial work in decoding video (like Huffman decoding) that
doesn't translate to standard GPUs without a little extra silicon.
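To illustrate the serial dependence: with a variable-length prefix code (the toy table below is assumed for illustration, not from any real codec), the bit position where one symbol ends is only known after it has been decoded, so the next symbol's start depends on the previous one and the loop cannot be split across bits:

```python
# Toy prefix-free code table: bit string -> symbol. Illustrative only;
# real codecs use canonical, length-limited Huffman tables.
CODES = {"0": "a", "10": "b", "110": "c", "111": "d"}

def decode(bits: str) -> str:
    out, cur = [], ""
    for b in bits:          # must walk the bits strictly in order:
        cur += b            # the boundary of symbol n+1 is unknown
        if cur in CODES:    # until symbol n has been fully decoded
            out.append(CODES[cur])
            cur = ""
    return "".join(out)

print(decode("0101100111"))  # -> "abcad"
```

This chain of data dependencies is exactly what maps poorly onto a wide, data-parallel GPU, hence the dedicated decode blocks.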

~~~
ownagefool
An entry-level AMD APU will give you a dedicated video decoding unit.

[https://en.wikipedia.org/wiki/Bobcat_%28microarchitecture%29](https://en.wikipedia.org/wiki/Bobcat_%28microarchitecture%29)

I've never had an APU, but on paper they sound really nice. Intel probably has
something similar, but you may have to be more careful picking your CPU
because of market feature segmentation.

~~~
greggyb
AMD tends to dominate the GPU half of the APU battle. Intel has made great
strides, and I'm interested in their newest pieces, but from what I've seen so
far, AMD still wins this niche.

------
mschuster91
I'm really, really longing for one of these "modular" phones where I can
exchange and upgrade components at will...

~~~
digi_owl
Fairphone 2 is shipping now, iirc.

~~~
bch
The site says "Designed for use and service in Europe only.", but regardless
of what it was _designed_ for, does it work with any North American carriers?

~~~
digi_owl
[http://shop.fairphone.com/fairphone2.html#technical-specific...](http://shop.fairphone.com/fairphone2.html#technical-specifications)

That lists the various bands for the mobile radios.

~~~
bch
I saw that, but it's strange that they'd make it w/ a capable quad-band radio
and -still- declare it's only designed for Europe. I'm not a cellular expert
-- are there other considerations beyond the frequencies that it supports? If
not, it seems like a good global phone, no?

~~~
digi_owl
GSM is quad band, but the UMTS and LTE bands seem Europe-only. So getting
anything more than EDGE in the USA seems out of the question.

In this day and age I suspect GSM radios that are not quad band are rare, to
say the least.

Never mind that this phone is unlikely to ever be sold with a carrier subsidy,
as it will be damn hard to lock in something that can have its SoC assembly
replaced by removing a few screws (and that ships with the required
screwdriver).

------
revelation
This is a bit silly. The A9X and other mobile SoCs are in absolutely no way
comparable in performance, on any metric, with desktop CPUs. You do not get
150W performance from a 10W chip, particularly not when you are lagging in
process node.

~~~
thoughtsimple
I don't think the A9 family lags much in node size compared to Intel's mobile
chips. The A9 is a 14-16 nm FinFET design, not much different from Intel's 14
nm FinFET for Core M. The A9X is clocked at 2.26 GHz, compared with a Skylake
Core m7 clocked at 1.2-3.1 GHz.

~~~
nickpsecurity
Despite what you said, you in no way addressed or contradicted his point.
Clock rate and nm don't tell you performance. Intel's internal architecture
has tons of tricks to get more bang for the buck out of their cycles. Plus
lots of accelerators with extra instructions. Plus it's modifiable with a
microcode update. So a proper comparison would be a number of workloads on
each that people use for desktops.

Note: I still haven't seen a mobile chip that outperforms my old Core 2 Duo
laptop. Always slower in both latency and throughput.

------
rasz_pl
Intel's biggest inroads in the SoC market happened when it was giving away
Atoms below cost ($5, the Mediatek/Rockchip price point) while losing $3-4
billion a year, every year since 2013.

Their plan for 2016 is selling LTE modems below cost to Apple.

~~~
petra
How is it even legal?

~~~
rasz_pl
It's totally cool. Learning from Qualcomm's experience (milking Chinese
companies for IP resulted in a $1B penalty, hehe), Intel "partnered" with
Rockchip and Spreadtrum (read: was forced by Chinese regulators to give away
its Atom IP to Rockchip) and "invested" in a bunch of companies, not to
mention it will spend almost $6B building a fab there.

