
I don't know if outdated "RISC vs. CISC" thinking is what motivated Gruber's statement, but I think there are much more convincing ways to justify it.

If I could change Gruber's statement, I would say: "The future belongs to custom-designed SoCs."

The advantage Apple derives from the A-series SoCs is not due to any inherent superiority of ARM over x86, but to the fact that Apple has full control over the design and manufacturing.

- Apple can design an SoC for a specific product given the manufacturing process available at the time: see last year's one-off 3-core A8X, because adding a 3rd core was a better tradeoff than increasing clocks. This year, the A9X is back to 2 cores but much higher clocked than the A9.

- Apple gains a competitive advantage by building processors only for itself, and can catch the rest of the industry off guard (see: the 64-bit ARMv8 A7). They also get to follow their own design principles (two fast, wide cores) rather than being forced into everyone else's marketing hype (eight heterogeneous, slower cores).

- If Apple wants a stronger GPU, they can just license one from PowerVR, rather than having to lobby Intel and hope the resulting silicon is better (or, worse, having to add an external GPU).

- Apple can even hedge its bets w.r.t. fab processes: see the dual-sourced TSMC/Samsung A9.

ARM is a threat to x86 because anyone can design/buy an ARM core, design an SoC around it, and manufacture it anywhere they want. Intel/AMD can't come close to that flexibility, and on platforms where Win32/Intel binary compatibility is irrelevant, x86 will decline/stay irrelevant.




The GPU issue is a good one. Every desktop system my friends have has two GPUs in it: one built into the processor that no one ever uses, and one they add separately so they can actually play games. AMD at least has half-decent GPUs built into their CPUs, but their CPUs are only half-decent anyway. Meanwhile, Intel refuses to license any Thunderbolt external GPU docks or anything of the sort, despite how popular they would be with laptop-toting would-be gamers, because they want to promote their own, awful GPUs instead.

Likewise the core count issue. Jeff Atwood's recent blog post[1] about how Android JS performance has stagnated because Android SoC single-core performance hasn't improved in years blew me away, but in retrospect it makes sense: it's presumably more engineering work to design a faster CPU core than it is to just put more of them onto a die and call it a day, and since JavaScript execution is essentially single-threaded, those extra cores don't help.

[1] https://meta.discourse.org/t/the-state-of-javascript-on-andr...
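
To make the single-core point concrete, here's a minimal sketch (in TypeScript; the benchmark and names are mine, not from the linked post): a synchronous, CPU-bound loop like this is the shape of most JS work a page does, and it runs on exactly one core, so its runtime tracks per-core speed and gets nothing from extra cores.

    // Hypothetical micro-benchmark, not from Atwood's post. JS runs this
    // on a single thread, so only a faster core speeds it up; adding more
    // cores to the SoC does nothing for it.
    function busyWork(iterations: number): number {
      let acc = 0;
      for (let i = 0; i < iterations; i++) {
        acc += Math.sqrt(i); // stand-in for parsing/layout/app logic
      }
      return acc;
    }

    const start = Date.now();
    busyWork(50_000_000);
    // Elapsed time scales with single-core speed, regardless of core count.
    console.log(`elapsed: ${Date.now() - start} ms`);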


Thunderbolt 3 includes official support for external GPU docks. Intel's stated reason for not allowing them sooner was hot-plugging issues, but that could just be spin.

Source: http://www.anandtech.com/show/9331/intel-announces-thunderbo...


I think this is a really important point that gets overlooked. Everyone else gets whatever Intel offers; sure, they can make suggestions, but they can't tell Intel, "Hey, don't waste transistors on that feature, make the cache bigger instead," or influence the other tradeoffs Intel makes to widen the appeal of its chips.



