A quick SunSpider test with a US Samsung Galaxy S III (1.5 GHz Snapdragon) on Jelly Bean's likely highly optimized browser shows performance very comparable to a first-generation Intel 1.66 GHz Atom 230 single core on the latest Firefox. Granted, it's a mostly single-threaded test anyway, but the ARM has both cores available and the test is pretty CPU-bound after it starts.
I'd estimate the latest i7 is at least 3x faster per GHz on this lightweight but fairly general (CPU-wise) test.
For heavy lifting, a recent i7, with its cache size, memory bandwidth and accompanying I/O, would probably compare to an ARM running at about 5x the clock speed.
I don't think that ARM can be suddenly declared the best at anything other than maybe performance-per-TDP.
Performance-per-cycle is the more difficult problem to solve... ask AMD how hard that's been since the original Intel Core series appeared on the scene in 2006. Before that, once it was no longer just a chip-clone maker, AMD dominated this metric.
I've downsized from a Core 2 MBP to an iPad. It has 1/4 the RAM and runs at half the clock speed. Do I care? No! Web browsing is fast and fluid, photos load plenty fast, and editing documents in Pages is plenty fast. And it lasts through my whole 12+ hour workday, letting me leave the charger at home and often not even bothering to charge it every night. That's huge, and much more important to me, and I'd imagine to most people, than whether it can be imperceptibly faster.
Yeah, the ARM cores generally have the power advantage at the moment, but under load there won't be much difference, judging by what Haswell's ULV parts seem to be promising compared to the A15, and you can bet that the Intel chips will leak much less at idle and will probably have better sleep states.
Unfortunately, it is exactly by the advice of such "strategically thinking" MBAs our industry is often run :-(
This economic concept of 'diminishing returns' (small competitors can operate more efficiently than large ones) is to a big degree what enables and inspires the HN tech startup scene.
ARM has no such risk. Right now Intel has much more money, so they can pump more into R&D, but their company structure can become a liability really fast.
Yes, he sounds like someone in the late nineties comparing the nearly bankrupt Apple with competitors worth ten times more, like Sun and Dell.
The incumbent players (Intel, Microsoft, Dell, HP) are all competing on the established metrics of performance & price, but those are no longer the metrics that matter. ARM is pushing the power efficiency angle. Apple is winning on industrial design.
The entire computer industry (excluding phones, which has obviously already been disrupted) is right on the verge of being flipped on its head. There were hints of that with the netbook wave, but they weren't quite good enough. The iPad and subsequent high end Android tablets are close, but not 100% there. But we are just about at the point where ARM vs x86 is equivalent for the mass market, and that really is going to shake things up.
Managed language runtimes represent the bulk of programs people are running on servers (think: Java/Scala/Clojure, PHP, Python, Ruby). These environments not only lack "mechanical sympathy" but also have requirements above and beyond what x86 can do.
To take Cliff Click's word for it, managed language runtimes consume 1/3 of their memory bandwidth on average zeroing out memory before handing objects to people. If x86 supported an instruction for doing just-in-time zeroing into L1 cache, this penalty could be eliminated, and that 1/3rd of memory bandwidth could be used for actual memory accesses instead of just zeroing out newly allocated objects. In an age where RAM is the new disk, this would be huge.
Unfortunately the amount of time it takes to get a feature like this into an Intel CPU is a bit mind boggling. Azul started talking to Intel about hardware transactional memory early last decade, and Intel is finally shipping hardware transactional memory in the Haswell architecture in the form of transactional synchronization extensions.
I think this is an unfair feature comparison. Zeroing into L1 cache is a way simpler operation than TM, which has been designed to support two modes of operation [legacy, which only speeds up traditional LOCK-based synchronization, and true TM], must support transaction aborts and restarts, etc. Also, 10 years ago, TM was still a very active research area -- i.e., people had no clue about which ideas were performant and scalable and, not least, feasible to implement in HW.
People talk about RISC vs. CISC, and how ARM can be lower power because RISC instructions are easier to decode, but I don't hear that from anyone who's actually implemented both an ARM and an x86 front-end. Yes, it's a PITA to decode x86 instructions, but the ARM instruction set isn't very nice, either (e.g., look at how they ran out of opcode space, and overlaid some of their "new" NEON instructions on top of existing instructions by using unused condition codes for existing opcodes). If you want to decode ARM instructions, you'll have to deal with having register fields in different places for different opcodes (which uses extra logic, increasing size and power), decoding deprecated instructions which no one actually uses anymore (e.g., the "DSP" instructions, which have mostly been superseded by NEON), etc. x86 is actually more consistent (although decoding variable-length instructions isn't easy, either, and you're also stuck with a lot of legacy instructions) [X].
On the other hand, Intel has had a process (manufacturing) advantage since I was in high school (in the late 90s), and that advantage has only increased. Given a comparable design, historically, Intel has had much better performance on a process that's actually cheaper and more reliable. Since Intel started taking power seriously, they've made huge advances in their low-power process. In a generation or two, if Intel turns out a design that's even in the same league as ARM, it's going to be much lower power.
This reminds me of when people thought Intel was too slow-moving and was going to be killed by AMD. In reality, they're huge and have many teams working on a large variety of different projects. One of those projects paid off, and now AMD is doomed.
ULV Haswell is supposed to have a TDP of ~10W with superior performance to the current Core iX line. ARM's A15 allegedly has a TDP of ~4W, but if you actually benchmark the parts, you'll find that the TDPs aren't measured the same way. The A15 uses a ton of power under load, just like Haswell will. When idle, it won't use much power, and will likely have worse leakage, because Intel's process is so good. And then there's Intel's real low-power line, which keeps getting better with every generation. Will a ULV version of a high-end Intel part provide much better performance than ARM at the same power in a couple of generations, or will a high-performance version of a low-power, low-cost Intel part provide lower power at the same level of performance and half the price? I don't know, but I bet either one of those two things will happen, or that a new project will be unveiled that does something similar. Intel has a ton of resources, and a history of being resilient against the threat of disruption.
I'm not saying Intel is infallible, but unlike many big companies, they're agile. This is a company that was a dominant player in the DRAM and SRAM industry and made the conscious decision to drop out of the DRAM business and concentrate on SRAMs when DRAM became less profitable, and then did the same with SRAMs in order to concentrate on microprocessors. And, by the way, they created the first commercially available microprocessor. They're not a Kodak or a Polaroid; they're not going to stand idle while their market is disrupted. When Toshiba invented flash memory, Intel realized the advantage and quickly became the leading player in flash, leaving Toshiba with the unprofitable DRAM market.
If you're going to claim that someone is going to disrupt Intel, you not only have to show that there's an existing advantage, you have to explain why, unlike in other instances, Intel isn't going to respond and use their superior resources to pull ahead.
I'm downplaying the advantage of ARM's licensing model, which may be significant. We'll see. Due to economies of scale, there doesn't seem to be room for more than one high-performance microprocessor company, and yet there are four companies with ARM architecture licences that design their own processors rather than just licensing IP. TI recently dropped out, and it remains to be seen if it's sustainable for everyone else (or anyone at all).
Ex-Transmeta folks, who mostly went to Nvidia, and some other people whose project is not yet public.
 Remember when IBM was bragging about SOI? Intel's bulk process had comparable power and better performance, not to mention much lower cost and defect rates.
 Haswell hasn't been released yet, but Intel parts that I've looked at have much more conservative TDP estimates than ARM parts, and I don't see any reason to believe that's changed.
 IBM seems to be losing more money on processors every year, and the people I know at IBM have their resumes polished, because they don't expect POWER development to continue seriously (at least in the U.S.) for more than another generation or two, if that. Oracle is pouring money into SPARC, but it's not clear why, because SPARC has been basically dead for years. MIPS recently disappeared. AMD is in serious trouble. Every other major vendor was wiped out ages ago. The economies of scale are unbelievably large.
[X] Sorry, I'm editing this and not renumbering my footnotes. ARMv8 is supposed to address some of this by making a large, compatibility-breaking change to the ISA, and having the processor switch modes to maintain compatibility. It's a good idea, but it's not without disadvantages. The good news is, you don't have to deal with all this baggage in the new mode. The bad news is, you still have the legacy decoder sitting there taking up space. And space = speed. Wires are slow, and now you're making everything else travel farther.
Cortex A15 will get the benefit of pairing up with A7. ARM says on average the energy consumption should be about half, compared to Cortex A15 alone.
Also, Haswell is rumored to cost 40% more than an IVB Core. That's close to $300 for a CULV part. That's simply not sustainable in the new market that is forming for tablets. $300 is more than the whole BOM of your typical $500 tablet. I doubt you'll see that chip in anything cheaper than $800, at a time when you can get "good enough" tablets for $200 total. Intel's competitor to ARM simply isn't the Core line-up. It's Atom, for better or for worse.
And as I said, Intel will lose not because of lack of expertise in making chips, but because of an unsustainable cost structure and business model (they now have to compete against several ARM chip makers at once, including Apple). The fact that they also have no momentum or market share in the mobile market doesn't help.
In terms of OS for an x86 tablet, you're looking at Windows 8, or Linux with a custom shell, and that's it. There's an unofficial x86 port of Android, but I wouldn't stake any real product on that without any support from Google.
Since MS has already established the baseline price for Win8 tablets around $800, and they're marketing them more as tablet PCs that can do everything you do with a normal desktop or laptop than as iPad competitors, is this really that much of an issue?
However, this is the reason Microsoft is trying to stick to the $800 price point for as long as it can. Cheaper options are more popular, and they don't want most people hearing anytime soon how 'crap' Windows 8 tablets are on cheaper hardware.
This is the kind of logic that says selling one million $1 products is better than selling 100,000 $10 products. I beg to differ...
Markets compress over time, and the $1 devices will kill the $10 devices.
The Innovator's Dilemma raises the question: will market demand shift to products where ARM has already displaced x86 (i.e., mobile and tablets)?
both IBM and Oracle have committed to two more generations of POWER and SPARC respectively. also, Fujitsu keeps investing in SPARC. these uarchs are not going to go away anytime soon. they are high-margin niche products in the enterprise and government space, with considerable footprints and huge long-term service contracts attached to them.
POWER and SPARC are not competing against low-power CPU's but aim for the high-socket count, large unified memory, RAS market which for some workloads is the only alternative.
especially SPARC, while having shrinking market share, still makes Oracle more than a billion annually. afaik IBM's POWER division is not losing money either.
long story short: in the next five years there will be at least four ISAs (ARM, SPARC, POWER, x86) in the server space, but not all compete for the same markets.
interesting times ahead!
I continue to hear how MIPS is still technically and architecturally better than even the cleaner ARMv8. But it never succeeded. So how an ISA succeeds depends on a lot of other factors.
So what you seem to be saying is that ARM only makes CPU cores (and now a GPU, I guess), so there's a wide market available for NVIDIA and Qualcomm to enter to provide a complete SoC.
Intel, on the other hand, makes CPU cores, and GPUs, and display controllers, and DRAM controllers, and USB controllers, and PCI bridges, and audio hardware. And they put all that stuff on a single chip for their customers.
... and somehow you're spinning this as an advantage for ARM Ltd.?
Intel doesn't want customizations that can be offered by other companies. Intel wants Intel's GPU not a choice of NVIDIA, S3, ARM, and soon AMD/ATI. Intel wants to ship "Phone Motherboard 1", etc.
It's an advantage because ARM lets other companies play and Intel won't.
Maybe what you're really trying to talk about is the "ARM ecosystem", where the big mix of players has a market incentive to try new stuff. And there you might have a point. But it's certainly no disadvantage to Intel specifically -- every one of those players wants to be doing what Intel already is (Apple is very close already, with their own CPU core and SoC design).
I think what he's saying is Intel is at a disadvantage because it's the only company with the ability to fab its designs. Therefore, any new designs will have to compete for production capacity and catalog space with whatever the currently most-profitable product is.
Large companies have great difficulty managing multiple product lines that diverge widely in their natural profit margin.
I was saying Intel doesn't want any customization because it needs volume and price to maintain profits. Customization subtracts from volume. They have a PC mentality, not a mobile mentality. Power consumption isn't the problem; the business model is.
Intel is a much smaller part of all CPUs sold, that's true, but it's also true that the market for CPUs has increased exponentially in the last few years. The places where Intel is losing are places where they have never actually competed.
A decade ago, if you were looking for a low-power CPU for a mobile device, you sure as hell weren't looking at x86. You were going with an ARM solution. That hasn't changed, but the market for those CPUs has grown incredibly.
And it should scare the hell out of Intel.
Intel wants to build everything in volume. Intel's current business model does not benefit from specialization or customization. Intel needs to make a large profit on each CPU sold. Intel drove out of the market anyone who could build a chipset since it interfered with volume and profits. Intel would be happiest with their current business model if they built one laptop motherboard, one server motherboard, and one desktop motherboard.
This is a strategy built for the PC market. It does not have anything to do with the current mobile market (non-laptop).
Samsung and Apple want to build the best end product. They want to put things in and leave things out. They cannot do that with Intel, but can do that with ARM. Intel doesn't allow or want customized SoC. Apple and Samsung do. Other vendors also make their own SoC from ARM cores. These SoCs provide different benefits. Having a common instruction set allows switching to another SoC when needed.
I haven't seen any signs of Intel trying to push their GPUs into ATI/Nvidia's niche market (gamers).
Nvidia and ATI (now AMD) GPUs have always had their strongest consumer base among gamers. Intel GPUs have never been in the same class as their contemporary Nvidia/ATI cards on any measure -- triangles per second, texture bandwidth, gigaflops. Intel also generally uses a shared-memory architecture, which means their memory bandwidth is limited and contends with the CPU.
Intel's GPU's are focused on being a low-cost, low-power, on-board graphics solution. As long as they can run a 3D UI and play HD video, they're not going to push the performance envelope any more, for the good and simple reason that they don't want to incur additional manufacturing cost, chip area, design complexity and power consumption for features that are irrelevant to non-gamers.
 By "gamers," I really mean anyone who's running applications that require a powerful GPU.
When people go to their local stores and see rows of tablets that look like tablets, rows of laptops that look like tablets and rows of desktops that look like tablets, well, they just seem to get actual tablets. Sure, an Intel i3 will handily beat out the upper echelon of ARM offerings, but with Android and iOS being entirely optimized for this experience, we are finally realizing what AMD fans have been shouting for decades: the extra horsepower really does not come into effect often enough to make it a deal breaker. Your web page may load 40% slower on an ARM rig, but when the Intel model loads it in 1.5s and your tablet loads it in 2s, we now experience the law of diminishing returns. If a dual-core ARM A15 can consistently run at around 40-50% of the speed of an i3 mobile processor, a quad-core ARM should settle at around 60-75% while being asked to do much, much less.
With Intel and ARM you are also dealing with two very different ecosystems. x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. With Android and iOS, the language, libraries and sandboxing improve the underlying mechanisms to the degree that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized, and in many cases are so specialized that they give you the look and feel of a WordPad-type app rather than what you are really after.
Basically, I feel that looking strictly at Intel's raw performance as the reason a platform will succeed is improper and an unfair comparison. People are moving en masse to tablet and smartphone ecosystems not because they are faster or run a particular processor, but because they feel like a custom solution and the overall integration is acceptable. So I don't think it is ARM vs. Intel but rather tablet vs. notebook and laptop. If Microsoft keeps making their desktop OS look and feel like a tablet OS, people will buy tablets, and Windows 8 doesn't have the applications, word of mouth or market penetration to make that work right now.
If things don't change, and quickly, we may just see a Microsoft and Intel "double clutch".
"Double clutch: this is where a non-swimmer ends up in deep water and, out of panic, grabs the closest person to them to stay afloat."
This is so wrong, I actually don't know where to begin.
1. It's true that most code isn't optimized for x86. But most code isn't optimized, period. Optimization is freaking hard. Android and iOS aren't necessarily better optimized than Windows. And Linux, especially RHEL, is screaming fast on the new Intel chips. Windows isn't that terrible, either.
2. Sandboxing actually hurts performance, because it requires an additional layer between the OS and userland to make sure that the code the user is executing is correct.
3. None of this actually matters for chip architecture, since #1 is true of code in general, and #2 doesn't have any special architecture-based support.
4. x86 isn't just Windows. It's Linux, too.
When you program for Android/iOS, how much of the logic you write is referenced from optimized libraries and how much is your own craft? Now look at the entire Market Place / App Store and figure out how many of those apps rely entirely on Android/iOS-optimized libraries.
While it may be true that some wander off the beaten path and write their apps in OpenGL ES directly, and maybe even in C/C++, most rely on the frameworks and libraries already built in (which are indeed optimized).
Look at Android (as an example). For the first few years of its existence, the OS was plagued with bad battery life due to poorly optimized code and bugs. Android 3.0 was basically scrapped as an OS due to bad performance.
...where Apple beats three community-developed libraries. But maybe it just wasn't that hard to beat them.
I'm also skeptical because I don't see how Apple has any incentive to optimise code ever. Their devices (ARM & x86) are doubling in CPU power left and right while the UX basically stays the same. The second-to-last generation inevitably feels sluggish on the current OS version...which just happens to be the time when people usually buy their next Apple device. Why should they make their codebase harder to maintain in that environment?
That's just in one very restricted area (JSON parsing) where there are TONS of third-party libraries of varying quality for the exact same thing. Doesn't mean much in the big picture.
>I'm also skeptical because I don't see how Apple has any incentive to optimise code ever.
And yet, they used to do it all the time in OS X, replacing badly performing components with better ones. From 10.1 on, each release actually had better performance on the SAME hardware, until Snow Leopard at least. They had hit a plateau there, I guess, where all the low-hanging-fruit optimisations had already been made.
Still, it makes sense to optimise severely, if not for anything else to boast better battery life.
No doubt about 10.0-10.5/10.6. But that seems to have been an afterthought for the last two OS X releases:
And has there ever been an iOS update that has made things faster on the same hardware?
I don't think that Apple is intentionally making things slower, which is what I'm trying to say with the JSON parser (it is easy to write a wasteful implementation). But in the big picture, they're not optimising much either.
It doesn't matter either way.
For one, Apple and Google aren't that keen on optimising their stuff either.
Second, most desktop applications use libraries and GUI toolkits by a major source, like Apple and MS, so the situation regarding "a large part of the app is made by a third party that can optimise it" is there for those too.
Third, tons of iOS/Android apps use third party frameworks, like Corona, MonoTouch, Titanium, Unity, etc etc, and not the core iOS/Android framework.
Fourth, the most speed critical parts of an app are generally the dedicated stuff it does, and not the generic iOS/Android provided infrastructure code it uses.
Second, ARM chips are too cheap. Intel's biz is built on $100+ chips. ARM chips are like $10. If Intel's chip prices drop to, say, $25, they don't have nearly the money for R&D.
X86 won't die, but it can't grow and over time that's going to hamstring Intel.
Call it peak x86.
But on the flipside, ARM right now is very, very low-end -- the fancy new Cortex-A15s only match up against Atoms in single-thread CPU performance, and Atom < ULV < LV < desktop/server. You get away with it in mobile because of lower user expectations and heavy use of the GPU. When you look a little further out (Cortex-A53/57 on newer processes) you can picture ARM in actual client computers, or at least in super-zippy mobile gadgets that some people will happily replace their computers with. (Consumer software will probably adapt to an ARMy world too--use the GPU well, adapt to slower cores, use UI tricks to hide some CPU-caused delays.)
But I can't see an ARM chip that acts exactly like a Xeon within the next few years. I bet ARM finds some niches in the datacenter, where servers today have far more CPU than they need, or applications adapt well to a sea of tiny slow cores, or both. (For instance, Facebook uses AMD memcached boxes; they could just as well use ARM, and are looking at it. Intel will make cheap slow cores for those use cases, too.) And I bet ARM will put some price pressure on Intel. But all the things a top-of-the-line Intel chip does to maximize instruction-level parallelism will be really hard for anyone to copy for a very long time.
And the cloud needs servers. Lots and lots of servers.
True, Intel faces stiff competition here. But folks are sometimes forgetting Intel wasn't always a monopoly in its field. It had competition, lots of it over the years. I wouldn't bury them just yet.
But while you could still find something to argue about in some of those cases, especially when the "fall off a cliff" hasn't happened yet for those companies (disruption takes a few years before it's obvious to everyone, including the company being disrupted), I think the ARM vs Intel/x86 one has been by far the most obvious one, and what I'd consider a "by-the-book" disruption. It's one of the most classical disruption cases I've seen. If Clayton Christensen decides to rewrite the book again in 2020, he'll probably include the ARM vs Intel case study.
What will kill Intel is probably not a technical advantage that ARM has and will have. But the pricing advantage. It's irrelevant if Intel can make a $20 chip that is just as good as an ARM one. Intel made good ARM chips a decade ago, too. But the problem is they couldn't live off that. And they wouldn't be able to survive off $20 Atom chips. The "cost structure" of the company is built to support much higher margin chips.
They sell 120 mm2 Core chips for $200. But as the article says, very soon any type of "Core" chip will overshoot most consumers. It has already overshot plenty: look at how many people are using iPads and Android tablets or smartphones and think the performance is more than enough. In fact, as we've seen with some of the comments about Tegra 4 here, they think even these ARM chips are "more than enough" performance-wise.
That means Intel is destined to compete more and more not against other $200 chips, but against other $20 chips, in the consumer market. So even if they are actually able to compete at that level from a technical point of view, they are fighting a game they can't win. They are fighting by ARM's rules.
Just like Innovator's Dilemma says, they will predictably move "up-market" in servers and supercomputers, trying to chase higher-profits as ARM is forcing them to fight with cheaper chips in the consumer market. But as we know ARM is already very serious about the server market, and we'll see what Nvidia intends to do in the supercomputer market eventually with ARM (Project Denver/Boulder).
As for Microsoft, which is directly affected by Intel/x86's fate, Apple and Google would be smart to accelerate ARM's takeover of Intel's markets. Because if Microsoft can't use their legacy apps as an advantage against iOS and Android, that means they'll have to start from scratch on the ARM ecosystem, way behind both of them. Apple could do it by using future generations of their own custom-designed ARM CPU in Macbooks, and Google by focusing more on ARM-based Chromebooks, Google TV's, and by ignoring Intel in the mobile market. Linux could take advantage of this, too, because most legacy apps work on ARM by default.
And certainly the price is a very significant factor. But remember that ARM sells an order of magnitude more chips than Intel does. So if Intel is successful, they can make it up on volume, at least to a degree.
a) Intel's effectiveness at preventing AMD SKUs from hitting markets
b) The Core 2 family from Intel
c) AMD insisting on shipping 'native' dual/quad cores; there wasn't any advantage to the end user, and I would imagine the yields were worse
d) The TLB bug
But again, even if they succeed in making chips competitive with ARM, that doesn't equal market success in mobile, and it doesn't mean they will survive unless they take serious steps to survive in a world where they are just one of several companies making chips for devices, where they might not even have a big share of that market, and where they make low-margin chips. Bottom line: they need to start firing people soon, restructuring salaries, and so on. I think this is why Paul Otellini left. He didn't want to be the one to do that, and be blamed for it.
The primary cost of a SoC is manufacturing. Process advantages mean that you have access to cheaper transistors that have better performance and power characteristics. The easiest way to improve the ratio of performance to anything in microprocessors has always been to make it smaller. There have been far too many words wasted on the role of instruction sets and architectures. Those things matter but that's the easy part. The hard part is getting a meaningful advantage in the manufacturing side, which is what Intel has. This is precisely why AMD is dying. They can't even undercut Intel because Global Foundries is so far behind Intel that they physically can't produce an equivalent product for less despite Intel's ~60% margins.
I think what you will see is Intel getting more aggressive in the mobile space in the next couple of years, because they are going to want to ensure that TSMC doesn't get a chance to catch up. TSMC is the real key to anyone threatening Intel, not ARM.
A way out for Intel is their world-leading foundries, with process shrink a generation ahead. It's been suggested they manufacture ARM SoCs, and sell at a premium. But there isn't really a premium market... except for Apple, and its massive margins. And Apple is feeling the heat from competitors hot on its heels. Therefore: Intel fabs Apple's chips. Intel gets a premium. Apple gets a truly unmatchable lead. It's sad for Intel, but Andy Grove has a quote on the cover of The Innovator's Dilemma. They know the stakes.
The nice thing for consumers would be a 2x-faster, 2x-battery-life or half-weight iPhone/iPad next March, instead of in a year and a half.
BTW, re: Tegra 4/overshoot: in the next generation, when silicon is cheap enough for octa-core, because we don't need the power (and can't utilise multicore anyway) it will instead lead to the next smaller form factor. But smaller than a phone is hard to use, for both input and output. A candidate solution is VR glasses, precisely because of the size issue.
If we could get wireless HDMI nailed down and standardized (or wireless DisplayPort, or some display standard over some wireless band), I can easily see computers going so far as to become solar-powered interconnected nodes: instead of lugging hardware around, you just link up to the nearest node and utilize a network of tiny quarter-sized chips, each as capable as a modern dual-core A9 or some such, running off solar.
I don't think the Unity desktop will make it, but I definitely see some windowed environment inserting itself into the gap between Android tablets and Windows laptops on high-end ARM chips. And unlike Windows, the GNU core has a ton of software already written for it, and thanks to being open source and compiled with GCC, this stuff rarely has large issues, besides performance, when running on ARM.
I only say Ubuntu because Canonical seems to be the only market force trying to push a GNU OS commercially. Red Hat seems content to let Fedora act as the playground for Enterprise, and Arch / Mint / Gentoo / Slack / etc. don't have the design goals (ease of use for a completely new user) or the infrastructure (Debian and its ultra-slow release schedule wouldn't fly).
That is where you got it completely wrong. Intel could produce ARM-based SoCs that earn nearly the same margin as the CPUs they currently sell.
Not to mention you keep referencing x86 and Intel as if they were the same thing, as they are now and for the foreseeable future. I could literally bet anything that Intel won't die, simply because Intel could still be the best fab player you could ever imagine. In terms of state-of-the-art fabs, they beat TSMC, UMC, GF, and Samsung combined! And Intel aren't dumb either; they have the best of the best fab engineers, and the resources and R&D being invested now are aimed 3-5 years into the future.
So Intel won't die.
x86? That depends. If you look at the die shot of a SoC, you will notice the CPU is taking up less and less of the die area. It used to be 50+%; now it is less than 30%. The CPU, or the ISA, is becoming less important. You need a combination of CPU, GPU, wireless, I/O, and other things to succeed.
Your claim is essentially "Most people will never need today's high-end chips, let alone anything more powerful." This could have been equally well said in 1998. How do you know you're not as wrong making that claim now, as you would have been making that claim then? What's different?
Today's computers are powerful enough to comfortably run today's software. Tomorrow's computers will have a lot more power than today's software needs; but that's irrelevant, because they'll be running tomorrow's software instead.
To lay out my case in a little more detail:
"As hardware resources increase, software will bloat until it consumes them all." This is probably somebody's law, but I don't know whose off the top of my head.
You don't really need more than ~300 MHz, 128 MB to do what the vast majority of users do: Word processing, email, and displaying web pages.
Usage patterns may change in ways that increase the amount of computing power you need. For example, I usually have a large number of tabs open in my web browser -- I probably wouldn't use the browser in this way if I had much less memory.
Some software is just bloated. My Windows box has dozens of different auto-updaters that run on every boot. Steam does literally hundreds of megabytes of I/O doing -- something, Turing only knows what.
Of course all the latest UIs have all kinds of resource-intensive 3D effects, run at unnecessarily high resolutions, use antialiased path rendering for fonts instead of good old-fashioned bit blitting, et cetera.
The point is that, as standard hardware improves, OSes and applications will add marginally useful features to take advantage of it. Users will learn new usage patterns that are marginally more productive but require much better hardware. As standard software's minimum requirements rise, people buy new hardware to keep up.
This is not a novel idea; it's been the story of computing for decades, and a trend that anyone who's been paying any attention at all to this industry has surely noticed.
Which way is up? Intel's been moving down in terms of per-core wattage since 2005, putting them closer to direct competition with ARM. Anybody can glue together a bunch of cores to get high theoretical performance, but it's Intel's single-threaded performance lead that is their biggest architectural advantage.
Intel has to improve their power because a major market for them - server chips - is full of people who want to spend less on electricity for the same computation. Despite this push, they have made absolutely no inroads into the unprofitable mobile market. By contrast ARM, which already has the required power ratios, has every economic incentive in the world to move into the server market. Unless Intel can offer a good enough power ratio to offset the higher costs of their chips, ARM will eventually succeed.
But the same was once true of owning the world's largest battleship. Things change. How often do the most disruptive changes come from, or favor, those with the largest physical plant?
What ARM is doing is the opposite: they are the ones hired by the foundries to provide an architecture, and then it is up to the foundries to make the architecture run. This largely decouples architecture design from manufacturing constraints, giving them free rein to innovate.
If a foundry can't cope, that won't hurt ARM's bottom line, since other foundries can; whereas if you own the manufacturing, you actually have to pay to avoid obsolescence.
Even for coding you don't really need all that power most of the time; if you are compiling big source trees, sure, but why not just do that in the cloud on EC2 or a dedicated server, so you can work freely on your laptop? Gaming and very heavy graphics or music work I can see needing a fast computer in front of your nose, but beyond that?
2 years? 3?
Intel will catch up on power consumption. The biggest thing going for ARM is price, and because of price its user base is growing much faster than Intel's, on more types of devices, and in more parts of the world. Most of the developing world's contact with computing is, or will be, ARM phones and tablets, and the number of people developing software for ARM will skyrocket.
Here’s what I would like to read if a technology journalist could dig it up: What kind of strategic planning is going on within the halls of Intel, Dell, HP, Lenovo, et al with respect to keeping the desktop PC relevant? Put another way: I find it astonishing that several years have been allowed to pass since desktop performance became “good enough.” The key is disrupting what people think is good enough.
The average consumer desktop and business desktop user does consider their desktop’s performance to be good enough. But this is an artifact of the manufacturers failing to give consumers anything to lust for.
Opinions may vary, but I strongly believe that the major failure for desktop PCs in the past five years has been the display. I use three monitors--two 30” and one 24”--and I want more. I want a 60” desktop display with 200dpi resolution. I would pay dearly for such a display. I want Avatar/Minority Report style UIs (well, a realistic and practical gesture-based UI, but these science-fiction films provided a vision that most people will relate to).
I can’t even conceive of how frustrating it is to use a desktop PC with a single monitor, especially something small and low-resolution like a 24 inch 1920x1080 monitor. And yet, most users would consider 24” 1920x1080 to be large and “high definition,” or in other words, “good enough.”
That’s the problem, though. As long as users continue to conceive of the desktop in such constrained ways, it seems like a dead-end. You only need so much CPU and GPU horsepower to display 2D Office documents at such a low resolution.
There was a great picture CNet had in one of their reports (and I grabbed a copy at my blog) showing a user holding and using a tablet while sitting at a desktop PC.
In the photo, the PC has two small monitors and is probably considered good enough to get work done. But the user finds the tablet more productive. This user should be excused for the seemingly inefficient use of resources because it’s probably not actually inefficient at all. The tablet is probably easier to read (crisper, brighter display) and faster, or at least feels faster than the PC simply because it’s newer.
Had desktop displays innovated for the past decade, the PC would need to be upgraded. Its CPU, GPU, memory, and most likely disk capacity and network would need to be beefier to drive a large, high-resolution display.
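To put rough numbers on that claim, here is a back-of-the-envelope sketch of what a 60" 200dpi display would demand, assuming a 16:9 aspect ratio, 60 Hz refresh, and 24-bit color (all assumptions of mine, not figures from the comment):

```python
import math

def display_pixels(diagonal_in, dpi, aspect=(16, 9)):
    """Pixel dimensions of a display given its diagonal size and pixel density."""
    w, h = aspect
    diag_units = math.hypot(w, h)
    width_in = diagonal_in * w / diag_units
    height_in = diagonal_in * h / diag_units
    return round(width_in * dpi), round(height_in * dpi)

def raw_bandwidth_gbs(px_w, px_h, refresh_hz=60, bytes_per_px=3):
    """Uncompressed video bandwidth in GB/s."""
    return px_w * px_h * bytes_per_px * refresh_hz / 1e9

# The hypothetical 60" 200dpi display vs. a typical 24" 1920x1080 panel
w, h = display_pixels(60, 200)
print(f"{w} x {h} = {w * h / 1e6:.1f} MP")        # ~10459 x 5883, ~61.5 MP
print(f"{raw_bandwidth_gbs(w, h):.1f} GB/s raw")  # ~11.1 GB/s at 60 Hz
print(f"1920x1080 = {1920 * 1080 / 1e6:.1f} MP")  # ~2.1 MP
```

Roughly 30x the pixels of a 1080p panel, and on the order of 11 GB/s of uncompressed video, which is why the CPU, GPU, and memory would all need to scale up with it.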
So again, what are the PC manufacturers doing to disrupt users’ notions of “good enough,” to make users WANT to upgrade their desktops? I say the display is the key.
Sure, a developer or designer salivates at the idea of more screen real estate, but that's because there's a practical use for it. PC manufacturers follow, not decide, the needs of their users.
I love my screen real estate because I actually need it. If I just browsed Facebook, wrote a Word document, and maybe planned out my finances in a spreadsheet, I'd have a hard time justifying some giant monolith of a monitor. I think this is a largely overlooked factor in the success of the mobile market.
I don't know if everyone's forgotten this already, but when the iPad first came out, most people were thinking: what in the actual eff is Apple thinking? Sure, we all knew it'd sell because, well, Apple is Apple. But if I recall correctly, most people were scratching their heads asking, "so, it's a big iPhone, right?"
And guess what? It is just a big iPhone! And it succeeded NOT because it was the next "cool" thing but because it was designed to do what most people needed their computer to do, namely, send pictures of their grandkids to each other.
OK. But would it even be remotely possible to consider that, year over year, China is in recession, a lot of European countries are in recession, the U.S. is not in a great position (e.g. the manufacturing sector is firing people left and right), Japan is in a terrible situation, etc., and that this may be playing a role in the number of PCs sold?
Year-over-year sales of cars in France have gone down by 20%.
When people enter a recession they tend to try to save money: cars and PCs are expensive things. Smartphones not so much (especially with all the "plans" luring in people who can't do the math).
I think that smartphones and tablets did play a role in the "minus 21%" that TFA mentions, but I'm also certain that the worldwide recession is playing a role too. People don't buy what they see as "expensive" that easily.
At "$100 plus a punishing five-year unlimited plan," people don't pay that much attention, so smartphones tend to be more "recession-proof."