Intel's Changing Future: Smartphone SoCs Broxton and SoFIA Officially Cancelled (anandtech.com)
217 points by dbcooper on April 30, 2016 | 118 comments



Wow, and ouch. I give them credit for ceding the ground for now but this is another sign of how much ARM has been encroaching on Intel's space.

Scott McNealy said early on that retreating to the Data Center was where companies went to die. And at the time Sun was selling workstations on desks and crushing DEC; then, when that business got eaten away by Windows, the data center was where Sun was going to make its stand. And then it died.

I think a lot about what the fundamental principle is here. How do seemingly invincible forces in the market get killed by smaller foes? Christensen's Innovator's Dilemma doesn't really explain it; it describes it, but it doesn't tease out the first principles behind it.

At this point I think it is a sort of Enterprise "blindness", which was something Steve Kleiman at NetApp shared with me. A company can be so good at something that it focuses all its energy on it, even when it is vanishing beneath it. Consider a fictional buggy-whip company when automobiles came on the scene: right up until the last day they made buggy whips, they could be the best whip you could buy. All the secrets to making a great buggy whip were mastered by this company, all the "special sauce" that made them last longer and work in a wide range of conditions, and yet the reality was that the entire reason for buggy whips existing was evaporating with the loss of carriages. By focusing on what the company was the undisputed leader in doing, they rode the wave right into the rocks.

When the company is so stuck on what used to work for them, even after the technology has moved on, they become blind to the dangers. Challenging to watch, even harder if you feel like you can see the train wreck coming. And of course soul crushing if nobody driving the bus will listen to the warnings.

The next sign I'm waiting for is Apple to ship a Macbook laptop with their own 64 bit ARM processor in it. Then it gets really really interesting if AMD can pull off an ARM server chip, going where Intel won't.


Scott McNealy said early on that retreating to the Data Center was where companies went to die. And at the time Sun was selling workstations on desks and crushing DEC....

Maybe. That was before cloud computing, based on a much cheaper, faster and more pervasive Internet, became such a big thing. Plus there were at least three things that caused them to ultimately fail in the place they "retreated" to:

A problem with insufficient error checking and correction, plus SRAM chips from IBM that Sun claimed had higher error rates than anticipated, was handled dreadfully: they wouldn't help customers without forcing them to sign NDAs, they initially blamed customer environmental conditions, etc. In short, they showed they couldn't be trusted, which ought to be fatal in the enterprise market.

They consciously decreased the quality of their Intel servers. Two things I remember: penny-pinching by removing a tried and trusted Intel Ethernet chip and substituting a Marvell one, and repeatedly changing the lights-out hardware without changing model numbers, which is what Joyent blamed for its decision to stop buying from them.

Unless you could charge it to a credit card, or were buying enterprise quantities of stuff, like more than a couple million dollars' worth, you simply couldn't buy their stuff back when it was higher quality: they insisted you go through a VAR or reseller, but few of those were really interested in selling the hardware, and many of the few that were, were squirrelly. So before AWS et al. became a thing, a lot of companies reluctantly bought Dell hardware simply because they could, and by the time they graduated to buying hardware at "enterprise" levels they already had the in-house expertise to manage cheaper but lower-quality Dell systems.

And the latter problem, making it hard to buy their stuff (which damages or kills more companies than you think; it's also been a big problem for HP and one reason they didn't get this business), is one of the things that killed DEC.

So comparatively simple failures to execute can be a big part of a company failing at the same time it's facing the Innovator's Dilemma.


> it's also been a big problem for HP and one reason they didn't get this business

This is a little OT already, but I have a friend who does IT purchasing and the horror-stories she tells me about how difficult it is to buy HP products scare me. On top of that their client-service is horrendous, and from time to time the local HP office even tries to "cheat" on its direct clients (among other things by "helping" other, preferred clients, to get lower prices and to win tenders). I'm still wondering how they're still in business.


I'm still wondering how they're still in business.

A "least worst" enterprise vendor?

Why is the dollar still strong? The theory I like is that all competing currencies are worse. And while I don't know the enterprise market as well, I don't get the impression that HP today has seriously better competitors in the full-service classic enterprise space (this of course ignores paradigm shifts like the cloud, where I'm sure AWS and company are eating the lunch of various enterprise categories, storage for one).

Who's noticeably better than them? Sun and DEC weren't; IBM sure doesn't sound like they have been any time recently (and for services they've joined the race to the bottom). Who else is there? I don't know; it's not a field I follow anymore.


... this of course ignores paradigm shifts like the cloud, where I'm sure AWS and company are eating the lunch of various enterprise categories, storage for one

HN has its own distortion bubble here. It takes a long time (decades) for new technology / best practices to penetrate into large organizations in stable, established markets.

But I think two things are changing that. One: upstream vendor migration to cloud / more modern solutions (e.g. Salesforce). Which decreases in-house footprint. Two: younger talent that's more familiar with modern ways of doing things (working with AWS, cattle services on commodity boxes, etc).

I think one of the greatest failures of enterprise companies is that they've historically done a terrible job at making their hardware / software available to students. Which means no reasonably-priced supply of knowledgeable potential employees.

And if the TCO of a mainframe / COBOL solution includes scouring the globe for the one remaining person who knows how to work on it and then paying them a premium to do so...? Well, that'll get enterprise (still slowly) moving.


cattle services?



Sun was killed by the confluence of a poorly timed loan and the GFC. Their revenue/cash reserves were insufficient to cover payments on loans taken at the peak before the crash. When GFC happened, their primary Wall Street customers suddenly turned cost conscious. I think that if they had negotiated slightly better terms on the loan they'd still be around today.


That sounds like a final straw; if they'd been well managed and healthy, would that have necessarily been a killing blow? (I don't know about those details of their fall, just the decline that preceded it and made it inevitable.)


How do seemingly invincible forces in the market get killed by smaller foes?

Back around 2009 I had a little visibility into Intel's mobile efforts. My information is way out of date, but unless Intel management changed drastically, my hunch is that this explanation is still valid.

At the time Intel had already introduced some mobile silicon, but there was little uptake. So they were iterating; they wanted to improve for each succeeding generation. But they had a kind of design-by-committee process. One person or group wanted a certain feature, another group wanted something else, a third group thought that yet another thing was important. And so on. Sorry if that sounds vague, I won't write anything more specific.

The end result was a chipset that had a lot of features. A LOT OF FEATURES. Gold plated features. But that meant higher power consumption than the competition, higher cost, larger form factor, longer time to market.

At the time, there was nobody like a Steve Jobs running Intel mobile. Nobody had the intuition and the gravitas to say: "we want A, B, C, but not D, E, F. Quit wasting time and build the fucking thing! I need something that's competitive!"

This is similar to something that an Intel chip design manager told me about 30 years ago. His view was that there were really only two ways to motivate engineers:

1) show them a path to lots and lots of money. This is the path that startups take. Focus, build something quickly, get rich or die trying.

or 2) engineers will "play in the sand", they will add in all sorts of neat features that they think are really cool, that they want to implement, that they are convinced are important. But all that crap isn't very useful to the average user, it just results in complex designs that miss their market window in so many ways.

Intel probably continued to choose path 2 for their mobile efforts.

Intel didn't fail because of any variation on The Innovator's Dilemma. They understood that mobile was the future, that it was very important. They expended tremendous amounts of capital trying to "win" in mobile. They just didn't have the organizational structure that let them execute.


The thing is, it's not gonna help to have a "Steve Jobs", nor will any degree of understanding the competition help, if you're used to shipping an expensive high-margin product and along comes a low-margin competitor. Neither ARM nor TSMC attempts to make the kind of profit Intel is used to making, and this is the real problem for Intel: even if it wins 75% of the mobile market by shipping adequate chips at adequate prices, the profits won't be anywhere near what it gets in the PC market. (Of course this is just as true of Apple as it is of Intel, and I think their latest quarterly results testify to it just as much as Intel's do. Actually, the surprising thing about Intel (which is not true of Apple, Steve Jobs and all) is the degree of its dominance in the market for anything resembling a PC, a computer where you can get work done; one might expect both AMD and non-x86 workstation products to do better. Certainly Android is a bigger threat to the iPhone/iPad than any competitor ever was to Intel in the PC space.)


Something about margins I don't get: Intel's gross margin is about 80%. In the fabless ecosystem, the fab usually gets 50%, and Qualcomm is usually above 50%, sometimes even 65%.

So the total combined margin is as high as Intel's, or higher. So why does Intel fail?
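
A rough back-of-the-envelope sketch of how margins stack in a fabless chain, using purely made-up numbers (nothing here reflects anyone's actual cost structure):

    # Toy model of stacked gross margins in the fabless ecosystem.
    # All figures are illustrative assumptions, not real financials.
    fab_cost = 10.0                     # fab's manufacturing cost per chip
    fab_margin = 0.50                   # fab sells at 50% gross margin
    fab_price = fab_cost / (1 - fab_margin)          # 20.0, what the chip designer pays

    design_other_costs = 5.0            # designer's own per-unit costs (assumed)
    design_margin = 0.60                # designer's gross margin on its selling price
    design_price = (fab_price + design_other_costs) / (1 - design_margin)   # 62.5

    # Margin of the whole stack, measured against the total underlying cost:
    combined = (design_price - fab_cost - design_other_costs) / design_price
    print(f"combined gross margin: {combined:.0%}")  # ~76%

So the stacked percentage can indeed come out Intel-like; the catch is that it's split between two companies, and in mobile it's earned on much cheaper chips, which is the absolute-profit point made elsewhere in the thread.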


All of these companies sell many different products and I'd guess that their reported overall gross margins will be different from the margins of any given product. For instance, I rather doubt that either TSMC or Samsung get 50% margins when making Apple's chips.


In Apple's case it makes sense to lower margins to increase profits. Just common sense. But why not in general? Why does Wall Street hate that so much?


Benefits through competition? Intel is only one company; it's competing with a whole ecosystem of companies that can all iterate and find efficiencies in chip design and in manufacturing. And the improvements that are specific to the processor type will inevitably get passed around over time.


> Intel didn't fail because of any variation on The Innovator's Dilemma.

Depends how you look at it. My understanding of the Innovator's Dilemma is that when a company is faced with disruption, which implies radical change within the organization, nobody in that organization really wants to make that change, because it messes up too many of their processes, products, etc.

So what do they do instead? They try to make their existing technology/product fit the new paradigm. This is what Nokia tried to do with "touch-enabled Symbian", instead of writing a new touch-focused OS from scratch (they did do that with Maemo actually, but, again for Innovator's Dilemma reasons, they failed to focus on it and invest in it).

It's also what Intel did with Atom, and, by the way, why they killed the ARM-based XScale division a few years before the iPhone. Atom's cost is too high to compete in mobile. This is mainly what killed Intel's mobile division. Intel even tried to license it out to cheap Chinese chip makers, but it was still not competitive on performance/price.

And it's also why Microsoft failed for so long to conquer tablets, by retrofitting desktop Windows for them. Before you mention the Surface Pro: first off, I don't consider it a major success, it's too expensive for most people, and 90% of the reason people get it is to use it as a PC, not as a tablet (other than perhaps designers, but that's not a mainstream market, and it's more in line with the Wacom professional tablet market).

It's also why hybrid cars, or regular cars retrofitted as EVs, will fail against pure EVs, etc., etc.


Is there a problem with Intel's technology? I think their technology is great, probably better than the competition's. It's the licensing and packaging. Intel doesn't sell a core, doesn't license a core for you to mask your own SoC, doesn't let you graft on other parts; they decide on and sell a SoC. A SoC that they fab.


Path 2 is exactly what happened on Broxton. Source: I worked on it.


> At the time Intel had already introduced some mobile silicon, but there was little uptake. So they were iterating; they wanted to improve for each succeeding generation. But they had a kind of design-by-committee process. One person or group wanted a certain feature, another group wanted something else, a third group thought that yet another thing was important. And so on. Sorry if that sounds vague, I won't write anything more specific.

> The end result was a chipset that had a lot of features. A LOT OF FEATURES. Gold plated features. But that meant higher power consumption than the competition, higher cost, larger form factor, longer time to market.

I have some experience in this field, and this sounds utterly bizarre. Most customers are fairly selective in which bits they want, so providing everything (including the kitchen sink) in a product is useless.

Being able to comfortably ship any variant of your SoC without certain parts is important.


> How do seemingly invincible forces in the market get killed by smaller foes?

In terms of value ARM is a far smaller company than Intel. But in terms of cores manufactured, ARM is way larger. ARM (through their partners obviously) shipped 10 billion cores in 2013. Intel shipped about half a billion.


Yes, because ARM is the IP

Intel is the IP + Manufacturing + Other business areas

The right comparison would be ARM IP vs. Intel IP, or ARM + its manufacturing partners vs. Intel's x86 business.


Of course I'm aware of how ARM operates. We have the sales figures from 2013 as well, and Intel is larger than the ARM partners combined. However I think that just indicates that (a) ARM processors are lower-featured and hence cheaper and (b) there's real competition in the ARM processor market which is mostly absent from the x86 market.

http://www.icinsights.com/news/bulletins/Qualcomm-And-Samsun...


You hit the nail on the head. Intel has built up amazing margins in the x86 business over the years by beating out competitors. ARM margins are razor thin by comparison, and Intel can't bring themselves to cannibalize their cash cow by becoming another ARM manufacturer.


Similar to iPhone vs Android, in many ways. And judging by Apple's changing fortunes, similar results.


Steve's point is certainly valid. In this case I'd add that Intel seemingly convinced themselves that a number of things that would be good for Intel, such as x86 across both mobile and desktop, represented market reality and customer desires, whether or not they actually did.


> Scott McNealy said early on that retreating to the Data Center was where companies went to die.

That is a striking thought. I wonder how (if at all) it applies to open source.

GNU/Linux, for example, always had strong aspirations for the desktop [1], yet this never materialized. The only place where Linux (but not GNU) succeeded in a big way outside the data center is as a gold-standard hardware abstraction layer for CPEs, Android, and to some extent other embedded computing devices.

Now, with the Linux ABI being embraced first by Joyent in SmartOS [2], and now by Microsoft [3], GNU/Linux is pretty much all-in on the data center nowadays. Does that mean it, too, is going the way of DEC and Sun in due course?

[1] See the "Year of the Linux Desktop" motto, from 1998 and pretty much every year thereafter.

[2] https://news.ycombinator.com/item?id=9257130

[3] https://news.ycombinator.com/item?id=11388418


The buggy-whip analogy is interesting and applicable from a certain viewpoint ("big core" thinking in a low-power market) but I'm not sure it fully works: mobile chips are fundamentally converging toward desktop chips microarchitecturally (big out-of-order engines and wider pipelines, larger cache hierarchies, higher clock speeds, the move to 64-bit, etc), and Intel's desktop/server parts are also converging toward mobile in many ways (very power-efficient, more on-chip SoC-like integration).

In other words, they're different markets, but I'm not sure that they're so different that Intel's existing expertise couldn't allow them to build a kickass part if they played their cards right. They still employ some of the world's best microarchitects and design engineers. My feeling here is that this is more of a business/execution issue than a fundamental big-incumbent-doesn't-get-it failure.


> The next sign I'm waiting for is Apple to ship a Macbook laptop with their own 64 bit ARM processor.

They have all the pieces already. OS X already runs on ARM and can handle apps that support multiple architectures, and the Apple A9X is more than fine for 90% of their user base.

The only problem is around Carbon. I can't imagine them investing the effort into porting it to ARM, as it has been deprecated for a while. But still, some people would lose the ability to run some existing apps.


I think this is the most worrisome thing from Intel's perspective. Fighting off "good enough" (in both mobile and enterprise/data center).

It always seemed like the hyper-scale vendors' roadmap had to include "And when Intel reaches the end of process scaling, switch to lower-cost commodity vendor CPU" at some point in the future.

ARM x86 partnerships are one source. But resurrecting and improving other ISAs is another.

Of course, this reorg may be Intel putting all their chips into the data center basket to ensure they're always more competitive. Because, hats off to everyone there, they can work chip magic when they have to.


What is there for Apple to gain from switching macOS to ARM? Apple saves about $100 of BOM on most of its MacBooks in exchange for an incompatible PC ecosystem? What about the Mac Pro and iMac?

Tim Cook has already said the iPad is the future of the PC. We may as well move to the iPad ecosystem. For content creation, let it stay on x86 and macOS, and if Apple wanted to do some cost saving they now finally have an option: AMD. I bet a Zen CPU will be more than enough for Apple's needs.


Plus all the people that need to run Windows in VMs or Bootcamp. And ultimately, what would they gain by switching to ARM? ARM isn't yet better at the types of chips being integrated into MacBooks, let alone iMacs and Mac Pros.


He mentions 'a MacBook', which is most likely the Air to start. People who run VMs or dual boot are almost all on the Pro.


I'd say more like a 50% user base.

I'm going out on a limb here, but nowadays, 60% of MacBooks (mainly the Pro) are used for software development and 30% for creative work (video and editing). While something like half of those professional applications would be fine with the A9X, the other half would be left hanging for a better machine.

However, if they want to make the switch in the near future, I think that the new MacBook will first be released with an ARM processor. This way they won't alienate their core market and they will have a testing ground. Then, I'd predict two or three generations until the pros catch up.

> The only problem is around Carbon

They have the money, if they want to change to ARM it won't be Carbon stopping them.


Those numbers sound wildly off to me. I'd be shocked if even 10% of MacBooks are ever used for software development of any kind. They are mostly used for web browsing and Office.


> I'm going out on a limb here, but nowadays, 60% of MacBooks (mainly the Pro) are used for software development and 30% for creative work (video and editing).

Your numbers are way out on a limb.

In a single year, Apple sells more Macs than there even are software developers in the world.

College students alone form a much bigger market than software developers.


Intel has a long and unfortunate history of promising initiatives in new markets outside their core, and then promptly cutting and running when there are obstacles to those efforts... and I say that as a former (long ago, circa 2000) Intel employee.

Intel saw the explosion of graphics chipsets and decided to try its hand with the i740. After initial teething pains, they designed the i752 and i754 to address those concerns, but renewed competition from AMD started to cut into their x86 margins, and rather than continue on the broader product path, they ejected the graphics business and ran back to Mama x86.

In 1999 and 2000, Intel made several substantial acquisitions in the networking space regarding routers, load balancers, etc. They aggressively tried to move into these markets (I know, I was a sales engineer for that line at the time) but between AMD's Sledgehammer and the dotbomb they promptly fled those markets as well in order to run back to x86.

I can't argue that those were poor business decisions, but I can say that anyone depending on hardware initiatives from Intel that aren't directly x86-related is skating on some mighty thin ice.


As I mentioned elsewhere I have a Bay Trail tablet, and that thing is amazing. But this is the usual corporate thinking that got Intel into the trouble they are now in. Ceding the market because they can't currently make billions from it pretty much guarantees they won't ever make any money from it (see POWER giving up the desktop too). Meanwhile, as ARM struggles with building servers, they will have all the time in the world to figure it out, as they now have an uncontested market to fund that work.

I don't understand why Intel hasn't learned the history lesson from all the other processor manufacturers. As soon as you stop competing in low-end markets, the low-end guys build better and better products until they build a higher-end product that makes them the top dog.

I guess it's because Andy Grove isn't around to kick them in the pants.


> As I mentioned elsewhere I have a Bay Trail tablet, and that thing is amazing.

Really? How?

I bought a Dell Venue 8 Pro, and the only advantage I can see that it has over any ARM part is the ability for me to play x86 Steam games from 10 years ago on it.

The sleep power consumption sucks. While I can leave an ARM-based Android tablet with the screen off for a week, and it will still have enough charge to be useful, my Venue 8 Pro is dead after about 24 hours in sleep mode.

Linux is still broken on Bay Trail-T. None of the wireless drivers are in mainline, and modesetting doesn't work, so enjoy using only frame buffer graphics. This is ~2 years after Intel launched the platform!

I haven't found any manufacturer making a Bay Trail-T tablet with dual band WiFi, while loads of ARM based tablets are including it.

I've been looking into Cherry Trail, since the X5-Z8500 has a pretty decent GPU, but user forums all over the internet are talking about how it goes into thermal throttling because the fanless cooling solution that Intel has touted to OEMs doesn't work when the chip is under heavy load for extended periods of time.

Again, compare to pretty much any ARM based tablet (not Snapdragon 810 based) and you won't have thermal throttling issues crippling your performance.

So, yes, Intel's tablet chips are alright for running x86 apps. I can definitely run Office on my Venue, but then I need a mouse and keyboard to do any real work, so why wouldn't I just buy a laptop which has longer battery life, more RAM, and a CPU which doesn't cripple itself under load?


"Ceding the market because they can't currently make billions from it pretty much guarantees they won't ever make any money from it"

Are you kidding? Intel has spent the last decade and billions of dollars trying to push their x86 designs places they really don't belong. It's amazing they didn't give up years ago.


I just finished reading Andy Grove's book "High Output Management"; it was really quite good.

Apparently it's also Zuck's favourite business book.


Linked statement https://newsroom.intel.com/editorials/brian-krzanich-our-str...

tl;dr Moore's law remains important, not because of speed or power improvements, but because of cost improvements.

The cloud [servers] will grow; the internet of things [clients] will grow. So we'll do that.

He doesn't say this, but the smartphone soirée is over. Imagination is laying off workers, iPhone sales are down, Samsung Galaxy S7 sales are down. Flagship smartphones are obviously way more powerful than needed for common usage. A $40 smartphone is now so good, it's good enough.

What's the point in intel chasing a ship that has not only sailed, but sunk?


PC sales have also sunk. Maybe Intel should quit the PC business, too. It seems hard to imagine they would do that now, but if 1) Microsoft didn't have such a stranglehold on PCs, and 2) Windows wasn't so dependent on x86, we'd be seeing ARM competition creeping up on low-end Intel chips in notebooks already by now.

Intel is lucky to have a lock-in on PCs thanks mostly to old programs being x86-specific. Hopefully Microsoft can change this with Project Centennial, if it's too late for Windows RT.

https://channel9.msdn.com/Events/Build/2016/B829

https://www.microsoft.com/en-us/download/details.aspx?id=516...


I think even if Windows were not dependent on x86, Intel would continue to own the PC/notebook market for a long, long time to come. As soon as your device is large enough to fit even a 30 Wh battery in, you can already afford to use Intel CPUs power-consumption-wise. Cost-wise, they're still more expensive, but so are all the other components in notebooks, such that they don't make that big of a difference. Consumers are also okay with higher prices for longer-lasting notebooks, as opposed to smartphones, which are replaced every 2-3 years.

All in all, ARM still sucks performance-wise. Yes, you can run a decent-ish desktop with it, but you really don’t want to do anything that takes any amount of CPU time (development, typesetting, gaming, even browsing the web thanks to today’s JS craziness).


RT is sunk. Intel is in a much better position to compete on price performance in the PC space. They could cut their margins if they have to and still be profitable. They will be in the PC business for a while.


Because people will still be buying phones/tablets in 10 years. And no, they aren't yet good enough. Just browsing the web can often be quite painful as they struggle with javascript. Plus, there are a ton of devices with embedded mobile phone tech (tv's, etc) that have been dragged into the ARM camp.

The 4.1 billion ARM devices sold last quarter should scare the pants off Intel. The very hungry ARM partners that drool over the fat margins Intel is making will steamroll them in the server space if they don't figure out how to compete.


Just browsing the web can often be quite painful as they struggle with javascript.

You mean... browsing the web can be quite painful with a crap-ton of advertisements. Lots and lots of Javascript, auto-play video advertisements... ugh.


If major websites can lag on my i7, I don't think it's a hardware problem.


I have a hard time with small screens (farsightedness), so I only use mobile Internet in emergency cases. Is it a hardware (screen size) problem or a software (too much screen real estate eaten by ads) one? I honestly don't know the answer, but it's still a problem.

Edit: I wear glasses, especially for reading, but the optometrists refuse to increase the prescription to the point that would give me clear vision for small letters. They say it would make the problem worse long term.


It's a bad design problem on the websites. AMP-rendered pages are blazing fast to load on mobile and still contain ads. Desktop apps of yore had similar problems and people complained about them then (e.g. Java Swing apps locking up when devs did real work on the UI thread).

Things can work well with minimal CPU power; it's just that grand designs ship without the engineering effort needed to make them performant. Bad performance is much less apparent to decision makers than bad design.


My native apps don't have any problem browsing the network.


Will Intel spin out the fab side of the business?

The large-volume segments of the processor market seem headed towards nothing but low margins.


> The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.

not speed, not power


Given that they address it in their statement, I'm interested to see how their FPGA offerings develop in the next few years. FPGAs have a brutal learning curve but sit in a yet unscaled power efficiency niche. Could we see FPGA VPS in the future if Intel's backing them?

Then again, that mention is probably just to respond to concerns regarding layoffs ~immediately after the Altera purchase.



Usually I don't associate FPGAs with power efficiency. I know about Lattice's offerings, or is there something I'm missing?


Relative to custom silicon, yes, FPGAs are way more power hungry. However, there are many things you can do much more power-efficiently in an FPGA than on a CPU or GPGPU, like video decoding/encoding, crypto, digital signal processing, and many other real-time-critical systems. FPGAs sit in a price/performance niche: high unit costs and power requirements but little upfront investment compared to custom silicon, and somewhat more than general-purpose computing but with better throughput/latency.


Where do FPGAs sit for running neural networks? Or does it make sense to manufacture Field Programmable Node Arrays rather than Field Programmable Gate Arrays?


I think that GPUs are already basically field-programmable node arrays. Maybe for certain instances FPGAs would be better, but the neural network would have to be designed at the logic-gate level, so it'd be significantly more costly in engineering time than an algorithm running on the GPU or CPU. FPGAs are for relatively simple logic running at incredibly high throughput with very low latencies. It's extremely common to move more complicated logic to a "soft" CPU on the FPGA that runs C code compiled for ARM/SPARC/POWER etc. and controls the low-level hardware.


GPUs are getting closer to being math- and draw-optimized FPGAs. I don't think they'll be separate products too far in the future.


GPUs and FPGAs are very far from each other architecturally and are good at very different things. They will certainly be separate products for the next 3 decades as they were for the last 3 decades.


At the desktop/server level, I agree, which is why I don't get the whole Intel/Altera synergy thing.

But in more specific embedded cases -- meaning the vast majority of FPGA applications -- they are nothing alike. If I need to run a multichannel digital downconverter on a power budget of 1 watt, no GPU available in the foreseeable future is going to handle that role as well as an FPGA will.

Perhaps someday a hypothetical I/O-optimized GPU will be able to do the jobs that are currently done on FPGAs and ASICs, but definitely not now.


I don't get the whole Intel/Altera synergy thing

Let's recall a previous acquisition, one that I totally didn't understand. An analyst had a really good quip about it:

JPMorgan analyst Christopher Danely upgraded Intel to overweight following the company's earnings, although he still struggles to reconcile Intel's recent acquisition spree.

"Intel might as well have bought Whole Foods," he said of the McAfee deal.[1]

Compared to McAfee, acquiring Altera was brilliant.

As a public company Intel is always facing scrutiny from Wall Street. They need to constantly increase both revenue and earnings. What better way than via a strategic acquisition? Instant growth.

And Altera was a logical target. Altera sells high-margin silicon. Intel sells high-margin silicon. Intel was even fabbing some of Altera's parts, so they were quite familiar with the company. And maybe some smart people could eventually think of some synergy. E.g. once upon a time Intel made the IXP1200, a "network processor". What if, instead of designing custom silicon for something like that, perhaps a CPU + FPGA on a single die would be sufficient?

Makes much more sense than Intel acquiring McAfee.

[1] http://www.reuters.com/article/us-dealtalk-intel-idUSTRE73K7...


I think they'll continue to be separate products far into the future because they're fundamentally different approaches. Aside from anything else, FPGAs are optimized for latency whereas GPUs are optimized for overall throughput.


Can someone explain the "intersection" of neural networks and FPGAs/GPUs? I feel like I'm seeing this referenced a fair amount but haven't heard a good explanation of why this hardware is attractive for these workloads.


Because neural networks use "simple" math that is massively parallelizable. Your computer's CPU may have eight cores, but my older, mid range graphics card has over a thousand cores.

If your work can be broken down into a thousand pieces that can be computed independently, then using a GPU can get you ludicrous speedups.
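
A minimal sketch of that point: a dense neural-net layer is just a matrix multiply, and every output element is an independent dot product, which is exactly the kind of work a GPU's thousands of lanes can do at once (NumPy here only to show the math; the layer sizes are made up):

    import numpy as np

    # One dense layer, y = W @ x + b. Sizes are arbitrary, for illustration only.
    n_out, n_in = 1000, 1000
    W = np.random.randn(n_out, n_in).astype(np.float32)
    x = np.random.randn(n_in).astype(np.float32)
    b = np.zeros(n_out, dtype=np.float32)

    # Each of the 1000 outputs is its own independent dot product...
    y_rowwise = np.array([W[i] @ x + b[i] for i in range(n_out)])

    # ...so they can all be computed at the same time, which is what a GPU exploits.
    y = W @ x + b
    assert np.allclose(y, y_rowwise, atol=1e-3)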


This is a bit of an oversimplification, because the "cores" of a GPU aren't really comparable to a CPU core. It's more like scaling up a factory: at a single point on the production line you go from 1 person doing a job to 100 people doing the same job with the same tools. It means you can do more of the same thing, but you can't do 100 completely different things.

Here is a good overview:

http://gamedev.stackexchange.com/a/17255


Thanks


I think it is something people just like to say because both sound a bit magical.

Neural nets are matrix multiplications and additions. They are generally memory-constrained on GPUs, and main-memory <-> GPU-memory transfer is also a bottleneck.

I don't see how an FPGA will really help there.
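
For a sense of why that is, here's a rough arithmetic-intensity estimate for a single dense layer at batch size 1, with assumed sizes and hardware numbers (fp32 weights, illustrative GPU specs):

    # y = W @ x + b at batch size 1: ~2 FLOPs per weight, but every weight
    # must be read from memory once.
    m, n = 4096, 4096                   # layer dimensions (assumed)
    flops = 2 * m * n                   # one multiply + one add per weight
    bytes_moved = 4 * (m * n + n + m)   # read W and x, write y (fp32, ignoring caches)

    intensity = flops / bytes_moved     # ~0.5 FLOPs per byte
    print(f"arithmetic intensity: {intensity:.2f} FLOP/byte")

    # A GPU with, say, 10 TFLOP/s of compute but ~500 GB/s of memory bandwidth
    # needs ~20 FLOPs per byte to be compute-bound, so this workload is limited
    # by memory bandwidth rather than raw FLOPs, and an FPGA with even less
    # memory bandwidth doesn't obviously help.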


I can't find the link right now, but I remember reading a more recent article on NextPlatform.com about how FPGAs are much more memory constrained than GPUs are, which essentially negates any slight power efficiency advantage they might have over GPUs.


Also, don't FPGAs bring along additional operational constraints? Modern large/vast FPGAs will take anywhere from 2 to 4 hours to be Pd in the F. This makes managing new releases even more challenging.


"Pd in the F"? What does that mean?


FPGA = Field Programmable Gate Arrays, Pd in the F => Programmed in the Field


What about FPGAs allows these operations to be more power-efficient than on a GPU or even a CPU? Could you elaborate? I found your statement interesting. Thanks.


Essentially, FPGAs are as "close to the metal" as you can get - they are the metal - which means you can configure circuits that utilize the minimal number of 'gates' needed to go from a specific set of allowable input to a specific set of allowable output.

To make a programmable gate, however, requires more power-inefficient transistor designs, usually using more transistors than an equivalent gate on a general purpose CPU (e.g. a programmable gate set to be an OR gate will be less efficient than an OR-only gate).

FPGAs are therefore more power-hungry per gate, but since you don't need the overhead of transistors that can process arbitrary input to arbitrary output, you can make them very efficient for particular tasks.
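
A toy sketch of that trade-off, assuming the lookup-table (LUT) building block that most FPGA logic elements are based on: a 2-input LUT is just four stored configuration bits selected by the inputs, so the same element can be configured as OR, AND, XOR, and so on, at the cost of more hardware than a fixed gate:

    # Toy model of a 2-input FPGA lookup table (LUT): the "program" is just
    # four configuration bits, indexed by the two input signals.
    def make_lut2(config_bits):
        """config_bits[(b << 1) | a] is the output for inputs (a, b)."""
        assert len(config_bits) == 4
        return lambda a, b: config_bits[(b << 1) | a]

    or_gate  = make_lut2([0, 1, 1, 1])   # configured as OR
    and_gate = make_lut2([0, 0, 0, 1])   # same element type, reconfigured as AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "OR:", or_gate(a, b), "AND:", and_gate(a, b))

    # A fixed OR gate is a handful of transistors; the LUT needs storage for the
    # config bits plus a multiplexer, which is where the per-gate power and area
    # overhead of an FPGA comes from.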


It sounds like you're thinking of power consumption in some absolute sense (ie. FPGAs suitable for mobile/battery-powered use) rather than power efficiency. For the right workload, FPGAs can be much more efficient than general-purpose CPUs. But FPGAs aren't the only thing occupying the middle ground between general-purpose CPUs and ASICs; DSPs and GPUs also compete in that space.


I wonder how much of Intel's pain in mobile is due to the pains of the x86 platform, especially all the overengineered and legacy stuff around it (BIOS, ACPI, chipset and bus logic, etc)

ARM has some equivalent components (bootloader, power management), but they're much, much simpler.


What does this mean for Microsoft's idea for a unified Windows 10 platform? Will they have to bury their x86 Windows 10 Smartphone plans as well?



Boy, I think this is the final blow for Microsoft's smartphone ambitions. Whatever technological advantage they would have had is now gone. Their strategy was all over the place in terms of platform - pretty much every version of Windows Phone AFAIK required a rewrite because of constantly shifting grounds.


Not necessarily; all mobile phones are ARM-based, and Continuum is already here.

Yes, the rewrites were shitty.

Apple and Google also do it, but they can allow themselves to push the developers through them.


Smartphones are all about ecosystems, and as the various Microsoft efforts there have shown, Apple and Google now have that locked up. However, neither of them offers the x86/Windows ecosystem in the phone space (nor does Apple in the tablet space), and now no one will find out how much demand there was for x86 Windows smartphones.


Agreed, and I would actually gladly exchange my Android devices for WP ones, given the developer experience. I really don't get where all those PhDs at Google work; surely not on Android dev tools.

But Microsoft might still win the productivity-tablet space, as I have yet to find one as good as the Surface.


And what about the HoloLens?


According to [1] the HoloLens chip continues to be supported.

[1] http://www.pcworld.com/article/3063672/windows/the-death-of-...


All I can see is "Microsoft uses a Cherry Trail chip inside of the HoloLens, but it's unclear whether or not that will be affected."


Sorry, I read

> "The company said it will continue to support tablets with a 3G derivative of the SoFIA chip, the older Bay Trail and Cherry Trail, as well as some upcoming Core chips."

as meaning it will be supported, but reading this source again, it indeed doesn't make clear whether that's actually the chip used in the HoloLens. I'm sure they will find a solution, but it would be a shame if the HoloLens got much worse battery life because of this...


But what will happen to Intel's modem business? I would love to see some competition for Qualcomm. Broadcom has disbanded its 4G modem and WiFi business as well, leaving the market with very few choices.


In the article it mentions Intel is investing in 5G wireless, so the modem team will have plenty of work.


They are focusing on 5G

> 5G will become the key technology for access to the cloud and as we move toward an always-connected world.

https://newsroom.intel.com/editorials/brian-krzanich-our-str...


It's rumored that Apple is using an Intel modem chip for the iPhone 7.


Oh man, the Austin TX design group...


Sofia was primarily done by the formerly-Infineon team in Germany, though the Austin team was merged with that group. Broxton was designed in Oregon and Malaysia.


Geographically separated teams, is that normal when developing successful smartphone SoCs?

Or does it in general work as well as developing software at multiple sites?

I remember working with ST's IPTV chips that were designed all over Europe, and bugs that involved the interaction between multiple hardware blocks were a disaster to get support on.


I've never worked on a successful smartphone SoC, so I couldn't tell you. I have worked on successful client parts (Haswell, Broadwell, Apollo Lake) that were similarly separate and we were able to divide the work sufficiently to make the multi-geo effort work.


A shame, I own a ZenFone 2 and it's a great device. I was looking for more from Intel.


Me too. I've had this device for 12 months now, and it just flies; I always put it down to the Intel CPU, given that I only paid ~$160 for the device. New Samsung devices at higher price points, the J5 and J7, are slow dogs in comparison. So does this mean a dead end for x86 Android? I'll stick with Asus though, given the frequency of firmware updates and all-round solid hardware.


Intel subsidized that chip. It was more like seeing Snapdragon 805 or whatever was the high-end ARM chip at the time, on a $200 device, when everyone else could only afford to put a Snapdragon 400 in such a device.

It's anti-competitive (selling chips below cost - imagine if, say, the meat industry used its profits to sell milk under cost; it would take out the "pure" milk companies), and very costly to Intel. It was losing $1 billion a quarter back then doing stuff like that. It's not going to do it anymore.

http://www.bloomberg.com/news/articles/2015-01-16/intel-s-4-...


I thought that may have been the case; that could explain why the same phone is 50% higher in price now than 12 months ago, at least in BKK. I won't lose sleep over Intel selling chips below cost... On the mention of dairy: milk these offers when they come along...


Good phone for the price. Makes me wonder if the SoC was subsidised though.


It was.


Doesn't that mean that Core M is finally ready to replace Atom? Why all the gloom?


Core M is in a different price bracket, it certainly can't compete price-wise with cheap ARM counterparts.




The Anandtech article linked here is better.


Oh dear, is this the death of the Windows tablet already?

Microsoft had to make a stripped-down ARM-only version of Windows, Windows RT, because Intel's CPUs just weren't there. Then they abandoned that in favour of full x86 Windows once Intel's CPUs got there. But now Intel is gone.

This means Windows will now only be on larger, more laptop-like devices, I would assume? No more 7" tablets.


It probably just means more i3/i5-based tablets, which will probably end up being a good thing anyway, since the push to optimize for weak processors that there was on the iOS/Android side wasn't really there on the Windows side, since a lot of testing happened on developer workstations.


Possibly, or even m3/m5-based tablets. I have an Atom-based Windows 10 tablet and it seriously struggles with browsing some websites without an ad blocker installed. It can use Twitter.com or play Solitaire fine, though.


Intel has been selling mobile chipsets at a loss for the last few years, because no one would touch them at a break-even price (don't even think of a profit), and still hardly anyone touched them.

So they basically cut the subsidy (which is understandable) and didn't wait for their market segment to die as it surely would - they just killed it immediately.


Interesting that this brings up Aicha Evans, who recently left Intel after heading up their mobile comm chip division.


I really think they are going to come up with something to get into the mobile/embedded market. It is a very important market. With their experience, fabrication process and tech they can totally rip the competition.


This is an empty statement. Let me nitpick it for you:

> With their experience

In mobile/tablets? Did you read the article? There have only been a handful of Intel devices (and fewer cores) out there that Intel has had a part in. Virtually every mobile phone and tablet out there has been ARM (including Nokia in the past).

> fabrication process

You don't need to own a fabrication process to be competitive in mobile, since in that market cost and power efficiency matter more than performance, so OEMs will stick with a process that is good enough for now and then they'll all move to a newer one. This is not what the market competes on.

> tech

Yes, their tech is very performant. Yes, if I want to crunch numbers I will buy an Intel chip. But if my mobile phone had a Xeon, it would run out of battery in 30 minutes. Even Atom chips are an order of magnitude less power-efficient than other companies' mobile cores.

So, no, they can't 'rip the competition' in mobile easily.


I know it's a generic statement. My point is that Intel, the most important chip vendor in the world, knows very well how to make ICs. If they know how to make the best desktop and server CPUs, it's only a matter of will and time to translate that knowledge and expertise to the mobile market. They also have more and better resources (money and engineers).


> there have only been a handful of Intel devices (and fewer cores) out there that Intel has had a part in.

and I think for nearly every one of these, Intel has had to pay the manufacturer to use Atom over ARM.


Is the future of computing in qubits?


Why doesn't Intel just buy ARM?


Good question, my first guess would be antitrust laws/monopoly. Intel already got bitten because they played dirty against AMD.


My guess is that it would be too expensive. Also antitrust issues might pop up as well.



