Scott McNealy said early on that retreating to the data center was where companies went to die. At the time Sun was selling workstations for desks and crushing DEC; then, when that business got eaten away by Windows, the data center was where Sun was going to make its stand. And then it died.
I think a lot about what the fundamental principle is here. How do seemingly invincible forces in the market get killed by smaller foes? Christensen's Innovator's Dilemma doesn't really explain it; it describes it, but it doesn't tease out the first principles behind it.
At this point I think it is a sort of Enterprise "blindness", something Steve Kleiman at NetApp shared with me. A company can be so good at something that it focuses all its energy on it, even as the market vanishes beneath it. Consider a fictional buggy whip company when automobiles came on the scene: right up until the last day they made buggy whips, theirs could be the best whip you could buy. All the secrets to making a great buggy whip were mastered by this company, all the "special sauce" that made them last longer and work in a wide range of conditions, and yet the entire reason for buggy whips to exist was evaporating with the loss of carriages. By focusing on what the company was the undisputed leader in doing, they rode the wave right into the rocks.
When a company is so stuck on what used to work for it, even after the technology has moved on, it becomes blind to the dangers. That's challenging to watch, even harder if you feel like you can see the train wreck coming, and of course soul-crushing if nobody driving the bus will listen to the warnings.
The next sign I'm waiting for is Apple shipping a MacBook with their own 64-bit ARM processor in it. Then it gets really, really interesting if AMD can pull off an ARM server chip, going where Intel won't.
Maybe. But that was before cloud computing, built on a much cheaper, faster, and more pervasive Internet, became such a big thing. Plus, there were at least three things that caused Sun to ultimately fail even in the market it "retreated" to:
A problem with insufficient error checking and correction, plus SRAM chips from IBM that Sun claimed had higher error rates than anticipated, was handled dreadfully: they wouldn't help customers without forcing them to sign NDAs, they initially blamed customer environmental conditions, and so on. In short, they showed they couldn't be trusted, which ought to be fatal in the enterprise market.
They consciously decreased the quality of their Intel servers. Two things I remember were penny-pinching by removing a tried-and-trusted Intel Ethernet chip and substituting a Marvell one, and repeatedly changing the lights-out management hardware without changing model numbers, which is what Joyent blamed for their decision to stop buying from Sun.
If you couldn't charge it to a credit card, or didn't buy enterprise quantities of stuff (more than a couple million dollars' worth), you simply couldn't buy their gear back when it was higher quality: they insisted you go through a VAR or reseller, but few of those were really interested in selling the hardware, and many of the few that might have been were squirrelly. So before AWS et al. became a thing, a lot of companies reluctantly bought Dell hardware simply because they could, and by the time they graduated to buying hardware at "enterprise" levels they already had the in-house expertise to manage cheaper but lower-quality Dell systems.
And the latter problem, making it hard to buy their stuff (which damages or kills more companies than you'd think; it's also been a big problem for HP and one reason they didn't get this business), is one of the things that killed DEC.
So comparatively simple failures to execute can be a big part of a company failing at the same time it's facing the Innovator's Dilemma.
This is a little OT already, but I have a friend who does IT purchasing, and the horror stories she tells me about how difficult it is to buy HP products scare me. On top of that, their customer service is horrendous, and from time to time the local HP office even tries to "cheat" its direct clients, among other things by "helping" other, preferred clients get lower prices and win tenders. I'm still wondering how they're still in business.
A "least worst" enterprise vendor?
Why is the dollar still strong? The theory I like is that all competing currencies are worse. And while I don't know the competing enterprise market as well, I don't get the impression that HP today has seriously better competitors in the full-service classic enterprise space (this of course ignores paradigm shifts like the cloud, where I'm sure AWS and company are eating the lunch of various enterprise categories, storage for one).
Who's noticeably better than them? Sun and DEC weren't, IBM sure doesn't sound like it has been any time recently (and for service they've joined the race to the bottom), so who else is there? I don't know; it's not a field I follow anymore.
HN has its own distortion bubble here. It takes a long time (decades) for new technology and best practices to penetrate large organizations in stable, established markets.
But I think two things are changing that. One: upstream vendor migration to cloud / more modern solutions (e.g. Salesforce), which decreases the in-house footprint. Two: younger talent that's more familiar with modern ways of doing things (working with AWS, cattle servers on commodity boxes, etc.).
I think one of the greatest failures of enterprise companies is that they've historically done a terrible job at making their hardware / software available to students. Which means no reasonably-priced supply of knowledgeable potential employees.
And if the TCO of a mainframe / COBOL solution includes scouring the globe for the one remaining person who knows how to work on it and then paying them a premium to do so...? Well, that'll get enterprise (still slowly) moving.
Back around 2009 I had a little visibility into Intel's mobile efforts. My information is way out of date, but unless Intel management changed drastically, my hunch is that this explanation is still valid.
At the time Intel had already introduced some mobile silicon, but there was little uptake. So they were iterating; they wanted to improve with each succeeding generation. But they had a kind of design-by-committee process. One person or group wanted a certain feature, another group wanted something else, a third group thought that yet another thing was important. And so on. Sorry if that sounds vague, I won't write anything more specific.
The end result was a chipset that had a lot of features. A LOT OF FEATURES. Gold plated features. But that meant higher power consumption than the competition, higher cost, larger form factor, longer time to market.
At the time, there was nobody like a Steve Jobs running Intel mobile. Nobody had the intuition and the gravitas to say: "we want A, B, C, but not D, E, F. Quit wasting time and build the fucking thing! I need something that's competitive!"
This is similar to something that an Intel chip design manager told me about 30 years ago. His view was that there were really only two ways to motivate engineers:
1) show them a path to lots and lots of money. This is the path that startups take. Focus, build something quickly, get rich or die trying.
2) Failing that, engineers will "play in the sand": they will add in all sorts of neat features that they think are really cool, that they want to implement, that they are convinced are important. But all that crap isn't very useful to the average user; it just results in complex designs that miss their market window in so many ways.
Intel probably continued to choose path 2 for their mobile efforts.
Intel didn't fail because of any variation on The Innovator's Dilemma. They understood that mobile was the future, that it was very important. They expended tremendous amounts of capital trying to "win" in mobile. They just didn't have the organizational structure that let them execute.
So the total combined margin is as high as Intel's or higher. So why does Intel fail?
Depends how you look at it. My understanding of the Innovator's Dilemma is that when faced with disruption, which implies radical change within the organization, nobody in that organization really wants to go through with it, because it messes up too many of their processes, products, etc.
So what do they do instead? They try to make their existing technology/product fit the new paradigm. This is what Nokia tried to do with "touch-enabled Symbian" instead of writing a new touch-focused OS from scratch (they actually did do that with Maemo, but for Innovator's Dilemma reasons they failed to focus on it and invest in it).
It's also what Intel did with Atom, and by the way, it's why they killed the ARM-based XScale division a few years before the iPhone. Atom's cost was too high to compete in mobile, and that is mainly what killed Intel's mobile division. Intel even tried to license Atom out to cheap Chinese chip makers, but it was still not competitive on performance/price.
And it's also why Microsoft failed for so long to conquer tablets by retrofitting desktop Windows onto them. Before you mention the Surface Pro: first, I don't consider it a major success, since it's too expensive for most people, and 90% of the reason people get it is to use it as a PC, not as a tablet (other than perhaps designers, but that's not a mainstream market, and it's more in line with the Wacom tablet professional market).
It's also why hybrid cars, or regular cars retrofitted as EVs, will fail against pure EVs, and so on.
> The end result was a chipset that had a lot of features. A LOT OF FEATURES. Gold plated features. But that meant higher power consumption than the competition, higher cost, larger form factor, longer time to market.
I have some experience in this field, and this sounds utterly bizarre. Most customers are fairly selective about which bits they want, so providing everything (including the kitchen sink) in one product is useless.
Being able to comfortably ship any variant of your SoC without certain parts is important.
In terms of value ARM is a far smaller company than Intel. But in terms of cores manufactured, ARM is way larger. ARM (through their partners obviously) shipped 10 billion cores in 2013. Intel shipped about half a billion.
Intel is the IP + manufacturing + other business areas.
The right comparison would be ARM IP vs. Intel IP, or ARM + its manufacturing partners vs. Intel's x86 business.
That is a striking thought. I wonder how (if at all) does it apply to open source.
GNU/Linux, for example, always had strong aspirations for the desktop, yet this never materialized. The only place where Linux (but not GNU) succeeded in a big way outside the data center is as the gold-standard hardware abstraction layer for CPEs, Android, and, to some extent, other embedded computing devices.
Now, with the Linux ABI being embraced first by Joyent in SmartOS and now by Microsoft, GNU/Linux is pretty much all-in on the data center nowadays. Does that mean it too is going the way of DEC and Sun in due course?
 See "Year of the Linux Desktop" motto from 1998 is pretty much every year thereafter.
In other words, they're different markets, but I'm not sure that they're so different that Intel's existing expertise couldn't allow them to build a kickass part if they played their cards right. They still employ some of the world's best microarchitects and design engineers. My feeling here is that this is more of a business/execution issue than a fundamental big-incumbent-doesn't-get-it failure.
They have all the pieces already. OS X already runs on ARM, the toolchain can handle apps that support multiple architectures, and the Apple A9X is more than fine for 90% of their user base.
The only problem is around Carbon. I can't imagine them investing the effort to port it to ARM, and it has been deprecated for a while. Still, some people would lose the ability to run some existing apps.
It always seemed like the hyper-scale vendors' roadmap had to include "And when Intel reaches the end of process scaling, switch to lower-cost commodity vendor CPU" at some point in the future.
ARM x86 partnerships are one source. But resurrecting and improving other ISAs is another.
Of course, this reorg may be Intel putting all their chips into the data center basket to ensure they're always more competitive. Because, hats off to everyone there, they can work chip magic when they have to.
Tim Cook has already said the iPad is the future of the PC. We may as well move to the iPad ecosystem. Content creation can stay on x86 and macOS, and if Apple wants to do some cost saving, they now finally have an option: AMD. I bet a Zen CPU will be more than enough for Apple's needs.
I'm going out on a limb here, but nowadays something like 60% of MacBooks (mainly the Pro) are used for software development and 30% for creative work (video editing and the like). While maybe half of those professional applications would be fine with the A9X, the other half would be left hanging for a better machine.
However, if they want to make the switch in the near future, I think that the new MacBook will first be released with an ARM processor. This way they won't alienate their core market and they will have a testing ground. Then, I'd predict two or three generations until the pros catch up.
> The only problem is around Carbon
They have the money; if they want to change to ARM, it won't be Carbon that stops them.
Your numbers are way out on a limb.
In a single year, Apple sells more Macs than there even are software developers in the world.
College students alone form a much bigger market than software developers.
Intel saw the explosion of graphics chipsets and decided to try its hand with the i740. After initial teething pains, they designed the i752 and i754 to address those concerns, but renewed competition from AMD started to cut their x86 margins, and rather than continue on the broader product path, they ejected the graphics business and ran back to Mama x86.
In 1999 and 2000, Intel made several substantial acquisitions in the networking space (routers, load balancers, etc.). They aggressively tried to move into those markets (I know, I was a sales engineer for that line at the time), but between AMD's Sledgehammer and the dotbomb, they promptly fled those markets as well in order to run back to x86.
I can't argue that those were poor business decisions, but I can say that anyone depending on hardware initiatives from Intel that aren't directly x86-related is skating on some mighty thin ice.
I don't understand why Intel hasn't learned the history lesson from all the other processor manufacturers: as soon as you stop competing in low-end markets, the low-end guys build better and better products until they build a higher-end product that makes them the top dog.
I guess it's because Andy Grove isn't around to kick them in the pants.
I bought a Dell Venue 8 Pro, and the only advantage I can see that it has over any ARM part is the ability for me to play x86 Steam games from 10 years ago on it.
The sleep power consumption sucks. While I can leave an ARM-based Android tablet with the screen off for a week, and it will still have enough charge to be useful, my Venue 8 Pro is dead after about 24 hours in sleep mode.
Linux is still broken on Bay Trail-T. None of the wireless drivers are in mainline, and modesetting doesn't work, so enjoy using only frame buffer graphics. This is ~2 years after Intel launched the platform!
I haven't found any manufacturer making a Bay Trail-T tablet with dual band WiFi, while loads of ARM based tablets are including it.
I've been looking into Cherry Trail, since the X5-Z8500 has a pretty decent GPU, but user forums all over the internet are talking about how it goes into thermal throttling because the fanless cooling solution that Intel has touted to OEMs doesn't work when the chip is under heavy load for extended periods of time.
Again, compare to pretty much any ARM based tablet (not Snapdragon 810 based) and you won't have thermal throttling issues crippling your performance.
So, yes, Intel's tablet chips are alright for running x86 apps. I can definitely run Office on my Venue, but then I need a mouse and keyboard to do any real work, so why wouldn't I just buy a laptop which has longer battery life, more RAM, and a CPU which doesn't cripple itself under load?
Are you kidding? Intel has spent the last decade and billions of dollars trying to push their x86 designs places they really don't belong. It's amazing they didn't give up years ago.
It's also Zuck's favourite business book, apparently.
tl;dr Moore's law remains important, not because of speed or power improvements, but because of cost improvements.
The cloud [servers] will grow; the internet of things [clients] will grow. So we'll do that.
He doesn't say this, but the smartphone soirée is over: Imagination is laying off workers, iPhone sales are down, Samsung Galaxy S7 sales are down. Flagship smartphones are obviously way more powerful than needed for common usage. A $40 smartphone is now so good, it's good enough.
What's the point in intel chasing a ship that has not only sailed, but sunk?
Intel is lucky to have a lock-in on PCs thanks mostly to old programs being x86-specific. Hopefully Microsoft can change this with Project Centennial, if it's too late for Windows RT.
All in all, ARM still sucks performance-wise. Yes, you can run a decent-ish desktop with it, but you really don’t want to do anything that takes any amount of CPU time (development, typesetting, gaming, even browsing the web thanks to today’s JS craziness).
The 4.1 billion ARM devices sold last quarter should scare the pants off Intel. The very hungry ARM partners that drool over the fat margins Intel is making will steamroll Intel in the server space if it doesn't figure out how to compete.
Edit: I wear glasses, especially for reading, but the optometrists refuse to increase the prescription to where it would give me clear vision for small letters. They say it would make the problem worse long term.
Things can work well with minimal CPU power; the problem is grand designs that don't get the engineering effort needed to make them performant. Bad performance is much less apparent to decision makers than bad design.
The large-volume segments of the processor market seem headed toward nothing but low margins.
not speed, not power
Then again, that mention is probably just to respond to concerns regarding layoffs ~immediately after the Altera purchase.
But in more specific embedded cases -- meaning the vast majority of FPGA applications -- they are nothing alike. If I need to run a multichannel digital downconverter on a power budget of 1 watt, no GPU available in the foreseeable future is going to handle that role as well as an FPGA will.
Perhaps someday a hypothetical I/O-optimized GPU will be able to do the jobs that are currently done on FPGAs and ASICs, but definitely not now.
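To give a feel for why: one channel of a digital downconverter is just mix, filter, decimate, applied to every incoming sample forever. Roughly, in numpy terms (a sketch only; the parameters here are made up):

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 10e6      # input sample rate (illustrative)
    f_c = 1.25e6   # center frequency of the channel we want
    decim = 16     # decimation factor

    x = np.random.randn(1_000_000)  # stand-in for digitized IF samples
    n = np.arange(len(x))
    mixed = x * np.exp(-2j * np.pi * f_c / fs * n)      # mix the channel to baseband
    taps = firwin(128, cutoff=fs / (2 * decim), fs=fs)  # anti-alias low-pass FIR
    baseband = lfilter(taps, 1.0, mixed)[::decim]       # filter, then keep every 16th

An FPGA runs that as a hard-wired fixed-point pipeline, one multiply-accumulate per tap per clock, which is how it fits in a 1-watt budget where a GPU can't.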
Let's recall a previous acquisition, one that I totally didn't understand. An analyst had a really good quip about it:
JPMorgan analyst Christopher Danely upgraded Intel to overweight following the company's earnings, although he still struggles to reconcile Intel's recent acquisition spree.
"Intel might as well have bought Whole Foods," he said of the McAfee deal.
Compared to McAfee, acquiring Altera was brilliant.
As a public company Intel is always facing scrutiny from Wall Street. They need to constantly increase both revenue and earnings. What better way than via a strategic acquisition? Instant growth.
And Altera was a logical target. Altera sells high-margin silicon. Intel sells high-margin silicon. Intel was even fabbing some of Altera's parts, so they were quite familiar with the company. And maybe some smart people could eventually think of some synergy. E.g. once upon a time Intel made the IXP1200, a "network processor". What if, instead of designing custom silicon for something like that, perhaps a CPU + FPGA on a single die would be sufficient?
Makes much more sense than Intel acquiring McAfee.
If your work can be broken down into a thousand pieces that can each be computed independently, then using a GPU can get you ludicrous speedups.
Here is a good overview:
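To make the "independent pieces" point concrete, a minimal sketch, assuming a CUDA GPU and the cupy library (which mirrors numpy's API):

    import numpy as np
    import cupy as cp  # assumes a CUDA GPU is present

    x_cpu = np.random.rand(10_000_000).astype(np.float32)

    # Every element is independent, so the GPU can work on all of them at once.
    x_gpu = cp.asarray(x_cpu)               # copy input to device memory
    y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu)  # one launch, millions of parallel lanes
    y_cpu = cp.asnumpy(y_gpu)               # copy the result back to the host

The catch is that the two copies across the PCIe bus can easily cost more than the compute saved, which is the next comment's point.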
Neural nets are matrix multiplications and additions. They are generally memory-constrained on GPUs, and transfers between main and GPU memory are also a bottleneck.
I don't see how an FPGA will really help there.
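Back of the envelope (illustrative numbers, not measurements):

    # Multiplying two n x n float32 matrices:
    n = 4096
    flops = 2 * n**3             # one multiply + one add per inner-loop step
    bytes_moved = 3 * n * n * 4  # read A and B, write C
    print(flops / bytes_moved)   # ~683 flops per byte for n = 4096

    # A GPU with ~10 TFLOP/s but ~500 GB/s of bandwidth needs ~20 flops/byte
    # to keep its ALUs fed. Big square matmuls clear that bar easily, but the
    # skinny matrices in real nets, and host<->device copies, often don't.

An FPGA doesn't change that balance; it's fed by the same memory.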
To make a programmable gate, however, requires more power-inefficient transistor designs, usually using more transistors than an equivalent fixed-function gate (e.g. a programmable gate configured as an OR gate will be less efficient than an OR-only gate).
FPGAs are therefore more power-hungry per gate, but since a design only instantiates the logic the task actually needs, with none of a CPU's overhead for processing arbitrary instructions, you can make them very efficient for particular tasks.
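A toy way to see the per-gate cost: an FPGA's "gate" is really a small look-up table, so even a 2-input OR has to carry a stored truth table plus selection logic, where a hard-wired OR is a couple of transistors. Conceptually (not how real LUTs are built):

    # A 2-input LUT: four stored configuration bits, indexed by the inputs.
    def make_lut2(truth_table):
        """truth_table[2*a + b] is the output for inputs (a, b)."""
        def gate(a, b):
            return truth_table[2 * a + b]
        return gate

    or_gate = make_lut2([0, 1, 1, 1])   # same hardware, configured as OR
    and_gate = make_lut2([0, 0, 0, 1])  # ...or as AND, just different bits

    print(or_gate(1, 0), and_gate(1, 0))  # -> 1 0

The flexibility costs configuration storage and a mux per gate, which is the per-gate power penalty; the win is that the finished design contains only the gates the task needs.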
ARM has some equivalent components (bootloader, power management), but they're much, much simpler.
Yes, the rewrites were shitty.
Apple and Google also do it, but they can afford to push developers through those transitions.
But Microsoft might still win the productive-tablet space, as I have yet to find one as good as the Surface.
> "The company said it will continue to support tablets with a 3G derivative of the SoFIA chip, the older Bay Trail and Cherry Trail, as well as some upcoming Core chips."
I read that as saying it will be supported, but reading this source again, it indeed doesn't make clear whether that's actually the chip used in HoloLens. I'm sure they will find a solution, but it will be a shame if HoloLens gets much worse battery life because of this...
> 5G will become the key technology for access to the cloud and as we move toward an always-connected world.
Or does it in general work as well as developing software at multiple sites?
I remember working with ST's IPTV chips that were designed all over Europe, and bugs that involved the interaction between multiple hardware blocks were a disaster to get support on.
It's anti-competitive (selling chips below cost: imagine if, say, the meat industry used its profits to sell milk below cost; it would take out the "pure" milk companies), and very costly to Intel, which was losing $1 billion a quarter back then doing stuff like that. It's not going to do it anymore.
Microsoft had to make a stripped-down ARM-only version of Windows, Windows RT, because Intel's CPUs just weren't there. Then they abandoned that in favour of full x86 Windows once Intel's CPUs got there. But now Intel is gone.
This means Windows will now only be on larger, more laptop-like devices, I would assume? No more 7" tablets.
So they basically cut the subsidy (which is understandable) and didn't wait for their market segment to die as it surely would - they just killed it immediately.
> With their experience
In mobile/tablets? Did you read the article? There have only been a handful of Intel devices (and fewer cores) out there that Intel has had a part in. Virtually every mobile phone and tablet out there has been ARM (including Nokia's, in the past).
> fabrication process
You don't need to own a fabrication process to be competitive in mobile, since in that market cost and power efficiency matter more than raw performance, so OEMs will stick with a process that is good enough for now and then they'll all move to a newer one. This is not what the market competes on.
Yes, their tech is very performant. Yes, if I want to crunch numbers I will buy an Intel chip. But if my mobile phone had a Xeon, it would run out of battery in 30 minutes. Even Atom chips are an order of magnitude less power-efficient than other companies' mobile cores.
So, no, they can't 'rip the competition' in mobile easily.
And I think for nearly every one of these, Intel has had to pay the manufacturer to use Atom over ARM.