The lure of mobile is the upgrade cycle, as those cycles are much shorter. The same short upgrade cycles were a boon for Intel in the heyday of desktops and laptops.
The bigger threat the article fails to focus on is ARM.
Singularity for apps has always hinged on write-once-run-everywhere. Intel hasn't been able to compete with ARM lately, and ARM seems to be moving onto Intel's turf. A few months back Microsoft launched Windows 10 on ARM -- that's an inflection point. Apple's relatively recent iPad Pro also continues to blur the lines, along with other initiatives around app continuity (Mac/iOS), and the Nintendo Switch's hardware continuity offers a glimpse of what the same thing could look like for mobile in general.
It's not for Intel's lack of trying, either. It's just that ARM got off to a good start while Intel failed to make inroads with Atom and whatever else.
Microsoft has launched Windows for various non-x86 platforms over the years, including ARM, and it has always petered out. Maybe it will take off this time, but I wouldn't bet too much on it.
Some people might consider that a blessing in disguise with Slack.
I'm not sure what the behavior is; maybe in the best case Slack crashes and restarts itself. You could almost call that behavior a feature of 32-bit Slack: "automatic garbage collection".
I'm not seeing this in the business world. Yes, there are some tablets in use, but their numbers are still very small in comparison to desktop and laptop computers. And a huge majority of these users are definitely not what I would call "power users".
Give it 10 years and people will be keeping their phones until they break.
Also anyone can make an ARM CPU so the margins are thin, whereas with x86 CPUs Intel will basically only ever have one competitor - AMD. They want to find another monopoly.
The upgrade cycle has really just been a process of filling up the market. In fact I’d argue there are still a lot of old devices still in use beyond an ideal lifespan. There’s still going to be a much quicker refresh cycle than desktops for a while, but they’re likely to converge gradually.
What we might see is high end users keeping their flagship for longer, and refresh phones ‘downstream’ in the family to more recent midrange devices rather than passing on their own device and getting a new flagship. In fact, that’s what I just did for my teenage daughters.
We should be able to keep our devices until they die, but without regular software updates, you're playing with fire.
Now I do have a 10-year-old Core 2 Duo at 2.66 GHz with 4 GB RAM, Gigabit Ethernet and a nice 1920x1200 display serving as a Plex server. In day-to-day use, the only time I can tell the difference between it and my modern laptop is when I try to run too many things at once, and that could be alleviated by upgrading the RAM to 8 GB.
As far as the iPad 1st generation, it's true that you can't get software updates -- I have one too -- but you can still download the "last compatible version" of apps for it. I reset mine last year and re-downloaded and ran Netflix, Hulu, Crackle, Google Drive (for reading PDFs), Spotify, Plex, and all of Apple's iWork Suite.
It still supports AirPlay and prints to all of my printers. I can check my Exchange email at work. The browser on the other hand crashes constantly.
Intel gave up too early, in my opinion. The Zenfones (pre-Snapdragon) were great phones; I just don't think Asus alone had enough market penetration to make it worth Intel's while.
My experience with the Zenfone 2 was not very positive though. It seems to have been more Asus's fault than Intel's, but also Intel could have done a better job with their kernel. Yes the CPU was fast relative to its price (because Intel was dumping them) but almost everything else about it was not very good. Battery life was poor, seemingly because Intel never got core parking to work in their Android kernel (I also owned a Lenovo Atom tablet which had the exact same problem so I'm going to blame Intel here). Most apps ran pretty fast but anything with ARM binaries ran very slowly in emulation or in a few cases didn't work at all. ASUS's Android skin was terrible and had an obscene amount of preinstalled bloatware. There were loads of bugs, and installing updates was always a guessing game of whether it improved things or made it even worse. Sometimes GPS wouldn't work. The Marshmallow update was both extremely late and extremely buggy. The plastic back started developing cracks after about a year without ever dropping it.
The ability to run windows software in either Wine or a VM was a rather cool-but-useless novelty.
The "killers" that keep smartphone refresh rates from staying low are (1) battery degradation and (2) forced OS updates (required for app updates), which in turn slow the phone down and compel an upgrade.
I'm actually curious whether OS updates still slow down older phones these days. That was the case 5 years ago, but perhaps things have changed.
It was hobbled at the outset with 2GB RAM. At introduction performance was adequate. It has degraded since then. There are times when I exit an app and it takes several seconds before the home screen is populated. Navigation normally runs in the background but if I open a different app, it may get bumped and disappears from notifications. I listen to podcasts frequently and its background process (the part that keeps audio going if I open other apps) gets bumped.
These are not related to processor horsepower but I believe are symptoms of insufficient RAM. It could also result from additional installed applications, but I have uninstalled more than I have added and it seems not to improve.
Maybe it's time to look at third party ROMs.
Towards the end of its life its performance had degraded to the same point you experienced: multiple seconds for the home screen to populate, camera lag, etc. It's crazy that a phone with 2 GB of RAM is incapable of running recent versions of Android with decent performance, and even crazier when I'd open a task manager and see mandatory things like Google Play Services consuming almost 60% of the RAM on the device.
Short support cycles seriously dent the feasibility of keeping the same phone for even two years...
- Devices that support Verizon's bands & CDMA are mostly only sold as Verizon branded devices. (CDMA is less important every year but I still occasionally need 3g)
- ~All Verizon phones have locked bootloaders and no official way to unlock them.
- Manufacturers don't make the easy mistakes which let you unlock their bootloaders anymore.
I will not be a Verizon customer for much longer.
Advantage is that you buy for half the price of a new flagship.
Things will always improve, but for cameras, we are getting to the point where if your smartphone camera isn't good enough for you, you are probably at least an enthusiast and you probably want to get a real camera.
Since then, I've seen absolutely no game changing improvement.
On the Android side, it's sorta debatable whether Google Services is really part of the OS or not, but in any case, a Galaxy S4/5 with the latest google SW is a performance disaster. Maps regularly freezes for like 3-5 seconds at a time; its sluggishness is almost certainly causing additional auto accidents. Pretty much every other G app takes more than a second to open, despite being compiled-to-native-code during installation. Disabling airplane mode will cause 100% CPU on all cores for like 15+ seconds. Performance was certainly better when the phone was new 4-5 years ago.
They said that as the battery degrades, they limit CPU spikes that can abruptly kill the device even with a non-trivial amount of juice left.
That's not nearly the same as "slowing down the phone with every upgrade".
Often people have reported better performance after an OS upgrade, although obviously results may vary, and subjective impressions are always, well, subjective.
I haven't broken a phone yet, but I doubt I could keep using a phone for 10 years anytime soon.
Even if not, most phones being sold worldwide are cheap Android phones that don't have sophisticated anti-theft technology.
Laptop: 1 (spilt wine on it, was not able to repair it)
I agree for desktops, but laptops regularly get broken and stolen.
That's basically me right now, still have my trusty sgs2 running lineageOS and will continue to have it till it breaks
From an app developer's perspective, processor architecture is low on the list of problems you have to solve to run anywhere. User-facing apps aren't written in C or anything that close to the metal anymore; I suspect most Java, Kotlin and Swift code is completely architecture-agnostic: add the new target to the compiler, change the build command, and your ARM app becomes an x86 app.
C and its derivatives (C++ and Objective-C) are the outliers, thanks to implementation-defined numeric types and the abuse of pointer tricks and casts.
In fact, they still exist for microcontrollers: https://www.mikroe.com/mikrobasic.
So about the only pitfalls are assembly, SIMD intrinsics, and depending on implementation-defined behaviour, like whether signed right shift is arithmetic or logical.
Oh, and different memory-ordering guarantees. ARM is much more lax about reordering loads and stores, and will require locking and correct use of atomic operations.
EDIT: Forgot to mention that the C spec only specifies "at least X" sizes; there is no portability guarantee unless the types from stdint.h are used.
It was my first time programming C on something that wasn't a 32 bit CPU :)
It only goes out to 2014. It leaves one with the impression that ARM will soon pass Intel in performance.
This would suddenly eat into Intel’s profits.
Additionally, you can check the QVL for each motherboard, and then test it by hand.
*Edit - $750 is a lot for the cheapest threadripper + mobo. I'd be happy with 48 PCI-E lanes around $400 (CPU+Mobo).
$400 is really cheap even by consumer standards.
My hope is that Ryzen 2 shakes things up / Xeon 'Entry' offers a decent board.
The Xeon D chips are a total frigging racket.
Intel is not fighting for its life by any stretch of the imagination.
I guess you don’t see ARM attaining parity with x86?
On top of that, the ARM laptops that have been announced were significantly more expensive than equivalent x86 laptops, and the x86 emulation isn't good enough to count on.
ARM tech is on an intersect trajectory with x86 tech, and it’s only a matter of time before those lines cross. Up until then Intel will do fine. After that, all bets are off. Fortunately for Intel, they’re a couple of steps ahead of you in that at least they now see the problem.
Not trying to be pedantic, just so that no one gets confused.
Jim Keller, who's been on both sides of the table, x86 with AMD and ARM with Apple, expects that a fully developed ARM CPU will be 15-30% more efficient than their x86 counterparts.
What metric is that? FLOPS / Watt?
No, it is not decelerating; it's still growing. Besides, demand from other low-power applications such as IoT will only increase demand for low-power (mobile) chips.
> In 2017, Intel's revenue hit an all-time high, as it did the year before, and its stock is an all time high. This is not a company "fighting for its future."
" ... as Microsoft shows, revenue is a lagging indicator in the technology business." -- pg 
Given enough time, all investments will eventually mature.
In other words, revenue is about the past -- reaping the rewards from mature (technology) investments. Growth is about the future -- future returns can only come from nascent (technology) investments.
Growth in a new product with lots of potential applications is a much better indicator of a company's potential to continuing earning returns in the future, before that product reaches maturity (and revenue attributable to that product starts to decline). Intel needs a new product category to future-proof its existence.
EDIT: as pointed out by several commenters below, I incorrectly internalized your use of "decelerate" to mean "decline" but I'll leave my comment as-is.
You can see that the total market grew by less than 200M, as opposed to more than 500M last year. Definitely "decelerating".
That doesn't mean that growth will stop, though it might.
New opportunities in an existing market are just as effective at spurring growth. For example, there doesn't seem to be any reduction in appetite for servers, and Intel has been the market leader in this space for a while. Just because we've had a server market for over half a century doesn't mean it's lacking in potential growth.
If it's accurate, the following graph is quite telling:
Graph was part of the following article:
Apple is a company with its own silicon, but it will never be a true microelectronics company.
Google's TPUs are another exemplary case. Google spared no expense to get into the market first and secure a tech/platform lock-in. Yet what took them at least 5 years, and extremely expensive reticle-limit chips, was bested by a no-name Chinese fabless company that blew the TPUs out of the water on power/cost/performance.
All the American dotcoms actively trying to get into hardware are oblivious to the fact that there is an inherent difference between a hardcore engineering company and a company whose topmost technical expertise is underhanded web programming.
BTW, cloud providers are already making tons of customized hardware themselves. And talking as if Amazon/Google/Microsoft only have talent for web programming is short-sighted and simply not true.
Well, I am saying that they do obviously spend an enormous effort trying to do so. To better formulate what I said: the fact that they did acquire some hardware/semi expertise in house does not mean that this expertise drives them as a company.
If tomorrow two engineers, one from the semi side and one from webdev, knock on the CEO's door with "I found a way to do things A and B 250 times better, but you have to scrap half of your business plan," the webdev will be heard, and the semi guy will not.
Btw, the chip Amazon showed was from a company they acquired solely for that purpose -- so as not to pay an arm and a leg for high-end switching chips.
And about OEM servers -- it is surprising that the big dotcoms are latecomers to the party, having relied on off-the-shelf brand hardware until the very last moment.
Big hosting providers from outside the dotcom ecosystem have relied on direct OEM orders and custom-built DCs for more than a decade. I remember selling Atom-based single-board computers stuffed into 1U chassis, and Intel Core 2 Duo systems with soldered-on memory and CPUs, to budget web hosting guys back when I worked as a trainee at a trade company in 2007-2009.
What are you basing this assumption on? Have you worked for one of these companies? Do you know anyone that does?
Also, regarding Microsoft, any suggestion that they're focused on acquiring web devs is clearly short sighted. If you want a better idea of its priorities, I'd suggest taking a look at which sectors it earns its main revenue in, as well as taking a look at the work being done at Microsoft Research.
Surely do, both MS and Amazon.
>What are you basing this assumption on?
It takes a giant effort for ordinary managerial cadres to wrap their minds around what a web company is and learn the whole model of behavior expected of them. The few who manage to learn some basic technical disciplines and rise through the ranks tend to overestimate the importance of their experience.
You meet such people a lot in a dotcom setting. It takes great effort to persuade such a person to bother to understand yet another mentally voluminous subject that will break his idea of "cool" yet another time.
It is like trying to persuade a prideful child who has just learned to ride a tricycle to learn to ride a normal bike...
BTW, are you from Microsoft?
Thanks for confirming.
> "BTW, are you from Microsoft?"
No, I don't work for Microsoft. However, I have enough experience with their ecosystem to suggest that their revenue focus is not in web dev. Other products (such as Windows, Office, Azure and Xbox) are their prime source of revenue. Whilst I don't doubt they have plenty of web devs (TypeScript and VS Code both spring to mind as web-based tech from Microsoft), I wouldn't say that is their core competency, so...
> "a company whose topmost technical expertise is underhanded web programming"
... doesn't ring true. However, if you know people on the inside I'd be interested in knowing how the size of the web dev teams compares to other teams, such as the Xbox division.
I'm not familiar with what company or product you're referring to. What is it?
Cambricon -- possibly a scam, though it also claims performance on an equal level, with a moderately sound technical approach. They have never released much info other than fancy CG videos.
I'd take the statement "Unknown Chinese company beating Google with fewer resources and less time" with a huge grain of salt.
NovuMind only does 3x3 convolutions. Literally a hard-burnt ASIC. Also, the TPU's efficiency is close. (Btw, both their "FLOPS/watt" numbers are useless; they don't account for utilization.)
And Cambricon... well, I'll just leave it at that...
(Edited to answer the below question: Dave Andersen - http://www.cs.cmu.edu/~dga/
I'm back at CMU full time, but was having too much fun at Google to quit entirely.)
EDIT: Question answered :)
And what Chinese fabless was that?
Surely, he is a top cadre and an accomplished academic. The number of his students serving in CTO-level jobs is in the double digits.
The fact that Google went so far with cadres, and is said to be throwing high six-figure salaries even at people who came to the unit fresh out of university, clearly signifies the extent of their efforts and commitment. Their intent seems to be to throw money at it until it works, no matter the cost.
>Hardly clueless and inexperienced...
I never challenged the technical expertise per se. I'm saying that a generic web/dotcom/clickfarm business can't normally be turned into an engineering company of an inherently different nature, regardless of how much money is thrown at the exercise.
People say that for the past 5 years the TPU unit was effectively run like a research institute of some kind: regular workshops with academics, more research papers written than code, and so on.
The TPU unit people did their job splendidly, but Google's managerial unit that authorized the whole affair seems to me to be having a hard time wrapping its mind around how and what to do with it.
On the other hand, Ren Wu, being originally a semi engineer, had a very clear idea of what he wanted from the very start: off-the-shelf I/O IP, an AXI bus, wide registers, SRAM FIFOs, directly register-fed matrix multiplication units, and predominantly synchronous operation. Voila. No talk shops, no company turned into a research institute, no six-figure-salary cadres whatsoever. The chip might well be a one-man project.
To the parent above: the TPU didn't take five years -- I don't know where you got that. But in any event, I'd argue the key innovation in TPUv1 wasn't actually the design of a specific optimized processor: it was seeing the need a year or two before anyone else in industry did!
Also -- I think it's a translation problem: what's "reticle limit"? In any event, if you're comparing what someone produced today with TPUv1, that's a bit of a weird comparison considering that the v2 is live and available in beta. :)
The maximum area a stepper can expose at the time. Effectively the limit of how big you can make a single microchip.
Making a single-chip inference engine, which NovuTensor appears to be, is a very different thing. So it's better compared to TPUv1, which is also an inference engine. I can't find the die area of TPUv1 out there, but it's not a monster, as you can probably infer from its 75W TDP.
It would be helpful in this discussion to be precise whether you're comparing to TPUv1, Cloud TPU (the v2), or something else, and if you're talking about inference or training.
(I assume my disclaimer is already obvious - I work part time at Google Brain - and that everything I'm writing is my own opinion, etc., etc., etc.)
193i immersion steppers have a reticle limit of 858mm^2, the TPU is nowhere near that.
Yes, they have server-oriented processor families. No, after the storm that was Meltdown/Spectre, none of these processor families are still considered "safe enough", and even firmware-related fixes won't cut it, because the profitability of a data center is calculated with TFLOPS/m^2 in mind.
Even if profitability were not an issue (=> government), they still wouldn't cut it, because you really do not want a system with a vulnerability this big sitting around your high-security/high-impact infrastructure. I know of government agencies in my country that -- ever since the Meltdown/Spectre news broke -- have banned the purchase of Intel processors "until further notice".
Intel is in dire straits, and they should be worried.
That being said, government is a tiny market compared to business (cloud providers, etc.), so even if they all switched to IBM or the like, Intel still wouldn't be in trouble. It would be a tiny bump in the road.
These sorts of issues were foreseen a decade ago when speculative execution first hit the scene; we just never had a practical exploit (publicly known, anyway) until recently. And even then it really isn't that bad, despite all the security apocalypticists shouting that the end was nigh.
Oh sweet summer child.
And it doesn't matter because NOBODY ELSE DOES EITHER.
Intel is so much bigger than everybody else in the space in terms of volumes, that even if they vaporized tomorrow, it would take a couple years for everybody else to plug the hole.
Intel has quite a lot of runway to fix things.
Being in mobile also has the effect of being in IOT.
What happens if they don't spot the next server/cloud market, like they didn't spot mobile?
Add to that a new generation of envelope layer in the making (augmented reality) that will replace desktops and workstations, and Intel's huge castle seems truly built upon sand.
In addition, they also failed in the other big market -- massive parallelization (neural nets, cryptocurrencies, etc.). It just seems that Intel lost its ability to innovate quite a while ago and was kept upright only by its ability to monopolize markets.
I'm sure steam-engine companies had great share value, until the day they didn't.
What are your thoughts on that?
Wal-Mart is investing heavily in ecommerce which could threaten Amazon if they become successful. It's hardly accurate to say that Amazon is fighting for its survival.
Purely looking at those numbers: for the same number of transistors at $30, Intel was selling at anywhere between $80 and $300. Not to mention Intel would have had to invest in fabs just for Apple, as 100-200M units is no small number.
Intel cares about its margin, and its profitability. Sometimes we argue we should do things at lower margin to avoid the risk of being wiped out. Sometimes vice versa.
I am not entirely sure what Intel's execs were thinking. They were possibly driven out of the Apple and mobile business by fear of shareholder protest at lower margins, and driven back to mobile precisely because of shareholders.
And not only was Intel late to mobile; they were late to GPUs as well, and GPGPU in general. We are now looking at 2020 before a dGPU from Intel is out.
Basically, Intel hasn't been innovating for a few years. And maybe, not having been to war for a long time, they lost their sense of danger. Maybe they were typical Silicon Valley optimists, sure that everything was going to be fine.
Which, strangely, is the opposite of Andy Grove's "only the paranoid survive".
But then I don't see x86 ever losing out in the PC or datacenter market. Right now the biggest threat is AMD, and that is not an existential threat.
I also don't see how a Broadcom and Qualcomm merger would be an existential threat to Intel. Apple might not use Intel's modem, but that is not exactly a problem. Intel needed some customers to use its product so it could at least cover the R&D cost of the modem, which is getting insanely complicated. And mobile networks have already reached an inflection point where modems aren't going to see the rapid development and improvement we had over the last 10 years.
With the iPhone, we managed to move from the end of 2G to 3G to 4G and now to 5G, all within 10 years. We have reached a stage where top speed no longer sells. The market wants more data, or higher capacity, rather than an unattainable 1 Gbps top speed. We have massive MIMO and LAA, and in the next 5 years we will see anywhere from 4x to 20x capacity improvement, along with better reception. All these improvements are now bottlenecked by carriers upgrading their sites, assuming a steady rate of mobile phone upgrades.
Intel has a clear market of 1.5 billion PCs to upgrade, even if not every PC will get a modem. It is by no means a small market. And Apple has its own W2 with 802.11n and Bluetooth 4.1. It is only a matter of time before they have their own 802.11ac and Bluetooth 5, and possibly 802.11ax. All of these currently come from Broadcom. I am much more worried about Broadcom than about Intel.
Serial performance is important in processors, and as long as Intel is king in this area, they will have a home in server racks and workstations. I keep hearing about how cheap arrays of ARM processors will take over the world, but I'll believe the easy parallel programming unicorn has arrived when I see it. It seems more likely to me that when this is the case, a move to GPUs, FPGAs, or custom accelerator card would make more sense.
Of course, the field is young enough that this has only happened two or three times, so it may just be a coincidence. Still, I like the thought of ARM chips replacing Intel in ten years, and twenty years later ARM being replaced by whoever currently makes the chips for those musical greeting cards.
In the end, low-power chipmakers like ARM have nowhere to go but up -- they can slowly move up-market and, over time, enjoy the same high margins as entrenched players like Intel, but at much lower cost.
If margins in mobile are razor thin, then Intel can make bigger margins than anyone else by owning the fabs. And mobile's volume can justify owning fabs.
M&A is hard to do right -- most fail, and a staggering percent have all the surplus delivered to the seller instead of the buyer. See: https://www.mckinsey.com/business-functions/strategy-and-cor...
The real answer would speak to why (Intel thinks) this deal is not like the average, unsuccessful deal, and specifically why they would be able to use Broadcom's assets better than Qualcomm.
The answers here probably have something to do with datacenters and the energy/heat-efficiency gains when you stop being fabless, something to do with antitrust avoidance (Qualcomm is limited by antitrust concerns much more than Intel is; without those limits the value goes up), and something to do with cost and workflow concerns. Intel's current model also carries substantial risk: it is already hitting something of a natural ceiling on cost per manufacturing facility, driven largely by precision requirements -- that ceiling is what stopped Moore's law, I think, because no one can build a facility that costs even more than Intel's -- and developing business processes that can deliver classic Intel improvements in a more fabless-friendly way could reduce that risk.
This is all pure guesswork, but I think it gets closer to the actual discussions inside the companies.
I don't know which acquisition is better. I do think that Intel should probably enter the ARM market somehow, just to hedge their bets.
Intel has an architectural license for 32-bit ARM (https://en.wikipedia.org/wiki/Arm_Holdings#Arm_architectural...), so they could design their own micro-architecture whenever they want to. I don’t know whether they have one for 64-bit ARM, but they have money, so I expect it wouldn’t be difficult to get one.
That way, if things go wrong with x86, they can at least pull a Yahoo - Alibaba.
Intel thought we used too many CPUs?!.....
All these years later, I'm still in disbelief. I do realize that VCs often are uncomfortable giving the real reason they're passing, but "too many CPUs" was a first for me. (And my company is still around and making more than $25MM/year, thank you very much.)
This helped drive home the intuition that large companies are not a single entity, but an amalgamation of different groups and individuals, each with a potentially different motivation.
Before your firm can buy a ton of Intel CPUs and make money for Intel, it needs to be profitable. It doesn't make sense for their VC arm to give you money to buy CPUs, then fail to sell your product and go bankrupt.
For example, a company might be developing some cool new IP that Intel is considering licensing or is licensing. Intel Capital will take a stake in the company, and then eventually might consider acquiring the company if they want to corner the technology or think it's cheaper than a long term licensing agreement.
Windows 7 and Windows 10 have been pretty efficient operating systems. That's great for Microsoft but not for Intel: with identical system requirements, there's very little reason for most people to upgrade hardware. I, a supposedly high-end PC user, am still running a 3rd-generation Core i5 with no urgent need to upgrade in the near future. Graphics cards are still a once-every-2-3-years upgrade for PC gaming.
All of my friends who run it must be using it wrong because that is not my experience.
It seems to me that most people would do a bit of research, see that Intel is still destroying AMD on speed tests, or they go to Best Buy and see a load of Intel stickers. The sales person will rightly or wrongly tell the customer that Intel is better, and so on.
The main point is that most people are not power users or knowledgeable about computers. That's why you can read reviews online about how Dell is horrible because "it catches viruses."
Intel integrated GPUs have no place at all in deep learning. They are perfectly good for emulating the 2001 GameCube and playing classics like Doom and Quake, but they have a bad enough reputation among gamers that if and when Intel does come out with a discrete card, they will have to brand it as something other than Intel.
They certainly want to compete in the discrete GPU space, but I think they should be much more scared of losing CPU market share.
>"If the dispute is settled, Intel loses its wireless modems deal with Apple. No mobile CPUs + no modems = nothing of substance."
Aren't nearly all processors in mobile devices a SoC these days? Isn't the "wireless modem" just an LTE radio on the same SoC that has an 802.11 radio for wifi? I'm not understanding how these would be separate in a device like an iPhone.
I'm really intrigued by stories of missed opportunities. Certain companies have all of the power then make a minor miscalculation on the future of technology. Does anyone have similar personal experiences that echo this type of missed opportunity deal?
It happens all the time, in matters big and small. A company takes a bet on something, it fails, and afterwards people (even on the inside) are scratching their heads and thinking "how did we ever think that was going to work?". Most of the time, nobody outside the company hears about it.
When companies are powerful, their power is generally concentrated in a very narrow domain. Stray just a little outside that domain, and they're just as clueless as anyone else (although perhaps with deeper pockets).
Having said that, Intel are going nowhere overnight.
Also, why is Krzanich still CEO? He's made terrible decision after terrible decision. Buying Broadcom now reminds me a lot of Ballmer wanting to buy Yahoo for $40 billion. In hindsight that looks like a terrible deal, doesn't it? I think if Intel buys Broadcom, this will look the same in 10 years.
Intel has strong conflicts of interest with non-x86 chip divisions, which is what Broadcom would be; I'm sure it's the same with Altera, too. In a few years they'll regret buying the FPGA business as well.
The "synergy" is a lie.
Interesting to see that such a giant can really start shaking so badly from losing one of its sources of profit. I think even if the whole desktop market dies, Intel will still make more money than most companies. Sure, maybe they'd need to shrink by 80%, but the remaining 20% is far from a small company.
It just wants to thwart a [Qualcomm, Broadcom] combination that's likely to aim at Intel's jugular -- the cloud/server market.
Between switch ASICs like the Trident II, RAID controllers and SAS HBAs, 802.11 chips, DOCSIS modem chips, optics, etc., they are almost everywhere even if you don't see them.
There is not a world where there will be one processor manufacturer. We will see multiple manufacturers on multiple architectures for as long as things continue to progress.
Intel may have a short-term existential threat, but that threat is minimal.
If it doesn't have the consumer CPU volumes (notebook, desktop, mobile) then production costs for server chips will be much higher, and it won't have those nice margins.
Perfect storm if AMD continue to pile on pressure and ARM licensees start making inroads into servers.
Can you please elaborate on this?