Intel Fights for Its Future (mondaynote.com)
226 points by KKKKkkkk1 7 days ago | 192 comments

This article makes it seem that being a major player in mobile is the only way Intel could survive. Mobile is a decelerating market, while cloud and servers, where Intel has a huge lead, are an expanding and profitable market. In 2017, Intel's revenue hit an all-time high, as it did the year before, and its stock is at an all-time high. This is not a company "fighting for its future."

The headline is definitely sensationalist, but there are threats on Intel's horizon and missed opportunities in its past.

The lure of mobile is the upgrade cycle, which is much shorter. The same short upgrade cycles were a boon for Intel in the heyday of desktops/laptops.

The bigger threat the article fails to focus on is ARM. Singularity for apps has always hinged on write-once-run-everywhere. Intel hasn't been able to compete with ARM lately, and it seems that ARM is moving onto Intel's turf. A few months back Microsoft launched Windows 10 on ARM -- that's an inflection point. Apple's relatively recent iPad Pro also continues to blur the lines, along with other initiatives around app continuity (Mac/iOS) and the Nintendo Switch's hardware continuity, which offers a glimpse of what the same thing could look like for mobile in general.

It's not for lack of trying on Intel's part, either. It's just that ARM got off to a good start while Intel failed to make inroads with Atom and whatever else.

> A few months back Microsoft launched Windows 10 on ARM -- that's an inflection point.

Microsoft has launched Windows for various non-x86 platforms over the years, including ARM, and it has always petered out. Maybe it will take off this time, but I wouldn't bet too much on it.

But this time it's (almost) full Windows, and it can run x86 applications.

Clarification: only 32-bit apps, not 64-bit apps, which the Windows world has almost fully moved to. And I'm sure they'd run at reduced performance as well.

Most non-compute-intensive programs are still compiled for 32-bit, or have 32-bit versions available. Also, Visual Studio still defaults to 32-bit when creating a new C++ project.

You can even get Slack for Windows in 32-bit, although I suspect you'd hit the nasty 4gb limitation pretty quickly

I thought it was odd that a single application would even hit the limit. Then I realized Electron.


Some people might consider that a blessing in disguise with Slack.

I'm not sure what the behavior is, maybe best case Slack crashes and restarts itself. You could almost call that behavior a feature of 32bit Slack: "automatic garbage collection"

Garbage collection isn't strong enough: it doesn't collect the garbage generator which is Slack


Almost anything compiled for 32-bit Windows would work just as well on a beefy tablet. Laptops and Desktops are, more and more, being reserved for power users and computationally heavy tasks.

> Laptops and Desktops are, more and more, being reserved for power users and computationally heavy tasks

I'm not seeing this in the business world. Yes, there are some tablets in use, but their numbers are still very small in comparison to desktop and laptop computers. And a huge majority of these users are definitely not what I would call "power users".

Reminds me of the early nineties, when MS was keeping everyone in the 16-bit world. Win 95 was this amazing kludge that straddled 16 and 32 bits. Thankfully they're not the monopoly they once were.

For what it's worth, Windows NT 4.0 ("full Windows") ran on DEC Alpha and, using FX!32 emulation, executed x86 Win32 apps. There was even a 64-bit build of NT running internally at Microsoft as a proof of concept, but it wasn't publicly released.




Yeah but the upgrade cycle won't stay short for long, just like it hasn't for desktops.

Give it 10 years and people will be keeping their phones until they break.

Also, anyone can make an ARM CPU, so the margins are thin, whereas with x86 CPUs Intel will basically only ever have one competitor: AMD. They want to find another monopoly.

We’re already keeping phones until they break, by handing them down through the family. I’ve bought 5 iPhones and all but 2 of them are still in use. In fact the second one, a 3GS, was only fully retired at the beginning of last year.

The upgrade cycle has really just been a process of filling up the market. In fact, I'd argue there are a lot of old devices still in use beyond their ideal lifespan. There's still going to be a much quicker refresh cycle than desktops for a while, but they're likely to converge gradually.

What we might see is high-end users keeping their flagships for longer, and refreshing phones 'downstream' in the family with more recent midrange devices rather than passing on their own device and getting a new flagship. In fact, that's what I just did for my teenage daughters.

The problem with mobile devices is that they long outlast their manufacturer's software support. My iPad, for instance: the first device was announced in 2010, and software support ended with 5.1.1 in 2012. Two measly years of updates. This thing has not had an official update (including security updates) in nearly 6 years. Your 3GS hasn't been supported for 4 years. Whereas I can keep a desktop computer for 20 years and update its software throughout that time.

We should be able to keep our devices until they die, but without regular software updates, you're playing with fire.

Keep using a computer for 20 years? A 20 year old computer is not going to run the latest version of Windows.

Now, I do have a 10-year-old Core 2 Duo 2.66 GHz with 4 GB RAM, Gigabit Ethernet, and a nice 1920x1200 display serving as a Plex server. In day-to-day use, the only time I can tell the difference between it and my modern laptop is when I try to run too many things at once, and that could be alleviated by upgrading the RAM to 8 GB.

As for the first-generation iPad, it's true that you can't get software updates -- I have one too -- but you can still download the "last compatible version" of apps for it. I reset mine last year and re-downloaded and ran Netflix, Hulu, Crackle, Google Drive (for reading PDFs), Spotify, Plex, and all of Apple's iWork suite.

It still supports AirPlay and prints to all of my printers. I can check my Exchange email at work. The browser on the other hand crashes constantly.

In Europe, where most of us use pre-pay without major subsidies, everyone uses them until they break or get stolen.

I think that's very dependent on the target demographic and country. There's definitely some delusion in it too: many people I know will go on about planned obsolescence and so forth in a put-the-world-to-rights speech, and then whip out the latest iPhone or Samsung flagship, which they have on contract, to call an Uber.

Intel gave up too early, in my opinion. The Zenfones (pre-Snapdragon) were great phones; I just don't think Asus alone had enough penetration to make it worth Intel's while.

What was the difference between Zenfones pre-Snapdragon and post?

The Intel CPUs were performance-competitive with the SD8xx chips at the time but most of the newer zenfones use slower SD6xx chips.

My experience with the Zenfone 2 was not very positive though. It seems to have been more Asus's fault than Intel's, but also Intel could have done a better job with their kernel. Yes the CPU was fast relative to its price (because Intel was dumping them) but almost everything else about it was not very good. Battery life was poor, seemingly because Intel never got core parking to work in their Android kernel (I also owned a Lenovo Atom tablet which had the exact same problem so I'm going to blame Intel here). Most apps ran pretty fast but anything with ARM binaries ran very slowly in emulation or in a few cases didn't work at all. ASUS's Android skin was terrible and had an obscene amount of preinstalled bloatware. There were loads of bugs, and installing updates was always a guessing game of whether it improved things or made it even worse. Sometimes GPS wouldn't work. The Marshmallow update was both extremely late and extremely buggy. The plastic back started developing cracks after about a year without ever dropping it.

The ability to run Windows software in either Wine or a VM was a rather cool-but-useless novelty.

Thanks for the info.

Subsidies baked into the cell plan are mostly gone in the US too, replaced with monthly installments that superficially are the same but in practice allow you to pay up front and then only pay for cell service on a monthly basis.

Not everyone, but many people do.

Even today I see an increasing number of friends moving from 2-year cycles to 3-year cycles.

The "killer app" for smartphone refresh rates staying low are (1) battery degradation and (2) forced OS updates for app updates, which in turn slows the phone down to compel an update[1].

[1] I'm actually curious whether OS updates slow down older phones these days. This was the case 5 years ago, but perhaps things have changed.

I have a Nexus 5X that turned 2 years old late last year. At that time it got its last major OS upgrade. (It will continue to get security updates for another year or so.) This is as good as it gets in the Android world.

It was hobbled at the outset with 2 GB of RAM. At introduction, performance was adequate; it has degraded since then. There are times when I exit an app and it takes several seconds before the home screen is populated. Navigation normally runs in the background, but if I open a different app, it may get bumped and disappear from notifications. I listen to podcasts frequently, and the podcast app's background process (the part that keeps audio going if I open other apps) gets bumped too.

These are not related to processor horsepower; I believe they are symptoms of insufficient RAM. It could also be the result of additional installed applications, but I have uninstalled more than I have added and it doesn't seem to improve.

Maybe it's time to look at third party ROMs.

I had a 5X that I recently replaced due to the common boot loop failure.

Towards the end of its life its performance had degraded to the same point you experienced. Multiple seconds for the home screen to populate, camera lag, etc. It's crazy that a cell phone with 2GB of ram is incapable of running recent versions of Android with decent performance and even crazier when I'd open up a task manager and see mandatory things like Google Play Services consuming almost 60% of the ram on the device.

Could it be a case of a degraded battery, like the iPhones? If the battery can't supply enough current, the system has to slow down to avoid a hard crash.

As a grumpy Moto G 3 owner, I would love to even get an OS update that slowed my phone down.

Short support cycles seriously dent the feasibility of keeping the same phone for even two years...

Switch to LineageOS. The Moto G series is well supported up to Nougat. It will not be any slower either.

I assume aftermarket ROMs are a non-option for you?

As a Verizon customer:

- Devices that support Verizon's bands & CDMA are mostly only sold as Verizon branded devices. (CDMA is less important every year but I still occasionally need 3g)

- ~All Verizon phones have locked bootloaders and no official way to unlock them.

- Manufacturers don't make the easy mistakes which let you unlock their bootloaders anymore.

I will not be a Verizon customer for much longer.

Depending on your device, installing an aftermarket ROM can range from "easily accomplished from initial query in a search engine to ROM installed within 1 hour" to "I am stuck in a nightmare of poorly written, contradictory forum posts written in a mishmash of languages with lots of instructions that say 'this is easy, but WARNING COULD BRICK YOUR DEVICE'." Who has the time or patience for such confusion?

I've been doing a variant of the 3 year cycle: buy a year old model (e.g. I bought an S7 just after the S8 came out), and then keep it for 2 years (it's even possible I'll be able to keep this one longer. It's not feeling slow yet).

Advantage is that you buy for half the price of a new flagship.

For me, the camera is one major reason to upgrade every 1.5-2 years. But I buy a phone (Asus) that is at least 50% cheaper than most of the high-end phones, so I figure I'm coming out ahead. They probably have better cameras, but mine has been good enough until the next refresh cycle.

For me, cameras became good enough around 2014; before then, smartphone cameras tended to turn low-light pictures into a blurry mess.

Things will always improve, but for cameras, we are getting to the point where if your smartphone camera isn't good enough for you, you are probably at least an enthusiast and you probably want to get a real camera.

Since then, I've seen absolutely no game changing improvement.

> if OS updates slow down older phones these days

On the Android side, it's sorta debatable whether Google Services is really part of the OS or not, but in any case, a Galaxy S4/5 with the latest google SW is a performance disaster. Maps regularly freezes for like 3-5 seconds at a time; its sluggishness is almost certainly causing additional auto accidents. Pretty much every other G app takes more than a second to open, despite being compiled-to-native-code during installation. Disabling airplane mode will cause 100% CPU on all cores for like 15+ seconds. Performance was certainly better when the phone was new 4-5 years ago.

I have an iPhone 6S - I'd say OS updates do slow the phone down, but they also tend to be really buggy and generally rubbish for a while. I think it's probably more bad engineering and software quality than anything nefarious.

My experience with Android is the opposite, it's gradually better and faster with every release.

Apple is intentionally slowing down phones with every OS upgrade to "preserve the battery" [1].

[1] http://money.cnn.com/2017/12/21/technology/apple-slows-down-...

That's not at all what they said.

They said that as the battery degrades, they limit CPU spikiness that can abruptly kill the device even with a non-trivial amount of juice left.

That's not nearly the same as "slowing down the phone with every upgrade".

Often people have reported better performance after an OS upgrade, although obviously results may vary, and subjective impressions are always, well, subjective.

How many times have you had to replace a PC because you dropped it, left it in a cab, etc.?

I haven't broken a phone yet, but I doubt I could keep using a phone for 10 years anytime soon.

With the new anti-theft systems on smartphones, I can imagine laptops experiencing theft more frequently than smartphones.

In aggregate? There are far more smartphones in use world wide than laptops.

Even if not, most phones being sold worldwide are cheap Android phones that don't have sophisticated anti-theft technology.

Phones are carried and used in places and situations that a laptop is not, and this creates a lot more opportunities for theft.

Phone: 1 (dropped; screen broke but I was able to repair it)

Laptop: 1 (spilt wine on it, was not able to repair it)

I agree for desktops, but laptops regularly get broken and stolen.

Something you keep in your pocket along with your keys, that will be dropped ten times, and on which you will regularly sit will inevitably have a shorter life than a laptop. And for the rare people who take good care of their smartphone, there is the matter of non-replaceable batteries in a non-repairable device.

On the other hand, all of the above are also reasons why - one could argue - the mobile market is going to be commoditized more and more.

>Give it 10 years and people will be keeping their phones until they break.

That's basically me right now. I still have my trusty SGS2 running LineageOS and will continue to use it till it breaks.

I'm most of the way there now. Honestly, all I want is call, text, email, web-browser, and PDF docs.

> Singularity for apps has always hinged on write-once-run-everywhere.

From an app developer's perspective, processor architecture is low on the list of problems you have to solve to run anywhere. User-facing apps aren't written in C or anything that close to the metal anymore; I suspect that most Java, Kotlin, and Swift code is completely architecture-agnostic: add the new target to the compiler, change the build command, and your ARM app becomes an x86 app.

Even native languages close to the metal (Ada, Pascal, Basic, ...) don't have any big issue switching processors, unless inline Assembly is used.

C and its derivatives (C++ and Objective-C) are the outliers, thanks to implementation-defined numeric types and the abuse of pointer tricks and casts.
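
To make that concrete, here's a toy C snippet (my own illustration, not anything from the article): plain integer types only have minimum widths, so the same source can silently change behaviour between platforms unless you reach for the fixed-width types from <stdint.h>:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* "long" is 32 bits on 64-bit Windows but 64 bits on 64-bit Linux/macOS,
         * so code that stuffs a pointer or file offset into it breaks silently
         * when recompiled for a different platform. */
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));

        /* The fixed-width types from <stdint.h> are the portable alternative. */
        int64_t offset = INT64_C(5) * 1024 * 1024 * 1024;  /* 5 GiB everywhere */
        printf("offset         = %lld bytes\n", (long long)offset);
        return 0;
    }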

> "native languages close to the metal ... BASIC"


A long time ago in a galaxy far, far away, there used to be lots of Basic compilers for doing systems programming on the Z80, 68000, VAX, 80x86 ....

In fact, they still exist for microcontrollers: https://www.mikroe.com/mikrobasic.

I once had to help port a large system written in Ada from SPARC/Solaris to x64/RHEL. The only issues we ran into were predominantly related to the endianness difference between the two architectures, and that was really only because our code did a lot of low-level bit manipulation :)

I have a question out of curiosity. I am not an Ada developer, but I have two who work for me. Another retired in Oct '17 after successfully porting our project from Solaris to RHEL 6. He told me at the time that we could not go back and compile the code for a Solaris system, even though we still have the Solaris box we originally compiled on. Are there any issues you can see preventing us from going back and compiling the code for Solaris?

And ARM is little endian in all currently deployed processors.

So about the only pitfalls are assembler, SIMD intrinsics, and relying on implementation-defined behaviour such as whether signed right shift is arithmetic or not. Oh, and different thread-safety guarantees: ARM is much more lax about ordering loads and stores and will require locking and correct use of atomic operations.
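
To illustrate that last point, here's a minimal C11 sketch (names made up, just an illustration): a publish/consume pattern that often happens to work on x86's stronger memory ordering but needs explicit acquire/release atomics to be correct on ARM:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical producer/consumer handoff. On x86 the strong hardware
     * ordering often hides a missing barrier; on ARM's weaker memory model
     * the explicit release/acquire pair below is mandatory. */
    static int payload;                /* plain data written by the producer */
    static atomic_bool ready = false;  /* flag that publishes the payload    */

    void producer(void) {
        payload = 42;
        /* release: makes the payload write visible before the flag flips */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    int consumer(void) {
        /* acquire: pairs with the release; without it an ARM core may see
         * ready == true while still reading a stale payload */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;  /* spin */
        return payload;
    }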

That's a lot of compiler work to enable... the C spec mandates fixed-width numerics AFAIK... try imagining 64-bit division using 32-bit registers.

I don't need to imagine it, I started programming on Z80.

EDIT: Forgot to mention that the C spec only specifies "at least X" sizes; there is no portability guarantee if the types from stdint.h aren't used.


Reminds me of the time I had to program a Z80-based controller board. It came with a non-standard C Compiler where integers were 16-bits long. Of course, I didn't realize this at first (I suppose I could have read the documentation, but...), so I had to figure it out for myself while debugging :)

It was my first time programming C on something that wasn't a 32 bit CPU :)

Well, there is -- if you only rely on values that fit within the "at least" sizes.

The picture at the beginning emphasizes ARM's rapid improvements.

It only goes out to 2014. It leaves one with the impression that ARM will soon pass Intel in performance.

This would suddenly eat into Intel’s profits.

All I want is a single socket CPU and a motherboard that supports ECC/isn't total garbage. AMD has said it's on the motherboard manufacturers, and I haven't been able to get a straight answer as to whether ECC works as it should on motherboards that support Ryzen - https://www.reddit.com/r/Amd/comments/80wpd1/ryzen_2_ecc_sup...

Here's a list of Ryzen motherboards that people have tried and succeeded in using ECC with: http://www.overclock.net/forum/11-amd-motherboards/1629642-r...

Additionally, you can check the QVL for each motherboard, and then test it by hand.


*Edit - $750 is a lot for the cheapest threadripper + mobo. I'd be happy with 48 PCI-E lanes around $400 (CPU+Mobo).

You can get a non-Threadripper Ryzen CPU+mobo with ECC support for far less than $400. Obviously it lacks the PCIe lanes of the TR setup. These are both ridiculously cheaper than you could get similar capability for before Ryzen launched. If you really want 48 PCIe lanes for $400 you can get a used Xeon X99 setup for that much.

$750 for "workstation" class is really inexpensive.

$400 is really cheap even by consumer standards.

Really? I don't need high clock speeds, I'm looking for a reasonable (48) number of PCI-E lanes and ECC support.

That's called workstation class. Most consumers need barely 24.

To me that overclock post was just further demonstration of the sad state of affairs re. ECC

Ryzen Threadripper and the TR4 socket are advertised with ECC support on the AMD website¹², in contrast to regular Ryzen/AM4. This may mean that all motherboards support it. I can't be sure, however, ECC seems to work fine on ASRock X399 Taichi.

¹ https://www.amd.com/en/products/ryzen-threadripper ² https://www.amd.com/en/products/str4-platform

Why not buy a server board? Edit: It's on the motherboard manufacturers, not really the chip makers.

The C236 chipset is old as hell (2015) - they're holding back on adding PCI-E lanes, because they can.

My hope is that Ryzen 2 shakes things up / Xeon 'Entry' offers a decent board.

The Xeon D chips are a total frigging racket.

Totally agree regarding Xeon D statement

Yes. I must say I chuckled a bit when I saw the headline. Intel is still churning out 63% gross margins. On semiconductors. That is exceptional.

Intel is not fighting for its life by any stretch of the imagination.

RIM was at its most profitable three years after the iPhone came out....

My take is that the analysis refers to Intel's long-term prospects, not today's or yesterday's revenue. These reports serve as a foundation to guide decisions on future long-term investments.

Intel is fine as long as ARM doesn’t compete in their core markets. Then it’s a sudden and violent change.

I guess you don’t see ARM attaining parity with x86?

Apple is the only manufacturer that actually produces competitive processors. Qualcomm is lagging behind x86 by at least a factor 2 compared to laptop chips and by factor 6 compared to desktop chips.

On top of that, the ARM laptops that have been announced were significantly more expensive than equivalent x86 laptops, and the x86 emulation isn't good enough to count on.

You're talking about what's on the market right now, or coming out based on existing tech. It's exactly that kind of short-term "everything's fine, we're doing great as we are" thinking that the OP article is deconstructing. It's what killed Nokia, Blackberry, and Palm, as he pointed out. Chip fabricators like Intel are making investments now that won't affect the market for 6 years (the 2014 announcement of the expected launch of 450nm tech in 2020).

ARM tech is on an intersecting trajectory with x86 tech, and it's only a matter of time before those lines cross. Up until then, Intel will do fine. After that, all bets are off. Fortunately for Intel, they're a couple of steps ahead of you in that at least they now see the problem.

*450mm, not nm. Referring to the wafer size.

Not trying to be pedantic, just so that no one gets confused.

Jim Keller, who's been on both sides of the table, x86 with AMD and ARM with Apple, expects that a fully developed ARM CPU will be 15-30% more efficient than their x86 counterparts.

ARM is looking like it'll be a major competitor in the expanding server market, judging by preliminary benchmarks: https://blog.cloudflare.com/arm-takes-wing/

> Qualcomm is lagging behind x86 by at least a factor 2

What metric is that? FLOPS / Watt?

What makes ARM fundamentally better than Intel? The instruction set? I just don’t see it. Intel has an incomprehensibly massive installed base of compatible software. All other things being equal, why does ARM catch up?

Right now, judging from Cloudflare's benchmarks, the win will be in TDP. [1] Heat is a very nasty issue to handle in large datacenters and can end up costing quite a bit in cooling. Intel's been making some great strides in this area, but ARM chips like Qualcomm's Centriq are pretty tempting in this space.

[1]: https://blog.cloudflare.com/arm-takes-wing/

I guess it depends on what sort of compute. For a heavy monolithic computation, isn't CISC a structurally faster approach? For lots of cheap, low-wattage micro-services/containers, I could see ARM being more efficient/cheaper.

> Mobile is a decelerating market, while cloud and servers, where Intel has a huge lead is an expanding market and profitable.

No, it is not decelerating, it's still growing [0]. Besides, demand from other low-power applications such as IoT will only lead to increased demand for low-power (mobile) chips.

> In 2017, Intel's revenue hit an all-time high, as it did the year before, and its stock is an all time high. This is not a company "fighting for its future."

" ... as Microsoft shows, revenue is a lagging indicator in the technology business." -- pg [1]

Given enough time, all investments will eventually mature. In other words, revenue is about the past -- reaping the rewards from mature (technology) investments. Growth is about the future -- future returns can only come from nascent (technology) investments.

Growth in a new product with lots of potential applications is a much better indicator of a company's potential to continuing earning returns in the future, before that product reaches maturity (and revenue attributable to that product starts to decline). Intel needs a new product category to future-proof its existence.

[0] http://www.asymco.com/2018/02/27/the-number/

[1] http://www.paulgraham.com/ambitious.html

EDIT: as pointed out by several commenters below, I incorrectly internalized your use of "decelerate" to mean "decline" but I'll leave my comment as-is.

That's a misleading chart, since it shows "cumulative sales". Here are the complete numbers: http://communities-dominate.blogs.com/brands/2018/02/smartph...

You can see the total market grew by less than 200M units, as opposed to more than 500M last year. Definitely "decelerating".

You make a fair point and I'll concede my error in equating deceleration with decline. But I think most of my points still stand.

Decelerating doesn't mean that it's no longer growing. It means that growth is declining over time, so growth today is less rapid than growth a year ago.

That doesn't mean that growth will stop, though it might.

You make very good points except I think you are confused by what decelerating means. A market may be growing and at the same time be decelerating, as in it is growing albeit at a slower rate of growth each year or quarter.

Decelerating != declining. The claim is that the growth is slowing, not that it's not growing anymore.

> "Growth in a new product with lots of potential applications is a much better indicator of a company's potential to continuing earning returns in the future"

New opportunities in an existing market are equally as effective in spurring on growth. For example, there doesn't seem to be any reduction in appetite for servers, and Intel has been the market leader in this space for a while. Just because we've had server market for over half a century doesn't mean it's lacking in potential growth.

If it's accurate, the following graph is quite telling:


Graph was part of the following article:


But as cloud service providers grow stronger, they will begin to make their own chips, like the TPU. That is a much bigger threat to Intel. Essentially, as Moore's law effectively comes to an end, people will seek to make their own customized chips, and the CPU will be marginalized as time goes by.

Except they wouldn't. It took Apple a decade to bring their silicon up to the level where it is now. For most of that time, the activity was a money sink for them. And even today, even with some of the best talent in IC design one can get in the USA (PA Semi and its veteran designers from HP, Sun, Freescale, and whatever else remained of the American semi industry), their chips are still half outsourced -- 70% external IP if you remove SRAM from the calculation.

Apple is a company with its own silicon, but it will never be a true microelectronics company.

Google's TPUs are another exemplary case. Google spared no money to get into the market first and secure a tech/platform lockdown. Yet what took them at least 5 years, and extremely expensive-to-make reticule-limit chips, was bested by a no-name Chinese fabless company that blew TPUs out of the water on power/cost/performance ratio.

All the American dotcoms that are actively trying to get into hardware are oblivious to the fact that there is an inherent difference between a hardcore engineering company and a company whose topmost technical expertise is underhanded web programming.

I wouldn't be so certain. ASIC/FPGA chips that accelerate specific database operations are already a reality. The problem is the cost/benefit ratio.

BTW, cloud providers are already making tons of customized hardware themselves. And talking like Amazon/Google/Microsoft only have talent for web programming is short-sighted and simply not true.


>And talking like Amazon/Google/Microsoft only have talent for web programming is short-sighted and simply not true.

Well, I am saying that they obviously do spend an enormous effort trying to do so. To better formulate what I said: the fact that they did acquire some hardware/semi expertise in-house does not mean that this expertise drives them as a company.

If tomorrow two engineers, one from the semi side and one from webdev, knock on the CEO's door with "I found a way to do things A and B 250 times better, but you have to scrap half of your business plan," the webdev will be heard, and the semi guy will not.

Btw, the chip Amazon showed was from a company they acquired solely for that purpose -- so as not to pay an arm and a leg for high-end switching chips.

And about OEM servers - it is surprising that the big dotcoms are latecomers to the party and were relying on off-the-shelf brand hardware until the very last moment.

Big hosting providers from outside the dotcom ecosystem have been relying on direct OEM orders and custom-built DCs for more than a decade. I remember selling Atom-based single-board computers stuffed into 1Us, and Intel Core 2 Duo systems with soldered-on memory and CPUs, to budget web hosting guys back when I worked as a trainee at a trading company in 2007-2009.

> "If tomorrow 2 engineers, one from semi side, and one fromwebdev, will knock at the door of CEO with "I found a way to do things A and B 250 times better, but you have to scrap half of your business plan," a webdev will be heard, and semi guys not."

What are you basing this assumption on? Have you worked for one of these companies? Do you know anyone that does?

Also, regarding Microsoft, any suggestion that they're focused on acquiring web devs is clearly short sighted. If you want a better idea of its priorities, I'd suggest taking a look at which sectors it earns its main revenue in, as well as taking a look at the work being done at Microsoft Research.

>Do you know anyone that does?

Surely do, both MS and Amazon.

>What are you basing this assumption on?

It takes a giant effort for ordinary managerial cadres to wrap their minds around what a web company is and learn the whole model of behavior expected of them. The few who manage to learn some basic technical disciplines and go up in the ranks tend to overestimate the importance of their experience.

You meet such people a lot in a dotcom setting. It takes great effort to persuade such a person to bother putting in the effort to understand yet another mentally voluminous subject that will break his idea of "cool" yet another time.

It is like trying to persuade a prideful child who just learned how to drive a tricycle to learn to drive a normal bike...

BTW, are you from Microsoft?

> "Surely do, both MS and Amazon."

Thanks for confirming.

> "BTW, are you from Microsoft?"

No, I don't work for Microsoft. However, I have enough experience with their ecosystem to suggest that their revenue focus is not in web dev. Other products (such as Windows, Office, Azure and Xbox) are their prime source of revenue. Whilst I don't doubt they have plenty of web devs (TypeScript and VS Code both spring to mind as web-based tech from Microsoft), I wouldn't say that is their core competency, so...

> "a company whose topmost technical expertise is underhanded web programming"

... doesn't ring true. However, if you know people on the inside I'd be interested in knowing how the size of the web dev teams compares to other teams, such as the Xbox division.

> Yet, what took them at least 5 years, and extremely expensive to make reticule limit chips, was bested by a no-name Chinese fabless that blew TPUs out of the water on power/cost/performance ratio.

I'm not familiar with what company or product you're referring to. What is it?

NovuMind; they showed their engineering samples a few months ago. And yes, they got quite close to their initial promise of 3 teraflops per watt.

Cambricon - possibly a scam, though also claims performance on an equal level with moderately sound technical approach. They never released much info other than fancy CG videos.

I can't find much information about NovuMind (NovuTensor is their product) or Cambricon. NovuTensor seems to do only inference, and only a specific type of inference (judging from their hardware, probably only images), which is a lot simpler to speed up than the training that TPUs do.

I'd take the statement "Unknown Chinese company beating Google with fewer resources and less time" with a huge grain of salt.

FWIW, there are other small companies that have made amazing chips, like Adapteva and REX Computing.


NovuMind only does 3x3 convolutions. Literally a hard burnt ASIC. Also, the TPU's efficiency is close. (Btw both their "FLOPs/watt " numbers are useless, they don't account for utilization)

And Cambricon... well, I'll just leave it at that...

I don't think that's relevant. For running NNs, this is pretty much all you need from ASIC hardware - raw floating-point matrix multiplications per second. And measuring how much power it takes to do so is a good measure of power efficiency.

Pretty sure the NovuMind chip has hard-coded the Winograd 3x3 convolution algorithm.
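
For anyone wondering why hard-coding 3x3 would pay off, here's the back-of-the-envelope arithmetic as a tiny C program (a sketch assuming the standard Winograd F(2x2, 3x3) transform; I have no inside knowledge of NovuMind's design):

    #include <stdio.h>

    int main(void) {
        /* Winograd F(2x2, 3x3): per 2x2 output tile, direct convolution needs
         * 2*2*3*3 = 36 multiplies; the transformed version needs only
         * (2+3-1)^2 = 16. */
        int direct   = 2 * 2 * 3 * 3;
        int winograd = (2 + 3 - 1) * (2 + 3 - 1);
        printf("direct: %d, winograd: %d, ratio: %.2fx\n",
               direct, winograd, (double)direct / winograd);
        return 0;
    }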

You seem to be pretty knowledgeable in the subject. Are you someone working in the field? IC designer?

psst, click their username: "Founder of a deep learning chip startup called Vathys.ai (YC Winter 2018)."


(Edited to answer the below question: Dave Andersen - http://www.cs.cmu.edu/~dga/

I'm back at CMU full time, but was having too much fun at Google to quit entirely.)

Yup, guilty as charged :)

EDIT: Question answered :)

Yep, Vathys; W18

You do realize that David Patterson is heading up Google's hardware efforts, right? Hardly clueless and inexperienced...

And what Chinese fabless was that?

>David Patterson

Surely, he is a top cadre, and an accomplished academician. The number of his students serving on CTO level jobs is in double digits.

The fact that Google went so far with cadres, and is said to be throwing high six-figure salaries even at people who came to the unit fresh out of university, clearly signifies the extent of their effort and commitment. Their intent seems to be to throw money at it until it works, no matter the cost.

>Hardly clueless and inexperienced...

I never challenged the technical expertise per se. I'm saying that a generic web/dotcom/clickfarm business can't normally be turned into an engineering company of an inherently different nature, regardless of how much money is thrown at the exercise.

People say that for the past 5 years, the TPU unit was effectively run like a research institute of some kind: regular workshops with academics, the volume of research papers written exceeding the volume of code, and so on.

The TPU unit people did their job splendidly, but the Google management that authorized the whole affair seems to me to be having a hard time wrapping its mind around how and what to do with it.

On the other hand, Ren Wu, being originally a semi engineer, had a very clear idea of what he wanted from the very start: off-the-shelf I/O IP, an AXI bus, wide registers, SRAM FIFOs, directly register-fed matrix multiplication units, and predominantly synchronous operation. Voila. No talk shops, no company turned into a research institute, no six-figure-salary cadres whatsoever. The chip might well be a one-man project.

To clarify, Dave doesn't head the hardware efforts (I doubt he wants the management headache!). Norm Jouppi was the lead of the TPU efforts. But Dave absolutely is part of it, he just doesn't have to herd the cats. :p

To the previous parent, the TPU didn't take five years - I don't know where you got that. But, in any event, I'd argue the key innovation in TPUv1 wasn't actually the design of a specific optimized processor: It was seeing the need for a year or two before anyone else in industry did!

Also - I think translation problem, what's "reticule limit"? In any event, if you're comparing what someone produced today with TPUv1, that's a bit of a weird comparison considering that the v2 is live and available in beta. :)

>what's "reticule limit"?

The maximum area a stepper can expose in a single shot. Effectively the limit of how big you can make a single microchip.

But, that said: For training, it's odd that you'd criticize the die area of the TPU when the major competitor is Volta -- 815mm^2 (!). It's pretty clear that TPUv2 and Volta both have similar aims in terms of high memory bandwidth (both use HBM), monstrous matrix multiply throughput, and a high-speed interconnect to be able to scale to larger clusters.

Making a single-chip inference engine, which NovuTensor appears to be, is a very different thing. So it's better compared to TPUv1, which is also an inference engine. I can't find the die area of TPUv1 out there, but it's not a monster, as you can probably infer from its 75W TDP.

It would be helpful in this discussion to be precise whether you're comparing to TPUv1, Cloud TPU (the v2), or something else, and if you're talking about inference or training.

(I assume my disclaimer is already obvious - I work part time at Google Brain - and that everything I'm writing is my own opinion, etc., etc., etc.)

Ah, you mean "reticle". Thanks for clarifying. (A reticule is a cute little bag -- https://upload.wikimedia.org/wikipedia/commons/thumb/8/80/Re... )

The TPU is nowhere near the reticle limit, don't listen to OP.

193i immersion steppers have a reticle limit of 858mm^2, the TPU is nowhere near that.

Well, it's not like Norm Jouppi is also a very well known engineer :)

This thread is full of well-written arguments that don't offend anyone. I understand that these points may be wrong, but these comments sure as hell shouldn't be downvoted.

They also do not have a product for the server market.

Yes, they have server-oriented processor families. No, after the storm that was Meltdown/Spectre, none of these processor families are still considered "safe enough"; even firmware-related fixes won't cut it there, because the profitability of a data center is calculated with TFLOPS/m^2 in mind.

Even if profitability were not an issue (=> government), they won't quite cut it, because you really do not want a system with this big a vulnerability just sitting around your high-security/high-impact infrastructure. I do know government agencies in my country that - ever since the Meltdown/Spectre information broke - banned the purchase of Intel processors "until further notice".

Intel is in dire straits, and they should be worried.

Intel is just fine. Spectre and Meltdown were blown way out of proportion. Almost every CPU to date has had similar vectors to exploit, or worse. If intelligence agencies don't know this, they have shitty assurance and should get better. Most agencies, for high-integrity systems, won't have been using Intel/AMD, as there are CPUs out there (IBM, for example) which offer much better assurance with regard to memory integrity, etc.

That being said, government is a tiny market compared to businesses (cloud providers, etc.), so really, if they all switched to IBM or the like, Intel still wouldn't be in trouble. It would be a tiny bump in the road.

I don't see how Spectre and Meltdown were blown way out of proportion. Google and Amazon seem to have treated them quite seriously... maybe they should have just closed their eyes. The solution to Spectre right now is "don't do speculative execution". Meltdown exploits a pretty embarrassing bug in the hardware; one has to wonder how it got through all of the testing/review that must happen before a new design is sent off to a fab ($$$). Don't forget that there are 30+ class-action lawsuits filed against Intel due solely to these bugs.

Most people who run servers aren't cloud hosts, so they're not running untrusted code on their servers in the first place, making Spectre/Meltdown pretty much a non-issue.

These sorts of issues were discovered a decade ago when speculative execution first hit the scene, we just never had a practical exploit (in public knowledge anyway) until recently, and even then it really isn't that bad despite all the security apocalypticists shouting that the end was nigh.

> so they're not running untrusted code on their servers in the first place

Oh sweet summer child.

> They also do not have a product for the server market.

And it doesn't matter because NOBODY ELSE DOES EITHER.

Intel is so much bigger than everybody else in the space in terms of volumes, that even if they vaporized tomorrow, it would take a couple years for everybody else to plug the hole.

Intel has quite a lot of runway to fix things.

Mobile may be "decelerating", but the replacement cycle is shorter and even the poorest people in the world who will never buy PCs are already buying phones. The market is much bigger.

Being in mobile also has the side effect of putting you in IoT.

Doubling down on a market you know well is a lot easier than spotting a new one. Judging by their dabbling in both mobile and IoT hardware, those are areas of interest (whether pursued seriously or just as scare tactics to show they could move into the area) - but the perception is they missed the boat.

What happens if they don't spot the next server/cloud market, like they didn't spot mobile?

Agree. Even if they sold nothing at all in the smartphone market, all the smartphones of the world still need servers to service their users' needs... and those practically all run on Intel or IBM CPUs.

Take mobile devices, add a new type of battery allowing for truly decentralized servers, and Intel's seeming value in the server space evaporates in a cloud of smoke.

Add to that a new generation of envelope layer in the making (augmented reality) that will replace desktops and workstations, and Intel's huge castle seems truly built upon sand.

In addition, they also failed in the other big market - massive parallelization (neural nets, cryptocurrencies, etc.). It just seems that Intel lost its ability to innovate quite a while ago and was kept upright only by its ability to monopolize markets.

I'm sure steam-engine companies had great share value, until the day they didn't.

Yes. Although Qualcomm is also investing in the cloud and server market, and if they (or rather ARM in general) are successful, it could hit Intel hard.

What are your thoughts on that?

I think the point is that the article is basing the title that "Intel is fighting for its future" on the idea that they might make a bid for Broadcom to survive. The idea that Qualcomm could invest in cloud and servers and maybe, if lots of things happen in Qualcomm's favor, threaten Intel's server business has little to do with that.

Wal-Mart is investing heavily in ecommerce which could threaten Amazon if they become successful. It's hardly accurate to say that Amazon is fighting for its survival.

How big is the server market though and how much is it supposed to grow?

Yes, and I guess only time will tell. If we traveled back in time and told Paul Otellini that Apple was going to buy 100 to 200M chips a year from Intel, at prices of less than $30 per unit, would Otellini have said yes? For some reason I doubt it, back when the smartphone had not yet transformed the industry and he was watching the importance of the PC shrink before his own eyes.

Purely looking at those numbers: for the same number of transistors at $30, Intel was selling chips at anywhere between $80 and $300. Not to mention Intel would have had to invest in fabs just for Apple, as 100-200M is no small number.

Intel cares about its margin, and its profitability. Sometimes we argue we should do things at lower margin to avoid the risk of being wiped out. Sometimes vice versa.

I am not entirely sure what Intel's execs were thinking. They were possibly driven away from the Apple and mobile decision by fear of shareholder protest at lower margins, and driven back to mobile precisely because of shareholders.

And not only is Intel late to mobile; they were late to GPUs as well, and GPGPU in general. We are now looking at 2020 before a dGPU from Intel is out.

Basically, Intel hasn't been innovating for a few years. And maybe, because they haven't been to war for a long time, they have lost the sense of danger. Maybe they were typical Silicon Valley optimists for whom everything is going to be fine.

Which is strangely different from Andy Grove's "only the paranoid survive."

But then, I don't see x86 ever losing out in the PC or datacenter market. Right now the biggest threat is AMD, and that is not an existential threat.

I also don't see how a Broadcom and Qualcomm merger would be an existential threat to Intel. Apple might not use Intel's modem, but that is not exactly a problem. Intel needed some customers to use its product so it could at least cover the R&D cost of the modem, which is getting insanely complicated. And mobile networks have already reached an inflection point where modems aren't going to see the rapid development and improvement we had in the last 10 years.

With the iPhone, we managed to move from the end of 2G to 3G to 4G and now to 5G, all within 10 years. We have reached a stage where top speed no longer sells. The market wants more data, or higher capacity, rather than an unattainable top speed of 1 Gbps. We have massive MIMO and LAA, and in the next 5 years we will see anywhere from 4x to 20x capacity improvement, along with better reception. All these improvements are now bottlenecked by carriers upgrading their sites, assuming a steady rate of mobile phone upgrades.

Intel has a clear market of 1.5 billion PCs to upgrade; while not everyone will be upgrading their PC with a modem, it is by no means a small market. And Apple has their own W2 with 802.11n and Bluetooth 4.1. It is only a matter of time before they have their own 802.11ac and Bluetooth 5, as well as possibly 802.11ax. All of these currently come from Broadcom. I am much more worried about Broadcom than Intel.

I think this article misses the point: Intel has nothing to gain by entering the commodity smartphone processor market. The margins are slim, and the market therefore doesn't make good use of Intel's advantage in semiconductor fabrication.

Serial performance is important in processors, and as long as Intel is king in this area, they will have a home in server racks and workstations. I keep hearing about how cheap arrays of ARM processors will take over the world, but I'll believe the easy parallel programming unicorn has arrived when I see it. It seems more likely to me that when this is the case, a move to GPUs, FPGAs, or custom accelerator card would make more sense.

In the short run, you are correct. However, by looking at history, it seems that the high-end chip manufacturers always end up being replaced by the low-end manufacturers, as the focus on cheap, small, low-power designs end up providing various strategic advantages (and raw revenue for product development). That's how Intel replaced the various RISC manufacturers in the 90s.

Of course, the field is young enough that this has only happened two or three times, so it may just be a coincidence. Still, I like the thought of ARM chips replacing Intel in ten years, and twenty years later ARM being replaced by whoever currently makes the chips for those musical greeting cards.

Exactly -- the key thing is that the historical cost structures of the high-end makers make low-end chip making highly unattractive, which is why it was easy for Paul Otellini to say no to making chips for Apple's iPhone.

In the end, low-power chipmakers like ARM have nowhere to go but up -- they can slowly move up-market and, over time, enjoy the same high margins but at much lower costs relative to entrenched players like Intel.

I'll argue the opposite: the low-end mobile market would be fantastic for Intel, precisely for that reason.

If margins in mobile are razor thin, then Intel can make bigger margins than anyone else by owning the fabs. And mobile's volume can justify owning fabs.

So why are they considering acquiring Broadcom?

To leverage their synergies.

To be fair, that is the response to all 'Why is X acquiring Y?' questions, and it's a terrible answer.

M&A is hard to do right -- most fail, and a staggering percent have all the surplus delivered to the seller instead of the buyer. See: https://www.mckinsey.com/business-functions/strategy-and-cor...

The real answer would speak to why (Intel thinks) this deal is not like the average, unsuccessful deal, and specifically why they would be able to use Broadcom's assets better than Qualcomm.

The answers here probably have something to do with datacenters and energy/heat efficiency gains when you stop being fabless, something to do with anti-trust avoidance (qualcomm is limited by anti-trust concerns much more than Intel is, without those limits the value goes up), and something to do with cost and workflow concerns. Intel has a substantial risk to their current model, which is already hitting something of a natural ceiling of cost per manufacturing facility, driven largely by precision requirements -- that ceiling is what stopped Moore's law, I think, because no one can build a facility that costs even more money than Intel -- and developing business processes that can deliver classic Intel improvements in a more fabless or fabless-friendly process could reduce that risk.

This is all pure guesswork, but I think it gets closer to the actual discussions inside the companies.

It was just a joke, based on the usual wooden speak of these announcements.

I don't know which acquisition is better. I do think that Intel should probably enter the ARM market somehow, just to hedge their bets.

Again (https://en.wikipedia.org/wiki/XScale)? They gave up on that to focus on low power x86 CPUs.

Intel has an architectural license for 32-bit ARM (https://en.wikipedia.org/wiki/Arm_Holdings#Arm_architectural...), so they could design their own micro-architecture whenever they want to. I don’t know whether they have one for 64-bit ARM, but they have money, so I expect it wouldn’t be difficult to get one.

I didn't mean that they should necessarily build something. But at least buying stock from an ARM manufacturer might not be a bad idea. Maybe not taking them over, but at least having stock.

That way, if things go wrong with x86, they can at least pull a Yahoo - Alibaba.

Actually about 70% of corporate acquisitions more or less achieve their goals. As for the other 30% it often is a disaster.

I had a citation for only 22% of mergers meeting their revenue goals. If you have someone more likely to have quality guesses than McKinsey, then please share yours.

Years ago, during a fund-raising tour of Silicon Valley VC firms for my previous company, we also pitched Intel Capital (their in-house VC arm). Their feedback to our pitch was that our SaaS service was "too compute intensive" and therefore they passed.....

Intel thought we used too many CPUs?!.....

All these years later, I'm still in disbelief. I do realize that VCs often are uncomfortable giving the real reason they're passing, but "too many CPUs" was a first for me. (And my company is still around and making more than $25MM/year, thank you very much.)

In a similar vein, we pitched Microsoft Ventures on a developer tool we were building. One of their chief concerns was that we were targeting C# developers. (Why not JS? Is C# even that popular?)

This helped drill home the intuition that large companies are not a single entity, but an amalgamation of different groups and individuals each with a potentially different motivation.

> Intel thought we used too many CPUs?!.....

Before your firm can buy a ton of Intel CPUs and make money for Intel, it needs to be profitable. It doesn't make sense for their VC arm to give you money to buy CPUs, then fail to sell your product and go bankrupt.

Intel Capital is another branch and not known as a leading VC firm. I don't think you can connect their achievements with Intel itself.

Another possibility is that they were indeed, correct -- we did use many CPUs to perform the work, even though it is the nature of the work (running end-to-end tests on browsers, simulators, and devices).

Maybe the intention of the fund was to diversify - i.e., hedge their huge exposure to the CPU market by investing into totally different things, i.e., not you!

I knew a guy who was a technical advisor to Intel Capital (worked at Intel). They're not there to diversify, they use capital to advance their existing business strategy in key markets. They often take stakes in companies that Intel are partnering with on technical projects in order to gain some level of insight, control, and eventually to acquire the company if there's a strategic argument for it.

For example, a company might be developing some cool new IP that Intel is considering licensing or is licensing. Intel Capital will take a stake in the company, and then eventually might consider acquiring the company if they want to corner the technology or think it's cheaper than a long term licensing agreement.

Quite possible!

Intel has reasons to be worried. They missed several buses, trains, ships, and other opportunities. Servers still appear to be secure, but there are threats from ARM solutions and AMD.

Windows 7 and Windows 10 have been pretty efficient operating systems. That's great for Microsoft but not for Intel: with identical system requirements, there's very little reason for most people to upgrade hardware. I, a supposed high-end PC user, am still running a 3rd-generation Core i5 processor with no urgent need to upgrade in the near future. Graphics cards are still a once-every-2-3-years upgrade for PC gamers.

There are also threats from NVIDIA in markets demanding high throughput (machine learning, crypto mining, HPC). Essentially, Intel hasn't been able to establish new markets, and its existing ones are being chipped away.

>Windows 7 and Windows 10 have been pretty efficient operating systems.

All of my friends who run it must be using it wrong because that is not my experience.

I can't speak for Win10, but back in the 7 days, you'd be batty to buy an AMD PC. They had a reputation for flakiness, and I'm not really sure the average PC user ever changed their mind on that.

It seems to me that most people would do a bit of research, see that Intel is still destroying AMD on speed tests, or they go to Best Buy and see a load of Intel stickers. The sales person will rightly or wrongly tell the customer that Intel is better, and so on.

The main point is that most people are not power users or knowledgeable about computers. That's why you can read reviews online about how Dell is horrible because "it catches viruses."

Intel should definitely be worried.


This article misses that Intel has really missed the boat when it comes to GPUs; the company that it needs to beat is NVIDIA which actually is producing a rapidly improving product.

Intel integrated GPUs have no place at all in deep learning. They are perfectly good for emulating the 2001 Gamecube and playing classics like Doom and Quake, but they have a bad enough reputation among gamers that if and when they do come out with a discrete card they will have to brand it something other than Intel.

The company they need to beat is AMD. AMD's integrated GPUs are thoroughly trouncing Intel's and are good enough for light gaming. Intel's weak integrated graphics hurt its core business of CPU sales.

They certainly want to compete in the discrete GPU space, but I think they should be much more scared of losing CPU market share.

The author states:

>"If the dispute is settled, Intel loses its wireless modems deal with Apple. No mobile CPUs + no modems = nothing of substance."

Aren't nearly all processors in mobile devices SoCs these days? Isn't the "wireless modem" just an LTE radio on the same SoC that has the 802.11 radio for Wi-Fi? I'm not understanding how these would be separate in a device like an iPhone.

> By declining Steve Jobs’ proposal to make the original iPhone CPU in 2005, Intel missed a huge opportunity.

I'm really intrigued by stories of missed opportunities. Certain companies have all of the power, then make a minor miscalculation on the future of technology. Does anyone have similar personal experiences that echo this type of missed opportunity?

I think this is a misinterpretation. If you are the market leader and have huge profits, you can enter such a market at any time. It just costs more the more stable the market becomes, though that stability of course also decreases the risks and costs of development. The art is in finding the sweet spot to enter, then beating the competition that's already in the market and trying to keep you out.

> Certain companies have all of the power then make a minor miscalculation on the future of technology

It happens all the time, in matters big and small. A company takes a bet on something, it fails, and afterwards people (even on the inside) are scratching their heads and thinking "how did we ever think that was going to work?". Most of the time, nobody outside the company hears about it.

When companies are powerful, their power is generally concentrated in a very narrow domain. Stray just a little outside that domain, and they're just as clueless as anyone else (although perhaps with deeper pockets).

Not a single company in the field has a 100% guaranteed future. Even Microsoft, completely dominant in the 90s, is now second fiddle to the likes of Google and has its future jeopardised by the emergence of the cloud and mobile platforms.

Having said that, Intel isn't going anywhere overnight.

And even Google feels a little shaky these days, not really knowing what the next big thing could be for them after search + ads. Their mobile and cloud businesses are both small by comparison (assuming the numbers haven't changed much over the last 1-2 years; I haven't looked them up for some time).

I think at least part of this reason is driven by NSA/DoJ, just like Microsoft's Skype acquisition was.

Also, why is Krzanich still CEO? He's made terrible decision after terrible decision. Buying Broadcom now reminds me a lot of Ballmer wanting to buy Yahoo for $40 billion. In hindsight that looks like a terrible deal, doesn't it? I think if Intel buys Broadcom, this will look the same in 10 years.

Intel has strong conflicts of interest with non-x86 chip divisions, which is exactly what Broadcom would be; I'm sure it's the same with Altera. In a few years they'll regret buying into FPGAs, too.

The "synergy" is a lie.

Speaking of Intel failing to innovate, has anyone seen Intel's 10nm process? Cannon Lake was supposed to be out in 2016 and has now been pushed back to mid-2018. There was a super brief update about it at CES [1]. Makes one wonder whether the future is not arriving on time as previously expected.

[1] https://www.anandtech.com/show/12271/intel-mentions-10nm-bri...

10nm is facing acute yield issues, rumoured to be because of the use of cobalt.

I am surprised at the under-reporting of this issue. We are on the brink of a serious stagnation in chip technology, one that could bring Moore's law to a dead stop, and yet very few people are concerned.

When I graduated from UW in the late 90s, Intel was the place to work for most CE and many CS majors (as well as tons of EEs). Those glory days are long gone it seems.

Often the solution is a stalemate until a profitable opportunity arises. So, no surprises here.

Interesting to see that such a giant can really start shaking so badly from losing one of its sources of profit. I think even if the whole desktop market dies, Intel will still make more money than most companies. Sure, maybe they'd need to shrink by 80%, but a company at 20% of Intel's size is still far from small.

Both deals - [Intel, Broadcom] and [Qualcomm, Broadcom] - are unlikely to pass regulatory muster. Further, as many have noted, Intel doesn't have a seat at the mobile table.

Intel just wants to thwart a [Qualcomm, Broadcom] combination that's likely to aim at Intel's jugular: the cloud/server market.

I hope any merger or purchase of Broadcom is thoroughly investigated; considering how much of the datacenter market and even consumer networking relies on them, there are a lot of ways the purchasing company could mess up the entire industry.

Between switch ASICs like the Trident II, RAID controllers and SAS HBAs, 802.11 chips, DOCSIS modem chips, optics, etc., they are almost everywhere, even if you don't see them.

There are always missed opportunities in any business. Intel did not capture the mobile market, so what? There is no evidence that ARM-based servers from Qualcomm, or anyone else for that matter, will somehow make a dent in Intel's leadership in that space in the short to mid term. Not to mention that Intel has significantly diversified its product line and will continue to do so.

There is not a world where there will be one processor manufacturer. We will see multiple manufacturers on multiple architectures for as long as things continue to progress.

Intel may have a short-term existential threat, but that threat is minimal.

I think you missed the point. It's fine to disagree, but your comments indicate that you don't understand or appreciate the magnitude of Intel's hubris and the inevitable blunder that followed. Intel literally invented the microprocessor. The fact that they let this opportunity slip through their fingers is astonishing.

Has Intel experienced some kind of strategic brain drain in the past 5 years? I remember a professor telling me in the late 90's about how RISC wasn't going to dominate the future, because Intel had a roadmap going well into the 2000's specifying how they were going to stay ahead. Now, it looks like AMD has the strategic upper hand.


Why can't Intel acquire Qualcomm instead of Broadcom?

It likely would never pass with the regulatory agencies, I suppose.

A key point missed in this discussion is that Intel needs high volume through its fabs to pay for their high capital investment requirements.

If it doesn't have the consumer CPU volumes (notebook, desktop, mobile) then production costs for server chips will be much higher, and it won't have those nice margins.
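A rough back-of-the-envelope sketch of that amortization effect (all figures below are made up for illustration and are not real Intel numbers): spreading a fab's fixed cost over fewer wafers pushes the per-wafer, and therefore per-chip, cost up sharply.

    # Back-of-the-envelope fab cost amortization.
    # All numbers are purely hypothetical, not actual Intel figures.
    FAB_CAPITAL_COST = 10e9                  # hypothetical fab build-out cost, USD
    WAFERS_WITH_CONSUMER_VOLUME = 2_000_000  # wafer output over the fab's life
    WAFERS_SERVER_ONLY = 500_000             # same fab without consumer CPU volume

    def fixed_cost_per_wafer(total_wafers):
        """Capital cost amortized across every wafer the fab ships."""
        return FAB_CAPITAL_COST / total_wafers

    print(fixed_cost_per_wafer(WAFERS_WITH_CONSUMER_VOLUME))  # 5000.0 USD/wafer
    print(fixed_cost_per_wafer(WAFERS_SERVER_ONLY))           # 20000.0 USD/wafer

Four times less volume means each server wafer carries four times the fixed cost, which is exactly the margin squeeze described above.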

Intel is also taking its eye off the ball as far as desktop and laptop CPUs go. Coffee Lake should be renamed Coffee Break given the years it's been delayed!

It's a perfect storm if AMD continues to pile on the pressure and ARM licensees start making inroads into servers.

The big problem is that Intel's 10 nm is very late, rumored to be because of their decision to use cobalt more aggressively than other fabs. Intel's traditional advantage in its silicon process has substantially eroded, which is making things much easier for the competition.

They are also showing signs of fatigue in the data center space which carries high margins. The underhanded I/O game they are playing is both alienating customers and providing a big opportunity for AMD, IBM, and Arm.

AMD has already pushed ARM out of the server market: low cost, high performance, high I/O, and x86 compatibility. ARM's only advantage over AMD is low standby power consumption.

>The underhanded I/O game they are playing

Can you please elaborate on this?

They are extremely stingy on the PCIe lane counts to limit the usefulness of GPU coprocessors. They've also pretty much ignored PCIe v4.

Barely insightful. The rumor of Intel buying Broadcom ... is most likely being perpetuated by Broadcom itself.

Gassée has his own reality distortion field, of course. His punditry is entirely focused on closed-source, proprietary technology and the big corporations that own it. He never writes about RISC-V or Linux, perhaps because they aren't generating big sums of cash and therefore aren't worthy of his attention.

This thread is funny because no one really knows what Broadcom does.
