[flagged] Intel has no chance in servers and they know it (semiaccurate.com)
39 points by mozumder on Aug 7, 2018 | 10 comments



'Intel has no chance in servers and they know it' is a little hyperbolic when they currently have over 90% of the server market. Maybe when they have been reduced to single digits you can say they have no chance.

Despite the current scaling issues they can simply reduce margins to be price-competitive and keep up their market share. AMD entering the market is a great way for Microsoft/Google/etc. to negotiate hard for the CPUs needed for the next datacentre/upgrade cycle.

Reading semiaccurate.com feels a bit like reading InfoWars, everything is a conspiracy.


> Reading semiaccurate.com feels a bit like reading InfoWars, everything is a conspiracy.

That may be, but Intel is certainly not beyond ... ahem ... interesting tactics. Demoing a CPU holding 5 GHz by cooling it with an industrial chiller just to steal some of AMD's thunder?


The article is mainly about price/performance, and the author makes clear that Intel has no chance in this area, and that this will remain the case until 2022 according to Intel's own documents.


The problems Intel has at 10nm are well covered by the tech press, and Intel has confirmed them to investors. What is new here is the claim that internal documents show Intel is rather more worried than people expected.

It's not really believable that Intel thinks it has no chance in servers. What I think is believable is that Intel will not be able to limit AMD's advance into its market share. This should start to show in next quarter's results, as AMD server product qualifications are finalised by large data centres (as AMD has pointed out in its briefings).

What I had believed until now is that Intel was essentially pivoting to a different strategy, moving away from traditional CPUs. But this article makes me believe they have just miscalculated and could be in for a rough few years (where rough, for Intel, is still making billions of dollars).


> What I had believed until now is that Intel was essentially pivoting to a different strategy, moving away from traditional CPUs.

Intel is addicted to the $100 - $3,000 per chip it gets for its "traditional CPUs." Where exactly do you think they would move?


They've hired heavily on the GPU side and announced a partnership with AMD (presumably to use AMD's Infinity Fabric).

This'll sound a bit strange at this point, but I have the feeling CPUs will become little more than I/O controllers in the long term. Much consumer computing will be done in the data centre. Data centres will be dominated by devices that look more like GPUs and TPUs than CPUs.

Naturally, GPUs will inherit more CPU-like features before this happens.


If it runs all of today's software with at most a recompile, that sounds like a CPU to me.


There's no way today's software can be scaled in performance, in the face of the "end of Moore's law", with a mere recompile. Today's software will be displaced by tomorrow's software. Of course I am talking fairly long term here, but I believe all the major players are making very definite moves in this direction.

Major hardware companies have to have fairly long roadmaps, because they have so much inertia, and they have to reveal to their investors what they are up to. I certainly don't believe these roadmaps reveal another couple of decades of die shrinks and IPC gains. They reveal a real shift of strategy, across the board. Consumers simply won't be able to afford the "CPUs" of the future.


I don't expect renting computers in the cloud to get more expensive. Do you? The end of Moore's law means slower improvements, not that things will actually get worse. If anything, it means computers in data centers need to be replaced less often, so on a per-hour basis, they're still cheaper.

For a scalable application, better performance basically means reducing expenses. If your cloud computing bills aren't high to begin with, it may not be worth the rewrite. Of course, there might be new companies or new projects that can use TPUs for machine learning, etc.

But Moore's law is not the only way to scale better. Today's machine learning algorithms are ridiculously inefficient, and that seems unlikely to remain true forever, given the amount of research being done. A series of algorithmic improvements might result in a 10-100x reduction in cost, or maybe even eliminate the need for TPUs altogether.

Who knows what the future will bring, but making straight-line predictions in a fast-moving field like machine learning seems unlikely to work out.


Previous submission was flag killed for being click-bait: https://news.ycombinator.com/item?id=17708847




