
Perhaps it is simpler to say that Intel was disrupted - MBCook
https://medium.learningbyshipping.com/intel-disruption-594f806cfc21
======
jillesvangurp
One angle that is missing from this story: Nokia. Nokia bet big on Intel by
partnering with them at the worst possible moment. In a nutshell, Intel was
looking to get in on the mobile action and Nokia was willing to partner. This
was ten years ago.

When platforms failed to come together because Intel was not delivering the
goods, Nokia had to scramble to ramp up new phones around alternative
platforms. Nokia lost a lot of valuable time this way, right when Apple and
Android started kicking its ass. I wouldn't point to it as the root cause of
Nokia's rapid collapse, but it certainly was a contributing factor. E.g.,
MeeGo was years late to market partly because of this and was ultimately
killed off in favor of the deal with Microsoft (another ill-fated bet). And
yes, Intel Atom was a big part of that failing as well.

Bottom line is that Intel repeatedly tried and failed to get into the mobile
market while being way too comfortable milking desktops and servers. Now that
that market finally seems to be drying up, they have a problem and still no
viable strategy to deal with it. Atom flopped. Itanium flopped. AMD is back,
looking stronger than ever. MS retired the Wintel brand years ago and is
seemingly open to running on ARM and getting quite cozy with Linux. Apple is
widely rumored to be considering a switch to its quite credible in-house
ARM-based processors.

~~~
AceJohnny2
> _Apple is widely rumored to also consider switching to their quite credible
> in house ARM based processors._

Does the Mac App Store require submission as source code?

If not, consider the implications.

~~~
iainmerrick
They require (I think -- maybe just strongly encourage) submissions to the iOS
App Store to include LLVM bitcode. It’s easy to imagine them doing the same
for the Mac. That would be a pretty clear signal that they plan to switch
architectures.

~~~
r00fus
Though it's not required, they could easily make it fashionable / desirable
(anyone remember fat binaries, i.e. the ones including both PPC and Intel
executables in one .app container?).

Or they could slowly make it required.

~~~
dottrap
> Does the Mac App Store require submission as source code?

No. You do not submit source code.

> They require (I think -- maybe just strongly encourage) submissions to the
> iOS App Store to include LLVM bitcode.

Apple does not currently require LLVM bitcode for iOS. It does require it for
Apple TV and Apple Watch.

Additionally, LLVM bitcode is not architecture agnostic. It would not solve
the problem of going from x86 to ARM.

> (anyone remember fat binaries i.e. the ones including both PPC and Intel
> executables in one .app container?)

This was also used in Apple's transition from 32-bit to 64-bit on Mac. And
later again on iOS.

So yes, if Apple were to transition CPU architectures again, fat binaries
would be the likely approach.
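
For the curious, the fat (universal) binary container is simple enough to inspect by hand. Here's a minimal sketch in Python (the path is hypothetical; the big-endian header layout and the 0xCAFEBABE magic are from the classic 32-bit Mach-O fat format):

    import struct

    # Hypothetical path; any universal (fat) Mach-O binary works.
    PATH = "/Applications/SomeApp.app/Contents/MacOS/SomeApp"

    # A few well-known Mach-O CPU type constants.
    CPU_TYPES = {7: "i386", 18: "ppc", 0x01000007: "x86_64", 0x0100000C: "arm64"}

    with open(PATH, "rb") as f:
        # The fat header is two big-endian 32-bit words: magic and arch count.
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != 0xCAFEBABE:
            raise SystemExit("not a (32-bit) fat binary")
        for _ in range(nfat_arch):
            # Each fat_arch entry: cputype, cpusubtype, offset, size, align.
            cputype, _, offset, size, _ = struct.unpack(">5I", f.read(20))
            print(CPU_TYPES.get(cputype, hex(cputype)), "slice:", size, "bytes")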

------
DiabloD3
"In 2006 AMD (struggling) bought ATI for $5.4B. Intel just didn’t even notice.
It was super weird."

I think this is the best line in the whole thing. Intel _didn't_ notice, and
it _was_ super weird, and it was the best thing they ever did.

Fun fact: the first thing the engineers did with their new toy was bolt the
then-new R700 pipes to a HyperTransport bus, so it used HyperTransport to
fulfill memory requests instead of a native Radeon GDDR controller.

The purchase of ATI, and the Christmas-morning giddiness of unwrapping that
gift and turning it into what eventually became the prototype of the APU, was
the most brilliant thing ever.

To put it in perspective, let's count all x86 sales: all the desktops, all the
laptops, all the servers, all the weird little things like Chinese x86
tablets, all of it.

Combined Xbox One and PS4 sales (both massive APUs, essentially) dwarf all of
the sales _both_ Intel and non-console AMD make; they dwarf them per chip and
per thread. Intel only claims more sales in dollars than AMD (incl. the MS and
Sony deals) because of the Intel tax, not for any legitimate reason.

Similarly, let's do the same with Radeon sales vs. Nvidia GPUs... take all of
the Nvidia GPUs (desktop, laptop, ARM SoCs like the Shield and the Nintendo
Switch, server compute, etc.), and all the AMD GPUs that aren't Xbox One and
PS4... console sales dwarf everything else that both companies make.

AMD won both races and no one even paid attention: AMD makes more x86 CPUs
(both per chip and per thread), and more GPUs. Performance-software
optimization (i.e., for games) is far more focused on AMD's two platforms than
on anything else.

Intel and Nvidia are now the underdogs in their own races. This happened
silently, and the cheerleading for Team Blue and Team Green drowned out
reality for a bit.

~~~
claytonjy
> Intel and Nvidia are now the underdogs in their own races

For Nvidia specifically I think that's only true of the gaming market (maybe
crypto as well?), but my understanding is that their massive increase in value
over the last half-decade or so is predominantly from the deep-learning
market, where they continue to dominate by no small margin because CUDA is a
hard dependency of the major DL frameworks.

I don't have any hard numbers on it, but I wouldn't be surprised if the CUDA
tax is more than the Intel tax ever was.
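
To make "hard dependency" concrete, a minimal sketch (assuming a CUDA build of PyTorch) of the device-selection dance every PyTorch user writes; there is no vendor-neutral fast path to fall back on:

    import torch

    # On a machine without an NVIDIA GPU and driver stack, is_available()
    # is False and the accelerated path simply does not exist.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # dispatched to a CUDA kernel only when device is "cuda"
    print(device, y.shape)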

~~~
microcolonel
The CUDA tax is serious; however, in this case it is mostly the fault of
competing vendors that there is no CUDA or almost-CUDA implementation, since
CUDA is fairly high-level, unlike an ISA and its accompanying treadmill of
patented features, which moves the expiry date of the "platform" 20 years into
the future every couple of years.

~~~
claytonjy
> it is mostly the fault of competing vendors that there is no CUDA or almost-
> CUDA implementation

I very much agree, but I also don't have the skills or knowledge to say what
kind of effort it takes to build such a language/toolchain. It seems like such
an obvious opportunity for AMD, and has for so long; could it be less
incompetence and more engineering difficulty and/or adoption struggles?

I know I've seen talk of CUDA-killers around these parts before, would love to
hear more details from people more familiar with this stuff.

~~~
microcolonel
> _I know I've seen talk of CUDA-killers around these parts before, would
> love to hear more details from people more familiar with this stuff._

From the market perspective, it seems to me that a "CUDA killer" would not
actually help. I think we need a free CUDA toolchain.

~~~
claytonjy
I think that's roughly what I meant; I'd like to see AMD dump resources into
making a fully open, compatible-with-all-modern-GPUs toolchain to directly
compete with and replace the CUDA one. For all I know, they already are?

Heck even a non-open, AMD-specific toolchain would probably be great for
consumers, but an open, cross-compatible toolchain would be even better for
us, and might be better for AMD as well by allowing non-AMD-employed experts
to contribute.

~~~
jcranmer
> I think that's roughly what I meant; I'd like to see AMD dump resources into
> making a fully open, compatible-with-all-modern-GPUs toolchain to directly
> compete with and replace the CUDA one. For all I know, they already are?

There's OpenCL. Except OpenCL performance is worse than CUDA performance on
NVidia, so anyone using NVidia's hardware (essentially, everyone) pretty much
wants to use CUDA instead.

~~~
pjmlp
OpenCL made the mistake of being C only in the beginning while CUDA embraced
C, C++ and Fortran, plus any language that would bother to write a PTX
backend.

SPIR was too little, too late.

~~~
microcolonel
Maybe we could have a PTX frontend for SPIR-V. ;-)

------
old-gregg
Any gamers or graphics experts here? I keep hearing that Intel's integrated
graphics is terrible, yet being a typical desktop user/non-gamer, I've always
avoided dedicated GPUs in favor of Intel on purpose. Intel GMAs always offered
better battery life, better Linux compatibility, better operating temperatures
(and zero additional fan noise) yet they did everything a non-gamer can ask
for: hardware-accelerated video playback, desktop effects like the ones Steven
writes about, etc.

Recently I looked into AMD's 2400G APU, with much superior graphics, but I
don't understand what it is for. Sure, it can sort of run some games at
mediocre frame rates on last-century 1080p monitors, but not higher. And
everything else Intel will do just as well... It looks like all these
integrated GPUs are starved of memory bandwidth anyway. So what's the point of
these APUs then? Where's the AMD graphics advantage? And what's the problem,
then, with Intel's integrated graphics?

~~~
ksec
You have to remember the GPUs described in the article were from the 2010-2012
era, and they were the iGPUs in Atoms. The key to GPU performance was drivers,
and Intel in those days wasn't very active in GPU driver development. At the
time, updating GPU drivers on notebooks was problematic. Lots of issues were
left unresolved, and Mozilla blacklisted a lot of Intel drivers from GPU
acceleration. And they (purposely?) made a mess of dGPU graphics switching.

It wasn't until Broadwell, and the later Iris Graphics era (which I think is
2012 / 2013), that things started to pick up, with Intel actively optimising
their drivers for stability, OpenGL, and performance.

AMD's graphics advantage is that their drivers are better tested. Nearly three
decades of history have taught us that GPU hardware is absolutely nothing
without top-notch driver support: Intel i740, Matrox, 3dfx, S3 ViRGE, 3Dlabs,
PowerVR...

~~~
mjevans
Actually, the early Intel GPU efforts were hilariously bad, partly because
they licensed someone else's GPU...

[https://en.wikipedia.org/wiki/System_Controller_Hub#Poulsbo](https://en.wikipedia.org/wiki/System_Controller_Hub#Poulsbo)

There /still/ isn't support for this closed platform on Linux (and similar)
systems, due to a combination of (now) outdated hardware, very few systems in
developer hands (and even lower interest), and the lack of performance even
with proper drivers (I don't have a good citation for that).

[https://en.wikipedia.org/wiki/Bonnell_(microarchitecture)](https://en.wikipedia.org/wiki/Bonnell_\(microarchitecture\))

It didn't help that the power requirements fell very solidly into the uncanny
valley between 'crappy, but very low power, so we can forgive it' and 'good,
but uses lots of power'.

------
bgorman
This article is missing many critical details. Most notably, it doesn't touch
on the core identity crisis Intel has been having since the i386. When Intel
cut off third-party "fabs" from producing its chips, it became reliant on
high-margin chips that were completely vertically integrated. In a sense this
incentivized the company to ignore lower-margin fields like embedded and
mobile. Obviously the iPhone changed the game in mobile and showed that there
is a space for high-margin mobile parts. Intel was ill-equipped for this
transition because they spent years optimizing their CPU->factory connection.
Their factories were optimized to pump out CPUs, not mixed silicon, to the
point where Intel's own wireless group would produce wireless chips at TSMC.

Intel faced dwindling consumer demand, which led to factories not being full,
which led to less investment in chip fabrication. Keep in mind that in the
last several years, Intel's main chip performance advantage was due to the
superior manufacturing abilities of its factories. Better factories mean more
transistors per unit area, and more transistors mean more cores and better IPC
(assuming competent designers). Intel's inability to produce general silicon
hurts them to this day, as Intel's efforts to open its fabs to the general
public have largely failed (Intel Custom Foundry).

A further issue that plagued Intel was its inability to produce integrated
mobile SoCs in a timely manner. Intel acquired Infineon's wireless unit when
they realized they needed a quick pathway to building an integrated 4G modem.
Ultimately the product was not built in time, but at least now that investment
is kind of paying off, as Intel has won the modem spot in modern iPhones.
However, Intel was not able to put together a product competitive enough with
Qualcomm, due to the time it takes to mobilize enough resources to build a
high-end SoC.

~~~
arbie
> Intel's inability to produce general silicon hurts them to this day, as
> Intel's effort to open its fabs to the general public have largely failed
> (Intel Custom Foundry).

Their refusal to fab ARM chips hurts them more than the design-rule complexity
of their process.

------
stcredzero
Intel was disrupted, for sure. Basically, I'm left wondering if there was some
key brain drain that occurred at Intel. Back in grad school in the mid-'90s,
most of the profs thought that Intel would collapse under the weight of the
x86 ISA, but one prof knew people at Intel and told me about their roadmap
going out to 2010, with plans for kicking butt. All of that played out! This
shows amazing foresight. However, in recent years, Intel seems to have gotten
itself painted into a corner with regard to die sizes, while AMD has
strategically shifted to combining chips with smaller dies to increase yields
and improve margins with more competitive pricing.

"Interposers, Chiplets and...ButterDonuts?" \--
[https://www.youtube.com/watch?v=G3kGSbWFig4](https://www.youtube.com/watch?v=G3kGSbWFig4)

Something has happened with Intel, which has lost its vaunted intelligent
"paranoia."

~~~
gwbas1c
When I worked for Intel in 2005-2007, a lot of the workforce had joined near
the beginning and planned to make their careers at Intel. Most of those people
probably retired in the last few years.

After two years, I was still "the new guy."

They also totally goofed mobile, as the article explains. Everyone knew mobile
would be a big deal; it was "obvious." Even Moore's law predicted mobile.
(Every 18 months the number of transistors you can fit in a given area doubles
= every 18 months you can make the same chip use half the space.)
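
A quick sketch of that arithmetic (using the idealized 18-month doubling cadence for illustration, not Intel's actual process history):

    # Density doubles every 18 months, so a fixed design needs half the
    # area each cycle.
    area = 1.0  # relative die area for the same chip at month 0
    for months in (18, 36, 54, 72):
        area /= 2
        print(f"after {months} months: {area:.4f}x the original area")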

Why did they goof mobile? I think the article explains the symptoms well, but
even I have trouble explaining "why" given that everyone could see mobile
coming.

~~~
ak217
In 2003, Intel was able to reinvent its CPU design center in response to the
previous iteration of PC miniaturization (the laptop market was ready to
explode, while desktops were slowing down). They did this by running a
skunkworks design project in Intel's Israel division
([https://en.wikipedia.org/wiki/Pentium_M](https://en.wikipedia.org/wiki/Pentium_M)).
After the Israeli engineers were able to show how their design fixed Pentium
4's architectural issues, it became the basis for all successful Intel
processors for the next decade.

Intel did great in process technology and the ecosystem around their chips.
They were extremely well positioned to grab the smartphone market. Contrary to
popular belief, x86 is entirely capable of the TDPs required by mobile phones.
The Core m3 has a configurable TDP under 4W while making none of the
compromises of Atom. It might have taken another skunkworks project to cut
some modules and bring it down to the milliwatt range, but it could have been
done. Intel simply allowed itself to be maneuvered out of the smartphone
market, through complacency and an unwillingness to lower the "tax". That
worked fine for a decade, but now they're seeing the side effects of the
resulting brain drain.

~~~
baybal2
>The Core m3 has a configurable TDP under 4W

No. It's more accurate to call this a safety measure: go over 4W for more than
5 seconds? Deep throttling turns on.

The real engineering datasheets you get as a customer state very clearly that
the TDP of the Y-line chips is 17W.

Intel chips at such wattages are just a lot of dark silicon.

Your analysis is wrong.

------
ksec
This is a much better piece than the @stratechery one. My thoughts on
@stratechery were the same as S. Sinofsky's: it wasn't the integration that
was the problem, it was the go-to-market. I think the term "go-to-market" is
better than what I described as "vision and execution".

Had its execution been perfect (its roadmap of better IPC, more cores, and the
10nm / 7nm nodes), Intel would still have had lots of room to adjust. Although
I think better execution would only be delaying the problem; like the article
said, it really is multiple things.

Had Intel set out to make the best SoC for netbooks.

Had Intel set out to make the best graphics within that die space.

This is the classic case where, as Steve Jobs described, companies with a
monopoly are run by sales and marketing people [1] and product people (Pat
Gelsinger) are driven out of the management decision process.

I still remember one video interview where Pat Gelsinger (I can't remember if
it was the EMC or VMware era) clearly described how x86 would rule the server
space as long as it continued to improve and innovate, and how trying to
create a half-hearted x86 SoC for mobile wouldn't work. And it was all too
late. (I can no longer find that video.)

And if some people think S. Sinofsky's view on Intel graphics a little
strange, remember that Intel graphics didn't become good until Apple forced
them to be better. S. Sinofsky left M$ in 2012, so the design of those Surface
devices must have happened around 2010, and to make matters worse, the
graphics on Atom were a generation behind what Intel had on the desktop.

[1]
[https://www.youtube.com/watch?v=-AxZofbMGpM](https://www.youtube.com/watch?v=-AxZofbMGpM)

------
bluedino
Intel is resilient, if anything.

They have had a ton of flops and misdirections. The 186. iAPX 432. The i740.
XScale. Itanium. The FDIV bug. Pentium 4.

AMD has had Intel up against the ropes more times than they can count. Faster
and cheaper 386 and 486 clones threatened them. The K6 came around and beat
the Pentium II. AMD leapfrogged them with their 64-bit chips. Intel has always
come back and stomped AMD right back into oblivion. AMD would probably be
close to turning the lights off if it weren't for Ryzen.

~~~
farseer
Sorry to nitpick, but Pentium 4 was a temporary flop, until the HT version
came along, for which AMD had no answer. And I don't think the K6 beat the
Pentium II. Having bought a $2000 K6-2 computer in 1998 with maxed-out RAM and
graphics, I instantly regretted that decision, as all my friends' mediocre
Pentium IIs ran applications and games better than my K6-2.

~~~
beagle3
The original HT implementation was so incredibly bad that people would turn it
off to improve performance on multithreaded loads.

~~~
georgeecollins
So true! It only worked on example code; it was very difficult to get an
improvement in a real-world app.

------
airstrike
While it may be simpler, it may also be inaccurate. The fact that Intel's
dominance isn't as pervasive as before does not mean it's no longer #1. AMD
has a long way to go.

On a related note, to my knowledge AMD is still essentially absent from the
Autonomous Driving market, with NVIDIA and Intel fighting to be top dogs
through different approaches¹. With some estimates for the size of that market
(including non-chip portions) going as high as trillions of dollars², I hope
for AMD's sake it has a plan to catch up sooner rather than later.

__________

1\. [https://www.benzinga.com/top-stories/18/02/11148965/which-
ch...](https://www.benzinga.com/top-stories/18/02/11148965/which-chipmaker-
leads-the-autonomous-driving-space)

2\. [http://fortune.com/2017/06/03/autonomous-vehicles-
market/](http://fortune.com/2017/06/03/autonomous-vehicles-market/)

~~~
abvdasker
What Autonomous Driving market? From what I have heard, right now autonomous
driving even by leaders like Waymo is incredibly small-potatoes and the rigs
running in these cars are mostly prototypes built with commodity hardware.

It's just way too early to start picking winners and losers in the hardware
space for autonomous driving, given that nothing is being mass-produced. And
there is no price competition in the space, because the major players are
willing to pay whatever it takes as part of their R&D efforts to win the race.

~~~
airstrike
This autonomous market:

[https://www.reuters.com/article/us-israel-tech-intel-
mobiley...](https://www.reuters.com/article/us-israel-tech-intel-mobileye-
exclusive/exclusive-intels-mobileye-gets-self-driving-tech-deal-for-8-million-
cars-idUSKCN1II0K7)

[https://www.nvidia.com/en-us/self-driving-cars/drive-
platfor...](https://www.nvidia.com/en-us/self-driving-cars/drive-platform/)

------
georgeecollins
I remember getting money from Intel to put MMX instructions into a game, and
later getting money to use hyper-threading. In both cases, at the time, the
improvement was at the margins and would not have been worth the work had it
not been subsidized. It seemed like the best thing you could say about it was
that it didn't work on an AMD chip.

~~~
tomerv
Nowadays, MMX (and newer SIMD instruction sets) are a crucial part of many
optimizations. Hyperthreading is provided in all modern processors. Basically,
those technologies proved themselves.
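
As a rough illustration (a sketch, not a rigorous benchmark; much of the gap below is interpreter overhead, but NumPy's element-wise kernels really are compiled with SSE/AVX SIMD where the CPU supports it):

    import time
    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Scalar Python loop: one element at a time.
    t0 = time.perf_counter()
    out_slow = [x + y for x, y in zip(a, b)]
    t1 = time.perf_counter()

    # NumPy's vectorized add: compiled C, SIMD where available.
    t2 = time.perf_counter()
    out_fast = a + b
    t3 = time.perf_counter()

    print(f"python loop: {t1 - t0:.3f}s, numpy add: {t3 - t2:.5f}s")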

~~~
georgeecollins
At the time, the newer SIMD instruction sets did not exist, and MMX was
difficult to apply in a way that got us an advantage in our application. When
you say all modern processors have hyperthreading, are you sure you don't mean
multithreading? I am not sure every processor has hyperthreading. I am not
knowledgeable enough to argue about the overall value of MMX and
hyperthreading, but I think the point of the article at the top of this page
is that in the end -- although maybe not at first -- it was counter-
productive.

------
georgeburdell
As someone who left Intel last year for one of its customers in the Valley, I
have to say I'm sad that the future looks pretty dark for them. In the exit
interview, I said I might come back if the CEO and my department head left
(they both did), but I didn't know the company was such a mess.

That said, Intel is viewed from the outside as sort of an old monopolistic
monolith, but a lot of that profit gets plowed into a bunch of truly
innovative semiconductor R&D that I do not believe is being replicated by
competitors. 3D XPoint is probably the most visible example. I doubt AMD,
TSMC, Samsung, etc. will pick up the torch.

~~~
pdimitar
So where do you feel their competitive advantage lies right now, by the way?

You make it sound like they are struggling lately but also have stuff lined up
that cannot be competed with. That's slightly confusing to me; could you
elaborate?

~~~
georgeburdell
Well, the two are not mutually exclusive. I worked there for close to 5 years
on a few different silicon-related projects that I would characterize as
unique in the market, and not a single one has even made it into an announced
product. These were not small endeavors, so either the development times for
silicon are extremely long or Intel cannot execute on its ideas. Regarding the
former possibility, I heard from people who were around since the beginning
that 3D XPoint was in development for a decade before it was announced in
2015.

~~~
pdimitar
Oh, I agree, but what you describe sounds like latent, possible-in-the-future
breakthroughs. Until they see the light of day, they are irrelevant IMO.

------
seanalltogether
From the article it sounds more like Intel kept building up a moat that
everyone got sick of crossing.

~~~
erikpukinskis
Ya.. I go back and forth in my head a LOT about the importance of moats.
Constantly second guess myself. You sacrifice SO MUCH velocity building a
moat, and you cut off any possibility of other companies helping “rise all
ships”.

But at the same time, aren’t there competent cloners lurking behind every
corner ready to jump on your product market fit and outpace you?

This article pushed me a little further into the “don’t worry about moats;
make your partners successful” camp. But I waver.

------
33degrees
Reading this makes it very clear why Apple would be moving towards using its
own chips in laptops.

~~~
MBCook
Besides everything else going on, they left 68k because it was falling behind
and couldn’t compete or speed up enough. They left the G3/G4 line when once
again they were stuck and Intel was running away.

So they went to Intel. They’re now on the same treadmill as everyone else:
they can’t do better, but they can’t be totally stuck either. But Intel isn’t
progressing very fast, and it doesn’t seem to be focusing on the things Apple
wants, like ultra-low power.

But Apple’s chip division is kicking ass. The 10.5” iPad Pro is a monster, and
could easily power a MacBook faster than an Intel chip. The real question is
what to do on the MBP and iMac Pro.

Apple has never been keen on AMD CPUs for some reason, and I’m not sure AMD
has a chance on the high end. My impression is they don’t compete on low
end/low power, but I don’t know how accurate that is.

~~~
robdachshund
Pretty sure an ARM tablet chip is not going to beat Intel, even on the low
end; otherwise we would have moved on already. Also, Apple does like AMD and
uses their GPUs because they have bad blood with Nvidia. Apple primarily makes
laptops, and AMD was not competitive there until they developed Ryzen.

~~~
godzillabrennus
[https://9to5mac.com/2017/09/22/iphone-8-geekbench-test-
score...](https://9to5mac.com/2017/09/22/iphone-8-geekbench-test-scores/)

The iPhone 8 CPU is competitive with a Core i5

~~~
21
That claim would be highly suspect in general, given that a desktop CPU has at
least 10 times more headroom in power consumption.

For such a claim to be true, you would have to expect that the phone CPU is
somehow 10 times more efficient per Watt, while having the same kind of
architecture (general CPU, not GPU/TPU/...).

~~~
Const-me
I think he might be correct.

Apple A11 has 297 GFlops CPU performance. (1)

That “7th gen i5” is apparently i5-7360U. 32 FLOPs/cycle/core (2) * 2 cores *
3.50 GHz (=max turbo) = 224 GFlops.
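
Spelled out (same figures as above, just multiplied through):

    # Peak FP32 throughput of the i5-7360U, per the FLOPs/cycle figure in (2):
    # 2 AVX2 FMA units/core * 8 fp32 lanes * 2 ops per FMA = 32 FLOPs/cycle/core.
    flops_per_cycle_per_core = 32
    cores = 2
    max_turbo_ghz = 3.50

    print(flops_per_cycle_per_core * cores * max_turbo_ghz)  # 224.0 GFLOPS
    # vs. the 297 GFLOPS cited for the Apple A11 in (1).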

> the phone CPU is somehow 10 times more efficient per Watt

Not 10 times. The i5’s TDP is just 15W. I don’t know about the A11, but
high-end Qualcomm chips can be up to 5-6W, just a 3x difference.

> while having the same kind of architecture (general CPU, not GPU/TPU).

That’s a continuum. They’re both CPUs, and they both have SIMD; that’s how
they achieve such high FLOPs/cycle. But the architectures can be quite
different. The A11 has 6 cores. Intel spends many transistors, and therefore
energy, doing very complicated things: branch prediction, indirect branch
prediction, cache synchronization between cores, speculative execution… Apple
has full control over the OS and software, so they can probably get away with
not doing some of these. Intel and AMD have to maintain compatibility with all
software (including single-threaded), built by all compilers and languages;
that’s why they have fewer tradeoffs available to them.

(1)
[https://forum.beyond3d.com/posts/2021926/](https://forum.beyond3d.com/posts/2021926/)

(2)
[https://stackoverflow.com/a/15657772/126995](https://stackoverflow.com/a/15657772/126995)

------
scarface74
_Everyone is ultimately proprietary even in the Open Source world. Once you
invest heavily in any sort of architecture or platform, you’re not moving
anywhere with your current investment._

I’ve argued this point repeatedly, in real life and on HN, with the people who
don’t want to use any of AWS’s proprietary infrastructure, or with developers
who want to use the repository pattern for the sole purpose of not “locking
themselves in” to one vendor.

I’ve never in 20+ years seen a major organization switch from a well-known
vendor to save a few dollars. Even if you theoretically can change your
infrastructure, the risks and the amount of regression testing you have to do
hardly ever make it worthwhile to switch.

As far as open source coming to the rescue: if someone else doesn’t take over
an open source project, more than likely your company won’t either.

~~~
pdimitar
I am largely in agreement with you; however, there are many well-documented
cases, right here on HN, of companies bleeding a lot of money due to their
usage of AWS and a few others. Such services are really good when you are
starting off -- they eliminate a huge upfront time cost, and for most startups
time to market is the difference between life and death. And as you start,
they are pretty cheap as well.

However, once the startup takes off, becomes cash-positive, and has to scale,
AWS in particular becomes a huge item on your balance sheet. It's a very well
executed vendor lock-in, I will give them that.

Many companies are using European VPS hosting, and once they have a mid-level
business contract that guarantees 99.9999% uptime (because the provider can
replicate to 3 separate physical data centers), they are happy with both the
service and the cost. Granted, they need their own Ops teams, but long-term
this scales much better and is sustainable.

Managed cloud infrastructure is mostly overrated, and its complexity is by
design. AWS in particular is borderline crazy lately; they have dozens and
dozens of interconnected services, and "AWS consultant" is now a commonplace
title in CVs.

~~~
scarface74
Amazon doesn’t “lock you in” with VPS hosting. If that’s what is costing you
too much, you can access all of AWS’s other services over the internet. You
can set up a VPN from your colo center to AWS and still access all of their
managed services.
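
For example, a minimal sketch with boto3 (the queue URL is hypothetical, and it assumes credentials are already configured); the same call works from an EC2 instance, a colo box, or a laptop, because the managed service is just an HTTPS endpoint:

    import boto3

    # SQS is reachable over the public internet; nothing here requires
    # that the caller be hosted inside AWS.
    sqs = boto3.client("sqs", region_name="us-east-1")
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # hypothetical
        MessageBody="hello from outside AWS",
    )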

How is it less complex to host your own database servers, load balancers,
queuing system, redundant storage, CI/CD servers, ELK stack, redundant
memcache or Redis servers, distributed job scheduler (I’ve used HashiCorp’s
Nomad in the past), configuration servers, etc.? With the managed versions,
you don’t have to manage the underlying servers, and you get redundancy.

I’m first and foremost a developer. But the money the company I work for saves
by not having to hire dedicated people to manage and babysit servers more than
makes up for the cost of AWS.

It just so happens that I know AWS well enough and have experience as an
“architect” (and have the certifications to give them a warm and fuzzy) to be
competent at the netops and devops side of things.

~~~
pdimitar
> _How is it less complex to host your own..._

It is not less complex. It is definitely more complex and harder. My point was
that financially it is more sustainable long-term. And once your org is
bigger, a dedicated Ops team gives much more peace of mind: whoever is on
shift flips 3 switches and things are back to normal in 2 minutes, 99% of the
time. That is not always the case even with a huge provider like Amazon or
Google.

> _But the money the company I work for saves by not having to hire dedicated
> people to manage and babysit servers more than makes up for the cost of
> AWS._

Disagreed. Periodically, articles pop up here on HN that show, with numbers
and historical timelines, that AWS only saves you time and white hair while
you are smaller. Once you start to scale up and/or use more of their services,
the bills start to pile up quicker than before (people have mostly analyzed
the exponential growth of their billing). Apologies that I don't keep the
links, but I remember reading at least 5 such articles in the last year, right
here on HN.

> _It just so happens that I know AWS well enough and have experience as an
> “architect” (and have the certifications to give them a warm and fuzzy) to
> be competent at the netops and devops side of things._

Good for you; many of us don't, however. I honestly have no intention to
either. It's a very specific vendor cloud system with a huge amount of
proprietary tech baked in, and I have no desire to entangle my career
prospects with its success. Career preferences. ;)

~~~
scarface74
_It is not less complex. It is definitely more complex and is harder. My point
was that financially it is more sustainable long-term._

Can you set up duplicate infrastructure close to your international facilities
worldwide for cheaper? Even support would be cheaper, because you can have one
central team manage your worldwide infrastructure. Netflix moved their entire
infrastructure to AWS. You’re not taking into account the cost of employing
people to babysit and do the “undifferentiated heavy lifting”, the cost of
over-provisioning just in case, the cost of the red tape to provision backup
hardware for failover, etc.

 _And once your org is bigger, a dedicated Ops team gives much more peace of
mind._

You can easily outsource netops to a dozen companies that can be cheaper
because they manage multiple accounts and can outsource the grunt work to
cheaper labor. I know; I’ve worked in two companies that outsourced day-to-day
management of netops.

 _Whoever is on shift flips 3 switches and things are back to normal in 2
minutes, 99% of the time. That is not always the case even with a huge
provider like Amazon or Google._

You’ve never had to deal with a colo center. Have you ever dealt with AWS
support as a representative of a large company?

 _Disagreed. Periodically, articles pop up here on HN that show, with numbers
and historical timelines, that AWS only saves you time and white hair while
you are smaller. Once you start to scale up and/or use more of their services,
the bills start to pile up quicker than before (people have mostly analyzed
the exponential growth of their billing)._

If the fixed cost of your infrastructure is growing faster than your
revenue... you’re doing it wrong. Even if you can get a simple VPS cheaper
elsewhere (and yes, you can), that’s not where you get the win from AWS. You
can host a VPS anywhere and still take advantage of all of AWS’s services.

 _Good for you; many of us don't, however. I honestly have no intention to
either. It's a very specific vendor cloud system with a huge amount of
proprietary tech baked in, and I have no desire to entangle my career
prospects with its success. Career preferences. ;)_

That’s just what the original article said. You always tie yourself to a
specific technology and risk that technology falling out of favor. The trick
is to stay nimble and keep learning. Right now, there is a lot more money and
opportunity in being a “Cloud Architect” than in knowing how to set the same
things up on-prem.

------
SubiculumCode
I can't comment on the veracity of the article, but I'd like to just add that
the article was a compelling read.

I am a fan of AMD because I like competition, especially from a spunky
competitor. I'd say the same about Nintendo, which keeps finding ways to win
by focusing on fun over specs.

------
microcolonel
> _...key assumptions...:_

> _..._

> _Discrete graphics_

I get the impression that Steven is willing to make up the history to match
the conclusion.

~~~
MBCook
How so?

~~~
microcolonel
Intel was the first to mass market with reasonably capable "integrated"
accelerated graphics (even if it was just part of the northbridge, it would be
in _every_ platform).

~~~
sjm-lbm
He also totally ignores the i740 - not a major part of the story of Intel and
perhaps a misstep, sure, but to act as if they hadn't tried to take a discrete
graphics chip to market well before AMD bought ATi is crazy.

Also, he seems to act as if Itanium was developed in response to AMD64:

>Brilliant brilliant choice by NT team was the bet on the AMD 64 bit
instructions. Seeing AMD64 all over the code drove them “nuts”.

>That led to Itanium…more proprietary distraction.

It was pretty close to the opposite. IA64 was Intel's plan for the future for
more than half of the 90s, and the spec for AMD64 wasn't even published until
2000/2001 or thereabouts. Actual AMD64 processors came out about two years
after Itanium. His quote to support the above assertion even mentions this,
which confuses me more.

------
dan_hawkins
Another angle: BK's 2016 layoffs were done in such a bad style that
experienced engineers began leaving the company in large numbers. I guess that
drained the company of a lot of talent.

------
yuhong
Thinking about it, I wonder if part of the reason for the unethical tactics
Intel used in the mid-2000s was fear that revenue would decline quickly if its
market share dropped to, say, 10%.

------
treis
Am I missing something here? Intel's operating income is higher than it's ever
been. They aren't being disrupted by any meaningful measure of disruption.

------
theandrewbailey
Was this post written on a phone? The amount of abbreviations ("Ppl", "gfx")
and emojis is bothersome.

~~~
popsomoa
If that bothers you, maybe the internet is not for you... :P

Anyways, seeing people communicate and express themselves in different ways is
pretty cool to me.

~~~
ed312
Colloquial expressions and shorthand generally make a written work harder to
comprehend for a broad audience. Broad comprehension and dissemination are
generally what you want in written work.

------
dagenix
What's the deal with all of the random italics disrupting the flow of the
article?

~~~
MBCook
I believe they’re quotes from Ben Thompson’s article, but it took me until
about halfway through to figure that out.

