
Amazon’s Chips Threaten Intel - zone411
https://www.nytimes.com/2018/12/10/technology/amazon-server-chip-intel.html
======
maddyboo
Just yesterday, I was trying to explain to my partner (who isn't a programmer)
why I think open source software and hardware is so important. My argument is
that if not enough core components of the industry-standard tech stack are
open source, companies who develop solutions become more likely to restrict
user freedom.

For example, Apple has been able to own nearly their entire iDevice stack from
manufacturing to silicon to firmware to OS to ecosystem. They have very little
incentive to interoperate with external organizations via open standards
because they own many of the pieces which need to be interoperable. Thus, they
can force users and developers to use their tooling, the applications they
approve of, dictate what code will run on their platform, and how easily you
can inspect, modify, or repair their products.

This is all to say, it is easy to imagine a future where all performance-
competitive technology is built entirely on proprietary, locked-down stacks
– AND – at a level of complexity and specificity such that independent
people simply cannot create competitive solutions. It could be back to the
days of the mainframe, but worse, where only the corporations who create the
technology will have access to the knowledge and tools to experiment and
innovate on a competitive scale.

Amazon wants developers to build solutions entirely on top of their platform
using closed-source core components. They also want to control the silicon
their own platform runs on. In 10 years, what else will they own, how much
will this affect the freedom offered by competitors, and what impact will it
all have on our freedom to build cool shit?

~~~
reikonomusha
For Apple, I find almost the opposite in terms of forced development. If I
want to write a program on macOS, I can expect the porting effort to Linux to
be simple if not trivial, thanks to UNIX.

Compare that to Windows, which has ostensibly less control, but I continue to
find to be a massive pain to develop for.

With that said, if macOS lost UNIX, I’d be done.

~~~
Meai
You are speaking from inside the walled garden here; from the outside, the
effort of porting to macOS is monumental. The naive way is: I have to buy
hardware, learn new Apple-specific languages, learn a new OS, learn a new IDE,
be compliant with their App Store policies, distribution, etc. (which applies
to Windows now too, I suppose).

~~~
reikonomusha
I suppose I'm an outlier then. I just use normal UNIX tools. No Xcode, no
IDEs, no proprietary toolchains. Maybe my programs are too boring. :)

There are some portability quirks for sure, though.

~~~
Koshkin
Why not Linux, then? Why pay "the Apple tax"?

~~~
aoeusnth1
(Not the OP.) I develop Linux services on a Mac, connected remotely to a Linux
workstation. My company pays the Apple tax, and the hardware is nicer. Plus,
my friends use iMessage.

I tried using a company Linux device only to find that its graphics drivers
weren't supported, that 4K scaling didn't work nicely without spending ~30
minutes looking up how to do it the hard way, and that it didn't play nicely
when connected to a normal 1080p monitor. I returned the device after
(thankfully) it failed.

------
40acres
Mike Tyson said: "Everyone has a plan until they get punched in the face."
When it comes to semiconductors I'd say: "Everyone wants to make their own
chips until they have to do so at scale." (Doesn't roll off the tongue as
well!)

There is definitely a threat from Apple, Amazon, Google, and especially China
that will put Intel's market share in their sights, but making chips at
scale is incredibly difficult. It's hard to see Amazon transitioning their AWS
machines to Amazon-built chips, but if they display competency they'll
certainly be able to squeeze more out of Intel.

~~~
Nokinside
Intel is a microarchitecture + fab corporation. They do it all.

1. TSMC (also GlobalFoundries) is fab only. They design the process node and
the way to fabricate it.

2. Then ARM works with TSMC to develop a high-performance microarchitecture
for their processor designs on TSMC's process.

3. Then ARM licenses the microarchitecture designed for the new process to
Amazon, Apple, and Qualcomm, who develop their own variants. Most of the
processor microarchitecture is the same for a given manufacturing process.

As a result, costs are shared to a large degree. Intel may still have some
scale advantages from its integrated approach, but not as much as you might
think.

~~~
dnautics
My personal suspicion is that the integrated approach can eventually be a
liability. If you have an integrated process/design house, process can count
on design to work around its shortcomings and failures. By contrast, if you
are process only, and multiple firms make designs for the process, you have to
make your process robust, which means that your process staff is ready and has
good practices down when it's time to shrink.

^^ Note that this is entirely baseless speculation.

~~~
Nokinside
Intel has always been a process first, design second company. The company was
founded by physicists and chemists. Their process has always been the best in
the world until just recently. Intel brings in or buys design talent when
needed, but their R&D in process technology is their strongest suit even
today.

~~~
stcredzero
_Intel has always been a process first, design second company. The company was
founded by physicists and chemists. Their process has always been the best in
the world until just recently._

So they had a particular advantage, and exploited the heck out of it, but now
the potency of that advantage is largely gone?

~~~
n-gatedotcom
I don't know what country you're in, but in cricket there's a concept of
innings and scoring runs. There's this dude who averaged nearly 100 across his
innings; most others average 50.

Now think of the situation as him scoring a few noughts. Is he old and
retiring? Or is this just a slump in form? Nobody knows!

I worked for a design team and we were proud of our materials engineers.

~~~
stcredzero
Back in about 1996, most of the profs were going on about how x86 would
crumble under the weight of the ISA, and RISC was the future. One of my profs
knew people at Intel, and talked of a roadmap they had for kicking butt for
the next dozen years. Turns out, the road map was more or less right.

Is there more roadmap?

~~~
hermitdev
Intel chips are RISC under the hood these days (and have been for a long
while: a decade or more). They're CISC at the assembly layer, before the
instructions are decoded into micro-ops and dispatched.
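To make that concrete, here is a toy sketch (purely illustrative; not Intel's
actual decode logic) of how a single memory-operand CISC instruction splits
into RISC-like micro-ops:

    # Toy model of CISC-to-micro-op translation. The real decoder is far
    # more involved, but the load/op/store split is the basic idea.
    def decode(insn):
        op, dst, src = insn
        if op == "add" and dst.startswith("mem"):
            return [("load", "tmp0", dst),    # read the memory operand
                    ("add", "tmp0", src),     # register-only ALU micro-op
                    ("store", dst, "tmp0")]   # write the result back
        return [insn]                         # simple instructions map 1:1

    print(decode(("add", "mem[rax]", "rbx")))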

~~~
earenndil
CISC is still an asset rather than a liability, though, as it means you can
fit more code into cache.

~~~
chroma
I don't think that's an advantage these days. The bottleneck seems to be
decoding instructions, and that's easier to parallelize if instructions are
fixed width. Case in point: The big cores on Apple's A11 and A12 SoCs can
decode 7 instructions per cycle. Intel's Skylake can do 5. Intel CPUs also
have μop caches because decoding x86 is so expensive.
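To illustrate the parallelism point, a toy sketch (the instruction lengths
are hypothetical, just to show the dependency chain):

    # Fixed-width ISA: every instruction boundary is known up front, so all
    # decode slots can work in parallel.
    starts_fixed = [i * 4 for i in range(4)]   # [0, 4, 8, 12]

    # Variable-length ISA like x86: instruction N+1's start depends on
    # instruction N's length, so finding boundaries is inherently serial.
    lengths = [1, 3, 2, 5]                     # hypothetical lengths in bytes
    starts_var, pos = [], 0
    for n in lengths:
        starts_var.append(pos)
        pos += n                               # must finish N before N+1

    print(starts_fixed, starts_var)            # [0, 4, 8, 12] [0, 1, 4, 6]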

~~~
gpderetta
Sure, but Intel CISC instructions can do more, so in the end it's a wash.

~~~
chroma
That's not the case. Only one of Skylake's decoders can translate complex x86
instructions. The other 4 are simple decoders, and can only transform a simple
x86 instruction into a single µop. At most, Skylake's decoder can emit 5 µops
per cycle.[1]

1.
[https://en.wikichip.org/wiki/intel/microarchitectures/skylak...](https://en.wikichip.org/wiki/intel/microarchitectures/skylake_\(client\)#Decoding)

~~~
jawnv6
... so what? Most code is hot and should be issued from the μop cache at
6 μops/cycle with an "80%+ hit rate", per your source.

You're really not making the case that decode is the bottleneck. Are you
unaware of the mitigations that x86 designs have taken to alleviate it? Or
are those mitigations your proof that the ISA is deficient?

------
kelp
I'm by no means an expert in this, and maybe it's a bit obvious, but I hadn't
seen this mentioned yet.

I think as we run out of gains to be had from process size reductions, the
next frontier for cloud providers is in custom silicon for specific workloads.
First we saw GPUs move to the cloud, then Google announced their TPUs.

Behind the scenes, Amazon's acquisition of Annapurna Labs has been paying off
with their Nitro ([http://www.brendangregg.com/blog/2017-11-29/aws-
ec2-virtuali...](http://www.brendangregg.com/blog/2017-11-29/aws-
ec2-virtualization-2017.html)) ASIC, nearly eliminating the virtualization
tax, and providing a ton of impressive network capabilities and performance
gains.

The Graviton, which I believe also came from the Annapurna team, is likely
just the start. Though a general purpose CPU, it's getting AWS started on
custom silicon. AWS seems all about providing their customers with a million
different options to micro-optimize things. I think the next step will be an
expanding portfolio of hardware for specific workloads.

The more scale AWS has, the more it makes sense to cater to somewhat niche
needs. Their scale will enable customization of hardware to serve many
different workloads and is going to be yet another of Amazon's long-term
competitive advantages.

I think that will show up in two ways: hardware narrowly focused on certain
workloads, like Google's TPUs, that shows really high performance, and
general-purpose CPUs like these Gravitons that are more cost-efficient for
some workloads.

I see echoes of Apple's acquisition of P.A. Semi that led to the development
of the A-series CPUs. My iPhone XS beats my MacBook (early 2016) on multi-core
Geekbench by 37%. (And on single core, it's only 10% slower than a 2018
MacBook Pro 15.)

If Amazon is able to have similar success in custom silicon, this will be a
big deal.

I think early next year we'll test the a1 instances for some of our stateless
workloads and see what the price/performance really looks like.
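For anyone curious, the test itself is quick to kick off; a minimal boto3
sketch (the AMI ID is a placeholder, and you'd substitute a real arm64
image for your region):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one Graviton-based a1.medium. ImageId is a placeholder; look
    # up a current arm64 AMI before running this.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="a1.medium",
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])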

It does make me worry that this sort of thing will cement the dominance of
large cloud providers, and we'll be left with only a handful (3?) of real
competitors.

~~~
snaky
> the next frontier for cloud providers is in custom silicon for specific
> workloads

Sure, the cloud is a classical mainframe, and mainframes are famous for using
specialized hardware for pretty much everything.

------
comboy
This can be powerful. They don't have to build a general CPU right away. Start
with storage, and by the time you have database boxes on ASICs designed to
match your software, you're already winning.

I'm surprised there's still not much effort to make FPGAs more affordable and
base everything on them. At this scale it seems like it should be a win in the
long run over deploying new ASICs every few years.

~~~
adamson
I'm pretty out of the loop here. Are there existing, widely used workloads
that are critical for storage for which FPGAs are competitive with CPUs?

~~~
comboy
I'm just guessing that if the only thing a given box is doing is handling a
very specific internal S3 API, you can probably optimize a few things over a
multi-purpose architecture.

I'm totally green here, and I'll be honest: when I want to learn about
something, it seems that stating some thesis gets me way more information from
HN than asking a question, especially when it turns out to be wrong ;)

~~~
jawnv6
This is a horrendously disrespectful way to learn about a niche area. I'm
shocked to see it laid out so plainly like that.

Your first assertion here is incredibly wrong. ISAs don't split up as cleanly
as your fictional version; a NAS box has to support all the same branch,
arithmetic, and memory operations as a "multi-purpose" architecture. The only
conceivable things you'd bolt on would be things like NEON accelerators for
AES, and there are better ways to do that than mucking about with the ISA.
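Those bolt-on crypto extensions are visible as CPU feature flags, for what
it's worth. A Linux-only sketch, assuming /proc/cpuinfo is present:

    # "aes" appears in the Features line on ARM and in the flags line on
    # x86 when a hardware AES extension is present.
    with open("/proc/cpuinfo") as f:
        flags = f.read().replace("\n", " ")
    print("hardware AES:", " aes " in flags)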

Do you get folks coming back for a second reply after this charade is made
apparent?

~~~
basilgohar
And yet, here he/she gets exactly the result they were looking for. It's a
well known online trope that you get your question answered faster and more
thoroughly by posting a wrong answer first rather than plainly asking. My
guess is it triggers something primal in us geeks.

See: [https://xkcd.com/386/](https://xkcd.com/386/)

~~~
jawnv6
It's still an incredibly disrespectful way to approach a community, and all of
these replies ignore the thrust of my question about the expert re-upping
after this ruse has been made apparent.

Comboy spread a lot of disinformation in the first post, like "I'm surprised
there's still not much effort to make FPGAs more affordable and base
everything on it." before the lie was laid bare. Looking forward to arguing
with "FPGA experts" who harken back to that post as their primary source.

------
ChuckMcM
From the article: _" Amazon licensed much of the technology from ARM, the
company that provides the basic technology for most smartphone chips. It made
the chip through TSMC, a Taiwanese company."_

Amazon became an ARM architecture licensee and had their variant manufactured
for them by TSMC.

I find the characterization "home grown" a stretch here; had they designed
their own instruction set, etc., I might agree.

That said, the interesting thing about this article is that given Intel's
margins, a company like Amazon feels they can take on the huge cost of
integrating a CPU, support chips, and related components to achieve $/CPU-OP
margins that are better than letting Intel or AMD eat all of that design and
validation expense.

This sort of move by Amazon, plus AMD's move to aggressively price the EPYC
server chips, really puts a tremendous amount of pressure on Intel.

~~~
projektfu
I think it's fair to call it home grown, like in house, but to say it
threatens Intel is like saying that Amazon has introduced a server chip that
will take over everyone's datacenters from Intel. Seems unlikely. Amazon also
has to be careful now not to run afoul of anyone's IP as they can't farm out
that responsibility to other providers.

~~~
Despegar
Well I'd say it's certainly a big risk to Intel. For one, a lot of Intel
customers are going in-house with chip design. Apple on its own might not hurt
much, but add a few more like Amazon or Google, and things can really unravel.
If you're an integrated chip designer and you lose volume, you're in for a lot
of pain.

The thing that defines Amazon in recent years is their desire to make
everything a third party service (AWS, Fulfillment, etc) and they may in fact
do that for their own chips. So while Apple may never sell their chips to
anyone, Amazon may decide to enter the merchant chip business (if they decide
it's not a competitive advantage). Maybe they wouldn't sell it to Microsoft or
Google, but certainly other companies that they don't compete with that
operate their own servers (Facebook). And then Intel would really be losing
volume.

~~~
stcredzero
_The thing that defines Amazon in recent years is their desire to make
everything a third party service (AWS, Fulfillment, etc) and they may in fact
do that for their own chips._

AWS was Amazon monetizing its own infrastructure. Maybe they're thinking of
monetizing AWS's infrastructure? Instead of being in the gold rush, sell the
pickaxes and backpacks in a general store. Then, when people realize there's a
lot of money in those stores, start selling store shelves and offer wholesale
logistics.

~~~
justicezyx
> AWS was Amazon monetizing its own infrastructure.

AWS builds infrastructure and monetizes it.

AWS's hardware usage far exceeds what they need for their other businesses.

------
acqq
The article is non-technical. For those searching for technical detail:

[https://www.theregister.co.uk/2018/11/27/amazon_aws_graviton...](https://www.theregister.co.uk/2018/11/27/amazon_aws_graviton_specs/)

"Semiconductor industry watcher David Schor shared SciMark and C-Ray
benchmarks for the 16-core Graviton. In the SciMark testing, the AWS system-
on-chip was twice as fast as a Raspberry Pi 3 Model B+ on Linux 4.14."

[http://codepad.org/wZe5SrjI](http://codepad.org/wZe5SrjI)

""It does well on the Phoronix Test Suite," he said. "It does poorly
benchmarking our website fully deployed on it: Nginx + PHP + MediaWiki, and
everything else involved. This is your 'real world' test. All 16 cores can't
match even 5 cores of our Xeon E5-2697 v4.""

"The system-on-chips use a mix of Arm's data-center-friendly Neoverse
technology, and Annapurna's in-house designs. The 16 vCPU instances are
arranged in four quad-core clusters with 2MB of shared L2 cache per cluster,
and 32KB of L1 data cache, and 48KB of L1 instruction cache, per core. One
vCPU maps to one physical core."
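A quick back-of-the-envelope check of the totals those numbers imply:

    # Quoted layout: 4 clusters x 4 cores, 2 MB shared L2 per cluster,
    # 32 KB L1d + 48 KB L1i per core.
    clusters, cores_per_cluster = 4, 4
    total_l2_mb = clusters * 2                              # 8 MB of L2
    total_l1_kb = clusters * cores_per_cluster * (32 + 48)  # 1280 KB of L1
    print(total_l2_mb, "MB L2 /", total_l1_kb, "KB L1")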

------
ksec
Designing an ARM chip from an ARM blueprint and fabbing it at TSMC is
relatively simple and cheap for Amazon. And there is enough market and hype to
justify the investment, as they will probably break even within 24 months. It
has become obvious that Intel isn't really willing to lower prices in a way
that affects margins and sales. So Amazon needs to make a statement to Intel
that they have lots of options: EPYC and ARM.

I don't want to hype Zen 2 / EPYC 2, but I do think it will be very
competitive. And that is a threat to Intel. Fundamentally, though, the REAL
threat is neither ARM, AMD, nor even RISC-V; it is TSMC.

------
becauseiam
The article misses that months before the ARM announcement, AWS announced
AMD-based instances in the m5 and r5 classes, which are cheaper than the
default Intel offerings. If anything, _that_ is what Intel should be afraid
of, because the achievable workloads are comparable.

~~~
TazeTSchnitzel
AMD are in a great place. TSMC's 7nm is in great shape (unlike Intel's 10nm),
and AMD's multi-chip architecture lets them get vastly better effective
yields for high-core-count “chips”. They will be selling CPUs superior to
Intel's, and making them at significantly lower cost, while Intel struggles
to compete, stuck with monolithic chips on 14nm.

~~~
stdplaceholder
I have not met anyone who deployed the new AMD stuff at scale and is happy
with the outcome. The new architecture shines on small codes like SPEC and
then falls apart in large, branchy, pointer-chasing codes that everyone runs
in production. I would not say AMD is “in a great place” with their current
product. They are putting slight pressure on Intel on the very low end and
filling some very specialized niches but that’s about it.

~~~
evancox100
Any public sources? Just curious

~~~
stdplaceholder
I don't think anyone has an incentive to release findings because that will
just sour their future work with AMD and counteract any pricing concessions
they are getting for waving the AMD platform around under Intel's noses. Same
for POWER, for what that's worth.

------
twoodfin
Pretty funny to see “Dave Patterson” described as “a Google chip specialist”!

~~~
cbsmith
I was like, "is that the same Dave Patterson?", and then thinking about Google
& typical tech journalism I realized, "yeah, of course it is".

------
Solar19
Why ARM though? The article touts how this is a homegrown chip, and Amazon
obviously has the resources to build a truly homegrown, optimized CPU. Why use
ARM instead and import all of its idiosyncrasies?

I guess I could ask the question more broadly. Why does every company that
"designs its own chip" use ARM instead of designing its own ISA? How much work
does it save? How much optimization does it forsake? I'm reminded of John
Regehr's post on discovering the optimal instruction set:
[https://blog.regehr.org/archives/669](https://blog.regehr.org/archives/669)

~~~
twtw
> how much work does it save?

Really a lot of work. Software compatibility and availability will make or
break a processor. Unless you own the entire stack and are willing to deal
with the struggles of a custom ISA, nobody wants to make a new one - the
benefits probably aren't that great.

~~~
kelp
This can't be overstated for a cloud provider, especially one like AWS that
wants to run everything for everyone. ARM has good OS and compiler support,
and if you want people to move some workloads off x86 to another architecture,
you're gonna want that migration to be as painless as possible.

------
j1vms
Well, guess Intel's thinking "there's always Microsoft (Azure)."

I don't think it's settled which of Azure or AWS captures the most market
share in the next decade. AWS has a lot going for it but MS is coming in hard
and fast on the OSS-to-cloud integration front. It probably would have made
sense for Amazon to pick up Red Hat from IBM.

~~~
stcredzero
Microsoft can compile everything on ARM if they want to. No reason why Azure
would be completely stuck on Intel chips.

~~~
cbsmith
They can compile "everything" that Microsoft has written to ARM. There's a
small matter of, you know, all the other stuff.

~~~
MBCook
Make the servers cheaper and people will use them.

It already doesn't matter for a lot of people. Java? PHP? JS? C#? If you're
using an interpreted language, then as long as the interpreter is updated you
don't need to care.
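A trivial illustration: the same script runs unmodified on either
architecture, and only the reported machine string differs.

    import platform

    # Prints "x86_64" on an Intel instance and "aarch64" on a Graviton one;
    # nothing else in the script has to change.
    print(platform.machine())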

Outside of that, costs rule the day. If the ARM servers are noticeably
cheaper, people will be incentivized to make their software run on them.

Tons of open source software runs on Windows. Why isn't Windows more common on
Azure and AWS? It costs more per hour. So those who could moved.

~~~
jusssi
> Tons of open source software runs on Windows. Why isn’t windows more common
> on Azure and AWS? It costs more per hour. So those who could moved.

Until we get something more practical than RDP mouse slinger GUI for remote
admin, I doubt serious people want to use Windows, even if it cost the same.

~~~
cbsmith
We crossed that threshold a very, very long time ago.

------
erikpukinskis
Apologies in advance for the layperson question:

It's my understanding that a lot of CPU gains come from caching. That suggests
to me that there is potential performance to be gained by caching across a
larger number of machines.

Is that something Amazon could do here? Somehow connect all their machines and
cache in a huge space?

Maybe individual physical machines would be more like a front end for a cache
space, and when I get an Amazon instance, it's actually a "virtual" CPU that
pieces together instructions that are mostly already cached in various places
throughout the network?

Is that even theoretically possible, or is it total fantasy?

~~~
rblatz
Cache is insanely fast: orders of magnitude faster than RAM, and basically
instant compared to going to disk or another machine on the network. I would
find it unlikely that they could overcome the added network latency introduced
in such a system.

Edit: check this out for more info
[https://people.eecs.berkeley.edu/~rcs/research/interactive_l...](https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html)
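Pulling the relevant order-of-magnitude figures out of that page (approximate
values, not measurements):

    # Rough numbers from "Latency Numbers Every Programmer Should Know".
    latency_ns = {
        "L1 cache reference": 1,
        "main memory reference": 100,
        "round trip within same datacenter": 500_000,
    }
    l1 = latency_ns["L1 cache reference"]
    for name, ns in latency_ns.items():
        print(f"{name}: {ns:>9,} ns ({ns // l1:,}x an L1 hit)")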

~~~
sharpneli
And to give the 1ns L1 access time some physicality: during 1 nanosecond,
light in vacuum travels 30 centimeters, or about 12 inches. Signals in
conductors travel slower. It is a ridiculously short amount of time.

This means that there will absolutely never* be anything farther away from the
CPU than that distance that can give faster access. Or more precisely, half
that distance, since the request for which part of the cache to read must
first travel to the cache itself.

* Unless we find out FTL is possible. But it's a rather safe bet to assume no.
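The bound works out like this (simple arithmetic, assuming signals at the
vacuum speed of light, which real wires don't reach):

    c = 299_792_458   # speed of light in vacuum, m/s
    t = 1e-9          # 1 ns, roughly an L1 hit
    # Round trip: the request travels out and the data travels back, so the
    # cache can sit at most c*t/2 away from the core.
    print(f"max distance: {c * t / 2 * 100:.1f} cm")   # ~15.0 cm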

------
DeathArrow
It's not exactly a threat. Amazon uses Cortex-A72 cores in their CPUs, and
there's no way they can replace most Intel CPUs with those. The performance
isn't there.

~~~
simonh
The threat is the potential for this to encroach on Intel's territory in the
future, not necessarily this specific chip. The article makes this clear by
opening with talk of a 'new line of work' and 'going the do-it-yourself
route'. This is about the trend, not the moment and I thought they did a good
job clearly framing it that way.

------
plinkplonk
(Tangential) What I'd really like to see is a competitor to Google's TPU that
I can actually buy (vs rent on the cloud)

~~~
DeathArrow
Check here: [https://www.amazon.com/PNY-TCSV100MPCIE-PB-Nvidia-
Tesla-v100...](https://www.amazon.com/PNY-TCSV100MPCIE-PB-Nvidia-
Tesla-v100/dp/B076P84525)

------
mmaunder
Amazon sell compute to the world. This is vertical integration and makes sense
at a certain scale. That scale has to be massive, but they appear to have
reached it. They may have chosen to execute sooner if they also plan to sell
the chips to hardware vendors like Dell and the financials check out.

------
karakanb
Truly lame question, is there any possibility for Amazon or other cloud
providers to monitor the executed instructions and their distribution? Could
this allow more optimized architectures for specific loads, or would this not
bring any actual benefit?

------
syntaxing
I thought Amazon was just licensing a custom design from ARM and manufacturing
at TSMC. It's a step in the right direction, but it'll be a good number of
years before Amazon has its own fab making its own chips.

------
nickik
Too bad they are not doing this with RISC-V. Getting a large customer and
high-performance implementations would have been a great boost.

Since this is about vertical integration for them it would make a certain
amount of sense.

~~~
DeathArrow
Bad for RISC-V, good for Amazon. RISC-V is in a far less usable state than ARM
ISA.

~~~
nickik
Why would it be bad for RISC-V if a large company invest money and moves
workloads to it?

Of course RISC-V is not as far along as ARM, but if Amazon really had that
strategy, the ISA would hardly be their primary problem.

------
novaRom
In the near future we will see more and more chips coming from Asia. Not just
final silicon production, test, and packaging, but also complete hardware
design. They will produce GPUs, FPGAs, and CPUs of all classes.

Look at how many students in computer design, digital design, and electrical
engineering graduate every single year. Multiply that by low costs and high
productivity, and you will find Silicon Valley facing a very strong
competitor.

~~~
yonkshi
I think we will also see proportionally more and more chips coming out of US
companies as well.

When Google started their own chips (TPUs), they just kickstarted a vertical
integration race amongst the tech giants. Apple, Intel, Nvidia were the old
players, Google is dipping their toes in with TPU, now Amazon and probably
soon Microsoft and FB.

I think overall we will see more chips from both Asia and the US.

~~~
evancox100
Microsoft has been doing custom silicon longer than either Google or Amazon,
as part of the Xbox security-related functionality, I believe. And then other
consumer products like HoloLens, the custom pen ASIC in Surface, etc. Now
Azure Sphere (more a matter of giving away silicon IP to others, but they
still have their fingers in it).

Edit: All this in addition to their widespread and well documented usage of
FPGAs for Azure network offload and Bing acceleration.

------
mehdix
Open source has won in software; however, it mostly runs on closed-source,
proprietary hardware. Perhaps open hardware is the ultimate answer.

------
aneutron
They're ignoring the fact that it's not even the same ISA, nor the same use
cases or maturity for these ISAs, and that for them to actually produce an x86
chip, the only way is to license from either AMD or Intel, as IIRC they're the
only ones holding the x86 patents.

That, or change the way a majority of the software stack has been written for
the past 15 years.

~~~
gpderetta
A lot of the software stack was rewritten in the past 10 years to make sure
that it would work well on ARM, though. ARM itself contributed to the effort.

------
tmaly
As easy as it is to license from ARM and use TSMC as a foundry, it does not
make much economic sense.

Look at Google's purchase of Motorola. They ended up selling Motorola to
Lenovo.

The hardware industry, and specifically the chip industry, is very
specialized. If your core competency is not chip making, it really does not
make sense to try to enter the industry.

------
skybrian
You might compare this with Google's partnership with IBM, using POWER chips
in their data centers.

[https://www.fool.com/investing/2018/03/22/googles-data-
cente...](https://www.fool.com/investing/2018/03/22/googles-data-centers-now-
have-ibm-inside.aspx)

------
atonse
I wanted to try to use an AWS ARM server as a bastion host running Wireguard.
But a t2.nano was cheaper.

Anyone get it working? (I haven’t tried)

~~~
broknbottle
The t2.nano without unlimited burst credits relies on staying under a usage
threshold. Why wireguard over something like sshuttle?

~~~
kelp
FYI, t3 instances have an option to either be throttled when you exceed your
burst credits or just pay more for burst.

~~~
cthalupa
T2 offers this as well.

~~~
kelp
Oh you're right! I'd missed the launch of t2 unlimited.

Looks like with t3 they've swapped the defaults: t3 now defaults to unlimited,
while with t2 it's an option.
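For reference, opting an instance into unlimited bursting is a single API
call; a minimal boto3 sketch (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Flip a t2 instance to unlimited bursting (t3 now defaults to this).
    ec2.modify_instance_credit_specification(
        InstanceCreditSpecifications=[
            {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
        ]
    )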

------
fareesh
How does competition work with regard to trade secrets like chip design etc?
If I hire the top dogs at a chip maker and they design a similar chip for my
company from memory, how is this prevented from happening?

~~~
wmf
There are patents and non-competes in some states but basically nothing
prevents it. Silicon Valley exists because of the cycle of engineers leaving
established companies to start startups (or now, leaving established companies
to go to companies that are branching out into every area possible).

~~~
fareesh
Would patenting the chip design also expose the very secret that is being
guarded? So someone in China could just read the patent and make the same
thing, is that right?

~~~
wmf
Some companies patent all their ideas, including the ones they don't use. This
creates a chaff effect where you can't be sure which ideas you should copy.

There are certainly trade secrets that aren't patented but the whole point is
that they aren't protected as much.

------
sunstone
And Amazon is not alone. Other potential candidates to add to the list include
AMD, Apple, and Qualcomm. Maybe a few others that don't come to mind right
away.

------
StreamBright
Not only Amazon's chips, but Apple's and Jiāngnán's as well. I think Intel
really should crank up innovation if they would like to stay competitive.

------
cronix
I could almost hear Larry Ellison snickering in the background as I read this.
I wonder how much Amazon pays Oracle, anyway? I think in '16 it was around
$60M (according to Ellison:
[https://www.forbes.com/sites/bobevans1/2017/12/12/oracles-
la...](https://www.forbes.com/sites/bobevans1/2017/12/12/oracles-larry-
ellison-challenges-amazon-salesforce-and-workday-on-the-future-of-the-
cloud/#62e137513522) )

~~~
PowerfulWizard
Some more info here:
[https://perspectives.mvdirona.com/2018/11/1227/](https://perspectives.mvdirona.com/2018/11/1227/)

------
kev009
Yawn. The amount of free publicity ARM has gotten for the past decade to
enter the data center is perplexing. The economics of these designs are not
good versus the competition, full stop.

There are three chips that work well in the data center: EPYC, Xeon, and
POWER. All of these are billion-dollar designs. Nothing about the ARM
ecosystem supports designing to these same constraints or spending that amount
of money to enter this space seriously.

------
srinikoganti
This is an oversimplification.

Binary compatibility with the millions of third-party libraries out there is
the biggest hurdle.

How many years did it take to switch from Python 2.7 to Python 3?

AWS lock-in is a bigger threat.

I prefer to keep my binaries cloud-agnostic and x86/x64-compatible rather than
writing AWS-only code or even ARM binaries.

By the way, what happened to Amazon's phones?

~~~
leowoo91
What worries me more is that we are all stuck on the idea of being locked in
to a vendor. Why can't we focus on innovation instead? Take Instagram, for
example: they started with AWS, right? Then they moved to real hardware within
a few weeks of the acquisition (if I remember correctly). I understand the
cost of unseen problems is high, but it's good to remember that much also
depends on delivery speed.

------
ejz
Um, and AMD.

------
trumped
An Amazon chip would be the last thing I would want to buy, seeing how awful
their tablets are...

------
imtringued
Why should Intel be scared of AWS selling overpriced ARM servers? Scaleway
offers 8-core servers with 8GB for almost half the monthly price of the
cheapest ARM instance (with 1 core) that Amazon offers. Even the 16-core
instance doesn't make sense: Packet.net will give you a whole 96-core server
for the same price with 4x the memory. Capable ARM hardware has existed for
years already, and AWS is barely able to catch up.

Yes, I know AWS' target group is small startups and big enterprises who
couldn't care less about how much their servers cost them. But at the same
time, this means switching to a new architecture for meager cost savings isn't
attractive to them at all. They are willing to pay a premium if it means that
things "just work". As long as this barrier exists, ARM servers have zero
chance of threatening Intel.

~~~
blihp
Because this is just their first generation of chips. Should Amazon stick with
it, they will get better at it and costs will come down while performance goes
up. This should scare Intel because they have been banking on the data center
for future growth. This looks like pretty much their last stand for a place
where they can maintain margins. They lost mobile very early on. More recently
it's looking like PCs are at risk thanks to competition from AMD. If they
start losing their large volume enterprise customers, they'll need to come up
with a plan D.

~~~
beagle3
That might be the first generation under the Amazon brand, but my 3-year-old
Synology uses an Annapurna CPU; Amazon's chips are built by the Annapurna team
(acquired by Amazon).

~~~
blihp
There's a world of difference between designing a chip for a range of
customers (i.e. pre-Amazon acquisition) and designing a chip for one customer.
So provided Amazon does a reasonably good job of managing the acquisition,
this really should be looked at as a first generation (despite however many
iterations occurred before the acquisition) since their design constraints
(workload/environmental/power/thermal) have likely been altered significantly.

------
paulie_a
AMD has been kicking the shit out of Intel for 30 years. No one seemed to
notice. Their 386 was faster than Intel's 486. The octa-core that is ten-ish
years old is still competitive against non-Xeon chips. At the time it
completely destroyed Intel's chips at half the price or less. Intel is doing
nothing new, just gliding. They haven't done anything interesting in 20 years;
the P4s and RDRAM were an incredible pile of junk. The Itanic... that says it
all.

It's cute that I get a downvote instead of an actual rebuttal. AMD has been
doing great work for a far lower price; Intel has been phoning it in for
decades, and for a good stretch was making inferior technology.

