
AMD Ryzen 7 1800X Benchmarked – Giving Intel’s $1000 Chips A Run For It - mrb
http://wccftech.com/amd-ryzen-7-1800x-8-core-benchmarks/
======
nottorp
More importantly for some, it's a 95W AMD part vs a 140W Intel part. If the
power delta stays the same even for mainstream parts, does this mean AMD will
become a good quiet option too?

By quiet I mean "I can't hear it at night" not "I can't hear it over the
music".

Edit: Also, I think data centers these days are limited in density by power
delivery and cooling. Even a 25W delta adds up when you're talking about
thousands of servers.

~~~
jsheard
TDP is a bit tricky to compare in this situation. The TDP of Intel's HEDT
chips is dictated by their 256-bit wide AVX units, which drive power
consumption through the roof when fully loaded; the chips don't get nearly as
hot when AVX isn't being used. Ryzen doesn't have this edge case because it
sticks with 128-bit SIMD units and runs 256-bit AVX instructions over two
clock cycles.

I expect the "140W" Intel part will have power consumption closer to Ryzen's
in non-AVX loads, and will outperform Ryzen in AVX loads while using more
power.

~~~
Symmetry
Intel chips use a lower base clock frequency when they're running AVX code
than when they're running non-AVX code, for the very reasons you mention. In
Intel's example[1] they talk about a base frequency of 2.3 GHz without AVX
dropping to 1.9 GHz with AVX. That drop isn't much in frequency terms, but you
also have to figure in the voltage reduction that the lower frequency allows,
which cuts power much more than it cuts frequency. The end result is power
dropping with somewhere between the cube and the square of frequency,
depending on whether active or leakage power is dominating. Which is to say
that they can save something like 30% to 40% of their power by reducing the
frequency at the same time they're lighting up a large area of new silicon. So
I assume they chose the specific frequencies so that they're always hitting
the same TDP whether they're making use of AVX or not.

Of course, performance scales not too far from linearly here, so going to AVX
is a large net performance win, as you say. It's just that I'd assume equal
power consumption on non-AVX loads.

[1] [http://www.intel.com/content/dam/www/public/us/en/documents/...](http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/performance-xeon-e5-v3-advanced-vector-extensions-paper.pdf)

~~~
iheartmemcache
I agree with you, but when I clicked your footnote source, oh god - the
marketing wank. The problem with all of these benchmarks (starting with
LINPACK in 1979, up through the modern benchmarks used in the TOP500 or the
TPC, which models a bank for RDBMS performance) is the synthetic nature of the
tests and the unreasonable locality of what they end up testing. One of the
bazillions of reasons why the base frequency can drop in the tests Intel used
is that your CPU(s) isn't (aren't) context switching or having to do the
things a normal MSSQL or Oracle DB will do.

I.e., LAPACK/BLAS benchmarks are just really big linear algebra matrix
problems, so obviously your prefetch and branch prediction performance will
be significantly better, since you aren't dealing with interrupts, locatedb,
or Windows DCOM events firing off in the background. You have a huge set of
matrices with a _very_ predictable set of branches, fetches, and decodes, so
obviously your CPU can optimize for that load; you're just paying for it in
latency on the back-end (RAM fetches are the new disk swap ;)).

All those benchmarks (i.e. your standard LU matrix decomposition, which
previously was the basis of the LAPACK benchmarks, though things might have
changed in the ~10 years since I've really looked at things) aren't CPU-bound
anymore, so of course your instructions-per-cycle load on the CPU isn't where
you'll be bottlenecking (and haven't been since "let's avoid floating-point
operations and just use static look-ups, since we don't want the 10x cost of
the FDIVP instruction!"). Your processor can very easily anticipate where in
that sparse matrix your next data fetch is going to be. It's the cost of that
RAM fetch[1] going along that copper trace that's going to be where you
bottleneck on any heavy numerical computation.

The power consumption of your CPU might drop a nominal amount, which is great
for those marketing white papers, but for a numerically heavy load you're
paying just as much (in total power consumption per 4U in the data center,
total heat generation/dissipation within the case, and total processing time)
on the back-end for those fetches.

[1] [https://i.stack.imgur.com/a7jWu.png](https://i.stack.imgur.com/a7jWu.png)
(I normally cite academic references, but this is 'good enough' to convey my
point, I hope).

~~~
slizard
> All those benchmarks (i.e. your standard LU matrix decomposition, which
> previously was the basis of the LAPACK benchmarks, though things might have
> changed in the ~10 years since I've really looked at things) aren't
> CPU-bound anymore, so of course your instructions-per-cycle load on the CPU
> isn't where you'll be bottlenecking [...]

That's incorrect: DGEMM and most of BLAS3 are way above most if not all
processor uarchs' arithmetic intensity thresholds [1]. Broadwell CPUs are at
10 Flops/byte [1] while e.g. DGEMM is 32 Flops/byte [2], so that's definitely
compute (FLOP) bound and not memory bound.

> Your processor can very easily anticipate where in that sparse matrix your
> next data fetch is going to be. It's the cost of that RAM fetch[1] going
> along that copper trace that's going to be where you bottleneck on any
> heavy numerical computation.

You're mixing things up, it seems! LAPACK/BLAS is _dense_ matrix, not sparse,
so now you've switched topics. Sparse matrix ops are generally at or below ~1
Flops/byte (see [2]), well under the machine balance, so those are indeed
memory bound.

[1] [https://www.karlrupp.net/wp-content/uploads/2013/06/flop-per...](https://www.karlrupp.net/wp-content/uploads/2013/06/flop-per-byte-dp.png)

[2] [http://www.siam.org/pdf/news/2090.pdf](http://www.siam.org/pdf/news/2090.pdf)
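To make the Flops/byte reasoning concrete, a rough sketch (the matrix size is
an illustrative choice, and the 32 Flops/byte figure in [2] corresponds to a
particular size and blocking; the point is that DGEMM's intensity grows with
the problem size while SpMV's doesn't):

```python
# Rough arithmetic-intensity (Flops/byte) estimates behind the compute-
# bound vs. memory-bound distinction. Sizes here are illustrative.

def dgemm_intensity(n):
    """n x n DGEMM: 2n^3 flops over ~3n^2 doubles moved (ideal caching)."""
    flops = 2 * n**3
    bytes_moved = 3 * n**2 * 8       # read A and B, write C; 8 B/double
    return flops / bytes_moved       # = n/12, grows with problem size

def spmv_intensity():
    """Sparse mat-vec (CSR-like): ~2 flops per nonzero, ~12 bytes moved."""
    return 2 / 12

machine_balance = 10                 # Flops/byte for Broadwell, per [1]

print(dgemm_intensity(1000), ">", machine_balance)   # ~83.3 -> compute bound
print(spmv_intensity(), "<", machine_balance)        # ~0.17 -> memory bound
```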

------
chx
These last few years Intel has been working much more on bringing power
consumption down than on... anything else? The performance changed little:
[http://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_vs...](http://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_vs_sandy_bridge_2600k_ipc_review/4)
20%? But now the 15W U-series mobile chips deliver the same performance as the
35W M chips of the Sandy/Ivy Bridge era, which is quite impressive.

This, of course, leads to a tremendous advantage in server CPUs -- if you have
capable cores at a lower wattage, you can add more of them. Hence the 14-core
E5-2690 v4 @ 135W at 2.6GHz vs the 8-core E5-2690 @ 135W at 2.9GHz. So in just
four generations from Sandy Bridge to Broadwell: no TDP change, roughly the
same or a bit better single-thread execution, but almost double the core
count. If you are willing to drop your base clock a bit further and push your
TDP higher, perhaps 10% less single-thread performance, you can get a hulk of
a 24-core E7-8890 v4 at 165W. And that's where the big profit is -- currently.
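
The aggregate arithmetic behind that comparison, ignoring IPC gains between
the two uarchs (so it understates Broadwell's advantage):

```python
# Aggregate-throughput arithmetic for the two 135W parts quoted above.

cores_sb, ghz_sb = 8, 2.9            # E5-2690 (Sandy Bridge), 135W
cores_bw, ghz_bw = 14, 2.6           # E5-2690 v4 (Broadwell), 135W

print(cores_bw * ghz_bw / (cores_sb * ghz_sb))   # ~1.57x aggregate core-GHz
print(ghz_bw / ghz_sb)                           # ~0.90x per-core clock
```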

Now some unfounded nonsense: what if Intel is not pulling these crazy prices
out of their sorry behind, and there's in fact some reality behind them? It
just bothers me that the price of the 24-core chip is so close to 6*6 times
that of a 4-core chip. It could be a coincidence.

~~~
Matthias247
I think you sum it up quite well: the main thing Intel did was put more cores
on the chips. However, that happened only for the server lines. With Sandy
Bridge and before, the top desktop and server chips were nearly identical; now
there's a huge difference. Which also means that over the last years the
performance of desktop CPUs barely increased. That might be OK from the point
of view that normal users would mostly not benefit from more than 4 cores.
However, assuming that developing and manufacturing the top-end model of each
generation costs roughly the same, the new-generation desktop models should
probably be a lot cheaper than they are currently sold for.

~~~
barrkel
There's been a fork in CPU design goals. Before, the fastest chip was the
goal. Now, we have fast single-threaded vs high parallel throughput - and
while there's a spectrum between them, the extremes matter, it's not a case of
a Goldilocks solution.

Consumer loads are mostly limited by single threaded performance, though
software is increasingly written for more cores. Best choice for a consumer
used to be dual core, now it's probably quad core with thermal limited
boosting on less parallel workloads.

It's only prosumers doing lots of transcoding or rendering, or CPU intensive
VMs, that typically benefit from increasing cores above 4.

Whereas the server space prizes throughput and much of its workload is
trivially parallelizable. On the server, higher single core speed mostly just
decreases latency for small tasks; if you're happy with the latency, you can
get more cores working on shared memory and potentially get big wins in perf.

There are diminishing returns, though: scaling up boxes is expensive, and
unless it's being forced by software licensing or architectural models, things
like Hadoop and Spark for spreading the load across a whole cluster are
increasingly attractive. This helps solve the I/O throughput problem too.
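A minimal Amdahl's-law sketch of that fork (the parallel fractions are
illustrative guesses, not measurements):

```python
# Why consumer-ish loads with a big serial fraction flatline quickly,
# while near-trivially-parallel server loads keep paying for cores.

def amdahl_speedup(parallel_fraction, cores):
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

for p in (0.60, 0.99):               # 60% vs. 99% parallelizable
    print(p, [round(amdahl_speedup(p, n), 2) for n in (2, 4, 16, 64)])
# 0.60 -> [1.43, 1.82, 2.29, 2.44]   (quad core is already near the ceiling)
# 0.99 -> [1.98, 3.88, 13.91, 39.26] (more cores keep helping)
```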

~~~
digi_owl
You also had the whole thing AMD seemed to be attempting with their first-gen
APU design: shifting the floating-point workload over to the GPGPU rather
than using a dedicated floating-point unit.

------
mindcrime
Wow... I really hope this is true and that it's a sign that AMD have their
mojo back. Partly because I have a sentimental favoritism towards AMD, and
partly because Intel (and Nvidia) need competition.

~~~
vegabook
An AMD 40MHz 80486 was my first build, so I also have a soft spot. At the time
it was the fastest x86 one could buy. Geez, I even remember lusting after the
ATI 8514 Ultra, one of the earliest GPU accelerators and a clone of IBM's
high-end workstation standard of the early 90s, as a teenager. Wow, I'm
getting old.

Conversely, though I admire Intel for its central place in advancing computing
itself, I cannot love the company because it has shamelessly monetized its
monopoly over the past 7 years or so with vast overpricing on higher end
lines. I for one am definitely doing a Ryzen 7 build as soon as I can get a
chip, and the same goes for Vega. So happy to see AMD back in the game.

~~~
jcoffland
I remember those days and bought AMD then too. There was even a third
competitor in those days, Cyrix.

~~~
makapuf
I remember Cyrix being later in the game with its 6x86/166MHz proc, which did
wonders.

~~~
jcoffland
Cyrix had a 486.

~~~
barrkel
They marketed 5x86 and 6x86. Faster integer performance than Intel; the FPU
was the weak spot.

------
elementalest
There is so much hype with AMD atm. In particular sites like WCCFTech have
gone almost rabid with hype. My experience is that AMD tend to over promise
with their marketing and rumours. So whilst the hype _may_ well be warranted
this time, I'm going to wait for the actual reviews to come through and take
these leaks with a grain of salt.

~~~
qeternity
Fair enough, but this is a bit more concrete than marketing fluff.

------
kayoone
Take everything WCCFTech writes with a pinch of salt; all of this could still
be fake. The Ryzen hype train is at full steam and I really hope they deliver,
but I'll wait for official benchmarks.

In the last few years, more often than not, AMD hardware leaks have shown
potential that in the end wasn't met. Things look good for Ryzen though; maybe
they can finally make 6-8 core CPUs mainstream.

~~~
nindalf
Which website would you consider "official"? Anandtech?

~~~
chinhodado
By "official" I think he meant the benchmarks done by the websites themselves,
using proper methodologies. In this article the benchmark was submitted by a
user, using synthetic tests from a single application.

------
Nokinside
I'm trying to put this into perspective.

AMD having a temporary edge over Intel has happened before. Remember when AMD
had the Athlon in 1999 and made a huge $1 billion in profit?

Intel's response has always been the same: they cut their profit margins and
start selling chips cheaper. They undercut AMD with volume and price, and
suddenly AMD is in the doghouse again, struggling. AMD's best efforts can cut
into Intel's profits, but Intel's response is to remove all profits from AMD
until it's left behind.

Just compare these two:

Revenue / gross profit / gross profit margin (September 2016):

AMD: $1.1B / $930 mil / 4.5%

INTEL: $60B / $16B / 60+%

Even if you add the $5.5B revenue from GLOBALFOUNDRIES (manufacturer of AMD
chips) to make the AMD-camp comparison more relevant, there is a large
difference.

~~~
tw04
>Intel's response has always been the same. They cut their profit margins and
start selling chips cheaper. They undercut AMD with volume and price and
suddenly AMD is in the doghouse again struggling. AMD's best efforts can cut
into Intel's profits, but Intel's response is to remove all profits from AMD
until it's left behind.

The last time Intel did that it was through illegal tactics - unlikely they'll
get away with it a second time.

[https://www.engadget.com/2014/06/12/intel-loses-eu-antitrust...](https://www.engadget.com/2014/06/12/intel-loses-eu-antitrust-appeal/?ncid=rss_truncated)

~~~
Nokinside
Doing something illegal is just optimization.

Intel can undercut AMD lawfully if it wants. Instead of hidden rebates, Intel
can openly cut prices.

~~~
tw04
You're failing to acknowledge that AMD can just as easily cut prices to
compete. If Intel starts selling at a loss they'll be subject to even more
punitive damages than they faced for bribing OEMs. Not to mention they'd be
hung out to dry by shareholders.

~~~
Nokinside
> AMD can just as easily cut prices to compete.

They can't. They don't have profit margins or cash in hand to do that.

> Intel starts selling at a loss they'll be subject to even more punitive
> damages [...]

I think you missed my argument. Intel has 60% profit margins. They can cut
their prices a lot without making a loss. AMD can't.

------
Philipp__
I really want AMD to succeed! Intel's lack of progress and monopolistic
behavior have become embarrassing. Competition is best for the consumer. I am
really thinking of building an all-AMD rig after all the chips are out.

------
sufiyan
One must bear in mind that wccftech has always been one to _leak_ stuff. Take
from that what you will.

More importantly though, Intel definitely needs competition. This is really
good news and I hope it plays out well.

~~~
astrodust
It'll put price pressure on Intel, that's for sure, but I don't know if it'll
make Intel work any faster on boosting performance. They're having enough
trouble with process and yield as it is.

It's interesting that this time around, if these numbers are true, AMD is not
only faster and less costly, but also lower power. In the past they've always
run hot and noisy, but cheap and fast enough.

Maybe their experience in optimizing GPU production is paying dividends here.

------
std_throwaway
What about ECC memory? Intel's desktop processors fail hard on that feature.

~~~
krzyk
Why do you need that in a desktop environment? What's the use case where it
can be beneficial to have expensive ECC RAM instead of the ordinary kind?

I always thought ECC could prevent blue screens/kernel faults, but I haven't
seen those in years on my laptop without ECC.

~~~
kogepathic
> What's the usecase when it can be beneficial to have expensive ECC RAM
> instead of the ordinary one?

Anyone using ZFS will (or should!) care about ECC support. [0]

Lots of people build their own NAS/SAN boxes, so ECC support on a desktop CPU
at a reasonable price point would be very appreciated. Currently you need to
buy specific model CPUs (Celeron or Xeon, IIRC) to get ECC support from Intel.
[1]

[0] [https://serverfault.com/questions/454736/non-ecc-memory-with...](https://serverfault.com/questions/454736/non-ecc-memory-with-zfs-a-stupid-idea)

[1] [https://ark.intel.com/search/advanced?ECCMemory=true&MarketS...](https://ark.intel.com/search/advanced?ECCMemory=true&MarketSegment=DT)

~~~
seanp2k2
Great use case for the server-grade Atoms. I have a FreeNAS build with an
8-core Avoton and it's fantastic. ASRock Rack C2750D4I, check it out. Can't
wait for the new Atom server chips as well, and I hope AMD launches a
competitor in the low-power / storage server market.

~~~
lultimouomo
I am looking into replacing my home server / NAS, and I considered the Atom
server chips, but they seem just so expensive for their computing power, and
also not very power efficient. The 8-core C2750 has a PassMark of 3800[0],
with an abysmal single-thread score of 579, yet still has a TDP of 20W, and
the mobo+CPU assemblies go for ~$400.

For way less, if you're willing to give up on ECC, you can get an i5-7500T and
a pretty good motherboard, bringing your TDP to 35W but getting a PassMark of
7055 and, most importantly, a single-thread rating of 1924.

This might not matter much if your use is exclusively NAS, but I will probably
end up running some virtualized or containerized servers, or streaming video,
possibly transcoding on the fly, and I'm afraid the Atom might become a
bottleneck.

Is there a solution that is somewhat competitive with the i3/5/7 on price and
power, and has ECC? And, ideally, that comes in a Mini-ITX form factor?

[0] Obviously the PassMark score is only a ballpark estimate to get an idea of
how fast a chip is, but still.

~~~
yuhong
Look at Xeon E3, for example:
[https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...](https://www.newegg.com/Product/Product.aspx?Item=N82E16819117611)

~~~
lultimouomo
Thanks. It looks like there are reasonably priced mini-ITX boards that support
Skylake Xeons (a note to those considering this: v5 E3s (Skylake) use the same
socket as the Core chips, but need a different chipset).

The 80W TDP worries me though, in terms of cooling, noise, and money/pollution
(the extra 45W comes to around 400 kWh/year).

Considering the CPU will be idle most of the time, do you have figures about
the actual idle consumption of an E3 based machine?
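For what it's worth, the ~400 kWh figure checks out as a worst case. A sketch
assuming TDP-level draw 24/7 (which is exactly why the idle-consumption
question matters; the electricity price is a placeholder assumption):

```python
# Checking the "extra 45W is ~400 kWh/year" figure.

extra_watts = 80 - 35                # E3's 80W TDP vs. the i5-7500T's 35W
kwh_per_year = extra_watts * 24 * 365 / 1000

print(kwh_per_year)                  # 394.2, i.e. ~400 kWh/year as stated
print(kwh_per_year * 0.20)           # ~79/year at an assumed 0.20/kWh rate
```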

~~~
yuhong
There are also lower TDP models like
[http://ark.intel.com/products/53401/Intel-Xeon-Processor-E3-...](http://ark.intel.com/products/53401/Intel-Xeon-Processor-E3-1220L-3M-Cache-2_20-GHz)

------
illuminati1911
Competition, please! The Intel and Nvidia monopolies have brought the prices
of CPUs and especially GPUs to insane levels.

~~~
msimpson
Ditto. Here's hoping the Ryzen will pan out and force Intel's hand.

------
wmf
Ryzen leaks conspicuously don't mention gaming or the i7-7700K. AMD may win in
a very small market yet lose the mainstream.

~~~
anonymfus
Gaming performance is irrelevant for top CPUs; the difference appears only at
low resolutions and in old games.

~~~
Strom
The "games don't use CPU" line is only really true for multiplatform games
that must also run on the low performance PS4/XBox One AMD Jaguar CPUs.

If we look at PC exclusives [1] then we can see that these are extremely CPU-
hungry games. This hunger only goes up if we want to achieve a framerate
higher than 60, say going for 144.

I personally have an i7 @ 3.8 GHz with a GTX 1060, and none of these games can
hold a stable 1080p @ 144 Hz. What's more, I've benchmarked the effect of
changing GPU vs. CPU, and increased CPU power raises FPS far more than
increased GPU power does.

I upgraded from a Radeon R9 270X to a GTX 1060. In multiplatform games this is
a huge leap: Battlefield 4 (1080p Ultra) goes from 43.1 to 94.8 FPS [2], a
whopping +120% increase. In Dota 2 (1080p Ultra), though, that only netted me
a +11% gain, whereas overclocking my i7 from 2.8 GHz to 3.8 GHz got me a +23%
increase.

--

[1] For example Dota 2, H1Z1, DayZ, Civilization 6, Guild Wars 2.

[2]
[http://www.anandtech.com/bench/product/1043](http://www.anandtech.com/bench/product/1043)
&
[http://www.anandtech.com/bench/product/1771](http://www.anandtech.com/bench/product/1771)
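The percentages above, computed explicitly (the BF4 numbers are from the
AnandTech pages in [2]; the Dota 2 and overclock figures are my own
measurements):

```python
# Percent gains behind the GPU-bound vs. CPU-bound comparison.

def gain(before, after):
    return (after - before) / before * 100

print(round(gain(43.1, 94.8)))       # ~120% from the GPU swap in BF4
print(round(gain(2.8, 3.8)))         # ~36% more CPU clock...
# ...bought +23% FPS in Dota 2, vs. only +11% from the same GPU swap:
# that game is mostly CPU-bound.
```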

~~~
ido
Just curious: why do you want to run a turn based game like civ6 at 144fps? Or
for that matter anything except super reaction-dependent FPSs?

~~~
Strom
You're correct that it has a significantly bigger impact in reaction-dependent
games. [1] Civilization 6 was just a good example of a CPU-heavy game,
regardless of whether those frames are that useful. However, there's also the
matter of normalization: after using a high-refresh-rate monitor, even
something as simple as moving the cursor in Windows feels laggy on
lower-refresh-rate monitors.

--

[1] This being all reaction-dependent games; it doesn't have to be an FPS.
Even fast-paced Pong qualifies.

------
jjm
If I were Apple, I'd jump on this. Maybe then battery life would be better.
And hopefully that famed form-fitted battery finally comes out of the
woodwork.

------
arjie
It would have been an interesting validation to put the i7-2700K in there. It
was nearer the FX-8350 and would have provided a good baseline for seeing
whether the benchmark is actually useful. As it stands, I'm a little sceptical
(but also very hopeful, since I'm buying desktop hardware soon).

------
lostmsu
That does not really make sense. Why would physics simulation and prime
calculation be so much slower? I would not trust those benchmarks.

~~~
wyager
My guess would be branching behavior. The Intel branch predictor could be
better for that sort of stuff.

~~~
Symmetry
That might be true of the physics simulation but I think that's more likely
Intel's better vector processing resources. With a prime search the branches
involved ought to be a combination of the trivially easy ones that any
predictor should be able to handle and the super high entropy ones that no
predictor can guess. So if anything doing badly on the prime test is an
indication that it was _AMD 's_ good branch predictor that was bumping up its
score on the other tests.
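A trial-division sketch of those two branch populations (illustrative, not
the benchmark's actual code):

```python
# The loop-bound test is almost perfectly predictable, while the
# divisibility test over varying n is close to a coin flip that no
# predictor can learn.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:                # easy branch: taken ~sqrt(n) times
        if n % d == 0:               # high-entropy branch: depends on n
            return False
        d += 1
    return True

print(sum(is_prime(n) for n in range(2, 100_000)))   # 9592 primes < 100,000
```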

~~~
wyager
The test also said that AMD beat out Intel on SSE stuff.

------
orliesaurus
Execs at Intel ain't gonna be too worried; they probably have signed contracts
that lock partners up for many years to come.

~~~
reitzensteinm
In a world where progress has slowed down, being two to three years behind
Intel means you're practically interchangeable. This chip isn't an anomaly,
it's a sign of things to come, and there's no way it isn't keeping Intel's
execs up at night.

~~~
vegabook
AMD is prepping a 32-core server beast. Considering Intel just launched a
24-core chip costing _9000 dollars_, and considering AMD's IPC is looking
good, Intel execs must be gnawing down to their elbows at the prospect of AMD
walking into that market at 50% pricing deltas. Over to GloFo, though, on
process; that appears to be a possible Achilles' heel for AMD.

~~~
beagle3
> Intel execs must be gnawing down to their elbows at the prospect of AMD
> walking in to that market at 50% pricing deltas.

That's not how it works. If they think AMD can make a dent, they'll mark their
chips down to be competitive (for whatever definition of competitive they use,
which is probably supported by data, though it might not be YOUR definition).
Poor Intel will have to deal with only 100% profit instead of the 300% or so
they've become accustomed to.

Intel got a knockout from AMD some 15 years ago. Enough people working at
Intel still remember it well enough. This aspect of history is very unlikely
to repeat or even to rhyme.

------
tetraodonpuffer
Not wasting a lot of silicon space on an integrated graphics solution seems to
be a big win in terms of increasing the effective power of the CPU without
increasing the price as much, since in the end the die will likely be smaller.

I am surprised Intel has never come up with some more gaming-oriented i7s with
more cores and no integrated graphics, as pretty much no gamer would run
without a discrete video card anyway. But then again, without any competition
from AMD it probably wasn't worth the engineering effort or the risk of
cannibalizing their Xeon offerings.

------
runeks
Not a CPU expert here, but isn't this a comparison of AMD's next gen to
Intel's current gen? Won't Intel have a new CPU out, too, by the time Ryzen is
actually released?

~~~
deaddodo
These release in less than two weeks. Intel's "current" gen released less than
a month ago.

Seems pretty fair to me.

------
ksec
One of the best things about Zen is that it "should" substantially change the
dynamics of cloud / VPS hosting.

High-memory instances get cheaper thanks to support for 8 memory channels.
Lower-end instances could get cheaper thanks to a lower cost per core.

------
ndesaulniers
Any thoughts on the huge discrepancy in SSE performance? Something seems wrong
there.

~~~
TazeTSchnitzel
As mentioned elsewhere, these chips only have 128-bit SIMD units (either for
power or for space savings?), so 256-bit SIMD executes half as fast as it
would on Intel.

~~~
acqq
But if I read the picture properly, AMD appears to be 8 times(!) faster than
anything else:

[http://cdn.wccftech.com/wp-content/uploads/2017/02/AMD-Ryzen...](http://cdn.wccftech.com/wp-content/uploads/2017/02/AMD-Ryzen-7-1800X-CPU-Mark-SSE-Benchmark-WM-840x370.jpg)

That seems atypically good, in a market where new generations have typically
brought 10-20%.

I still wouldn't say it's impossible; for example, Intel had (apparently until
Sandy Bridge) much slower NaN handling than AMD, at least without SSE2. So
maybe AMD discovered some weak point like that. But until I read some
explanation, it does seem too good to be true...

~~~
TazeTSchnitzel
Ah. Yeah, that looks impossible.

------
angry_octet
Looking forward to the day when reviews include HPCG (the replacement for
LINPACK) as a benchmark, with a little animation showing how hot the different
bits of the core run.

~~~
arcanus
Should be soon.

HPCG is a memory-bound application, so I doubt it runs the system as hot as
HPL (High Performance Linpack).

------
shmerl
Looks like my next desktop build can be all AMD. With radeonsi/radv catching
up and upcoming Vega and Ryzen, it looks quite attractive.

------
alanfranzoni
I'd really like to see what AMD can do in the ultrabook and notebook segment.
Good mobile CPUs with a good integrated GPU could really be a killer feature.
I don't know whether the desktop segment is enough for AMD to reboot its CPU
brand (although the server market is a more interesting beast).

------
matt_wulfeck
I can't help but feel AMD is shooting the puck at where the goal was, but
won't be, in the next few years. We desperately need GPU-like devices for
advanced ML/AI. Even Intel knows this and is investing heavily in that area to
be competitive against NVIDIA[1].

I predict that AMD will come to dominate the home/enthusiast CPU market, and
Intel's low-power CPUs will dominate the enterprise, along with whoever comes
out with a CUDA competitor.

[1] [https://www.google.com/amp/s/www.fool.com/amp/investing/2017...](https://www.google.com/amp/s/www.fool.com/amp/investing/2017/02/13/inside-intel-corporations-artificial-intelligence.aspx?client=safari)

~~~
muyuu
AMD already does that, but no, it's not like ML/AI is the main market right
now. Nowhere near.

------
ekr
I don't know why, but I get the feeling that most of this advancement is the
result of Jim Keller's contribution. Surely AMD must have other engineers
capable of similar achievements, right?

------
xchaotic
Fingers crossed that this is not a paper release. If AMD can deliver and I can
actually buy, I will, to keep the x86 market competitive.

------
asdads
Sadly for those of us involved with floating point stuff, Intel is still king
(and Nvidia is still the Queen).

------
vegabook
AMD has been really smart. FreeSync is a huge success. They're outperforming
Nvidia by about 20% on Vulkan and DX12 on similar hardware, because game
makers can reuse their optimized code from the AMD-equipped consoles. With
Ryzen, AMD appears to have strategically reduced the silicon spent on AVX in
order to improve mainstream performance and increase cache, while maintaining
a smaller (and therefore cheaper) die than Intel's increasingly bloated
silicon. It all points to a company, back to the wall, really fighting its
corner and optimizing its much smaller resources. If Ryzen and Vega work out
as expected, I hope the people responsible at AMD are in for big rewards. BTW,
the stock has ramped 200% in the past year.

To get an idea of how smart AMD's people are, this video is instructive:

[https://www.youtube.com/watch?v=FwcUMZLvjYw](https://www.youtube.com/watch?v=FwcUMZLvjYw)

~~~
agumonkey
AMD's last decade has been so stressful, even from the public's POV, that I'm
surprised they managed to survive up to Zen.

If Ryzen is enough of a success, this will be heaven for the company.

I wish them all the best.

------
frik
What about Win7 drivers? Even if the Redmond-based company doesn't like
marketing about it (an understatement) - please support it. (Recently AMD
announced Win7 support, only to remove the announcement after a few days.)

Even Intel supports Win7 with their most recent CPUs, even if it's not
officially marketed and one has to search around for the driver.

~~~
grndzro
Grab a copy of Win10 LTSB-N and get rid of Win7. It has all that annoying crap
stripped out.

You should be able to find a copy fairly easily floating around here and
there.

~~~
prodmerc
Thanks for mentioning this, I didn't know the Enterprise version had all that
removed. Do you know if it still sends data to Microsoft or not?

~~~
washadjeffmad
You can always 'unfuck' a standard edition:
[https://github.com/dfkt/win10-unfuck](https://github.com/dfkt/win10-unfuck)

It lists two other repos at the bottom for privacy and de-bloat. I version
froze my only Win10 VMs, so not sure if the projects are current for post-
Anniversary Update.

