
AMD is having its day - jkuria
https://www.economist.com/business/2019/10/31/amd-a-chipmaking-underdog-is-having-its-day
======
ConcernedCoder
Jim Keller, the designer of "Zen" architecture, is now working for Intel...
[https://web.archive.org/web/20180426124248/https://www.kitgu...](https://web.archive.org/web/20180426124248/https://www.kitguru.net/components/cpu/matthew-wilson/zen-architecture-lead-jim-keller-heads-to-intel/)

~~~
kick
It's kind of his "thing" to bounce around to different, interesting projects.
He wrote the x86_64 spec while working at AMD (and was responsible for the K8s,
a.k.a. the Athlon 64s), he played a big part in getting Apple's SoCs to
best-in-class, and that's not nearly all he's done.

Incredibly accomplished man, I don't think he's going to stay in one place for
very long. He'll probably come back to AMD at some point or another.

~~~
Roritharr
I wish I knew what his secret was.

He can't be a genius of THAT magnitude; that would be a miracle.

~~~
kick
If you ask most people who have familiarity with x86_64, I'm certain they
would use a different term than "genius."

Though the first step for you would be to work on projects you're more
passionate about, and that are more technically challenging. A billing site
(judging by your HN about section) won't challenge you in a way that makes you
better.

~~~
mzarate06
_> If you ask most people who have familiarity with x86_64, I'm certain they
would use a different term than "genius."_

What term do you think they'd use, and why?

~~~
kick
"The reason we're stuck with this shitty fifty year old architecture that's
steadily gotten worse through every iteration," "Satan," "Evil Computational
War Criminal," a few less nice terms.

If you ask a Sun employee: "Enemy #1."

Less sarcastically: "slightly above average."

x86_64 is a nightmare, and we're a decade behind where we could be because it
was what the industry settled for.

~~~
CamouflagedKiwi
That seems extremely unlikely. If a new architecture could offer that much
advantage (a decade is _massive_ in this space) surely there'd be some
pressure to move. One could easily imagine Apple moving to ARM, for example.

Unfortunately, I think a much more likely situation is that x86 (/64) is only
slightly hobbling things, and Intel are easily able to push past that with
technical craftiness. As in many other cases, implementation trumps
theoretical design.

~~~
kick
Apple _is_ moving to ARM: the most recent iPad Pro outperforms Apple's laptop
line (last I saw, at least); they've spent a fraction of what Intel, AMD, and
VIA have invested in x86_64; and basically every major Apple leak mentions that
they're investigating moving the Mac line to ARM in a generation or two.

A new architecture _can_ offer that much advantage; x86_64 wasn't even the
third most-performant implemented ISA when it was released, and every other
ISA makes gains far faster than it. SPARC and POWER are still competitive with
it despite having 1/1,000,000th the amount invested in them, and in just a few
years and with comparatively nothing invested into it, RISC-V is starting to
rival a portion of the chips (though not the upper line of them yet).

It "won" because of backwards compatibility, nothing more.

~~~
CamouflagedKiwi
That's just not the case; Apple were already well behind in performance at the
point x86_64 came out, due to them being stuck on POWER. They actively moved
_to_ x86, despite big compatibility issues, because of how much better Intel's
chips were.

The evidence is that architectures are just not as important as all that. x86
is clearly pretty bad in many ways, but clever tricks and microcoding have
been able to overcome those issues.

~~~
bsder
> Apple were already well behind in performance at the point x86_64 came out,
> due to them being stuck on POWER.

The issue was chipsets and peripherals, not POWER performance (which generally
beat x86 at any given point in time).

The problem was that the entire ecosystem was built around communicating with
an x86. So, you couldn't get a Northbridge or Southbridge equivalent that was
even remotely close in performance or power consumption to those in x86 space.

Unless you decided that you were going to take on _everything_ in chip design,
you couldn't compete. And Apple didn't decide to take on everything until
Intel told Apple to pound sand and pissed Jobs off.

~~~
fluffy87
There has never been a PowerPC system that beats x86 at any price point.

~~~
bsder
You are far too young.

The 601-based PowerPCs were the first able to do 3D graphics well on the
microprocessor.

The G4-based titanium PowerBooks were sufficiently better that they became
iconic at a time when Apple wasn't regarded that well.

Sure, the G5 and up were disastrous, but the writing was on the wall well
before that. Chipsets on the G3- and G4-based systems used _more power than the
processor_, and that only stopped being true because the G5 was so poor.

------
unfocused
Interesting timing for this article.

I'm in Canada, and for the first time in over a decade, I decided I couldn't
pay the premium of $3200 (taxes included) for a MacBook Pro with a 512GB SSD
and 16GB of RAM.

I bought a Lenovo 2-in-1 touch laptop with lots of ports and an AMD Ryzen
3700U with RX Vega 10 graphics, for $700 (taxes included).

The AMD Ryzen 3700U is just as good as the 8th Gen i5. Plenty for me at home.
And with the money I saved, I can upgrade my NAS.

This combination of AMD + Windows + Lenovo has finally pushed me to try the
world outside of Apple.

~~~
ralusek
How's the trackpad? I have heard people say that they've found an
Apple-comparable trackpad, but I have yet to encounter one.

~~~
fyfy18
I switched to ThinkPads last year (first an X250, now a T470s) and this was my
big concern too. I just sat down one day and spent an hour or so tweaking the
various options for the trackpad driver until I got it to behave the way I was
used to from Apple. I'd say it's 95% as good, and some things like dragging are
in fact easier, as there are physical mouse buttons (as well as tap to click).

I have no idea why the defaults are so bad, but I've kind of gotten used to
that as a Linux user ¯\\_(ツ)_/¯

~~~
nsomaru
Can you detail your distro and the tool you used to tweak? What settings
worked well?

------
jangid
AMD doesn't lag because of hardware alone; its driver and library support on
the software side is very weak. Intel and Nvidia spend huge amounts of money
supporting library maintainers, thereby creating a lobby.

For example, look at the list of supported GPUs in the GitHub repositories of
popular machine-learning libraries like PyTorch, TensorFlow, etc. AMD and
OpenCL are nowhere compared to Nvidia's CUDA.

~~~
lone_haxx0r
On the other hand, Nvidia's Linux drivers are blobs (that don't even support
Wayland) while AMD and Intel actively contribute to Mesa, and treat their
users well (or at least much better than nvidia).

~~~
Erlich_Bachman
There have been numerous situations just this year where I've stumbled on
problems from having an AMD card, whereas Nvidia with its "blobs" has worked
just fine. Gaming, video editing, even ML: they all seem to work perfectly on
Linux with Nvidia, and AMD always has some problems. So at the very least this
is contested. Nvidia seems much more supportive of the open-source OS.

~~~
tankenmate
I have had exactly the opposite experience: issues with Nvidia cards/drivers
that I couldn't fix. Over the last 5~10 years, AMD support on Linux (and other
open-source OSes) has been much better, to the point that for some of my
workloads AMD on Linux is faster than AMD on Windows.

~~~
garaetjjte
That's not surprising; AMD's OpenGL drivers for Windows are horribly slow.

------
arcanus
"For now, AMD’s resurgence is good news for consumers, IT departments, cloud-
computing firms and anybody who uses software. Like any good monopolist, Intel
charges a steep price for its products—unless AMD is doing well. Sure enough,
Intel’s newest set of desktop chips, due in November, are some of its
thriftiest in years."

------
puranjay
I just built a new PC with Ryzen 3700x. I use it mostly for music production.
Fantastic performance so far. 10 instances of Serum and my CPU doesn't even
cross 15%.

Moreover, I feel like I'm buying for the future with Ryzen. Intel is going to
change its architecture after the 10th gen, so buying a 9th-gen Intel right now
seems like throwing money at a dead end.

~~~
pier25
Same. My new 3700x build can push dozens of Diva instances.

If you want to see more details: [https://vi-control.net/community/threads/my-ryzen-3700x-buil...](https://vi-control.net/community/threads/my-ryzen-3700x-build-for-music-production.86352/)

~~~
tuananh
ryzen 3000 benefits a lot from faster ram; you might consider upgrading.

~~~
pier25
Depends on the type of workload.

In Cinebench or Blender rendering it makes practically no difference.

I upgraded my RAM from 2133 to 3200MHz and it made no difference for DSP audio
processing either.

~~~
tuananh
yep, it depends on the workload. cb20 (Cinebench R20) doesn't scale well with
memory.

------
yumario
From a game-theory point of view, I think it's better if one company is the
underdog. Think about it: if one company is the underdog, it has a lot to gain
by competing, while the other has a lot to lose if it doesn't compete.
Therefore we get competition. Now, the more equal the market shares of the
companies, the greater the risk and the smaller the reward for competing... A
better strategy would be not to undercut your competitor and instead divide
the market share. Which leads to stagnation.

Do people here think that sounds reasonable?

Edit: Mathematically the argument would be as follows:

Consider two companies, A and B. A has market share a and B has b; n is the
total market, so a + b = n.

A's reward for competing is n - a = b (the share it could win).

A's risk for competing is a (its existing market share).

A will compete as long as the reward is greater than the risk, i.e. b > a.

This reaches an equilibrium at a = b.
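
As a toy sketch of the comparison above (a minimal illustration with made-up
numbers, not part of the original argument):

    n = 100  # total market, a + b = n

    def a_competes(a):
        # A competes while the share it could win (b) exceeds the share
        # it puts at risk (a).
        b = n - a
        return b > a

    for a in (10, 30, 50, 70, 90):
        print(a, a_competes(a))
    # True for a < 50, False from a = 50 on: the incentive to compete
    # vanishes exactly at the a = b equilibrium.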

~~~
SmirkingRevenge
Take it for what you will, but an Intel engineer relayed to me (many years
ago, in the aftermath of the Microsoft antitrust trial) that they attempt to
ensure that AMD maintains a certain level of competitiveness as a hedge
against antitrust. Sometimes that involves actually helping AMD via
partnerships or technology sharing if they are struggling too much; other
times it means giving them a swift kick to the crotch if they are gaining too
much ground. It may have been BS, it may not have, and much has certainly
changed over the years... but...

AMD has always been nipping at Intel's heels, for 20+ years now, never really
losing or gaining enough ground to pose a real threat. And given how
ruthlessly we've seen Intel snuff out potential contenders (such as Transmeta,
RIP), it does kind of make you think there's something to it.

~~~
georgeburdell
This would actually mesh pretty well with that Kaby Lake-G NUC with Radeon
graphics inside, released in 2017 and probably rooted in 2016, when AMD's stock
was at $3: [https://www.tomshardware.com/news/intel-discontinue-kaby-lak...](https://www.tomshardware.com/news/intel-discontinue-kaby-lake-g-amd-graphics,40577.html)

~~~
wtallis
There's a simpler explanation for why Intel used AMD GPUs for a while: Intel's
new integrated GPU design couldn't ship until they sorted out their 10nm
issues, and their older design wasn't competitive. When Intel went shopping
for GPUs on the open market, they could get them cheaper from AMD than Nvidia
(though the HBM2 requirement was a clear downside). They actually paired AMD
GPUs with both 14nm Kaby Lake and their failed 10nm Cannon Lake processors
that had broken integrated GPUs. The short-lived Intel/AMD GPU partnership
came to an end because Nvidia's lead over AMD got too big, but it was doomed
to be cancelled as soon as Intel got 10nm working anyway.

~~~
lonelappde
What is Nvidia's lead over AMD? Intel's non-AMD GPUs are Intel GPUs.

Do Nvidia integrated GPUs (mGPUs) exist?

Nvidia claims they exist, but all their weblinks to details are dead.
[https://www.nvidia.com/object/main_mobo_gpus.html](https://www.nvidia.com/object/main_mobo_gpus.html)

~~~
wtallis
Nvidia and AMD both make discrete mobile GPUs, and those are the only two
options for offering better GPU performance on an Intel laptop when Intel's
own GPU is inadequate. Nvidia's GPUs have for years generally had a
substantial power-efficiency advantage over AMD's.

------
sliken
Nipping at the heels, market-share-wise.

Crushing it on the actual CPU performance side. The new Epyc Romes are pretty
amazing chips compared to Intel's warmed-over Xeon 82xx series.

~~~
ekianjo
Problem is that in the enterprise segment there is almost no AMD offering from
the big brands, so it is going to take a while to displace the Xeons. I looked
at Lenovo workstations recently and their offering is 90% Intel.

~~~
godzillabrennus
Time to go Supermicro or Gigabyte for a cycle. Let the big brands feel the
pain of not incorporating AMD.

~~~
jzwinck
SuperMicro's IPMI and rack-mounting hardware were really poor a few years ago.
Has that changed? Keep in mind the big brands have improved those parts since
then, so unless SM has made amazing progress, they're not a drop-in
replacement.

~~~
sithadmin
SuperMicro's rails and OOB management are still horrendous.

~~~
detaro
What's the problem with their OOB?

~~~
snuxoll
Same Avocent garbage as everyone else, I assume; it’s not like any of the OEMs
make their own.

~~~
SteveNuts
iDRAC is MILES ahead of SM IPMI, in my experience.

~~~
greatpatton
Be more specific, because the basic stuff needed by 99% of setups has been
covered for ages. We are managing hundreds of machines with just SM IPMI.

------
sandGorgon
AMD needs to figure out a way to get CUDA support on its integrated GPUs, or
get TensorFlow/Pandas/etc. to adopt its GPU-acceleration libraries.

Their brand is still not thought of as equal to Intel's, so the other way to
build the brand is through developer adoption.

~~~
selimthegrim
So why aren’t they handing out free GPUs to colleges and universities like
nVidia?

~~~
sandGorgon
because nobody wants their GPU.

AMD integrated-GPU laptops are very common in India and are very cheap
compared to equivalent Intel ones, for example the Ryzen 5 with Vega 11.

People would rather shell out for expensive Nvidia, because no
academic/research software can use the AMD GPU.

If CUDA were available on AMD, sales of their laptops (and servers) would 10x
overnight.

------
boyadjian
AMD HD 4770, HD 6870, HD 7870, R9 290, RX Vega 64: all the graphics cards that
I've bought recently have been AMD. And though my previous CPUs were Intel, my
next CPU will be an AMD.

------
achow
[http://archive.is/c6fKX](http://archive.is/c6fKX)

------
m0zg
And I sure hope they keep it up. Intel was getting way too comfortable in its
dominant position, and that hurts everyone (including Intel in the longer
term). The best outcome for all is to have two companies with approximately
equal market share competing on merit. There isn't much you can squeeze out of
general-purpose CPUs anymore, but I'd be quite grateful for the next 2 or 3x.
And then they can start competing in acceleration hardware and GPUs.

------
tempsy
There's a strange cohort of "meme" stocks that I see in certain male-dominated
internet subcultures. Tesla and AMD seem to be at the top of that list.

~~~
tristor
Can you expand on what you mean by this? I don't understand. Are you saying
people pick these stocks because of cultural reasons and they're actually not
good investments? Are you saying that male-dominated internet subcultures have
a cultural reason to pick these stocks, specifically?

~~~
KaoruAoiShiho
Probably the second one leading to a bias causing the first one. Some
tech/geek communities really like those 2 stocks. There are other male
communities as well that are into other stocks. Even in tech the culture leads
to some stocks becoming overvalued and others undervalued. AMD is overvalued
while FB is undervalued because of the cultural feelings toward those
companies.

~~~
jsf01
AMD is worth ~40B today. FB is worth ~550B today. AMD supplies a product in
growing demand while FB supplies access to demographics in growing demand. But
the downside risks are not the same. It only took FB’s viral growth to end
MySpace. User loyalty is far more fleeting than the market for CPUs. Within
five years or your preferred investment time horizon, compare the valuations
of each firm and see how your assessment stood the test of time.

~~~
mantap
Facebook has a plan A and a plan B for dealing with competition. Plan A is to
buy out competitors to acquire users and diversify their brand (Instagram,
Whatsapp). Plan B is to use FB's overwhelming resource advantage to copy their
product (Snapchat). It's fair to say that FB is well aware of what happened to
MySpace.

Any potential "Facebook killer" needs to circumvent both of these tactics. Not
impossible but not easy either.

~~~
webninja
Facebook’s Lasso doesn’t seem to be killing TikTok

~~~
adventured
Zuckerberg went to Washington in September & October and took care of that.
While he was busy with several other topics, you can be sure he leaned on the
right members of Congress about TikTok.

The US has a very strong interest, in every possible regard, in protecting
Facebook's dominance and in killing or restraining TikTok by forcing it apart
from Bytedance or burying it in the app stores.

If it gets separated from Bytedance it's either toast or it ends up in the
belly of a US giant (or maybe even Spotify depending on the valuation).

~~~
riffraff
Even if the US Congress somehow blocks TikTok, FB appears to be losing a lot
of ground to it in the rest of the world.

------
m15i
Are there any good alternatives to Nvidia GPUs/cuDNN for deep learning?

~~~
rarecoil
ROCm doesn't completely suck with the Radeon VII, which is a Radeon Instinct
MI50. Deep learning is not my day job and I'd like to avoid supporting
Nvidia's insane prices for DL-capable cards, so I've been dealing with the
performance hit, only using the R7 for DL tasks and switching it off when the
power isn't needed. The XFX Radeon VII is actually on sale at Newegg for
$569, so it's a lot of power (and 16GB of HBM2) for that price.

~~~
jamesblonde
Agreed. The Radeon VII is currently the best price/compute GPU out there for
deep learning. Its performance on ResNet-50 is about the same as the 2080 Ti's:

[https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/...](https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/173)
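
If anyone wants to try it, here's a minimal sketch, assuming the
tensorflow-rocm wheel published from the linked repo (the package name is my
assumption based on that repo; TF 2.x eager mode assumed):

    # pip install tensorflow-rocm  # assumed package name, per the ROCm fork
    import tensorflow as tf

    # Standard TF code should run unchanged; the ROCm build exposes the
    # Radeon as an ordinary GPU device.
    print(tf.test.is_gpu_available())  # True if the card is picked up
    with tf.device("/GPU:0"):
        x = tf.random.uniform((1024, 1024))
        y = tf.matmul(x, x)
    print(y.device)  # should report the GPU placement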

~~~
bitL
That's only theoretical. Try to use ROCm on the latest frameworks, or on
external models that write custom CUDA operations/losses. Basic stuff might
work in a more complicated way than on Nvidia; advanced stuff is guaranteed to
either not work, or to work in a couple of months when it lands in ROCm.

The Radeon VII is a beast for FP64 computation. If you do simulation or heavy
computations that require supercomputer levels of precision, then grab one
while you can; it's the best price/performance of all GPUs on the market.

However folks, please don't follow the advice about using it for deep learning
if you want to actually run a deep-learning business in any way.

------
dredmorbius
No mention of Spectre / Meltdown?

[https://www.csoonline.com/article/3247868/spectre-and-meltdo...](https://www.csoonline.com/article/3247868/spectre-and-meltdown-explained-what-they-are-how-they-work-whats-at-risk.html)

------
jimbo1qaz
If I block web fonts, all references to AMD are lowercase. The fonts are MiloTE
and MiloSCTE (small caps).

    
    
        .blog-post small {
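            /* The page sets "AMD" in lowercase and relies on a small-caps
               webfont (MiloSCTE) to render it as capitals; with webfonts
               blocked, the literal lowercase text shows through. */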
            text-transform: lowercase;
        }

~~~
Finch2192
Can I ask -- What was your line of thinking that led you to find this out?

I'm the kind of guy who could probably figure out how to do this if it was,
like, given to me as a task. But never in a million years would I just stumble
across this.

~~~
jimbo1qaz
I don't like it when different websites look very different. So in my primary
browser (Firefox), I only allow sites to use my preferred sans-serif, serif,
and monospace fonts. (In the options page's font dialog, I unchecked "Allow
pages to choose their own fonts, instead of your selections above".)

I noticed that all occurrences of AMD were lowercase, which told me the text
itself is set to lowercase. Opening it in Chrome, I saw a giant banner on the
bottom covering 1/3 of my page, and another on the top which closed when I
closed the bottom banner. In Chrome, I could see that AMD was written in small
caps with the font MiloSCTE, while the rest of the body text used MiloTE.

------
pschastain
Full article is behind a paywall :-/

------
Narishma
Is there a non-paywalled version?

~~~
ShinTakuya
Check the FAQ.

~~~
kick
_In comments, it's ok to ask how to read an article and to help other users
do so._

~~~
j4kp07
So where do we complain? This posting is essentially an advert.

~~~
zaroth
Oftentimes people will upvote the topic/headline as something interesting or
noteworthy, or because the discussion in the comments is interesting. I’m sure
a double-digit percentage of people are not even clicking through to TFA, and
another double-digit percentage click and then bounce back to the comments
(after hitting the paywall) for the discussion, and may still upvote.

------
hyperpallium
Intel has had the smallest nodes since the beginning.

With Moore's law's death, everyone is catching up.

Isn't it that simple?

~~~
trynumber9
I don't think so, because now TSMC has the highest transistor density. It is
more that Intel fumbled 10nm so badly that others have passed them. Intel's
7nm had better be good and timely or they're in some serious trouble. TSMC's
5nm starts early next year, with 1.84x scaling.

~~~
hyperpallium
But nm are marketing terms now; TSMC's 7nm is said to be equal to Intel's
10nm.

Nodes are still shrinking, but not at the rate implied by their nm names. In
addition, thermal constraints mean they can't actually be used at their
theoretical capacity.

~~~
trynumber9
Intel's 10nm is finally shipping now, according to Intel, but it seems
small-scale so far. TSMC's 7nm has been shipping for over a year, in
ubiquitous devices (iPhones).

------
known
I think China will try to acquire AMD

[https://en.wikipedia.org/wiki/Advanced_Micro_Devices](https://en.wikipedia.org/wiki/Advanced_Micro_Devices)

------
rgbrenner
Congrats to AMD, but I'm still very pessimistic about their long-term
prospects. It seems like Intel's advantages come from a system that produces
improvements over years. You can see this just in their R&D spending: Intel
spends nearly 2x AMD's revenue just on R&D. Whereas this development from AMD
was thanks to Jim Keller (who now works at Intel)... it was a one-off event...
and once they've extended it as far as it'll go... then what? Unless they
develop their ability to innovate (they've had issues keeping top talent at
the firm), this will probably be another one of AMD's boom-and-bust cycles.

~~~
jsf01
Intel decided to invest more heavily in share buybacks than R&D as of their
recent earnings call.

But that’s only half of the story. They need that R&D budget because they have
the (massive and growing) expense of building and upgrading their own fabs,
which have undergone a series of costly mistakes in the last decade. I wonder
what % of Intel’s R&D budget is actually comparable to AMD’s if you exclude
the amount poured into fabs; I’m betting those figures are much closer despite
enormous differences in market cap. TSMC, which along with GloFo fabs the AMD
chips, is basically all-in on R&D investment and taking on debt to facilitate
the construction of the most advanced fabs to date. And their prior
investments in 7nm have scaled rapidly and panned out well. I think it was the
fastest ramp for a node shrink that they’ve done.

Oh, and Keller is definitely smart, but you imply that he’s got a monopoly on
talent in the semiconductor industry. There’s no way that’s the case lmao

~~~
NonEUCitizen
TSMC is all-in on R&D and latest fabs, but GloFo is not. GF gave up on 7nm:

[https://www.eetimes.com/document.asp?doc_id=1333637](https://www.eetimes.com/document.asp?doc_id=1333637)

AMD's recent success partially has to do with renegotiating its contracts with
GloFo, allowing AMD to use TSMC more:

[https://wccftech.com/amd-is-negotiating-a-7th-amendment-to-t...](https://wccftech.com/amd-is-negotiating-a-7th-amendment-to-the-wsa-wafer-supply-agreement/)

