PS. Long AMD, near-perfect execution the last two years. It's remarkable: even their late firmware fixes improve performance. They got some bad press for a moment over allegedly false benchmark advertising (it apparently wasn't false after all, the firmware just wasn't ready yet; not everything is fully sorted even now, which is the only minor nitpick as far as I can tell).
And they are only just now getting started in the server space, with laptops and graphics soon to follow. They are going mainstream (OEM offerings are still lacking, though the Microsoft Surface changed this), AND they still customize their superior offering for big customers.
Intel, on the contrary, has had a lot of failures over the last two years: Spectre, Meltdown, ... This isn't bad luck anymore; it's more like bad karma, if you're religious.
If you think I'm just opinionated, check /r/AMD vs. /r/Intel. AMD even gets recommended in the Intel threads, lol.
Are you expecting a higher stock price? Because their 90 P/E TTM doesn't look like it's in line with the rest of the industry.
Don't get me wrong, I was also long AMD, and still partly am, but I don't think the stock can go higher than 36-37, or even maintain that level long term, without some mega quarters, which according to their own estimates will not be the case.
It was 40 in 2000, and the current buy target is 40. They are doing way better now and have another full year to win market share. I compare against Intel, not against AMD in its current (minimal) state, mostly because OEMs weren't on board in the previous financial quarter.
I have also explained my financial relationship to them and my belief, which seemed enough, no? PS. I'm probably one of the few to disclose my financial position. Not because I'm scared, but because I'm confident :)
And also, I feel kind of sorry for you that you don't attack my comments/facts. You just attacked my personal stake, and used that as a feedback loop against my arguments.
PS. I don't know of any other company that is in such a positive position. One full year until their competition has a competitor to their "current" product line. That's insane. I haven't even mentioned TSMC's 3 nanometer process for 2023 and their future product line :)
So if we assume no other fixed costs, then generating $1800MM of revenue costs (1800 - 400 - 120) = $1280MM in variable costs. Doubling revenue would then cost $2560MM, and net profit could be (3600 - 2560 - 400) = $640MM, roughly 5x current net profit, which would put the P/E at around 18.
A P/E of 18 would be very nice. And this assumes there are no fixed costs besides R&D (like marketing or office buildings) and AMD taking only 10% of Intel's revenue.
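A minimal sketch of that back-of-the-envelope math (the $400MM R&D, $120MM net profit, and 90 P/E figures are the assumptions above):

    # Back-of-the-envelope: what happens to net profit and P/E if revenue doubles,
    # assuming R&D is the only fixed cost and everything else scales with revenue.
    revenue = 1800        # $MM, current revenue
    rnd = 400             # $MM, assumed fixed R&D spend
    net_profit = 120      # $MM, current net profit
    pe_now = 90           # current trailing P/E

    variable_cost = revenue - rnd - net_profit            # 1280
    new_profit = 2 * revenue - 2 * variable_cost - rnd    # 3600 - 2560 - 400 = 640

    profit_multiple = new_profit / net_profit             # ~5.3x
    implied_pe = pe_now / profit_multiple                 # ~17, or "around 18" at a flat 5x

    print(variable_cost, new_profit, round(profit_multiple, 1), round(implied_pe, 1))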
I'm rooting for them, and I want to put money on them, but I sold the shares I got for <$10 and I can't imagine getting back in at over $30.
And if you believe Intel, it's 2021 instead of 2022 ;)
PS. Fact-related: AMD Gains Chip Market Share in Nearly Every Category.
They have nowhere to go but up, but that can still mean their stock price is too high.
A high P/E can mean that a company reinvests profit into the business instead of stockpiling it. Amazon has an 80 P/E, for example.
I would be more concerned about the very moderate revenue growth; it looks like consumers aren't rushing to switch to AMD.
I think one reason their P/E is so high is that their pricing is very competitive and they are very slow to raise average selling price.
It's only very recently that Microsoft's flagship switched to AMD. I'm waiting for some OEMs to follow. This happened at the end of the previous quarter, as explained before.
I think cloud customers can change faster than the consumer market, though, as they will analyze price against performance, and AMD has another year of "playing around" until 2021 or even 2022.
Granted, I'm a penniless student, so I'm not longing or shorting anything, but you get the point.
Such is the way things work, scales tip one way then the next.
What about NVIDIA??
Also isn’t Amazon working on their own chips?
So, effectively AMD gets this private 1v1 game with Intel, and has a strong part in the GPU side of the market as well.
GPU is certainly a more open playing field compared to x86 (Intel is trying to get into it), but AMD is very competitive here as well. I think we will start seeing a lot more unification of their concepts between CPU and GPU. Multiple GPU dies interconnected using things like Infinity Fabric, etc.
Dr. Su said AMD was more about architecture than process technology, so perhaps they can prove what most of us suspected all along - Intellectual property is the most valuable strategic asset we have. Even if Intel reverse-engineered AMD's reticles, by the time they could implement the same architecture, AMD would be pushing products to market that are 2-3 generations ahead.
My question right now is, how long will it take Intel to reach parity on this architecture front (especially if it's the key to success as AMD's CEO claims)? The other question is, even if Intel figures out the architecture and comes up with something even better, will they be able to catch up on process technology? Is it only going to be TSMC and Samsung left standing at 3nm and beyond? What sort of damage would offloading the foundry business cause Intel if they had to go pure-play to become competitive on x86?
>Dr. Su said AMD was more about architecture than process technology, so perhaps they can prove what most of us suspected all along - Intellectual property is the most valuable strategic asset we have. Even if Intel reverse-engineered AMD's reticles, by the time they could implement the same architecture, AMD would be pushing products to market that are 2-3 generations ahead.
> My question right now is, how long will it take Intel to reach parity on this architecture front (especially if it's the key to success as AMD's CEO claims)?
This is not true. As much as I support AMD and loathe Intel, architecture-wise I would say even Zen 2 and Intel's SomethingLake are about the same, with many benchmarks backing that claim, and Ice Lake is slightly ahead of Zen 2. Ice Lake did deliver a roughly 15% improvement as promised, so there is no sign Intel's architecture department has any problems (yet).
The problem is that Ice Lake's performance improvement doesn't matter if 10nm can't yield.
And when Dr. Su talked about IP, she was referring to the whole chiplet strategy and how it binds everything together.
It's more like regression to the mean.
AMD also suffered from Spectre and Meltdown. Admittedly not all of the variants that have since come out (Intel has done way worse there), but AMD systems _are_ vulnerable if they lack microcode updates and OS patches.
> Presently, 13 Spectre variants and 14 Meltdown variants have been identified. Initially, AMD processors were thought to be immune to Meltdown, though one variant has been successfully demonstrated on AMD systems.
ONE variant. So technically true. But the situation on the Intel side is far worse.
Intel got way too greedy with their optimizations.
They are really trying, much harder than making better chips :)
Anything running a browser needs them because of JS.
Anything connected to the internet at all would be well advised to have them, because the exploits can be leveraged via the network stack.
Anything running on a VM has to have them, since microcode is set by the hypervisor.
So, we're talking about desktops not used to browse the internet, and bare metal servers whose network stacks aren't exposed to hostile actors?
To be fair the context here is HPC. Nobody is installing Chrome on something like IBM's Summit supercomputer. Nor is it running "untrusted" code. It's configured to go as fast as possible, nothing more.
Anyone in cloud hosting or desktop would be insane to not install spectre/meltdown fixes. But in this particular niche of HPC supercomputers it's actually reasonable to not have those mitigations.
I'm not really in the HPC space, but I can't really see any VP/CTO/CIO reasonably putting that asterisk on a security audit.
If you've got cyber insurance, you probably wouldn't be able to keep coverage if you did that.
Any CIO who decides not to apply such security patches will carry the full risk on his/her shoulders. Big bosses don't like risk and like even less to be personally accountable for anything => I don't see that happening.
Even if the tests mentioned in the article are HPC-related, the hardware is general-purpose, so its usage doesn't have to be restricted to HPC, and people might still decide to use those HPC results (as well) to compare the CPUs.
And why do you think a security policy is needed here at all instead of just, you know, laws & contracts? You can't just anonymously rent time on these systems even if you have the money - you need your usage approved, too, and it's a big whole process. If you then go and successfully launch a security attack, why does that require technical safeguards instead of just the FBI knocking down your door and hauling your ass to jail?
The main problem with Spectre/Meltdown and friends is two VPSs under the same hypervisor sharing a core and running some web stack. With careful analysis of the TLB, cache timings, and related side channels, you can extract SSL private keys, bitcoin addresses, and similar information that would normally be well hidden, since you are running in different kernels sharing the same hardware.
With an HPC cluster you are (generally) running on bare metal (no hypervisor), under a single Linux kernel, and you can see what the other user (if there is one) is doing with simple tools like ps, w, top, and friends. Additionally, there generally aren't SSL private keys or bitcoin addresses lying around.
In fact, unencrypted network filesystems are often used, sending plain text over the wire. Even on more sensitive clusters, security is tightest between the internet and the head node and looser between compute nodes.
Hyperthreading being on vs. off is usually just a matter of the performance characteristics of whatever application is most common, plus the limitations of certain batch queues like Sun Grid Engine (SGE). If your popular app hates hyperthreading, you turn it off. Or sometimes you want to minimize the performance impact of users sharing a core.
Additionally, sometimes cores, or even nodes, are not shared. But again, that's for performance reasons, not security. The last thing you want is a 10,000-core job running at 50% speed because one node is shared with a resource-intensive application.
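If you want to check how a node actually ended up configured, here's a minimal sketch (assuming a Linux kernel new enough, roughly 4.19+, to expose SMT status in sysfs):

    # Sketch: report whether SMT (hyperthreading) is currently enabled on a Linux node.
    from pathlib import Path

    smt = Path("/sys/devices/system/cpu/smt")
    if smt.is_dir():
        active = (smt / "active").read_text().strip()    # "1" if SMT is on
        control = (smt / "control").read_text().strip()  # "on", "off", "forceoff", or "notsupported"
        print(f"SMT active={active}, control={control}")
    else:
        print("SMT status not exposed by this kernel")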
You can browse the internet without JS. Many people here seem to do so.
The number of people who do isn't statistically interesting, though. If you are one of them, I still wouldn't advise doing so with the mitigations turned off, since you're still parsing and displaying hostile IP/TCP/HTTP/(HTML/CSS/image/video/...).
For example, http://www.misc0110.net/web/files/netspectre.pdf
EDIT: Are the downvotes because people don't believe that the JS mitigations are effective?
I think you're underestimating the % of HPC networks with properly-configured firewalls and only running code from trusted sources.
The reason you generally do need it is that the above isn't true, and microcode has to be loaded at the hypervisor level for everyone or for no one.
This is quite common in the engineering/research sectors, which love clusters. Think Sandia, Oak Ridge, etc. I also personally know of several private companies with research clusters. I'm curious how common HPC clusters are outside of the research community, because honestly I'm struggling to think of a practical need for one.
The real question is do you trust any potential co-tenants in a public cloud that share a hypervisor with you, when said hypervisor does NOT have Spectre/Meltdown fixes applied? (There is only one correct answer...)
Also, you can fairly easily mitigate this in a public cloud setting by running your VM on a dedicated hypervisor. AWS calls these "EC2 Dedicated Instances". Interestingly enough, for most compute sizes the cost difference is negligible. I imagine that isn't the case everywhere / for everything, but if you A) need pre-Spectre/Meltdown performance and B) need protection against co-tenancy attacks, paying a few more dollars for a dedicated hypervisor host seems like a no-brainer to me.
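As a rough sketch of what requesting dedicated tenancy looks like with boto3 (the AMI ID and instance type below are placeholders, not recommendations):

    # Sketch: launch an EC2 instance with dedicated tenancy so no other customer's
    # VMs share the underlying hardware. AMI ID and instance type are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI
        InstanceType="c5.4xlarge",           # placeholder size
        MinCount=1,
        MaxCount=1,
        Placement={"Tenancy": "dedicated"},  # single-tenant hardware
    )
    print(resp["Instances"][0]["InstanceId"])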
noibrs - We don't need no restricted indirect branch speculation
noibpb - We don't need no indirect branch prediction barrier either
nospectre_v1 and nospectre_v2 - Don't care if some program can get data from some other program when it shouldn't
l1tf=off - Why would we be flushing the L1 cache, we might need that data. So what if anyone can get at it.
nospec_store_bypass_disable - Of course we want to use, not bypass, the stored data
no_stf_barrier - We don't need no barriers between software, they could be friends
mds=off - Zombieload attacks are fine
mitigations=off - Of course we don't want no mitigations
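For anyone curious what a given box actually ended up with, a minimal sketch (assuming a Linux kernel new enough, roughly 4.15+, to expose the sysfs vulnerabilities directory):

    # Print the kernel's own report of which CPU vulnerabilities are mitigated.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vuln_dir.is_dir():
        raise SystemExit("This kernel does not expose vulnerability status in sysfs")

    for entry in sorted(vuln_dir.iterdir()):
        status = entry.read_text().strip()   # e.g. "Mitigation: ...", "Vulnerable", "Not affected"
        print(f"{entry.name:30s} {status}")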
The article context is HPC - it seems very unlikely Spectre/Meltdown fixes are needed for HPC loads (no browser, no VM sharing).
If your server is not reachable from untrusted sources, and you are running only what you trust, it is prudent to disable mitigations and make use of the extra performance.
For example, I run many compute nodes (private network only) in my HPC cluster with mitigations off. If an attacker could reach the compute nodes over the network to push the attack payload, I've already been so heavily compromised that these attacks are just insult added to injury.
If you run a database server that can never be reached directly by an attacker, you may spend your time watching your application server and let the DB server use the extra oomph.
Naturally, everyone must determine their level of risk aversion and take the steps they feel most prudent. I've not heard this perspective before. Thank you for sharing!
Microsoft has deployed mitigations across all our cloud services.
You're right about that...
> The vast majority of computers have no such need.
You're probably wrong about that. Every computer? No. A majority or substantial minority? Yes. There are a ton of consumer computing devices that act as web browsers, and smartphones absolutely count here.
If you or your employer doesn't apply mitigations I certainly wouldn't go around bragging about it. Let it be a deep, dark secret until the next hw upgrade cycle. Otherwise you'll get pilloried and/or sued for any data breach.
That's a working exploit, not a hypothetical.
> Chrome intentionally degrades the accuracy of its high-resolution timer to dissuade timing attacks using performance.now() . However, the Web Workers feature of HTML5 makes it simple to create a separate thread that repeatedly decrements a value in a shared memory location [24, 60]. This approach yields a high-resolution timer that provides sufficient resolution.
They did it on Chrome 62.0.3202, but SharedArrayBuffer was disabled in all major browsers right when Spectre dropped. Chrome enabled it again in Chrome 67 if Full Site Isolation was enabled. Full Site Isolation is now enabled by default, AFAIK.
So no, that is in no way a working exploit in any major browser I know of. There is no known Spectre (or other variant) exploit that works in a default, updated major browser (and there never has been, as far as I know), but feel free to link any PoCs.
Was Heartbleed not an exploit because it has since been patched? Or the thousands of DOS TSR viruses? What even is your point or logic here?
They fucked up, and are trying to pawn the slowdowns as some sort of Faustian choice.
? Has something changed in the laptop world? Because you're describing issues frequently heard about laptops and their screenless cousins, NUCs.
*Edited to clarify I meant socket SP3 boards rather than embedded.
The prices could be better however.
I was looking for a decent AMD replacement for an Intel NUC and didn't find one then; I wanted to suggest a new niche in the previous comment (I looked for it a month or so ago). I only found out after my comment that one was coming, and I didn't want to remove my comment ;). I'm truly interested in an AMD replacement for my current slow (and cheap) Intel NUC.
And, uh, looking at the top comments, I'm not alone, sarcasm aside.
If you order enough of them, they can assign a custom team to you! You could make money like hell launching this on the market!
Build your dream!
You're right to express some concern -- practical residential limit for continuous load on a perfect incarnation of the typical residential circuit is something like 1440W total (80% of nominal 15A breaker on 120VAC).
More realistic mid-range parts might be 155-180W/socket, which drops the CPU draw down to 620-720W.
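A quick sketch of that power budget (the 15A/120V circuit and 80% continuous-load derating are from the comment above; the four-socket count is just an illustration chosen to reproduce the 620-720W figure):

    # Back-of-the-envelope power budget for a box on a standard US residential circuit.
    breaker_amps = 15
    volts = 120
    continuous_limit_w = 0.8 * breaker_amps * volts   # 1440 W for the whole circuit

    sockets = 4   # illustrative; 155-180 W/socket times 4 gives the 620-720 W above
    for tdp in (155, 180):
        cpu_w = sockets * tdp
        print(f"{tdp} W/socket: CPUs draw {cpu_w} W, leaving {continuous_limit_w - cpu_w:.0f} W headroom")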
Yep! The typical US residential circuit is only 120V on a 15A breaker (so 12A at 80% load). (And that may be only 110-120V.) 20A circuits are also fairly common, but they're not the most common; the majority of wall outlets in a US house will be 15A.
In this chart, https://en.wikipedia.org/wiki/NEMA_connector#/media/File:NEM... , the commonly used US residential sockets are NEMA 5-15 (labelled "Typical Outlet," for 15A breaker circuits), NEMA 5-20 (for, duh, 20A breaker circuits), and NEMA 1-15 ("Old Outlet," in older buildings).
> 1500W isn't that much of a problem. Oven breakers are even 20 or 32A.
We do have higher amperage circuits for appliances like ovens and clothes driers, but they're usually dedicated circuits and have different socket shapes. You can see the labels "Clothes Dryer" and "Electric Oven" in the chart linked above :-).
And of course some datacenters just wire DC to the whole shebang instead of having individual (well, two redundant) PSUs per server.
AFAIK there aren't any; Epyc only has 1-socket (with 129 PCIe lanes) or 2-socket (using 64 or 48 of the 129 lanes to connect to the other socket) options.
However, 'chrome' would be valid. :)
There are algorithms that haven't yet been optimized for GPU parallelization. And there are datasets that require memory access patterns that aren't amenable to GPUs.
That said, now I'm curious what a 128-core system would do to memory access. Better hope the algorithms are L2/L3 coherent?
Not always. I have a working 10GBASE-T link over about 50 feet of Cat 5e, going through two couplers too. No frame loss.
Probably depends a lot on the quality of the cabling and the noise environment.
As the saying goes, don't attribute to malice that which can be adequately explained by stupidity (including corporate bureaucracy).
If the publication was delayed to allow time for due diligence, then this disclosure should have made the cut. I like to think that Intel has some top-notch benchmarking nerds who were overruled by unscrupulous executives and would not otherwise have published a dishonest benchmark.
Despite ServeTheHome's own previous work showing this, they're whining about it being different. But if Intel hadn't tested AMD in the proper configuration they would have complained about that too.
Yeah, you take first-party benchmarks with a big grain of salt like always, but STH is just looking to stir some shit here.
AMD is running at half the threads it can. I'm pretty sure that's neither its proper nor its fastest configuration.
The one thing I'm not pleased with is the generally poor experience with integrators, on both platforms. Who told them they shouldn't put prices on gigabyte.com? When I see the actual price on a distributor's site, I'm nothing but pleased.
We just bought our first AMD Epyc server for our HPC cluster. I can't wait until next week to test it against our Skylake nodes.
It's a 1-socket 16-core EPYC CPU vs. 2x Xeon Skylake.
I already have a suite of our internal workloads to hammer them with. One of them is a key piece of in-house software, usually compiled with the Intel compiler and Intel MKL for performance reasons. Really curious how it will do with the EPYC CPUs.
I will try to do some basic perf tests as well along with our workload specific tests.
I plan to share the results as a blog post here: https://aravindh.net
For one, the claim is that Intel misleads intentionally. Given the timeline there's no particular reason to think there is something intentional here. The AMD-optimized version of GROMACS just came out ~5 weeks ago.
Second, while this is misleading in a sense, it's in the way that benchmarks are all generally misleading. By their intrinsic nature they don't tell the whole story. (And of course marketing departments cherry-pick benchmarks to tell the story they want to tell. Also by intrinsic nature.)
BTW, this isn't even all that bad for Intel. The AMD chip might outperform the Intel one, but only if your software has been specifically optimized for it.
And frankly I've never seen AMD trying to reach those people.
Source is still a good read for explanations on why this is impactful.
Disclosure: I'm long AMD
That’s damning in and of itself.
The question to ask is whether or not they used the best compiler/switches available for the other chips.
This is not true. Intel compilers cripple non-Intel CPUs on purpose: they generate code that literally checks whether the CPUID vendor string is equal to "GenuineIntel", and if it isn't, it executes the less optimized code path.
 - https://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler#Recepti...
I don't know about nowadays, but it used to be that the Intel libraries and compiler would specifically disable SIMD instructions if they found that the CPUID returned a non-"GenuineIntel" string. Doing nothing, and just relying on the CPU feature flags, would have been more fair.
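To illustrate the kind of check being described (a hypothetical sketch of vendor-based dispatch, not Intel's actual runtime code; it reads the vendor string from /proc/cpuinfo on Linux):

    # Hypothetical illustration of vendor-string dispatch, NOT Intel's actual code.
    # Picks a "code path" by vendor name instead of by the feature flags the CPU advertises.
    def cpu_info() -> dict:
        info = {}
        with open("/proc/cpuinfo") as f:
            for line in f:
                if ":" in line:
                    key, _, value = line.partition(":")
                    info.setdefault(key.strip(), value.strip())
        return info

    info = cpu_info()
    vendor = info.get("vendor_id", "")
    flags = set(info.get("flags", "").split())

    if vendor == "GenuineIntel":
        path = "optimized (AVX2)" if "avx2" in flags else "optimized (SSE2)"
    else:
        # The criticized behavior: fall back to a slow path on non-Intel vendors,
        # even when the same SIMD feature flags are present.
        path = "generic baseline"

    print(f"vendor={vendor!r}, would dispatch to: {path}")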
Nonsense. Please do some research before you comment.
In the past, Intel would build the benchmarks with commonly available compilers (Intel, GCC, Clang) and then select the best benchmark results, e.g. if GCC gave the best numbers on AMD chips they'd use GCC to compile the tests on AMD. That was a fair comparison.
They deliberately changed to using their own compiler specifically to disadvantage AMD in this comparison.
All models are wrong, some are useful.
Blah, blah, blah.
Every benchmark can be set up in a way that will give edge to one part or another. Especially if there is a vested interest.
So which would you consider faster? The one that can deliver a few seconds of effort really quickly, or the one that delivers 90% of that effort but sustains it for hours?
Immense opportunities to fool others, and even yourself.
This doesn't make them wrong, but benchmarks are never the full story.
(Bat signal @dang or anyone else)