Lenovo makes one with only 12GB of RAM; Asus's effort has 8GB on a single channel. AMD makes the best integrated CPU/GPU for laptops, but you cannot buy it anywhere.
Commentary elsewhere is that Intel is leaning hard on builders not to use the Ryzen 7/Vega 10, or if they use it, to put it in an otherwise shitty spec box that cripples it.
AMD said they expect 25 Ryzen laptops to launch in Q2 and 60 by year end.
Also, AMD chipsets seem to use more power than Intel's. I'd love to see more work in that direction.
I love AMD and I'm rooting for them, but the stock price isn't the news - it's the unexpected sales strength of the Zen architecture. Don't expect Intel to take their success lying down.
The most interesting thing is that Intel used to be able to crush their opponents by being one step ahead on node size improvements, but that's hit a wall and it has given everyone else a chance to catch up.
Intel is vulnerable if they don't get their act together and the competition do.
AMD has been on a roll since Ryzen was released, and as long as time passes without a Meltdown-level problem cropping up, they might have a shot at eating some of the datacenter market.
We upgraded our Xeon servers the week before Meltdown hit; if we'd known, we'd have held on another year and gone EPYC.
The problem with AMD is that it stopped making server CPUs for 5 years. Epyc was their first server release since December 2012, and that is what essentially prevents any serious ramp-up of Epyc currently.
Not having the fastest CPU out there isn't a problem if you price it correctly, but when you don't give your customers any options to upgrade or grow, you leave them only one option: switch to the competition completely.
If people can't trust that AMD won't abandon them again for half a decade to sort their shit out they will never take the risk of using them again at scale.
That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be built from the ground up again.
I don't see why that should be a concern. It's not as though the effect is any different than switching to a new socket. Existing systems can't be upgraded to the newer processors, which is mostly irrelevant anyway because by the time the processor is stale so is the rest of the system.
It's not as if they're different instruction sets. It's perfectly reasonable to buy Opterons in 2009, replace them with Xeon systems in 2014 and then replace those with Epyc systems in 2019.
> That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be built from the ground up again.
The Zen microarchitecture isn't based on Bulldozer. Even if they had kept iterating on Bulldozer in the interim, none of that ecosystem work would have been applicable to Zen anyway.
Let’s take some simple examples.
Intel and AMD aren’t “compatible”: you can’t cluster heterogeneous servers together for thin provisioning, since you can’t live-migrate between them.
You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux where you might use vendor-specific kernels for each CPU.
Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions.
Does your management stack support DASH? Are your IT peeps familiar with it? Does it have sufficient traction and market adoption?
Likely not and that is again because AMD slept for half a decade.
Say you are an architect and you need to buy 100 servers with an expected yearly growth of 10%. Can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?
Heck even if you aren’t going to grow what about dealing with disasters? Do you really want to compound an already huge risk with another one?
And as for what you said about Zen:
Zen isn’t that different from Bulldozer in many respects; I suggest you read the intrinsics guides for both.
And even if it were 100% different, it wouldn’t matter: a 5-year gap kills the entire infrastructure of partners who provide tools and education.
If I need to optimize software today for an Intel CPU, I have a plethora of resources; AMD can’t even release their instruction latency tables for family 17h.
> You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux where you might use vendor-specific kernels for each CPU.
The premise is that you're migrating from one vendor to the other, so once you move something to the other pool it shouldn't have to move back. Having to reboot each guest once is inconvenient, but aren't you already doing this every month or two for security updates?
> Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions.
This absolutely is AMD's fault, but the real issue is that their remote management solution (like Intel's) is a closed source black box. If they would open it up then it might be adopted by ARM vendors and so on and no one would have to worry about being abandoned because the community could continue to support it for as long as enough people want to keep using it. And it would put pressure on Intel to do the same thing, at which point they could be consolidated.
> Say you are an architect and you need to buy 100 servers with an expected yearly growth of 10%. Can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?
That would be the case if we were talking about some low volume product at risk of becoming unavailable. You can still source Opteron systems even today if you really want them. But nobody has wanted them for five years because the migration cost isn't that high.
And as far as sourcing Opterons, are you serious? Sure, you can source them on eBay, but take a look at when the likes of HP stopped supplying them.
From the little research I’ve done though I’m unsure whether their new CPUs really compete with Intel. Any anecdotes here?
It will take a few new data centres to nudge it.
Intel couldn't make Skylake Xeons fast enough; it became their fastest ramp-up of any Xeon release to date.
The problem is that AMD has lost the trust of the enterprise market because they gave up and stopped making Opterons without even a good reason to do so.
That screwed up their entire customer base because they had no option to expand or upgrade other than just replace all of their servers with Intel.
Opteron used to be king, and yeah, they were losing the pure performance crown, but in 2012 it wasn't that bad: server software could be optimized and an acceptable performance-per-dollar ratio could've been maintained.
Before Epyc, the last enterprise Opteron AMD released was in December 2012: a 16-core CPU on a 32nm SOI process (technically 8 cores, since it's Bulldozer, but who cares). That's 5 years without an upgrade option.
If I were to bet, the number one question AMD receives today about Epyc is "what happens when you stop making CPUs again?", to which they probably reply "we won't, cuz we can't afford to", followed by the cheeky retort: "you couldn't afford it last time either".
The Epyc ramp-up was expected to be slow, but I don't think this slow; the amount of resistance it's experiencing is, I think, beyond even AMD's internal pessimistic predictions.
Without investors pouring money in, R&D will stall, and that's not something they can sustain. One of the few reasons they haven't been bought yet is that an acquisition would invalidate their license sharing with Intel. So I guess the other strategy is to simply bleed them dry. I mean, just think about it. They build GPUs, just like Nvidia. They build CPUs, just like Intel. They have their own fabrication process, logistics, research, the whole thing. Their chips don't perform much worse, and are even better in the mid-range tier. Yet AMD is valued at about $9B, instead of 5x or even 10x as much. The people with the money are holding a grudge or something, and I can't see them keeping it up for much longer.
- AMD has a superior approach to manufacturing many-core chips. They glue multiple smaller dies together to form their high-core-count parts, while others try to stuff as many cores as possible onto a single die. The problem is that the defect rate during fabrication grows exponentially with die area. Through this innovation, AMD has much lower costs when manufacturing many-core chips.
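The yield argument above can be sketched with a toy model. This is the standard Poisson yield approximation, and every number below is made up for illustration, not AMD's actual fab data:

```python
import math

def die_yield(defect_density, die_area):
    """Poisson yield model: good-die fraction falls off exponentially with area."""
    return math.exp(-defect_density * die_area)

D = 0.5             # defects per cm^2 (illustrative)
chiplet_area = 2.0  # cm^2 for one small chiplet (illustrative)

# One monolithic many-core die is ~4x the area of a chiplet.
monolithic_yield = die_yield(D, 4 * chiplet_area)  # exp(-4) ~ 1.8%
# The same core count built from four chiplets keeps the small-die yield.
chiplet_yield = die_yield(D, chiplet_area)         # exp(-1) ~ 36.8%

# Cost per good die scales as 1/yield, so the silicon-cost gap is large.
print(chiplet_yield / monolithic_yield)            # exp(3) ~ 20x per die
```

The point is just that yield loss compounds exponentially with area, so splitting one big die into several small ones (plus interconnect overhead, which this ignores) can be a big cost win.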
- EPYC has some serious advantages over its main competitor -- 128 PCI-E lanes, and comparable performance on other dimensions, at a lower cost.
- AMD APU's are not something that Nvidia can compete with. The fact that AMD can manufacture both CPU's and GPU's gives them a huge advantage over Nvidia in this space.
In other words, AMD already has a couple of nice moats. It's not like if they don't come up with a breakthrough within a year, they're screwed.
Should have said simultaneous multithreading (SMT) instead of Hyper-Threading, which is specific to Intel.
It would be interesting to know anything more on the AMD/Intel agreements than the vague "x86 cross licensing" though for sure.
The first will be a quicker read than the second ;). The other commenter mentioned SMT was an IBM patent rather than an Intel one, so that's probably the answer to my question. It seems weird that they wouldn't have just licensed that from IBM as well.
But that SEC link is awesome; I thought the agreement was completely confidential and we'd only ever see it in a historical context. Thanks!
I mean, yeah, that's actively happening, as can also be seen by the Intel/Vega combo.
The typical server farm waits for OEMs to make well-tested complete machines. So it won't be till this year that EPYC even has a chance to make it into your typical server room setup.
As far as I can tell, unless you were a hyperscaler, EPYC's release was essentially a paper launch up until Q1 2018.
The big issue with AMD is software and money. Even with the most talented engineers, it takes loads of man-hours to build up an ecosystem. AMD is currently valued at $8.3B. Nvidia currently has $7.1B cash on hand. With that kind of money, Nvidia will do things like lend AAA game studios some devs to rewrite large parts of a game to optimize it for Nvidia graphics cards, or build up the CUDA ecosystem.
Also, AMD has been improving its Windows drivers in the last 2-3 years according to reports. I use AMD GPUs on Linux where the driver codebase is different - the improvements there are huge. So much so that it may make sense for AMD to switch to that stack on Windows in the future.
They'd basically thrown in the towel as far as the datacenter went, against CUDA.
Their ROCm environment finally is catching up with NVidia's software / CUDA environment. OpenCL was simply a disaster and hampered AMD's role as a serious software solution for years.
I think it's more important to get ROCm fully functional with Tensorflow and other technologies (their current path). AMD can compete on price/performance, selling Vega HBM2 chips at ~$1000 to $2000 (compared to NVidia's $8000 for their HBM2-based V100).
With AMD's drivers properly in the Linux kernel and a nicer licensing model, AMD can achieve a position in the server world. But only if developers start coding in ROCm instead of CUDA.
Honestly, I bet you that software is the bigger issue in the near term. No one wants to code in OpenCL / separated C-environment. ROCm achieves CUDA-parity to some degree with "single source" C++ programming and is beginning to become compatible with Tensorflow.
Sure, they'll be slower than NVidia. But at least your Python machine learning code will actually run at all. And paying 1/8th the price for ROCm acceleration will be fine as long as AMD is at 1/4th the performance or better. That's how you actually build a value argument.
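That value argument is just arithmetic; a minimal sketch using the hypothetical prices and performance ratio from this comment (not benchmarks):

```python
# Illustrative numbers from the comment above, not measured data.
nvidia_price = 8000.0     # V100-class card
amd_price = 1000.0        # Vega HBM2 card at 1/8th the price
amd_relative_perf = 0.25  # assume AMD at 1/4th NVidia's performance

# Performance per dollar for each, then the ratio (NVidia = baseline).
nvidia_value = 1.0 / nvidia_price
amd_value = amd_relative_perf / amd_price

print(amd_value / nvidia_value)  # 2.0: twice the perf per dollar
```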
Even today, OpenCL is a viable solution for GPUs. It works fine on both AMD and NVidia GPUs. It is also pushed a lot by Intel for FPGAs, which probably scares NVidia even more.
OpenCL kernels are compiled at runtime, which is brilliant since you can change the kernel code at run time, use constants in the code at the last moment, unroll, etc., which can give better performance. (Nvidia only introduced runtime compilation as a preview in CUDA 7!)
The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL by putting the code in strings.
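As a sketch of what "use constants in the code at the last moment" means in practice: you specialize the kernel source string per run, then hand it to the driver's runtime compiler (`clCreateProgramWithSource` + `clBuildProgram`, or `cl.Program(...).build()` in pyopencl). The kernel and names below are made up for illustration, and the actual driver calls are omitted since they need an OpenCL runtime:

```python
# Template for an OpenCL kernel; {n} and {factor} get baked in as literals,
# so the runtime compiler can constant-fold and unroll instead of reading
# the values from kernel arguments or uniforms.
KERNEL_TEMPLATE = """
__kernel void scale(__global float* data) {{
    int i = get_global_id(0);
    if (i < {n}) data[i] *= {factor}f;
}}
"""

def specialize(n, factor):
    """Produce kernel source specialized for this run's constants."""
    return KERNEL_TEMPLATE.format(n=n, factor=factor)

src = specialize(n=1024, factor=0.5)
print(src)  # this string would go to clCreateProgramWithSource
```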
These tools aren't available for the wide majority of developers, and are still exceptionally difficult to use and maintain without hardware engineers. I'm going to assume you haven't used FPGAs at all? The ones that can compete at the same tasks for GPUs are not as easily available in terms of price, volume, or even over-the-counter availability (be prepared to ask for a lot of quotes), and the tools have only become more accessible very recently -- such as Intel slashing the FPGA OpenCL licensing costs, and Dell EMC shipping them in pre-configured rack units.
> Nvidia only introduced the possibility of having runtime compilation as a preview in Cuda 7
In the mean time, Nvidia also completely dominated the market by actually producing many working middleware libraries and integrations, a solid and working programming model, and continuously refining and delivering on core technology and GPU upgrades. Maybe those things matter more than runtime compilation and speculative claims about peak performance...
> The "single source" argument is completely overrated.
Even new Khronos standards like SYCL (built on OpenCL, and which does look promising, and I'm hoping AMD delivers a toolchain after they get MIOpen more fleshed out) are moving to the single-source model. It's not even that much better, really, but development friction and cost of entry matters more than anything, and Nvidia understood this from day one. They understood it with GameWorks, as well. They plant specialist engineers "in the field" to accelerate the development and adoption of their tech, and they're very good at it.
This is because their core focus is hardware and selling hardware; it's thus in their interest to release "free" tools that require low-effort to buy into, do as much dirty integration work as possible, and basically give people free engineering power -- because it drives their hardware sales. They basically subsidize their software stack in order to drive GPUs.
> Furthermore, you can have single source in OpenCL putting the code in strings.
This is a joke argument, right?
I'll probably need to be more specific. OpenCL 1.0 through 1.2 is fine, but fell hopelessly behind NVidia's CUDA efforts. NVidia CUDA has more features that lead to proven performance enhancements.
OpenCL 2.0 was the "counterpunch" to bring OpenCL up to CUDA-level features. However, OpenCL 2.0 is virtually stillborn. Only Intel and AMD platforms support OpenCL 2.0. Intel Xeon Phi is relatively niche (and its primary advantage seems to be x86 code compatibility anyway, so I doubt you'd be running OpenCL on it).
AMD OpenCL 2.0 support exists, but is rather poor. The OpenCL 2.0 debugger simply is non-functional and you're forced to use lol printfs.
That leaves OpenCL 1.2. It's okay, but it is years behind what modern hardware can do. Its atomic + barrier model is strange compared to proper C++11 atomics, and it's missing important features like device-side queuing, shared virtual memory, and a unified address space (no more copy/pasted code just to go from "local" to "private" memory), among other very useful features.
> Even today OpenCL is a viable solution for GPU
OpenCL 1.2 is a viable solution. An old, crusty, and quirky solution, but viable nonetheless. OpenCL 2.0+ is basically dead. And I think only Intel Xeon Phi supports the latest OpenCL 2.2.
I bet you there are more Vulkan compute shaders out there than OpenCL 2.0 kernels. Indeed, there are rumors that Khronos is going to focus on Vulkan compute shaders in the future.
> The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL by putting the code in strings.
I like my compile-time errors to be during compile-time. Not during run-time on my client's system. Compiler-bugs in AMD drivers are fixed through device driver updates (!!!) which makes practical deployment of plain-text OpenCL source code far more of a hassle in practice.
Consider this horror story: a compiler bug in some AMD device driver versions that causes a segfault on some hardware versions. This is not theoretical: https://community.amd.com/thread/160362.
In practice, deploying OpenCL 1.2 code requires you to test all of the device drivers your client base is reasonably expected to run.
But that's not the only issue.
"Single Source" means that you can define a singular structure in a singular .h file and actually have it guaranteed to work between CPU-code and GPU-code. Data-sharing code is grossly simplified and is perfectly matched.
The C++ AMP model (which has been adopted into AMD's ROCm platform) is grossly superior. You specify a few templates and bam, your source code automatically turns into CPU code or GPU code. Extremely useful when sharing routines between the CPU and GPU (like packing or unpacking data from buffers).
With that said, AMD clearly cares about OpenCL, and the ROCm platform looks like it strongly supports OpenCL through the near term, especially OpenCL 1.2, which seems to have a big codebase.
However, if I were to do any project these days, I'd do it in ROCm's HCC / single-source C++ system or CUDA. OpenCL 1.2 is useful for high-compatibility but has major issues as an environment.
But AMD drivers which cause OpenCL compiler-segfaults and/or infinite loops is a problem that rests squarely on AMD's shoulders.
My personal use case with OpenCL didn't seem to be going very well. I was testing on my personal Rx 290x. While I didn't have the crashing / infinite loop bugs (See LuxRender's "Dave" for details: http://www.luxrender.net/forum/viewtopic.php?f=34&t=11009) that other people had, my #1 issue was with the quality of AMD's OpenCL compiler.
In particular, the -O2 flag would literally break my code: I was doing some bit-level operations, and those operations returned wrong results under -O2. Meanwhile, the -O0 flag was so unoptimized that my code was regularly swapping registers into and out of global memory, at which point the CPU was faster and there was no point in using OpenCL / GPU compute.
It seems like AMD's OpenCL implementation assumes kernels will be very small and compact, and it seems better designed for floating-point ops. Other programmers online have also complained about AMD's bit-level operations returning erroneous results under -O2. My opinion of the compiler was... rather poor... based on my limited exposure, and further research suggests I wasn't the only one having these issues.
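For what it's worth, the usual way to catch miscompiles like this is to keep a scalar CPU reference of each kernel and diff the results. A minimal sketch; the popcount operation and the "GPU" results here are made up for illustration:

```python
def popcount_ref(x):
    """Scalar CPU reference for a hypothetical bit-counting kernel."""
    return bin(x & 0xFFFFFFFF).count("1")

def check(gpu_results, inputs):
    """Return (input, gpu_value, expected) for every mismatch."""
    return [(x, got, popcount_ref(x))
            for x, got in zip(inputs, gpu_results)
            if got != popcount_ref(x)]

inputs = [0x0, 0xFF, 0xDEADBEEF]
# Pretend these came back from the GPU; the last one is wrong on purpose
# to show a compiler/driver bug being detected.
gpu_results = [0, 8, 25]
print(check(gpu_results, inputs))  # [(3735928559, 25, 24)]
```

It doesn't fix the broken -O2 codegen, but it turns silently wrong answers into a visible diff before they reach production.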
Just because they can trade blows with Intel in the CPU space doesn't automatically mean they have equal engineering talent to take on Nvidia in GPUs.
From the time AMD released the K7 'Athlon' CPU, until the time Intel released Core2, AMD was king of performance.
That's a long span of time.
To understand why Intel kept selling CPUs like pancakes despite this, check this out: https://www.youtube.com/watch?v=osSMJRyxG0k
I used to have a 440BX board with a pair of Celeron 300A's clocked to 450MHz and 128MB of SDRAM. An excellent poor man's workstation.