They just raised more money too - I just wonder how painless the developer experience will be with using their drivers with latest versions of your chosen DL framework, and how price/perf will compare with DL specific tensor processor/GPU hybrids like Volta.
Who knows, perhaps Intel will be developing more general purpose massively parallel compute processors, but intends to integrate some of the knowledge and experience accrued from the field of graphics processors.
But as I said in another comment, the truth is Intel doesn't seem to know what it's doing, which is why it's pushing in 5 or 6 different directions with many-core accelerators, FPGAs, custom ASICs, neuromorphic CPUs, quantum computers, graphcores, and so on.
By the time Intel figures out which one of these is "ideal" for machine learning, and behind which arrows to "put more wood," Nvidia will have an insurmountable advantage in the machine learning chip market, backed by an even stronger software ecosystem that Intel can't build because it doesn't yet know "which ML chips will win out".
If I had to describe Intel in a sentence these days, it would be "Intel doesn't have a vision." It's mostly re-iterating on its chips and rent-seeking these days by rebranding weak chips with strong chip brands, and adding names like "Silver" and "Gold" to Xeons (and charging more for them, because come on - it says Gold on them!), as well as essentially bringing the DLC nickel-and-diming strategy from games to its chips and motherboards.
Meanwhile, it's wasting billions every year on failed R&D projects and acquisitions because it lacks that vision on what it really needs to do to be successful. Steve Jobs didn't need to build 5 different smartphones to see which one would "win out" in the market.
It's not clear if doing this in-house, or closely monitoring the state of the art and then buying a company that develops a winner, is superior.
Most of those are completely different technologies that will almost certainly not share a niche.
Effectively, this argument is much like saying that your personal workloads don't use AVX and demanding that Intel tape out a whole different die without it. You would very rightly be laughed out of town for even suggesting it.
Much like the economics of cryptomining cards that lack display outputs, this comes down to whether there is actually enough of a market to justify taping out a whole specialty product just for this one niche, vs the economies of scale that come from mass production. After all, that is the logic behind using a GPU in the first place, instead of a custom ASIC for your task (like Google's Tensor Processing Unit). On the whole it is probably cheaper if you just suck it up and accept that you're not going to use every last feature of the card on every single workload. It's simply too expensive to tape out a different product for every workload.
This only gets more complicated when you consider that many types of GPGPU computation actually do use things like the texture units, since it allows you to coalesce memory requests with 2D/3D locality rather than simple 1D locality. I would also not be surprised if delta compression were active in CUDA mode, since it is a very generic way to increase bandwidth.
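To make the texture-unit point concrete, here is a minimal CUDA sketch (the kernel and variable names are made up for illustration, not from any particular codebase): a plain compute kernel reads a 2D field through a texture object, so neighbouring threads get accesses with 2D locality served by the texture cache instead of relying on simple 1D coalescing.

```cuda
#include <cuda_runtime.h>
#include <vector>

// Read a 2D field through a texture object so neighbouring threads get
// 2D-local accesses served by the texture cache.
__global__ void blur3x3(cudaTextureObject_t tex, float *out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float s = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)          // the 3x3 neighbourhood hits
        for (int dx = -1; dx <= 1; ++dx)      // nearby texels in the cache
            s += tex2D<float>(tex, x + dx + 0.5f, y + dy + 0.5f);
    out[y * w + x] = s / 9.0f;
}

int main() {
    const int w = 1024, h = 1024;
    std::vector<float> host(w * h, 1.0f);

    // A CUDA array gives the driver a 2D-friendly internal layout.
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &desc, w, h);
    cudaMemcpy2DToArray(arr, 0, 0, host.data(), w * sizeof(float),
                        w * sizeof(float), h, cudaMemcpyHostToDevice);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;
    cudaTextureDesc td = {};
    td.addressMode[0] = td.addressMode[1] = cudaAddressModeClamp;
    td.filterMode = cudaFilterModePoint;
    td.readMode = cudaReadModeElementType;
    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &res, &td, nullptr);

    float *out;
    cudaMalloc(&out, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    blur3x3<<<grid, block>>>(tex, out, w, h);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFreeArray(arr);
    cudaFree(out);
    return 0;
}
```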
The GPGPU community absolutely does use the whole buffalo here, there is very little hardware that is purely display-specific. If you want hardware that is more "compute-oriented" than the consumer stuff, that's why there's GP100 and GV100 parts. If you want even more compute-oriented than that, you're better off looking at something that's essentially fixed-function hardware dedicated to your particular task, rather than a general-purpose GPU unit.
So, it doesn't really make any economic sense.
Whoever at AMD refused to match the offer probably made a terrible decision. This is about the worst time to lose that talent, right after inking a GPU die deal which, in light of this news, will only be temporary. AMD just got played.
If I were AMD, I would review Mark Papermaster's comp and incentives to ensure he doesn't leave.
(I'm long AMD)
I think the recent Intel + AMD custom chip was probably the last thing Raja did before RTG got the reins put back on, and now he's hopping ship to pursue what he's wanted all along: to work with more independence.
More power to him.
I know AMD has access to some resources, but if Intel decided he was a strategic hire, the game was over before it started. It's not just that they are richer; he's worth spending more on to them, because you could argue it ties into their most important long-term IP battles in related areas like massively parallel computing, ML, etc.
At that point you have to just cut the line.
Gaming customers don't care because it's only consuming more when actually playing: 500 hours played in a year times 0.10 EUR/USD per kWh says you only save 50 EUR/USD per year per 1000 W reduction in power consumption, so it's about 2.50 EUR/USD per year for a 50 W difference. Cryptocurrency miners have voted with their wallets and gone AMD. There are some 24/7 HPC GPGPU users who put weight on power consumption, but it is a small market segment.
Of course lower power consumption lets you make all kinds of useful engineering decisions within the product, so lowering it would make the product faster or cheaper, but that is already accounted for in the direct cost and performance numbers of the current product.
With cryptominers I'm not sure they are actually "voting" for anything; to me it rather looks like they are buying up pretty much all the decent mid-to-high-range cards in bulk, regardless of brand. It's more about availability.
Right now, if you are building a new computer and get to make all your own component selections, a Vega 56 card at MSRP can be a really good deal; in that segment AMD GPUs are really competitive except at the very top -- you might pay a little more for the PSU and on your power bill, but the sticker price of the GPU will make up for that. However, if, like >90% of gamers, you don't make all your own component decisions and don't have the power budget, the NVidia 1050Ti reigns supreme. The best competition AMD can muster against it in the "no external power connector" segment are some RX560 models. This is in itself a terrible marketing decision, as it prevents the kind of word-of-mouth "just buy a 750Ti" advertising that worked really well for nVidia, on top of the fact that in that segment the AMD cards just do much less well than the nVidia ones.
If Vega were smoking a 1080 Ti then you wouldn't hear any grumbling at all. It's when you get into a situation where Vega is pulling more power than a 1080 Ti and delivering performance that's barely above a 1070 in many titles that people start to get queasy about it.
PCGH just put out their new benchmark charts and the only Vega part that can even match the 1080 on average is the 64 Liquid Cooled version, which is roughly a $650 product at the moment. You're paying 1080 Ti money for a 1080 that pulls twice as much power as a 1080, which is pretty unappealing on the whole. The only real value argument AMD has been able to make is FreeSync, but it can't really make up for that kind of performance/value deficit.
Another little-discussed disadvantage is that Vega is a delicate little flower, even moreso than Fiji was. Even most board partners that normally allow you to keep your warranty while using a waterblock have decided that Vega is just too delicate to have users taking the cooler off. You put a waterblock on, you lose your warranty. Many stores are not allowing returns for them either and I suspect this is a factor (along with generally immature drivers and other problems resulting in generally low user satisfaction).
In the mid-range (/general audience) it's true that heat can sway the buying decision, and in fact everything that comes along with heat (for example, the space it takes up) is taken into account along with the price.
Indeed, they're not voting for anything.
And if Ethereum gets any significant traction, the ASICs will come. That's pretty much inevitable. Heck, I bet an ASIC would be worth the investment even for Argon2d hashes, even though that one was designed for modern stock hardware.
You mean like a $30 billion market cap? 
AFAIK not all blockchain implementations profit from ASIC hardware; some, like Monero, even actively discourage ASIC use by making ASIC hardware inefficient.
Could be that Ethereum does something similar.
Gamers care about noise and heat very much.
> There are some 24/7 HPC GPGPU users who put weight on power consumption but it is a small market segment.
Ugh, the latest estimates for the machine learning market are something like $40B by 2024.
I am short AMD, long NVDA. Keep an eye open on ER tomorrow.
1) FP8 half-precision training: NVidia is artificially disabling this feature in consumer GPUs to charge more for Tesla / Volta.
2) A licensed / clone of AMD SSG technology to give massive on-GPU memory: NVidia's 12 GB memory is not sufficient for anything beyond thumbnail or VGA sized images.
My experience with Intel Phi KNL has been miserable so far; I hope Raja has better luck with the GPU line.
The GP102 and GP104 cards, which include the consumer cards and Teslas like the P4 and P40, are inference-focused and support INT16 and INT8 dot products, while the first-generation Pascal GP100 doesn't.
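For reference, this is roughly what that INT8 dot-product path looks like from CUDA; a hedged sketch with made-up names, compiled with -arch=sm_61 or newer since GP100-class parts lack the instruction.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each int packs four signed 8-bit values; __dp4a multiplies the four byte
// pairs, sums them, and adds the accumulator in a single instruction.
__global__ void int8_dot(const int *a, const int *b, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __dp4a(a[i], b[i], 0);
}

int main() {
    int ha = 0x01020304, hb = 0x01010101;  // bytes (1,2,3,4) . (1,1,1,1) = 10
    int *da, *db, *dout, hout = 0;
    cudaMalloc(&da, sizeof(int));
    cudaMalloc(&db, sizeof(int));
    cudaMalloc(&dout, sizeof(int));
    cudaMemcpy(da, &ha, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(int), cudaMemcpyHostToDevice);
    int8_dot<<<1, 1>>>(da, db, dout, 1);
    cudaMemcpy(&hout, dout, sizeof(int), cudaMemcpyDeviceToHost);
    printf("dp4a result: %d\n", hout);     // expect 10
    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```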
If anyone artificially locked down FP16 support, it's AMD, as consumer Vega doesn't support it for compute.
2) NVIDIA already has a competing solution: Pascal has had unified memory support from day one, with a 49-bit address space, ATS, and paging, and they already partner with SSD makers on an add-on card which is mapped to VRAM.
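A minimal sketch of that unified-memory path, with purely illustrative sizes and names: on Pascal, cudaMallocManaged allocations are demand-paged between host and device, which is how a working set larger than VRAM can be handled without an SSG-style card.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, size_t n, float k) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    // Illustrative size only; on Pascal the same code works even when the
    // allocation exceeds physical VRAM, with pages migrated on demand.
    size_t n = (size_t)1 << 28;                      // ~1 GiB of floats
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // touched on the CPU first

    unsigned block = 256;
    unsigned grid = (unsigned)((n + block - 1) / block);
    scale<<<grid, block>>>(data, n, 2.0f);           // pages fault over to the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // migrates back on CPU access
    cudaFree(data);
    return 0;
}
```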
I'd love to see the Phi approach taken further. I'm not a huge fan of having different ISAs, one for my CPU, one for the compute engines of the GPU (to say nothing about the blobs on my GPU, network controller, ME). I'd prefer a more general approach where I could easily spread the various workloads running on my CPU to other, perhaps more specialized but still binary-compatible, cores.
Heck... Even my phone has 8 cores (4 fast, 4 power-efficient, running the same ISA).
Power and area generally scale as the square of the single-threaded performance of a core. The huge number of "cores"/lanes in a GPU are much smaller and more efficient individually than even your phone's smaller cores. And the x86 tax gets worse and worse the smaller you try to make a core with the same ISA. Intel wasn't even able to compete successfully with Atom against medium-sized cellphone chips.
There is nothing preventing the x86 ISA from being extended in that direction. As long as all cores (oxen and chickens, as Seymour Cray would say) can shift binaries around according to desired performance/power, I don't care.
Binary compatibility is awesome for software that has already been written and for which we don't have the source code. Pretty much everything on my machines has source available.
The OS may need to be more aware of the performance characteristics of the software it's running on the slightly different cores so work can be allocated better, but, apart from that, it's a more or less solved problem.
Atoms didn't perform that much worse than ARMs on phones. What killed them is that they didn't run our desktop software all that well (even though one of my favorite laptops was an Atom netbook).
If your program is computationally intensive enough that you're abandoning general-purpose processors and moving to a specialized co-processor card, you should really just go whole-hog on it and bare-metal optimize to get some performance out of it. It doesn't make sense to do this half-way - which is why Xeon Phi has always faced such an uphill battle for adoption.
half precision is FP16.
No, this physically is not present on consumer chips. You can't subdivide the ALUs like that even on Tesla P5000 cards. Of course you can promote FP8 to FP32 without an issue, on any card, but you don't gain any performance either.
At the time Pascal was designed it didn't make any sense to waste die space on FP16 support let alone FP8, since games are purely FP32. This is changing now that Vega has FP16 capability ("Rapid Packed Math") and titles may be using this capability where appropriate. I would not be surprised to see it in Volta gaming cards at all.
It's funny, everything old is new again. Someone comes up with this idea about once every 10 years. Using FP16 or FP24 used to be big back in the DX9 days.
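For anyone curious what the packed-FP16 path looks like in practice, here's a hedged CUDA sketch (names invented; needs sm_53+ hardware and something like -arch=sm_60 to actually hit the FP16 units): two half values share a 32-bit register and a single intrinsic operates on both lanes at once.

```cuda
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

// Pack two floats into one 32-bit __half2 register; a single __hfma2 then
// does a fused multiply-add on both lanes at once.
__global__ void fma_half2(const float2 *a, const float2 *b, float2 *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    __half2 ha = __floats2half2_rn(a[i].x, a[i].y);
    __half2 hb = __floats2half2_rn(b[i].x, b[i].y);
    __half2 r  = __hfma2(ha, hb, __float2half2_rn(1.0f));
    out[i] = __half22float2(r);
}

int main() {
    const int n = 1;
    float2 ha = make_float2(2.0f, 3.0f);
    float2 hb = make_float2(4.0f, 5.0f);
    float2 *da, *db, *dout;
    cudaMalloc(&da, sizeof(float2));
    cudaMalloc(&db, sizeof(float2));
    cudaMalloc(&dout, sizeof(float2));
    cudaMemcpy(da, &ha, sizeof(float2), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(float2), cudaMemcpyHostToDevice);
    fma_half2<<<1, n>>>(da, db, dout, n);
    float2 r;
    cudaMemcpy(&r, dout, sizeof(float2), cudaMemcpyDeviceToHost);
    printf("%f %f\n", r.x, r.y);   // expect 9 and 16 (2*4+1, 3*5+1)
    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```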
> 2) A licensed / clone of AMD SSG technology to give massive on-GPU memory: NVidia's 12 GB memory is not sufficient for anything beyond thumbnail or VGA sized images.
You're looking for NVIDIA GPUDirect Peer-to-Peer, which has existed since like 2011.
AMD's product is actually purely marketing hype, it's simply a card that contains a PLX chip to interface a NVMe SSD. It is the same technology that is used for multi-GPU cards like the Titan Z or 295x2, and it offers no performance advantages vs a regular NVMe SSD sitting in the next PCIe slot over.
This is something that people didn't know they wanted until AMD told them they wanted it. But you can do this on any GeForce card even, no need to shell out $7000 for some crazy custom card that doesn't even run CUDA.
The bigger problem is that there really isn't much of a use-case for it. NVMe runs at 4 GB/s, which is painfully short of the ~500 GB/s that the GPU normally runs at. That is even significantly less bandwidth than host memory can provide (a 3.0x16 PCIe bus limits you to 16 GB/s of transfers regardless of whether that's coming from NVMe or host memory).
I suspect part of the reason is the long time frames for dev of this tech. I suspect it is at least 2 years for this to see the light of day. That is forever in this space.
Intel failed with Larrabee and Itanium. Maybe this will go better?
How many machine learning strategies is Intel going to try? Does it even know what it's doing? Spending billions of dollars left and right on totally different machine learning technologies kind of looks like it doesn't, and it's just hoping it will get lucky with one of them.
And even if you think that's not a terrible strategy to "see what works", there's still the issue that they need to have great software support for all of these platforms if they want developer adoption. The more different machine learning strategies it adopts, the harder that's going to be for Intel to achieve.
But I bet that branching instructions (various variants of search) still play a big role when you go beyond classifiers to reinforcement learning etc. so there is need for other architectures beyond GPUs.
I do have high hopes for their memristor initiative, but that's got to be years out.
The GPU move is smart for Intel.
The latter is further proof that if you sell to dinosaurs, you won't survive the Big One (in this case, x86 growing up courtesy of AMD, and Arm spending 15 years washing away the footing underneath both by moving personal computing to mobile devices). This should be a big warning sign to the OpenPower guys. You need to start small and scale/price up, not the other way around.
Maybe they are building upon AMD's core tech based on that other licensing deal? If so I would bet on them succeeding.
In my ideal world competition from intel would force NVidia to play nice with OpenCL or something similar, and encourage competition in the hardware space instead of driver support space. Unfortunately the worst-case looks something more like CUDA, OpenCL and a third option from Intel with OpenCL like adoption. :(
Even if my precompiled CUDA application could run on Intel GPUs at 50% of the throughput, I'd be happy if I could later tweak and recompile it to get the full benefits from their hardware.
It shouldn't be a surprise then that along comes NVidia with their own instruction set, PTX, and Intel's desire to own the instruction set will be their undoing.
An alternative would be for them to (again magically) modify all major DL frameworks to support their GPUs.
I don't even know which option is more realistic.
If the speed is there the world will write the code.
I'd love to get surprised though.
Do you have another open, cross-platform, widely compatible GPU programming framework to recommend?
The alternatives recommended here aren't even serious IMHO. I'd rather switch to CUDA and wait till Intel/AMD sort out a REAL compatibility layer than deal with those.
Unless I'm mistaken, HIP still requires a separate compile for either platform and what runtime do they expect end users to have exactly?! At least CUDA and OpenCL are integrated in the vendor drivers.
Vulkan compute with SPIR-V seems to be the only real solution, but even that is still very early. Still waiting for proper OpenCL 2.0 support in NVIDIA drivers :P
(The models can be executed on low-powered, commodity CPUs. No need for any GPU there.)
That's totally an option for our product, great idea! Why did I never think of this!
No, seriously, we are shipping, using OpenCL, and it gives about a 20x performance advantage for most users regardless of whether they have AMD or NVIDIA hardware. If something that's actually better than OpenCL comes along (or if AMD RTG goes out of business) I'll switch to it, no heart broken.
But that hasn't happened yet.
On iOS it is kind of deprecated and the way forward is Metal Compute.
On Android it never happened, Google created their own Renderscript dialect instead.
Do you mean adaptive algorithms or dynamic recompilation? And yes I do expect that cross API will be difficult, for both running and getting good perf.
But it is not just the high-performance end of the spectrum that is important; the lower end, which is stifled by the barrier to entry, would also benefit from the extra compute.
My point is: I was drawn to OpenCL for its 'portability' claim, and yes, kernels will 'run' on various hardware, but with massive differences in speed. What good does that portability do me? My workloads (scientific computing, branch heavy) are different from the typical ML half- or single-precision MulDiv()/linear algebra applications, so all the hand-tuned CUDA libraries aren't even my concern. The elephant in the room is that performance doesn't depend on this API or that; it's in how you tune your algorithm to the actual hardware you're running on. Which is the direct opposite of portability.
To come back to the post I was replying to - yes it's 'trivial' (I mean, a lot of work, but technically not hard, not to belittle your work) to compile almost any statically typed code into either SPIR-V or PTX or any other future format for that matter. But that doesn't mean that it will work 'well' (not even 'optimal') on other hardware. In the real world, you're almost always better off just spending a few hundred to get the same GPU as whoever wrote the kernels tested them on (or if you're running your kernels on existing clusters, to focus on optimizing them for what you know you'll be running them on).
Oh and all of that is just considering GPU's. I mean, when you read an OpenCL book they make it seem (in the first few chapters) that you don't even have to think about whether you'll be running on a CPU or a GPU. And then you accidentally use USE_HOST_PTR instead of COPY_HOST_PTR (or the other way around) in the wrong place, and all of a sudden your code is 10x slower than the sequential version of your algo even.
What I'm saying is - I no longer believe in easy to use abstractions for these purposes. If you want speed, you code to the metal, and/or you tweak your abstractions to your specific use case. Yes this is a lot of work. And if you don't need speed, you just throw in a few std::thread's here and there and call it a day.
But that's just my conclusion for my use cases.
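To make the "tune to the actual hardware" point a bit more concrete, here is a small CUDA-side sketch (hypothetical kernel and names): even the launch configuration alone is device-dependent, so the least portable code can do is query the device and let the runtime suggest one, instead of hard-coding whatever worked on the author's GPU.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: %d SMs, %zu KiB shared mem per SM, warp size %d\n",
           prop.name, prop.multiProcessorCount,
           prop.sharedMemPerMultiprocessor / 1024, prop.warpSize);

    // Ask the runtime for a reasonable block size for this kernel on this
    // chip, instead of hard-coding the value that was tuned on another card.
    int minGrid = 0, blockSize = 0;
    cudaOccupancyMaxPotentialBlockSize(&minGrid, &blockSize, saxpy, 0, 0);
    printf("suggested block size on this device: %d\n", blockSize);

    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));   // contents don't matter for the demo
    cudaMemset(y, 0, n * sizeof(float));

    int grid = (n + blockSize - 1) / blockSize;
    saxpy<<<grid, blockSize>>>(2.0f, x, y, n);
    cudaDeviceSynchronize();

    cudaFree(x); cudaFree(y);
    return 0;
}
```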
I agree that it is not easy to 'parameterise the metal', but it is certainly doable in D; in C++ (guessed from the std::thread), good lord no: D's meta-programming is orders of magnitude ahead of C++'s. Writing different versions of a kernel is a poor man's parameterisation ;)
LDC, the LLVM D Compiler, will be getting dynamic re-optimisation, which I hope to get to play nicely with DCompute. Well, with PTX, because SPIR-V doesn't yet have a JIT backend. Dynamic re-optimisation, from what I understand, is freezing some variables as constants and rerunning the optimisation passes, as opposed to recompilation with complete restructuring of the kernel. Not quite the same, but for things like loop counts and branch "prediction" this should help a lot.
w.r.t. USE_HOST_PTR/COPY_HOST_PTR, that's not a part of the kernel parameterisation, that's part of the host and is easily adjustable. Yes, you need to figure out which one to use, but that's just part of tuning.
> What I'm saying is - I no longer believe in easy to use abstractions for these purposes.
I hope to be able to show you otherwise :)
If AMD GPUs die out, CUDA it is.
Or maybe give up and use a wrapper library for whatever 10 alternatives-only-supported-by-one-marginal-vendor there are. (like Apple)
The issues go away if you use a good OpenCL frontend. PyOpenCL for Python goes a long way towards this, and is not really any more awkward than the corresponding PyCUDA, and higher-level languages that generate OpenCL code, like Lift or Futhark (tooting my own horn here), remove the awkwardness completely.
You're not wrong there. However, with the advent of SPIR-V it is possible to write code in whatever language you please (with the caveat that at the moment you need an LLVM backend using https://github.com/thewilsonator/llvm-target-spirv or the Khronos repo I forked that off of).
Then comes the issue of making the code generator friendly interface user friendly, which I have done for D so that you get the ease of use of CUDA.
OpenGL ES only took off thanks to gaming on the iPhone, and now is deprecated on Apple platforms.
Vulkan still lives in a C world, and the semi-official C++ bindings only exist thanks to NVidia.
OpenCL waited too long to support C++ and Fortran and to provide an infrastructure for compiler writers to add GPU support to their own languages. And two years later the majority of drivers are still not there.
Which Khronos finally adopted as SPIR, but the drivers aren't there yet.
Regarding the other Khronos APIs, a C API is like being stuck in a PDP-11 world.
Many mix the idea of C API with OS ABI, it only happens to be the same if the OS APIs are exposed as plain C.
There are many cases where this isn't so, e.g. mainframes, mobile OSes, and most userspace on OS X (Obj-C runtime) and Windows (.NET and COM).
Yes, driver support is a bit lacking, although I hope that I can convince the OpenCL working group of the need to get a backend (such as https://github.com/thewilsonator/llvm-target-spirv) into mainline LLVM so that writing drivers becomes easier for vendors.
And segmenting the codebase is a MAJOR feature. With CUDA you are stuck on an old compiler until NVIDIA issues an update. How anyone can think this is a good idea...
"Designing (New) C++ Hardware”
CUDA has had C++, Fortran support since the early days, with PTX for additional compiler backends added in version 1.4.
That was 2007; Khronos waited until 2015 to specify similar capabilities.
People still write regular shaders in languages that are much more C like.
People writing GPGPU code are few. Most of the DL GPU use is in Python through several layers, and in the end you are running hand-written SASS assembly sitting in an NVIDIA DLL or whatever.
I guess some people must think it's handy to have C++ support in GPU kernels, or they wouldn't have added the feature. But for it to drive technology, hard to believe.
The Metal, DirectX, PS3, PS4, and Nintendo shading languages, and several middleware engines, are C++-like.
Also the fact that OpenCL lost to CUDA for being stuck in C for so long, shows what most GPU devs actually prefer.
Aw come on, you're sure it has nothing to do with the largest GPGPU vendor pushing CUDA very heavily and intentionally gutting their OpenCL tools? Or putting out a ton of very high performance libraries with no OpenCL equivalent? Putting out a shitton of marketing and tutorial videos for CUDA only?
Yeah, that surely was totally unrelated.
Also pushing a proprietary standard goes faster than a standardized one. No surprise there.
If AMD, Intel and embedded OEMs actually produced quality OpenCL drivers, debugger and IDE support and libraries that could match CUDA productivity, maybe devs would bother to use C with OpenCL.
Even Google decided to create their Renderscript dialect instead of supporting OpenCL on Android.
I am saying that if the other GPU vendors bothered to actually provide a competing technology stack, that was worth the pain of using plain C, maybe GPU devs would have bothered.
Their recent failed push into 'wearables' was a great example. They bought up a number of small(ish) but interesting players (Basis, Recon Jet, etc.) and squandered them completely. Their completely missing the boat on smartphones, save perhaps a small amount of modem chip business the past couple of years, is especially damning. With GPUs there's the whole failed Larrabee thing as well.
If Intel ever acquires a company I care about I will be extremely concerned.
Also, Koduri recently left AMD after what many felt was a disappointing discrete graphics release in Vega.
I feel like non-competes are similar to patents these days. Everyone has tons of patents and everyone is infringing on everyone else, so they just agree to pay licensing fees to one another and never go to war.
I'm certain AMD has hired their share of Intel people by now; it's a no-win.
But I'm in awe of what one can read there.
"This vendor[Nvidia] is extremely savvy and strategic about embedding its devs directly into key game teams to make things happen. (...). These embedded devs will purposely do things that they know are performant on their driver, with no idea how these things impact other drivers.
Vendor A[Nvidia] is also jokingly known as the "Graphics Mafia". Be very careful if a dev from Vendor A gets embedded into your team. These guys are serious business."
So, basically Nvidia is sabotaging OpenGL to fuck up the specs and then implement other working variations and make the game developers use their version? If that is true, fuck Nvidia.
"On the bright side, Vendor C[Intel] feeds this driver team[Windows Driver Team] more internal information about their hardware than the other team[Linux Driver team]. So it tends to be a few percent faster than driver #1 on the same title/hardware - when it works at all."
What the fuck is going on in this industry? Intel is sabotaging its own Linux driver team? Why?
"I don't have any real experience or hard data with these drivers, because I've been fearful that working with these open source/reverse engineered drivers would have pissed off each vendor's closed source teams so much that they wouldn't help.
Vendor A[Nvidia] hates these drivers because they are deeply entrenched in the current way things are done."
That, now finally, makes sense. Nvidia is strong-arming developers to not support Mesa because they are afraid of it. Nvidia is afraid of Mesa. I think this should be more widely known.
Erm. Nope. No Intel iGPU is on par with the 1050 much less the 1050 Ti.
(I compared mobile chips since the most powerful GT4 can only be found in the mobile chips.)
It's only slightly behind the 1030 which costs $73.
As GPUs continue to evolve into general purpose vector supercomputers, and as ML/deep learning applications emerge, it seems clear that more and more future chip real estate (and power) will go to those compute units, not the x86 core in the corner orchestrating things.
Why on earth would you think Intel extending their near-monopoly is a thing to celebrate?
With Intel and AMD backing Mesa, things on Linux will get very interesting.
Threadripper, and the Zen architecture, put them back on the map, that’s some serious hardware for the price. I wish they had just kept iterating on the CPUs and GPUs.
Vega is not a bad product; it just doesn't beat nvidia's offering in the bar charts. That doesn't mean it's bad, it just means it's in second place, which is fine since it's cheaper as well. Technology needs to be iterated on. Something must be going on at AMD at the moment.
The case is probably that the Intel graphics team just decided they'd rather play against the big boys at nVidia and actually put enough cores on a chip to be a competitor, but in order to do that, they'd need to go off-chip for power and heat dissipation reasons. Hiring the guy from AMD helps you sell the new solution, since presumably that's what this guy's good at.
The market's rife for being disrupted as it has been incredibly stagnant with nVidia and AMD's tit-for-tat for the past, well, decade.
I'd happily take a Chief Architect role at a company if the paycheck had enough zeros and I was the domain expert for that technology.
Maybe someday I'd actually be able to afford a home in this miserable region...
From a strategic markets point of view, I see it this way:
Discrete GPUs give Intel a shot at owning both pieces of high-margin silicon in a laptop/tablet design win (GPU & CPU), and potentially they give Intel additional ammunition to go after Nvidia or to mitigate their encroachment.
This seems to be more of a direct competitive attack on AMD's integrated product than it is competition with Nvidia. It feels to me like building discrete GPUs is almost a misdirection.
Either way surely this is a move by Intel to take away from Nvidia's consumer share (which makes up the vast majority of their income) as Nvidia make inroads into the data center market?
Microsoft's Mixed Reality platform has the stated goal of running on integrated graphics and even a mid tier card in a year or two should do fine for usable vr/ar.
Most people, honest people, have no problem understanding these obligations and abiding by them.
Dishonest people, who lie about destroying documents, are why we have Uber and Waymo battling it out.
"We reserve the right to let someone go, at any time, for any or no reason." and "We also reserve the right to dictate who they can (or rather can't) work for."
No. If you want to say "I can't work in my field for 2 years", then you can pay me 2 years severance.
Given how the human brain works, that's very much impossible to do... "standing on the shoulders of giants" and all that, as the saying goes.
I'm sure some companies would love to be able to "reformat" employee's brains when they leave, but (fortunately) that's not the reality.
Of course. No question that you take the sum of your education and experience with you to each new job. The "company knowledge" limitations are around specific trade secret inventions or verbatim recreation of such.
Unclear what AMD thought they stood to gain with the Monday announcement - and it didn't take long to have it play out in their disfavor.
I'm guessing Intel's GPU will never support OpenPower and Arm servers, and will never ship on a CCIX-enabled adapter.
Wonder if this time they will stick with it for the long haul.
I think Intel should acquire Nvidia and let Jen-Hsun Huang lead the new company.
Keep in mind Intel currently builds GPUs - just of the integrated variety. What's new here is that Intel is deciding to build discrete (standalone, like those you'd plug into a PCIe port) GPUs.
It definitely wasn't a "saving throw" that Larrabee's architecture got repurposed. There were several teams at Intel working in similar directions - one team worked on a "cloud on a chip", one team worked on high bandwidth chip-to-chip interconnects, one team worked on on-chip networking... they all came together and formed the Knights Ferry research project, which then got turned into the Xeon Phi.
The "core" of Larrabee, its quick little Pentium-derivatives, went on to be repurposed in the Quark product line and its lineage (e.g. the Intel PCH has a "Quark" inside). The 512-bit instruction set got parted out and became AVX512 in is various incarnations. They definitely got their money's worth out of Larrabee.
Nobody is disagreeing with the fact that Larrabee didn't turn into a discrete GPU despite their attempts to make it so. (It's also not surprising, given how the carriage turned back into a pumpkin with Cell and other many-core architectures failing to pan out as good graphics hardware.) But that's a separate issue from Intel building GPUs, since they have a completely different team that works on building productized and shipped GPUs.
Ryzen's chief architect left in 2015, and now the mastermind behind its GPU is leaving. You need to be really religious to believe that AMD is going to get any better in the coming competition with NVIDIA and Intel.
A GPU on the CPU die, non-discrete, is often referred to as an "integrated GPU" or "integrated graphics." They're typically not very powerful, though they run common non-gaming applications just fine.
Raja: "I'm...um...going on sabbatical."
Lisa (CEO): "OK."
Intel: "We're hiring Raja!!..."
Use Reddit for fun! There is plenty of fun on the internet. Like Reddit, hackers don't want [Serious] tag.