Intel to Release Discrete Graphics Card in 2020 (tomshardware.com)
58 points by modeless on June 13, 2018 | 39 comments



I remember the first time Intel tried this, in 1998, when they introduced the i740 AGP card. [1][2] The data sheet is still out there if you know where to look. [3] It made everyone in the industry, from S3 and Matrox to ATI, Nvidia and 3dfx, tremble. In the end, it flopped.

[1] https://www.anandtech.com/show/202/3

[2] https://assets.hardwarezone.com/2009/reviews/video/v2740tv/v...

[3] https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...


I had an i740 and it was OK. It ran games from the era without complaint: https://youtu.be/x8sZ3kazUaU


This was one of their previous attempts: https://en.wikipedia.org/wiki/Intel740

I hope these GPUs will be far better documented and have very good open-source drivers --- Intel has historically been much more open than ATI/AMD or nVidia at releasing public developer information.


Wow, I was an intern at Intel when they were making the i740. I would run the WHQL tests on sample boards / drivers and document which tests they failed. Part of the job was testing them with preview builds of the upcoming NT5.0. Tests involved playing a whole bunch of different games in 30 second stretches, changing screen resolution, and repeating. The full-timers also thought it was very important to stress test the hardware, which meant playing Action Quake 2 over lunch every day. They were quite good.


Intel is also much more open than ARM. If you want to use the GPU on ARM SBCs you are at the mercy of the manufacturer to release proprietary GPU drivers that usually are only compatible with an old kernel version that has custom patches to support that specific SBC.


There was an article written by a guy from the Larrabee team, where he explained how the project wasn't a failure, in the sense that management wanted a processing beast and that's what they got, not a GPGPU.

But what's interesting is that in the article, he said that the Intel GFX dept. was "begging to let them build a GPGPU", and he might have been exaggerating, but he made it seem like all they needed was to slap some ROPs and texture units on that thing.

That makes me question whether we'll be seeing a brand new architecture, or an Intel HD unleashed edition with some GDDR5.


I think the article that you're thinking of is "Why didn't Larrabee fail?" by Tom Forsyth.

http://tomforsyth1000.github.io/blog.wiki.html#


There was also this fantastic series of articles by Matt Pharr (of Physically Based Rendering fame) - http://pharr.org/matt/blog/2018/04/30/ispc-all.html

While it's focused on ISPC (https://ispc.github.io) it does go into a lot of detail about what happened at Intel during the Larrabee & post Larrabee days.


> There was an article written by a guy from the Larrabee team, where he explained how the project wasn't a failure, in the sense that management wanted a processing beast and that's what they got, not a GPGPU.

Great. Except the kind of parallel processing Larrabee was supposed to provide isn't something anyone wanted.

It's a half-pregnant architecture. Not parallel enough to compete with GPUs, not enough single thread performance to compete with Intel's own CPUs.


As someone who was doing enormous amounts of CPU-bound raytracing at the time, I wanted it. But quite honestly the product sounds absolutely incredible even without workloads like that - to read the article, it worked perfectly well as a graphics card, with the added bonus that 1) the graphics code was in software, meaning you could arbitrarily upgrade the graphics model to support later software, and 2) the system also ran general purpose loads, simultaneously.


Someone should have told that to the Tianhe-2, which uses Xeon Phis (a.k.a. Larrabee, 2013 and later) and was the fastest supercomputer for multiple years.


I am wondering if GPU design is basically approaching its end, limited by smaller nodes and bandwidth. We don't see better performance per die / per transistor any more, and haven't for many years. The gains seem small.

Given that Intel has had the lead in transistor density for years, and will continue to do so for at least a few more years, I wonder why Intel didn't jump into this earlier.

The largest obstacle for a GPU is actually the software and driver support, especially in gaming. Middleware engines have made this 10x easier, but it is still a fairly hard problem.

I wish they would call their new silicon AGP: Advanced Graphics Processor. :P


This will be great for everyone. About time there's some more competition.


Call me a conspiracist, but is Intel strategically roadmapping an exit from the IGP market? Perhaps a new focus on high-performance OpenCL computing, if the returns on low-power IGP are diminishing, viz:

(1) the Vega partnership with AMD, (2) Thunderbolt eGPUs, (3) Apple going PowerVR-less in its A1x chips, (4) Adreno graphics on budget Win 10 devices.

My prediction: headless desktop motherboards, with Intel Inside laptops and NUCs using Vega graphics.


I really hope they don't do weird market segmentation stuff with their virtual GPU support. They've got the best support for doing that with qemu/kvm currently. AMD's SR-IOV support exists, but the hardware support still isn't very widespread. No idea about the Nvidia side of things.


Hedging bets against the rise of ARM processors?


I'm surprised they don't make "lossy" GPUs that have a few defects among the obscene number of transistors. In consumer graphics you wouldn't know the difference.


Interesting idea.

I assume you mean that each GPU gets a variable number of execution units, as opposed to code randomly failing depending on the core it gets run on.

You've just helped me realize that AMD and Nvidia overproduce every GPU and kill off the execution cores that don't work to reach the final fixed number (and that all GPUs probably have a few tens to hundreds of working cores that will never ever be used).

Unfortunately I don't think such variation would work from a market standpoint: actually exposing a lossiness rating would devalue you as a manufacturer, simply because the media would pounce on it like crazy. Even if that coverage just reported what's there without putting malice in anyone's mouth, the fuss would make naive users go "wat", collectively conclude "well, the number is smaller", interpret that as vague less-iness, and bam, people don't think your product is as good.

Despite the fact that, for all intents and purposes (read: not mining), you are 100% right.


> You've just helped me realize that AMD and Nvidia overproduce every GPU and kill off the execution cores that don't work to reach the final fixed number (and that all GPUs probably have a few tens to hundreds of working cores that will never ever be used).

That's basically the idea, but it's usually scaled down. A single GPU generation is essentially one chip design. If a single die has a defect in a particular unit, that unit is fused off at the factory and the die is sold as a lower-end chip. Repeat until you have a chip that works. If you have a die where everything works, clock it up as much as you can and ask a king's ransom for it.

The best-known exception is the PS3 "Cell" CPU.[0] It had an extra 'sacrificial' processing core: eight SPEs on the die, with only seven ever enabled, so a single defective SPE didn't cost a sale.

[0] https://en.wikipedia.org/wiki/Cell_(microprocessor)#Commerci...
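To make the binning idea concrete, here's a toy Monte Carlo sketch in Python (the unit count, defect rate and SKU cut-offs are all made-up numbers for illustration, not anyone's real yield data):

    import random

    UNITS_PER_DIE = 40    # hypothetical execution units per die
    DEFECT_RATE = 0.02    # hypothetical chance that any single unit is bad

    # SKU table: minimum working units -> product tier (invented thresholds)
    SKUS = [(40, "flagship"), (36, "cut-down"), (32, "budget")]

    def bin_die():
        good = sum(random.random() > DEFECT_RATE for _ in range(UNITS_PER_DIE))
        for min_good, tier in SKUS:
            if good >= min_good:
                return tier       # fuse off the extras, sell as this tier
        return "scrap"            # too many bad units, discard the die

    counts = {}
    for _ in range(100_000):
        tier = bin_die()
        counts[tier] = counts.get(tier, 0) + 1
    print(counts)

Run it and you get roughly the split you'd expect: most dies land in the top bins, a few fall through to the cheaper SKUs, and only a tiny fraction is scrapped.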


Lossy compression is widely used for textures. Low-precision data formats are also very commonly used for other data. Etc.


No one does, for the same reason that no one makes a 'lossy' CPU.


They must want some of the bitcoin market.


Very unlikely, I would guess. There's a reasonable chance Bitcoin won't even exist in 2020.


[Citation needed]

What's your line of thinking here? Are you saying that all cryptocurrency won't exist by then, or just BTC?

Why 2020 as the time frame? What is going to happen between now and then that's going to make it so that it won't exist?

Can you qualify what you mean by "won't exist"? All the code is open source, so I've got copies saved locally, and I've also got enough hardware to run the whole network myself. I could, if need be, single-handedly restart/run BTC.

It's my opinion that the only people more ridiculous than those who insist Bitcoin will replace all fiat currency in the future are those who continually insist on its constant demise and absolute lack of value.


My thoughts exactly, but I'm pretty sure that's too late; availability has come down a bit, and I doubt that proof of work can scale much further.


Graphics cards aren't used to mine bitcoin any more. Only chips that are made specifically for mining bitcoin (ASICs) are used in mining rigs.



According to [1], if you have 15c/kWh electricity you can make about $1/day with a 1080 Ti on Bitcoin. At that rate, of course, it takes over a year just to cover the cost of the card.

Other coins are slightly more profitable though.

[1] https://crypt0.zone/calculator/s/most-profitable-coin-to-min...
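For what it's worth, the arithmetic behind figures like that is just revenue minus electricity. A back-of-envelope sketch in Python, where the power draw, card price and daily revenue are placeholder assumptions rather than numbers from the linked calculator:

    POWER_KW = 0.25        # assumed card power draw, in kW
    ELEC_PRICE = 0.15      # $/kWh, the 15c figure from above
    DAILY_REVENUE = 1.90   # assumed gross coin revenue per day, in $
    CARD_COST = 700.00     # assumed purchase price of the card, in $

    daily_electricity = POWER_KW * 24 * ELEC_PRICE    # ~$0.90/day here
    daily_profit = DAILY_REVENUE - daily_electricity  # ~$1.00/day here
    print(f"net profit/day: ${daily_profit:.2f}")
    print(f"payback time:   {CARD_COST / daily_profit:.0f} days")

With those assumptions you're looking at roughly 700 days to break even on the hardware alone.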


What these calculators never seem to account for is the difficulty factor. That dollar a day rapidly approaches zero as more and more people bring more equipment online. No way you're making a dollar a day 600 days from now.
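A rough way to see this: if network hashrate keeps growing at some monthly rate (the 10%/month below is an arbitrary assumption, as are the revenue and power figures carried over from the sketch above), your share of the rewards shrinks geometrically while the power bill stays fixed:

    DAILY_REVENUE = 1.90       # $/day gross today (placeholder, as above)
    DAILY_ELECTRICITY = 0.90   # $/day power cost, assumed constant
    MONTHLY_GROWTH = 0.10      # assumed network hashrate growth per month

    total = 0.0
    for month in range(20):    # roughly 600 days
        revenue = DAILY_REVENUE / (1 + MONTHLY_GROWTH) ** month
        total += max(revenue - DAILY_ELECTRICITY, 0) * 30
    print(f"cumulative profit over ~600 days: ${total:.0f}")

Even with those generous assumptions the card stops being profitable after a few months, and the cumulative take ends up far short of the naive "a dollar a day forever" figure.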


Yeah, I agree.


Years ago. Your first link says crypto mining, not Bitcoin mining: other coins with mining algorithms different from Bitcoin's.


So I guess I misspoke, and that's what I meant. Does Intel want part of this market? Would it be lucrative for them?


While the boom lasts.

The impression I get is that the only people taking cryptocurrencies seriously are management types who say "blockchain" in their sleep without having a clue what it means, obscure types with a few million invested, and the black market weaving in and out of the shadows.


Untrue. People still buy graphics cards to mine Crypto because, even though they can't compete with dedicated mining rigs/cards, they are still leaps and bounds better than a CPU, and can be resold for nearly the price they were bought for.


Bitcoin vs Crypto.

It isn't economic (in most places, unless you have extremely cheap electricity and GPU cards) to mine bitcoin on GPUs.

It can be economic to mine other crypto currencies.


Correct, they are not used to mine Bitcoin.

However, they are used to mine other crypto-currencies, which use different algorithms for which there are no ASICs.


I don't think bitcoin is even on their radar.


If mining operations raise the value of high-end graphics cards in general, which they have, it sure is.


A good company doesn't let fads affect its long-term plans.



