
Intel Starts Publishing Open-Source Linux Driver Code for Discrete GPUs - symisc_devel
https://www.phoronix.com/scan.php?page=news_item&px=Intel-Memory-Regions-Local-Dev
======
djsumdog
Finally. Options!! Back in the day we had 3Dfx (went under and sold to
nVidia), PowerVR (they're still around, and probably the GPU in your
cellphone), the Intel i740, and the AMD Radeon. nVidia broke into the market
with the Riva/Vanta/TNT cards. Hell, even Trident had some Direct3D/OpenGL
chips. For the past several years we've been down to just the big two
(although Intel integrated has held up, even for some medium gaming and indie
titles).

5+ was probably too many for most game devs to support, but 3 would be nice
and solid. It's also nice that we'll have another player besides AMD that
actually gives a shit about Linux/OSS drivers. nVidia might finally be forced
to open up and contribute to Nouveau and bring it up to par with amdgpu. The
future could be team green, team red, and team blue. Intel has killed
discrete graphics projects before, though, so we'll have to wait and see.

~~~
ksec
>AMD Radeon

Or you mean the ATI Rage 3D. There were also, as mentioned, S3, Matrox, and
3Dlabs (sold to Creative, the maker of the Sound Blaster sound card). I miss
3Dfx (I still have a few Voodoo graphics cards somewhere); Glide was way
ahead of its time. John Carmack was fighting for OpenGL.

The biggest barrier to entry for GPUs isn't the hardware, but the insane
amount of software (driver) optimisation required for the GPU to be even
remotely competitive with its rivals.

~~~
YUMad
Don't forget Rendition and their Verite chips.

They had an absolutely insane card that had both PCI and AGP connectors.

~~~
lightedman
I'd rather forget Rendition and their Verite chips. My V1000 would NEVER work
with Quake.

------
bitL
Can't wait to pair my Threadripper with an Intel GPU :D Now NVidia, please
release an x64 processor to have a complete mix on the market.

~~~
giancarlostoro
I'd rather they fix their GPU drivers by open sourcing them, or that AMD did.

~~~
implying
AMD's Linux stack is not only completely open source; most of it is in-kernel
and maintained by AMD devs.

~~~
slezyr
Try to use OpenCL and you will need blobs. It's a major pain to use Blender
(which requires OpenCL if you want to finish a render today); not long ago
their blob crashed on new versions of libdrm.

~~~
greenpenguin
I'm using ROCm OpenCL right now, which I believe is the approved option, or
will be. Personally I've not had any issues with it - I can certainly use
Blender with it, although the performance right now is so-so (I only have a
cheap APU though, so hardly surprising).

I don't believe I have any blobs, other than the firmware one, which is true
regardless - can you tell me what blobs you mean?

~~~
slezyr
[https://aur.archlinux.org/packages/opencl-amd/](https://aur.archlinux.org/packages/opencl-amd/)

This one, part of amdgpu-pro.

~~~
greenpenguin
I can confirm I don't have that installed. I'm using this:
[https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime](https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime)

Edit: actually Blender performance is decent, but the kernels take ~10 minutes
to compile.
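
For anyone wanting to check which runtime is actually in play, enumerating the OpenCL platforms the ICD loader sees is a quick test; a ROCm-only install should show a single AMD platform, while an amdgpu-pro blob would add another. A minimal sketch using pyopencl (hypothetical setup; platform and vendor strings vary by driver):

```python
# Minimal sketch: list the OpenCL platforms visible to the ICD loader,
# to see whether the ROCm runtime or the amdgpu-pro blob is loaded.
# pyopencl is only needed to query real hardware; the formatting helper
# below works on any object with .name/.vendor attributes.
try:
    import pyopencl as cl  # assumption: pyopencl is installed
except ImportError:
    cl = None


def describe_platforms(platforms):
    """Format platform-like objects (with .name/.vendor) for display."""
    return ["{} ({})".format(p.name.strip(), p.vendor.strip())
            for p in platforms]


if __name__ == "__main__" and cl is not None:
    for line in describe_platforms(cl.get_platforms()):
        print(line)
```

If more than one AMD platform shows up, both runtimes are installed side by side and whichever ICD file is registered first tends to win.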

------
frou_dh
Intel's Gen10 integrated graphics presumably had a bunch of effort put into
it, and has been mothballed for years because Intel never could get the
CannonLake CPUs out the door. That's got to sting a bit for its designers.

------
kuwze
I can’t wait for some eDRAM to act as my L4 cache and finally start addressing
the von Neumann bottleneck.

~~~
frou_dh
Get yourself a 5 year old laptop ;)
[https://en.wikichip.org/wiki/intel/crystal_well](https://en.wikichip.org/wiki/intel/crystal_well)

------
petecox
Are there security benefits to "fake local memory"? I imagine a sandboxing
arrangement where a rogue WebGL app can't hack your entire OS.

Several applications: (a) OpenCL servers, just fill an extra slot; (b) eGPU
over Thunderbolt; (c) a drop-in GPU for RISC-V boards (sans ARM's Mali).

~~~
twtw
As far as I can tell, you are describing problems already solved by virtual
memory.

