
Llvmpipe: The Jitted OpenGL software renderer inside Mesa - vmorgulis
http://www.mesa3d.org/llvmpipe.html
======
AshleysBrain
This looks interesting. I wonder if browsers could implement this as a
fallback for WebGL when the drivers are missing or blacklisted? Chrome on
Windows uses SwiftShader to fall back to software-rendered WebGL, and I think
IE11/Edge use WARP, a DX11 software renderer. Firefox and other browsers on
non-Windows platforms could benefit. But by the sound of it, running this
against the WebGL conformance tests could be spotty!

~~~
yoklov
Does Chrome still use SwiftShader? I've tried on a reasonably large number of
machines to verify this, with no luck, including machines that have
blacklisted GPUs.

There seem to be some cases where it kicks in (or at least there's some
legacy cruft that references it if not), but hell if I've ever seen it
happen.

------
gsnedders
FWIW, OS X at least used to, and I think still does, use a JITed software
renderer using LLVM.

See [http://lists.llvm.org/pipermail/llvm-
dev/2006-August/006497.html](http://lists.llvm.org/pipermail/llvm-
dev/2006-August/006497.html) for a short overview and
[http://www.llvm.org/devmtg/2007-05/10-Lattner-
OpenGL.pdf](http://www.llvm.org/devmtg/2007-05/10-Lattner-OpenGL.pdf)
(slides, PDF) for a longer one.

~~~
simcop2387
fixed links: HN is keeping the > at the end of them.

[1] [http://lists.llvm.org/pipermail/llvm-
dev/2006-August/006497.html](http://lists.llvm.org/pipermail/llvm-
dev/2006-August/006497.html)

[2] [http://www.llvm.org/devmtg/2007-05/10-Lattner-
OpenGL.pdf](http://www.llvm.org/devmtg/2007-05/10-Lattner-OpenGL.pdf)

------
i336_
Potentially relevant sidenote:

Using VirtualGL
([http://www.virtualgl.org/About/Introduction](http://www.virtualgl.org/About/Introduction))
and TurboVNC
([http://www.turbovnc.org/About/Introduction](http://www.turbovnc.org/About/Introduction)),
you can drop a GPU in a server and serve up OpenGL applications _rendered
using the GPU in the server_ via VNC over the network.

VGL pools the GPU(s) between multiple VNC displays, allowing for excellent
performance.

~~~
tadfisher
Virgil ([https://virgil3d.github.io/](https://virgil3d.github.io/)) is a
similar effort for virtualizing GPU(s) in QEMU. The virtio-gpu guest DRM
driver just shipped yesterday with Linux 4.4, making it possible to accelerate
multiple VMs using host hardware.

A Direct3D guest driver, if it performs well enough, could finally end the
hacky VFIO passthrough setups that are currently in vogue.

~~~
i336_
Wow, this looks really cool.

One curious idea that might be worth exploring: writing a Vulkan guest driver
on top of this stack. It would be a novel approach, as the guest would see a
Vulkan-centric view of the world (making development really simple), but that
"world" would be easily transportable between machines, regardless of said
machines' host graphics stacks.

------
chriswarbo
I followed planet.freedesktop.org around the time that Gallium3D was being
developed, so I knew that LLVM was being used inside Mesa. However, the only
time I actually noticed it was when I hit
[https://github.com/NixOS/nixpkgs/issues/11467](https://github.com/NixOS/nixpkgs/issues/11467)
last week.

------
yarp
Looking at this:
[https://codereview.chromium.org/1548893004/](https://codereview.chromium.org/1548893004/)
and this, is there a chance there will be a decent option for headless WebGL
testing or taking screenshots? PhantomJS doesn't want to do that, and neither
does SlimerJS (on Xvfb). Does anybody have a working solution?

~~~
bhouston
For [http://Clara.io](http://Clara.io), we explored a bunch of headless GL
solutions and none worked. It may have changed by now. But even if you got a
headless GL solution working, it would likely have poor extension support,
which means the results would suffer anyhow. :(

~~~
yarp
So did you find any other solution? I was thinking about renting or
colocating a server with a GPU and just automating the whole process with
ansible/fabric or just VNC. About extensions: that's true. I was able to run
software rendering for shaders (LLVM, Xvfb, EC2), and the framerate was
acceptable (1-5 fps, which is more than fine for screenshots), but issues
like having max anisotropy = 1 make it unusable for this use case.
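A rough sketch of the kind of setup described above (exact package names and
flags may vary by distro; this assumes `xvfb-run` and `glxinfo` from
mesa-utils are installed, and uses Mesa's real `LIBGL_ALWAYS_SOFTWARE`
override to force the software rasterizer):

```shell
# Force Mesa's software renderer inside a virtual X server on a headless box,
# then confirm which renderer actually got picked up.
LIBGL_ALWAYS_SOFTWARE=1 xvfb-run -a -s "-screen 0 1280x720x24" \
    glxinfo | grep "OpenGL renderer"
```

On a Mesa build with llvmpipe enabled, the renderer string should mention
llvmpipe; the same environment can then wrap a browser or test harness in
place of `glxinfo`.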

------
starquake
Multiple times I've read that this is the better software renderer, and if I
read it correctly, that's because of LLVM. What does LLVM have to do with
that? Why wasn't there a faster software renderer before LLVM? Is it easier
to create a performant software renderer with LLVM? Could someone explain
this to me or maybe provide a link for more information?

~~~
dietrichepp
In OpenGL, you take vertex attributes in some user-specified format, run them
through a user-supplied program, and then shade the rasterized fragments with
another user-supplied program to generate pixel data. These programs are
specified in GLSL. To run them, you have to either interpret or compile them,
and it's no surprise the compiled version is faster.

Even unpacking vertex attributes is going to be faster with a JIT.
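The interpret-vs-compile point can be sketched in miniature, with Python
standing in for GLSL and LLVM (all names here are hypothetical; this mirrors
the idea, not Mesa's actual code): the slow path re-parses the "shader" for
every pixel, while the fast path compiles it once and reuses the code object.

```python
# A toy per-pixel "fragment program", as source text.
shader_src = "min(255, x * x // 16 + y)"

def run_interpreted(width, height):
    # Slow path: hand the raw source to eval() for every pixel,
    # so the expression is re-parsed each time.
    return [eval(shader_src, {"min": min}, {"x": x, "y": y})
            for y in range(height) for x in range(width)]

def run_compiled(width, height):
    # "JIT"-like path: compile the source once up front, then
    # evaluate the reusable code object per pixel.
    code = compile(shader_src, "<shader>", "eval")
    return [eval(code, {"min": min}, {"x": x, "y": y})
            for y in range(height) for x in range(width)]

# Both paths produce identical pixel data; only the per-pixel cost differs.
assert run_interpreted(8, 8) == run_compiled(8, 8)
```

A real JIT like llvmpipe goes much further (native machine code, SIMD across
pixels), but the structure is the same: pay the compile cost once, then run
the cheap compiled form per fragment.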

