

TinyGL: A Small, Free and Fast Subset of OpenGL - adamnemecek
http://bellard.org/TinyGL/

======
maggit
Related, and also interesting (especially as a response to "TinyGL is a lot
faster than Mesa"), is Mesa's LLVM-based rendering backend, llvmpipe. It
compiles the rendering pipeline to LLVM bitcode, which is then compiled to
machine code. It makes use of multiple CPU cores and SIMD instructions. The
result is surprisingly quick software rendering of anything OpenGL.
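For anyone who wants to try it: on a Mesa-based system you can usually force the llvmpipe rasterizer with a couple of environment variables (this assumes your Mesa build has the Gallium llvmpipe driver compiled in):

```shell
# Bypass hardware drivers and use Mesa's software rendering path
export LIBGL_ALWAYS_SOFTWARE=1
# Explicitly pick llvmpipe among the available software rasterizers
export GALLIUM_DRIVER=llvmpipe
# Verify: the renderer string should now mention "llvmpipe"
glxinfo | grep "renderer string"
```

These variables only take effect when Mesa is the OpenGL stack in use; with a proprietary driver loaded they do nothing.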

Key differences from TinyGL: Mesa llvmpipe implements "all" of OpenGL and
GLES2 (that is, everything that is implemented in Mesa works with this
backend), and the rendering is accurate.

I have not done a head-to-head benchmark, and I don't think it would be that
interesting since the two libraries are from entirely different ages.

The existence of llvmpipe was part of the rationale for Qt5 to require OpenGL
(GLES?) for rendering [1]. Qt5 has no other rendering backend.

[1]: I remember this from the beginning of Qt5 development, which is so long
ago that I have a hard time finding a quotation. Maybe I can use this Phoronix
article as backup:
[http://www.phoronix.com/scan.php?page=news_item&px=MTA5ODc](http://www.phoronix.com/scan.php?page=news_item&px=MTA5ODc)
"The Qt developers praised the CPU-based Gallium3D driver and will be relying
upon LLVMpipe when no GPU hardware driver is available. They say that using
LLVMpipe is working better than any software rasterizer of their own. "
"LLVMpipe remains too slow for OpenGL gaming (...), but it's good enough for a
tool-kit and composited desktop."

------
keeperofdakeys
Given that this is from 2002, it probably isn't a great platform to use. It
probably wouldn't support VBOs, which are required to get decent performance
out of modern graphics cards. The modern-day equivalent would probably be
GLES.
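For context, a VBO just means uploading vertex data to GPU memory once and drawing from it repeatedly, instead of pushing every vertex through the driver each frame as immediate mode does. A minimal sketch using the standard OpenGL calls (the triangle data is illustrative; `glGenBuffers` etc. require OpenGL 1.5+, and a live GL context is assumed):

```c
#include <GL/gl.h>

/* Immediate mode -- the style a 2002-era library like TinyGL exposes.
 * Every vertex crosses the CPU/driver boundary on every frame. */
void draw_immediate(void) {
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

/* VBO path -- upload once into GPU-side memory, then draw from it
 * many times with glDrawArrays/glDrawElements. */
GLuint create_triangle_vbo(void) {
    static const GLfloat verts[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);
    return vbo;
}
```

The performance difference comes from the second path: static geometry lives in memory the GPU can read directly, so the per-frame cost is a handful of calls rather than one call per vertex.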

~~~
adamnemecek
I didn't post it assuming that anyone would use it, I posted it to be studied.
It was written by arguably one of the best programmers of all time after all.

~~~
fsloth
Thanks, looks interesting. And what's wrong with people? "I can't use it,
therefore it sucks"? Software rendering will always be an interesting problem
domain because implementing it well is non-trivial.

------
cageface
OpenGL is such a terrible API that I can't imagine using it in an environment
where you don't even get the one benefit that makes putting up with it
worthwhile, i.e. hardware acceleration.

~~~
Xylemon
I can name a worse API: DirectX

~~~
wolfgke
Can you give concrete arguments for why you consider DirectX a worse API
(other than "it only works on Windows")?

When Direct3D 10/11 came out, it was considered a lot better to program for
than the corresponding OpenGL versions. See, for example, the following
article from 2011

> [http://www.bit-tech.net/news/gaming/2011/03/11/carmack-
> direc...](http://www.bit-tech.net/news/gaming/2011/03/11/carmack-directx-
> better-opengl/1)

where John Carmack clearly says that he now prefers Direct3D over OpenGL.

~~~
fulafel
Times change; Carmack has recently been saying good things about the low-
overhead OpenGL model championed by NVidia (see "Approaching Zero Driver
Overhead" etc.) and how it can get an order of magnitude more draw calls than
D3D.

reference: [http://vr-zone.com/articles/john-carmack-mantle-became-
inter...](http://vr-zone.com/articles/john-carmack-mantle-became-interesting-
dual-console-wins/61108.html?utm_source=rss)

~~~
wolfgke
Luckily rumours say that Microsoft took a lot of the good ideas from Mantle
and will implement them in DirectX 12:

> [http://www.extremetech.com/gaming/177407-microsoft-hints-
> tha...](http://www.extremetech.com/gaming/177407-microsoft-hints-that-
> directx-12-will-imitate-and-destroy-amds-mantle)

On the other hand: even if the overhead of OpenGL can be lowered a lot (and I
believe it can), this will be released as an extension set - not as an
elegantly revised API, the way new DirectX versions do it (see the Longs Peak
fiasco).

~~~
fulafel
The zero overhead talk was done by guys from Intel, AMD and NVidia - I think
it's rather probable that this will end up in core OpenGL later. Especially if
MS is reacting to Mantle as well.

It's not "will be released" - it's in current drivers already: some of it in
extensions, some in core OpenGL 4.4 or earlier. As for OpenGL's backwards-
compatible approach, I think that's pretty much a necessity when you consider
the stakeholders.

D3D and OpenGL are pretty similar from 10000 ft. Really the GPU vendors should
just open up instruction sets, binary formats and ABIs for their devices, and
focus on writing high quality open source compiler backends for them, and let
people program GPUs whichever way they want.

------
Zancarius
I'm curious. The page was last updated March 2002. Is this project still
active? Is this even still relevant given OpenGL ES?

~~~
zanny
This is a software-mode graphics API for embedded systems.

Almost any embedded chip will have some graphics cores - and in their
absence, I have no idea what you would be trying to render for - so a
software-only API is insane in this day and age. The performance differential
between the best CPU and the worst GPU is so many orders of magnitude that it
isn't a fair comparison.

Software rendering died a decade ago, and should be left dead. We now have
tech like EGL and GLES to standardize on a base set of accelerated 3D
functionality you can just assume. Which is great.

~~~
clarry
Software rendering was fast enough for games in the 90s - on machines with a
few megabytes of RAM and CPU clock speeds measured in tens of MHz, or at most
a couple hundred.

Now even the cheapest and lousiest PC hardware is orders of magnitude faster
than back then. Why wouldn't software rendering be fast enough now that it is
orders of magnitude faster than when it was fast enough?
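A rough back-of-the-envelope check on that claim (all figures here are illustrative assumptions, not measurements from the thread):

```python
# Pixel budget: CPU cycles available per pixel per frame, then vs. now.

# 90s baseline: 320x200 at 30 fps on a 100 MHz single-core CPU.
old_budget = 100e6 / (320 * 200 * 30)

# Cheap modern hardware: 1920x1080 at 60 fps, 4 cores at 2 GHz
# (ignoring SIMD, which multiplies the budget several times over).
new_budget = (4 * 2e9) / (1920 * 1080 * 60)

print(f"{old_budget:.0f} cycles/pixel then")  # ~52 cycles/pixel
print(f"{new_budget:.0f} cycles/pixel now")   # ~64 cycles/pixel
```

Interestingly, resolutions grew almost as fast as clock speeds, so the raw per-core budget is similar; the real modern headroom comes from multiple cores and SIMD - exactly what llvmpipe exploits.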

The benefit is that it is more reliable. If you can shove pixels onto the
screen, you can do graphics in software. The graphics stack for hardware-
accelerated 3D is incredibly complex and buggy. Questions like _if I buy this
laptop, will my graphics work?_ are a sad, sad reality. Problems like _libGL
error: failed to load driver: i915_ are reality. And for those who write
graphics code, working around specific chipsets or drivers is reality.

~~~
moonchrome
>Why wouldn't software rendering be fast enough now that it is orders of
magnitude faster than when it was fast enough?

Because games in the 90s used < 1k poly models for characters. I don't think
you could get away with those nowadays, not even on mobile.

~~~
fulafel
Unreal Tournament 2004 had a software renderer with good performance, on the
single core CPUs of the era. See [http://www.drdobbs.com/architecture-and-
design/optimizing-pi...](http://www.drdobbs.com/architecture-and-
design/optimizing-pixomatic-for-x86-processors/184405765)

The people working on that, Abrash and Sartain, went on to Intel to do
Larrabee - something akin to "software renderer on GPU". If that had worked
out, maybe we wouldn't be in the current dark ages, where GPUs are a nightmare
to program and everybody suffers from the buggy, closed proprietary software
stacks that hide the hardware behind many layers of obfuscation.

