
Drawing Lines is Hard (2015) - lebek
https://mattdesl.svbtle.com/drawing-lines-is-hard
======
dahart
Drawing lines in OpenGL & 3d hardware in general has always had some of these
issues. The best support for GL/OpenGL line drawing I knew of was in the more
expensive SGI machines. (Capitalized on purpose to indicate before they
rebranded lowercase -- but this is fuzzy memory and I'm no sgi history
expert.) The cheaper ones' lines definitely weren't as good, but SGIs did have
pretty nice line support back in the day, better than what you get in WebGL
today.

One thing I didn't see mentioned here is that the line drawing API calls are
typically not as well optimized as the triangle mesh calls, or so I've heard.
Part of the reason good line drawing support was more expensive was
(allegedly) that only a few customers truly need antialiased lines with
performance as good as the mesh API, and it's extra silicon: lines have
specific needs not shared with meshes.

FWIW, I've tried all these approaches in production and ended up just doing
the meshing myself, and avoiding shader tricks. It's not that bad, it gives
the most control, and once you have an abstraction for it, you don't need to
think about it again.
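
For illustration, here is a minimal sketch of that kind of meshing: extruding
one 2D segment into a quad (two triangles) along its normal. This is a
hypothetical example, not the production code referred to above; a real version
would also handle joins and caps between segments.

```python
import math

def segment_to_quad(p0, p1, width):
    """Expand a 2D line segment into a quad (two triangles).

    Returns six (x, y) vertices, three per triangle, each endpoint
    offset by half the width along the segment's unit normal.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Unit normal, perpendicular to the segment direction.
    nx, ny = -dy / length, dx / length
    h = width / 2.0
    a = (p0[0] + nx * h, p0[1] + ny * h)
    b = (p0[0] - nx * h, p0[1] - ny * h)
    c = (p1[0] + nx * h, p1[1] + ny * h)
    d = (p1[0] - nx * h, p1[1] - ny * h)
    return [a, b, c, b, d, c]

# A horizontal segment of width 2 becomes a 10x2 rectangle.
verts = segment_to_quad((0.0, 0.0), (10.0, 0.0), 2.0)
```

Once this runs on the CPU, the vertices go into an ordinary vertex buffer and
the GPU only ever sees triangles, which is why no shader tricks are needed.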

~~~
vvanders
For us screen space worked a bit better, but I'm sure it varies by use case.
Getting the right fall-off on non-aligned pixels is always tricky (including
the 0.5f offset in Direct3D).
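
A sketch of that fall-off idea (illustrative only): evaluate the distance from
each pixel *center* to the line and ramp the alpha over one pixel. Sampling at
x + 0.5 is the same correction as the classic Direct3D half-pixel offset.

```python
def line_coverage(px, py, x0, y0, x1, y1, half_width):
    """Alpha for pixel (px, py): distance from the pixel center to the
    infinite line through (x0, y0)-(x1, y1), with a 1-pixel linear
    fall-off past the line's half-width.
    """
    cx, cy = px + 0.5, py + 0.5          # pixel center, not corner
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    # Perpendicular distance from the center to the line.
    dist = abs((cy - y0) * dx - (cx - x0) * dy) / length
    # Fully opaque inside the half-width, linear ramp over one pixel.
    return max(0.0, min(1.0, half_width + 0.5 - dist))
```

In a real renderer this expression would live in a fragment shader, but the
arithmetic is the same.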

------
Kenji
It's pretty much impossible to draw pixel-perfect lines in OpenGL. Even the
library code of SDL2 fails to do it in certain instances, when its 2D drawing
functions use OpenGL - and SDL2 is super clean and solid like a rock.
Sometimes, you're even better off using images (!!) to draw lines. The sad
thing is that Bresenham's line algorithm is so very simple. Somewhere between
my program and the GPU, the communication of where exactly the line starts and
ends is lost.
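
For reference, the algorithm really is just a few lines of integer math; this
is a textbook sketch (not SDL2's code), yielding every pixel the line covers:

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer-only Bresenham line walk for any octant."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                 # accumulated error term
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:              # step in x
            err += dy
            x0 += sx
        if e2 <= dx:              # step in y
            err += dx
            y0 += sy
```

Because endpoints are plain integers, there is no ambiguity about where the
line starts and ends, which is exactly the guarantee that gets lost on the way
to the GPU.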

------
teddyh
I thought this would be about Bresenham's line algorithm and its drawbacks.

~~~
userbinator
Bresenham is easily beaten in speed and simplicity by a fixed-point loop:

[https://hbfs.wordpress.com/2009/07/28/faster-than-bresenhams-algorithm/](https://hbfs.wordpress.com/2009/07/28/faster-than-bresenhams-algorithm/)

Furthermore, fixed-point also allows antialiasing, since the fractional part
always tells you how close the line is to the pixel being drawn.
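
A sketch of the idea (my own illustration, not the linked article's code),
assuming an x-major line with slope between 0 and 1: the slope is precomputed
in 16.16 fixed point, each step is one integer add, and the low bits give the
antialiasing coverage for free.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def fixed_point_line(x0, y0, x1, y1):
    """Fixed-point DDA for an x-major line (x1 > x0, 0 <= slope <= 1).

    Yields, per column, the two pixel rows the line touches and the
    intensity split between them.
    """
    m = ((y1 - y0) * ONE) // (x1 - x0)   # slope in 16.16 fixed point
    y = y0 * ONE
    for x in range(x0, x1 + 1):
        yi = y >> FRAC_BITS              # integer pixel row
        frac = (y & (ONE - 1)) / ONE     # how far into the next row
        # Split the intensity between the two rows the line crosses.
        yield (x, yi, 1.0 - frac), (x, yi + 1, frac)
        y += m
```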

~~~
vanderZwan
> _The naïve algorithm in float averages 4.81 µs, Bresenham’s algorithm
> averages at 1.84 µs, my fixed point variation at 1.74 µs. The Fixed point
> implementation runs about 5% faster. Which isn’t all that much, but still
> better than Bresenham’s; and much better than the naïve version using mixed
> floats and integers._

> _The situation is similar on the 64 bits machine, but the advantage of fixed
> point vanishes. Both methods takes very similar times: fixed point averages
> at 0.85 µs and Bresenham 0.84 µs, a difference of about 1%. However, the
> naïve implementation is still very far behind, at 2.96 µs._

When you said "easily beaten" I kind of expected more than just a 1-5%
performance improvement.

There is another (potential) problem with fixed point: it works by adding up
_rounded numbers_:

> _The only thing we need, is to compute the slope_ m _as a fixed point number
> rather than a floating point number._

As a result, the fixed-point version _might_ create rendering artefacts from
repeatedly adding a rounded number. This makes the whole thing an
apples-to-oranges comparison:

- floating point method that does multiply/divide every iteration

- fixed point that adds a rounded fraction

- Bresenham that does _unrounded fraction adding_ by splitting the fraction
into an accumulator and divisor (Bresenham is actually really simple primary
school math with some geometry on top)

To make a truly fair comparison, we should add a version that precomputes the
floating point fraction and adds that each iteration; I suspect the main
slowdown of the naive floating point algorithm is the repeated
multiply/divides, not the casts to integer.
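
A sketch of that suggested version (hypothetical, not from the benchmark): the
slope is computed once with a single divide, so the only per-pixel float work
left is one add and one cast.

```python
def float_dda(x0, y0, x1, y1):
    """Float DDA with the fraction precomputed: one divide total,
    then a single float add and one cast per pixel.

    Assumes an x-major line (x1 > x0, |slope| <= 1).
    """
    m = (y1 - y0) / (x1 - x0)   # one divide, hoisted out of the loop
    y = float(y0)
    pixels = []
    for x in range(x0, x1 + 1):
        pixels.append((x, int(y + 0.5)))  # round to nearest row
        y += m                            # single add per iteration
    return pixels
```

Timing this against the naive version would separate the cost of the
multiply/divides from the cost of the casts.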

~~~
rwallace
> I suspect the main slowdown of the naive floating point algorithm is the
> repeated multiply/divides, not the casts to integer.

You'd be surprised. Last time I tried it, casting a floating point number to
an integer was many times slower than floating point multiply.

~~~
vanderZwan
Now that you mention it, I know the basic "add/sub is faster than multiply is
_much_ faster than divide" ordering, but I have no idea how casts compare.
Why isn't that ever mentioned anywhere?

~~~
rwallace
I'm guessing for the same reason casts are slow; they're not bottlenecks for
the most common number crunching workloads, so it doesn't occur to anyone they
might be bottlenecks for somewhat less common ones.

------
paulddraper
You can force MSAA in WebGL by using a larger canvas, and then downsizing it
via CSS.

Not sure about the performance implications, but I use this frequently when
MSAA is not natively supported.
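
Strictly speaking this gives you supersampling rather than true MSAA: the
browser's downscale averages each block of high-resolution pixels. A sketch of
the 2x case on a grayscale image (illustrative, not browser code):

```python
def downsample_2x(image):
    """Box-filter a 2x-oversized grayscale image down by half.

    Approximates what happens when a double-resolution canvas is
    shrunk via CSS: each output pixel is the average of a 2x2 block
    of input pixels, which smooths edges like supersampling.
    """
    h, w = len(image), len(image[0])
    return [
        [
            (image[2 * y][2 * x] + image[2 * y][2 * x + 1]
             + image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]
```

The cost is rendering 4x the pixels, which is why native MSAA (which only
multisamples coverage, not shading) is usually cheaper when available.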

------
ghurtado
A while back I made this, prompted by a question on Stack Overflow:
[http://codepen.io/garciahurtado/pen/AGEsf](http://codepen.io/garciahurtado/pen/AGEsf)

It is a less refined version of the techniques in the article; the approach
differs in that it uses post-processing to provide the line thickness.

------
mcpherrinm
I'm really glad to see this. I attempted to draw lines when first learning
OpenGL and wasn't able to get the results I wanted.

------
gens
Xorg seems to agree.

~~~
upofadown
Seeing as how the line drawing function in X doesn't do any anti-aliasing, the
result would likely be very consistent across different environments, even for
fat lines. The problems are, to some extent, differences of opinion about how
to make lines look pretty.

~~~
gens
Sorry, I meant that it is buggy. Specifically the line* drawing functions. I
filled a 22x22 window with segmented lines and it blanked the window. A line
across the leftmost pixels of the window (x=0) behaves differently than
anywhere else. And such.

At least I think so, as there is no hard spec for X11.

