
Drawing Lines is Hard (2015) - Doches
https://mattdesl.svbtle.com/drawing-lines-is-hard
======
mpax
Does there not exist some ridiculously parallel equivalent of Bresenham or Wu?
I imagine that with the right data structure you should be able to do
pixel/line intersection tests in the pixel shader?
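The per-pixel test itself is straightforward: it's a point-to-segment distance check against half the line width. A minimal CPU-side sketch of the math a fragment shader would run (function names are illustrative, not from the article):

```python
import math

def dist_to_segment(px, py, ax, ay, bx, by):
    """Distance from pixel center (px, py) to segment A-B."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab_len2 = abx * abx + aby * aby
    # Project the pixel onto the segment, clamping t to [0, 1]
    # so endpoints are handled correctly.
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / ab_len2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def coverage(px, py, ax, ay, bx, by, half_width=0.5):
    # A real shader would feed the distance through smoothstep for AA;
    # here we just return a hard 0/1 coverage.
    return 1.0 if dist_to_segment(px, py, ax, ay, bx, by) <= half_width else 0.0
```

The catch, as the reply below notes, is not the test but deciding which pixels get to run it.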

~~~
Diggsey
A pixel shader cannot choose _where_ to draw, it is simply given a pixel and
asked what colour it should be.

In order to reduce the overdraw to something manageable, you need to generate
geometry (via the CPU, geometry shader or vertex shader) that covers a
relatively small superset of the pixels you actually want to draw.
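The usual way to build that superset is to extrude the segment into a quad along its perpendicular. A minimal sketch of the CPU-side version (a real renderer would emit the four corners as two triangles):

```python
import math

def line_quad(ax, ay, bx, by, width):
    """Extrude segment A-B into a quad of the given width.

    Returns the 4 corners; the quad tightly covers every pixel the
    thick line should touch, so the fragment shader runs only there."""
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate segment")
    # Unit perpendicular, scaled to half the line width.
    nx = -dy / length * width / 2
    ny = dx / length * width / 2
    return [(ax + nx, ay + ny), (ax - nx, ay - ny),
            (bx - nx, by - ny), (bx + nx, by + ny)]
```

Doing this per-segment is exactly the triangulation work the article is about; joins and caps are where it gets painful.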

------
dang
A couple of earlier discussions:

[https://news.ycombinator.com/item?id=13671016](https://news.ycombinator.com/item?id=13671016)

[https://news.ycombinator.com/item?id=9179336](https://news.ycombinator.com/item?id=9179336)

------
dreamcompiler
It's easy to extend Bresenham's algorithm (and probably Wu's too) to work in a
3D voxel space. But I don't know if it would solve the problems in TFA or
whether the speed penalty of working with voxels would be worth it. Might be
an interesting experiment.
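The 3D extension is indeed mechanical: pick the axis with the largest delta as the driving axis and keep two error terms instead of one. A sketch of the standard voxel variant (not from TFA):

```python
def bresenham_3d(x0, y0, z0, x1, y1, z1):
    """Integer voxel line from (x0,y0,z0) to (x1,y1,z1), inclusive."""
    dx, dy, dz = abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    sz = 1 if z1 >= z0 else -1
    voxels = [(x0, y0, z0)]
    if dx >= dy and dx >= dz:            # x is the driving axis
        ey, ez = 2 * dy - dx, 2 * dz - dx
        while x0 != x1:
            x0 += sx
            if ey > 0:
                y0 += sy; ey -= 2 * dx
            if ez > 0:
                z0 += sz; ez -= 2 * dx
            ey += 2 * dy; ez += 2 * dz
            voxels.append((x0, y0, z0))
    elif dy >= dx and dy >= dz:          # y drives
        ex, ez = 2 * dx - dy, 2 * dz - dy
        while y0 != y1:
            y0 += sy
            if ex > 0:
                x0 += sx; ex -= 2 * dy
            if ez > 0:
                z0 += sz; ez -= 2 * dy
            ex += 2 * dx; ez += 2 * dz
            voxels.append((x0, y0, z0))
    else:                                # z drives
        ex, ey = 2 * dx - dz, 2 * dy - dz
        while z0 != z1:
            z0 += sz
            if ex > 0:
                x0 += sx; ex -= 2 * dz
            if ey > 0:
                y0 += sy; ey -= 2 * dz
            ex += 2 * dx; ey += 2 * dy
            voxels.append((x0, y0, z0))
    return voxels
```

Like the 2D version, it visits one voxel per step along the driving axis, so it inherits the same sequential structure discussed below.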

~~~
pcwalton
The basic problem is that Bresenham and Wu are both sequential algorithms.
While you can overcome that by implementing Bresenham in your rasterizer
hardware (which I think is what many GPUs do?), you're out of luck if you're
implementing it in software. You need an approach that uses the existing
rasterization hardware.

~~~
convolvatron
If you want to, there are at least a couple of flavors of parallel Bresenham -
it's not particularly tricky on the face of it. You do have to be super
careful with your arithmetic so that tasks end up mapping to unique pixels.
And yeah, you have to be in an environment where task-creation overhead is
effectively zero for it to make sense.
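One such flavor: give each task an index along the driving axis and have it compute its own pixel in closed form, with the rounding done in integer arithmetic so no two indices collide. A sketch under those assumptions (this is a per-index DDA reformulation, not literal error-term Bresenham):

```python
def line_pixel(i, x0, y0, x1, y1):
    """Pixel i of the line, computed independently of every other pixel.

    Any task/thread can evaluate this for its own i with no shared state."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))  # one task per step along the driving axis

    def rnd(num, den):
        # Integer round-half-away-from-zero of num/den, so the same i
        # always maps to the same pixel regardless of evaluation order.
        if den == 0:
            return 0
        s = 1 if (num >= 0) == (den > 0) else -1
        return s * ((2 * abs(num) + abs(den)) // (2 * abs(den)))

    return (x0 + rnd(i * dx, steps), y0 + rnd(i * dy, steps))

def parallel_line(x0, y0, x1, y1):
    steps = max(abs(x1 - x0), abs(y1 - y0))
    # Serially here; in a real parallel setting each i runs on its own lane.
    return [line_pixel(i, x0, y0, x1, y1) for i in range(steps + 1)]
```

Uniqueness falls out of the driving axis: the dominant coordinate advances by exactly one per index, so no two tasks can land on the same pixel.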

~~~
pcwalton
Yeah, to be more specific: the problem with doing Bresenham on the GPU is
that, in the traditional GPU pipeline, the main way to create tasks is the
fixed-function triangle rasterization hardware.

I guess one could technically do something like figuring out the number of
pixels spanned on CPU and then rasterizing point primitives. In your vertex
shader you'd have each 1x1 pixel point figure out, in parallel, where it's
supposed to go. This is of course bound to be horrendously slow and would
never be practical :)

------
matkinz
It's hard to draw 2D lines using a 3D API? Doesn't sound like a revelation.

~~~
ikfmpwdsoz
OpenGL is a 2D API, bring your own projections.

