I remember enjoying Abrash's write-up in the mid 90s.
http://www.phatcode.net/res/224/files/html/ch35/35-01.html
He starts with the algorithm, then a C implementation, then optimization in ASM, only to throw it all away in the subsequent chapter because of a better algorithm. He goes through this cycle several times. Splendid educational effort on multiple levels.
Not sure why the author immediately dismisses Bresenham. It's fast, good quality, and well known. I feel like if the post continued for another few logical steps of optimization, that's exactly where he'd end up!
Bresenham is slower on modern hardware. It's still better on microcontrollers, though, and takes fewer gates to implement in silicon / FPGA resources.
Ever since the 486 days, it's been faster to just do one division and loop in the direction of the longer delta axis, with fixed-point addition and a shift to truncate for the shorter-axis coordinate.
On some old CPUs you'd add to eax (or even ax) and read the coordinate from ah (the high half of the 16-bit ax register). That requires a small (or split/tiled) frame buffer. I remember it was still worth it 20 years ago.
Nowadays you don't have to perform that ax/ah/al trickery, because CPUs have been capable of register renaming, pipelining, multiple issue, etc. for at least 20 years, and the extra mov, shr, etc. that replace it can be sunk between other instructions and memory waits. Accessing the ah/al halves of ax is the slower, unoptimized path on anything that resembles a modern CPU.
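To make the "one division, then add-and-shift" idea concrete, here's a rough Python sketch of a 16.16 fixed-point DDA stepping along the longer delta axis. The function name and fixed-point width are my own choices for illustration; a real version would be integer asm or C, not Python.

```python
def dda_line(x0, y0, x1, y1):
    """Fixed-point DDA: one division per axis up front,
    then only additions and shifts per pixel."""
    FP = 16                      # 16.16 fixed point
    HALF = 1 << (FP - 1)         # +0.5 so the >> FP truncation rounds
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))  # walk along the longer delta axis
    if steps == 0:
        return [(x0, y0)]
    x_inc = (dx << FP) // steps  # the single division per axis
    y_inc = (dy << FP) // steps
    x = (x0 << FP) + HALF
    y = (y0 << FP) + HALF
    pts = []
    for _ in range(steps + 1):
        pts.append((x >> FP, y >> FP))  # shift to truncate back to grid
        x += x_inc
        y += y_inc
    return pts
```

The inner loop is just two adds and two shifts per pixel, which is the whole point of the trade: one IDIV up front instead of the branch-per-pixel bookkeeping Bresenham does.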
This benchmark from 2001 on a Pentium III has Bresenham about 7.5 times faster than DDA. There's another pair of algorithms that are a little faster still. http://www.edepot.com/algorithm.html
These days the limiting factor is probably going to be cache access rather than fixed vs floating point. I'm pretty sure Bresenham is still the way to go, but it'd be fun to see modern comparisons.
Well, DDA should use fixed point, not doubles. The old stack-based x87 FPU is just slow, and converting doubles to integers in a loop... well, I guess you can see where the time is spent in the version you linked! I meant, of course, IDIV and fixed point, not FDIV/FADD/FISTP madness!
Bresenham lines are not "quality." For one, they're obviously jaggy. As well, most implementations of the algorithm will give you different results if you invert the start and end points. This matters a great deal for the common use-case in games of casting rays to determine sight lines. It is quite annoying to have a tactical combat game where a foe can shoot from behind a wall and you can't shoot back from the exact same position. Amit notes lack of symmetry in the "2.1 Orthogonal steps" implementation, which leads him to the "Supercover" implementation.
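The asymmetry is easy to reproduce. Below is one common all-octant integer Bresenham (the error-accumulator form); it's my own illustration, not any specific implementation from the thread. The different results show up when the ideal line passes exactly through a cell corner, because the tie is broken the same way regardless of travel direction.

```python
def bresenham(x0, y0, x1, y1):
    """All-octant integer Bresenham, error-accumulator form."""
    dx, sx = abs(x1 - x0), (1 if x0 < x1 else -1)
    dy, sy = -abs(y1 - y0), (1 if y0 < y1 else -1)
    err = dx + dy
    pts = []
    while True:
        pts.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return pts
        e2 = 2 * err
        if e2 >= dy:   # horizontal step
            err += dy
            x0 += sx
        if e2 <= dx:   # vertical step
            err += dx
            y0 += sy
```

For example, the line from (0,0) to (2,1) passes exactly through the corner at (1, 0.5): walking forward visits {(0,0), (1,1), (2,1)}, but walking backward from (2,1) visits {(2,1), (1,0), (0,0)}. Two different middle cells from the same pair of endpoints, which is exactly the shoot-through-the-wall problem described above.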
What a stroke of luck for me! Just today I was studying line drawing techniques on a grid for a prototype related to offline mapping using polygons. It's yet another reminder of how topics and links that don't seem relevant to my interests at first (games in this case) often turn out to be relevant to other areas of my interest.
I used to draw the lines that way, but I've switched to drawing the squares instead. It tends to look cleaner, the code is simpler, and it also lets me color the squares individually. If I really need to draw the lines, I can set a background color. It's a cheap trick :-)
One nice thing about linear interpolation is that the same algorithm works for 2D grids, 3D grids, and hex grids. For 3D, the diagonal_distance, point_lerp, and point_round functions can look at z in addition to x and y. For hex grids, you need hex grid distances and hex grid rounding (http://www.redblobgames.com/grids/hexagons/#line-drawing).
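The 2D version can be sketched like this, using the function names from the comment (the top-level walk_line name is my own); for 3D each helper would also handle z, and for hex grids you'd swap in hex distance and hex rounding:

```python
def diagonal_distance(p0, p1):
    # number of steps = the longer axis delta (Chebyshev distance)
    return max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))

def point_lerp(p0, p1, t):
    # linear interpolation between two points, t in [0, 1]
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))

def point_round(p):
    # note: Python's round() breaks .5 ties to even, so exact
    # corner ties may land differently than in other languages
    return (round(p[0]), round(p[1]))

def walk_line(p0, p1):
    n = diagonal_distance(p0, p1)
    return [point_round(point_lerp(p0, p1, i / n if n else 0.0))
            for i in range(n + 1)]
```

Generalizing to another grid type only means replacing the distance and rounding functions; the interpolation step stays the same.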