Which links to http://www.folklore.org/StoryView.py?project=Macintosh&s..., which talks a little bit about algorithms for drawing circles, ellipses, and RoundRects, and in particular what was used in QuickDraw.
QuickDraw source code is available at http://www.computerhistory.org/highlights/macpaint/ and I've posted what I think is the relevant file at https://gist.github.com/1287861. I think the relevant routine is BumpOval, starting around line 1000. But I don't understand it at all, partly because it's been 18 years since I looked at 68000 assembly. Can anyone help explain what's going on in there? I'd like to know if the algorithm being used there is the same one in Mukund's awesome blog post.
One bug: Mukund, if you're going to call malloc() without checking its return value, use xmalloc()! It's like five lines of code.
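For reference, a minimal xmalloc() along the lines being discussed might look like this. The names and the exact failure behavior (print and abort) are illustrative, not from any particular codebase:

```c
/* Minimal "x" wrappers: abort on allocation failure so callers
 * never see a NULL return. Illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>

static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
        abort();                /* fail fast; no recovery attempted */
    }
    return p;
}

static void *xrealloc(void *old, size_t size)
{
    void *p = realloc(old, size);
    if (p == NULL) {
        fprintf(stderr, "xrealloc: out of memory (%zu bytes)\n", size);
        abort();
    }
    return p;
}
```

On success these behave exactly like malloc()/realloc(); the only difference is that failure terminates the program instead of returning NULL.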
You'll do more harm trying to use something like xmalloc() and failing to use it consistently (obviously you need to "x" more than just "malloc") than you will by rigging the allocator itself to guarantee a swift failure, so that any malloc failure ends the program.
Obviously, in any code where "malloc(3)" is the way you get more memory, I think it's pretty silly to check malloc's return value. You aren't going to get recovery right.
I think there are only very rare cases in production code where it makes sense to check malloc's return value and try to recover, but they do occasionally exist.
I think we agree about explicit malloc return value checks.
I don't think I understand what you mean by 'obviously you need to "x" more than just "malloc"'. If you use xmalloc() instead of malloc(), and either abjure realloc() or use xrealloc(), haven't you covered all your bases?
It's true that a bug could creep in somewhere, but that hardly seems like a particularly damning criticism. Every place you use xmalloc() instead of malloc() is one less bug, one less potential Mark Dowd Flash exploit. And you can use nm(1) or the Visual Studio equivalent to find out if your program still has any calls to malloc() or realloc(), and then you can remove them.
From my perspective, the problem with rigging malloc() itself to blow up is that there's no portable, standard way to do that. And, if someone fails to do it when they're porting your code to a new platform, everything will appear to work perfectly, but the port will have introduced a subtle bug that may result in remote-root vulnerabilities. That would be really bad.
It is seriously soooo much easier to write C code when you don't have to explicitly check allocation returns. I shaved FOUR THOUSAND LINES out of a not-particularly-huge-to-begin-with server at a previous job simply by getting rid of ridiculous chains of "functions that returned errors that ultimately boiled down to malloc failures" and "call sites to those functions that checked their errors and propagated them". I'll never explicitly check malloc() again.
(There are allocators you do need to check; they just tend not to be called "malloc").
Almost no software of any significant size has been written in the last 10 years that can honestly claim to gracefully and reliably handle out-of-memory conditions.
Programming regimes that ostensibly cover out-of-memory cases are usually delusive; they provide for some superficial handling of out-of-memory issues (which usually just devolves to exiting the program anyways), but do nothing to address the myriad instances of malloc calls happening behind their backs in libraries or temporary allocations.
Fuck that. These people are going through extra work which (a) provides no greater user experience and (b) actually harms their program by creating opportunities for missed checks that propagate NULL pointers (which, when offset against, are actually exploitable!) through the rest of their code.
Just have malloc terminate your program for you when it fails and be done with it. You seriously aren't going to get anything else right, and it's silly to waste your time trying anyways.
Yikes. And you work in security?
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
ITEM 150 (Schroeppel):
PROBLEM: Although the reason for the circle algorithm's stability is unclear, what is the number of distinct sets of radii? (Note: algorithm is invertible, so all points have predecessors.)
ITEM 151 (Gosper):
Separating X from Y in the above recurrence,
X(N+1) = (2 - epsilon^2) * X(N) - X(N-1)
Y(N+1) = (2 - epsilon^2) * Y(N) - Y(N-1).
These are just the Chebychev recurrence with cos theta (the angular increment) = 1-epsilon^2/2. Thus X(N) and Y(N) are expressible in the form R cos(N theta + phi). The phi's and R for X(N) and Y(N) can be found from N=0,1. The phi's will differ by less than pi/2 so that the curve is not really a circle. The algorithm is useful nevertheless, because it needs no sine or square root function, even to get started.
X(N) and Y(N) are also expressible in closed form in the algebra of ordered pairs described under linear recurrences, but they lack the remarkable numerical stability of the "simultaneous" form of the recurrence.
ITEM 152 (Salamin):
With exact arithmetic, the circle algorithm is stable iff |epsilon| < 2. In this case, all points lie on the ellipse
X^2 - epsilon X Y + Y^2 = constant,
where the constant is determined by the initial point. This ellipse has its major axis at 45 degrees (if epsilon > 0) or 135 degrees (if epsilon < 0) and has eccentricity
sqrt(epsilon/(1 + epsilon/2)).
For the full MIT HAKMEM (a report everyone who programs should read at some point) http://www.inwap.com/pdp10/hbaker/hakmem/hacks.html
EDIT: err - never mind, I had a silly bug that didn't update the old values. Kinda working now!
oldX = x; oldY = y;
It's sort of the question of what is the smallest thing a human can "see". Stars, for example, are extremely small, far below the resolution of the human eye (outside of the atmosphere, of course). Yet seeing them is not a problem, you just get an unresolved image.
A circle is a set of points, and a point is an abstract concept. Having a photon centered on a point does not make the point visible.
Real objects can only approximate a circle. The pedantic definition of a circle in this context is being used to make the point that you can't make a perfect circle with pixels.
I don't think it makes sense to discuss if a mathematical concept is 'visible' just because it happens to be something that is often represented on a 2d plot.
This discussion is not visible, what's visible is the group of pixels that form the text on the display device you're using to read this webpage.
Honestly, is this kind of observation of any value?
So no, it's not of much value in that it doesn't tell us anything that we don't already know. But it can be of value to frame our understanding or discussion of the problem.
It was republished in his excellent collection of essays, "A Trip down the Graphics Pipeline".
It would be an interesting exercise to take this example and add a draw_arc() function.
I couldn't tell if he thought of this method, but it is a quick way to draw a circle.
Think of the article as a bunch of ways to draw circles, and you can make up something based on what device you write your code for. :)
Drawing the circle as points using the last function in the article, rather than 2 concentric fills would be faster unless you have some unique hardware for it. :) Disk filling can be implemented in a fragment shader by testing each point against the implicit equation. Such testing is easily parallelized.
You are a clever thinker nevertheless. :)
No, image is an Image*, so *image is an Image, so sizeof(*image) is the amount of memory you need to allocate for an Image. Saying sizeof(*image) rather than sizeof(Image) allows you to change the type of image in one fewer place, should you want to do that.
Well, I was clearly disappointed this lousy article got so many points.
What is important to me in CS is not the algorithm and the code, but how elegantly the code expresses a general understanding of a real problem: how fluently one can translate a natural-language problem into code, and switch between reformulating the problem and coding.
Developers like that should not be hired at any cost, in my opinion.