A Pixel Is Not a Little Square (1995) [pdf] (alvyray.com)
81 points by noch 18 days ago | 70 comments

Is this not outdated?

It uses examples of how pixels on scanners, printers, and CRT monitors are not squares. Fair enough. Back when this was written, one could very reasonably argue that pixels were intended to be point samples with a convolution.

But modern pixels on most LCD screens certainly are exceedingly crisp squares (with subpixels), and processing RAW camera sensor data, antialiasing techniques, and image resampling seems to "mostly" assume that pixels do represent the average color over a square rather than a point sample. Not perfectly, but as a reasonable first-order approximation.

I understand that 25 years ago it might have been reasonable to argue that pixels aren't squares.

But today... aren't they, at least as a first-order approximation? (With some further adjustments to avoid antialiasing artifacts, desired sharpness, and so forth.)

The idea that pixels are primarily point samples seems far more misleading -- as if they were analogous to audio samples, which are point samples as a first-order approximation, but pixels are nothing like audio samples.

No, not at all. They're still point samples. You still deal with aliasing and all the lessons learned there, even when talking about rendered color buffers.

No, LCDs are not crisp squares. Pretty much every screen type is still RGB in some kind of array. RGB stripe or Pentile are most common. You can also think about things like VR or a projector, where your screen is transformed through a lens.

Everyone has to constantly compensate for this fundamental thing in computer graphics to this day.

Edit: As other things come to mind with how relevant this paper still is...

>antialiasing techniques, and image resampling seems to "mostly" assume that pixels do represent the average color over a square rather than a point sample. Not perfectly, but as a reasonable first-order approximation.

Not in the least. For example, in texturing (wrapping images onto meshes), mip maps and bi/trilinear filtering are very common. Assuming average color leads to a jagged (if you mean avg over one pixel) or blobby (if you mean average over several) mess. Mip maps are essentially point samples of a texture taken at different sample resolutions that are then resampled at runtime to form the best approximation. Essentially no one uses "average color."
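To make the mip-map idea concrete, here is a minimal sketch in Python of building a mip chain by repeated 2x2 box downsampling and then reading the coarsest level. This is a hypothetical toy version (square, power-of-two, single-channel images assumed); real GPUs do the level selection and blending per fragment in hardware with better filters.

```python
# Toy mip chain: level 0 is the full image; each level halves the
# previous one by averaging 2x2 blocks (a box filter).

def downsample_2x(img):
    """Halve each dimension by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def build_mip_chain(img):
    """Return [level0, level1, ...] down to a 1x1 image."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(downsample_2x(chain[-1]))
    return chain

chain = build_mip_chain([[0.0, 1.0],
                         [1.0, 0.0]])
print(chain[-1][0][0])  # 0.5 -- the 1x1 top level holds the whole-texture average
```

Trilinear filtering then interpolates between two adjacent levels of such a chain, picking the levels from how large a screen pixel's footprint is in texture space.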


Raw trilinear is shit. It only reasonably works because mipmaps sort of do what you rail against, namely taking multiple samples, and averaging them. Actual modern graphics rendering uses anisotropic filtering, using typically 4 to 16 trilinear samples, averaging the lot. The result is both more consistent and much sharper.

Sure, but we have to start explaining somewhere. That said, I think most graphics settings will still let you toggle aniso off but I don't think very many let you drop down to bilinear.

Would it still matter when PPI is 400+, if not 800? I was always under the impression that as we increase PPI, antialiasing and other texturing techniques would become irrelevant.

A higher screen resolution increases sampling density / frequency, so it pushes out the Nyquist frequency a bit. But this can never eliminate aliasing. It will always be there once your signal's frequency spectrum reaches above the Nyquist frequency. The resulting aliasing will introduce spurious low frequency components like Moire patterns. So low pass filtering of the content will always be a requirement (and mipmapping is a low pass filter in this context).
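The aliasing effect described above is easy to demonstrate in one dimension. The following sketch (an illustrative example, not from the thread) samples a sine wave just above the Nyquist frequency: with 100 samples per period window, a 95-cycle signal produces exactly the same samples as a 5-cycle signal, i.e. a spurious low-frequency component, the 1-D analogue of a Moire pattern.

```python
import math

fs = 100  # samples taken over one unit interval

# 95 cycles is above the Nyquist limit of fs/2 = 50 cycles...
samples_hi = [math.sin(2 * math.pi * 95 * n / fs) for n in range(fs)]
# ...so its samples are indistinguishable from a -5-cycle alias (95 = 100 - 5).
samples_lo = [math.sin(2 * math.pi * -5 * n / fs) for n in range(fs)]

print(max(abs(a - b) for a, b in zip(samples_hi, samples_lo)))  # ~0 (float noise)
```

No amount of extra resolution fixes this once content frequency exceeds Nyquist; it only moves the threshold. Hence the need for low-pass filtering (mipmapping) before sampling.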

I'm not sure, but if that is the case, we're not there yet.

I really doubt it though. I can't imagine a world where we chose to waste massive amounts of resolution in the normal case just so we don't need to sample as intelligently as we do today.

Even when upsampling, we do better than a simple square-pixel average, so if we're always upsampling we'll probably always need to be smart.

> I can't imagine a world where we chose to waste massive amounts of resolution in the normal case just so we don't need to sample as intelligently as we do today.

It is already happening; see e.g. [1]-[3]. And the more common HiDPI displays become, the worse support for low-res (say 96 DPI) displays will get. It is not uncommon to encounter the opinion that the tricks used for high-quality rendering on non-HiDPI displays are legacy and cruft, but it is more common simply to treat rendering issues that are not visible on HiDPI as low priority.

It is the same old mantra - programmer's time is expensive, hardware is cheap, let's not spend time on creating efficient software and focus on making development fast and easy.

[1] https://github.com/harfbuzz/harfbuzz/issues/1892 [2] https://github.com/harfbuzz/harfbuzz/issues/2394 [3] https://gitlab.gnome.org/GNOME/pango/-/issues/463

It depends on what you're doing. In Smith's example of scaling an image down 2×, for example, it's easy to produce heinous aliasing artifacts with naïve scaling algorithms, regardless of what resolution you're doing it at.
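Smith's 2x downscaling example is easy to reproduce. This sketch (an illustrative toy, single row, single channel) scales a one-pixel-wide stripe pattern down 2x two ways: naive point sampling keeps only one phase of the pattern and the stripes alias into solid black, while a box filter averages each 2-pixel footprint into the expected gray.

```python
row = [0, 1] * 8  # alternating black/white vertical stripes, 16 pixels wide

# Naive "nearest" downscale: keep every other pixel.
naive = row[::2]

# Box-filtered downscale: average each destination pixel's 2-pixel footprint.
boxed = [(row[2*i] + row[2*i + 1]) / 2 for i in range(len(row) // 2)]

print(naive)  # [0, 0, 0, 0, 0, 0, 0, 0] -- stripes alias to solid black
print(boxed)  # [0.5, 0.5, ...]          -- stripes average to uniform gray
```

Shift the pattern by one pixel and the naive result flips to solid white, which is exactly the kind of heinous artifact mentioned above.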

IMHO it was never fully accurate, because the term "pixel" refers to many things, from abstract concepts to physical devices.

In image formats and image processing you can define pixel to be whatever you like. In many cases it's useful to define/interpret it as an infinitesimal point.

In other algorithms, it's not useful. For example, pixels of textures are filtered (mipmaps, AF, MSAA), because an actual ideal point sample of a texel being read as an ideal point sample of a pixel on the screen has awful aliasing artifacts.

Pixel samples are not the same as audio samples. A screen isn't a 2D wave. There's no Gibbs phenomenon for pixels on screen. Treating samples as a wave works for resampling to a large degree, but breaks down in a few places like crisp edges (see font hinting, pixel art) and makes no sense for the alpha channel (wave-based sharpening filters create opaque halos).

And when a pixel is connected to a physical device, it's clearly not a point. Camera sensors' pixels aren't points. LCD pixels aren't points. Printed pixels aren't points. Our eyes actually see the difference. Pixel-grid-aligned lines are clearly sharper than the same lines shifted by half of a pixel. If sampling theorem applied to pixels, it would make no difference.

> and makes no sense for the alpha channel (wave-based sharpening filters create opaque halos).

Alpha channels are generally problematic, since if alpha is small, big changes in the rgb channels result in small changes in the actual color. You can't linearly interpolate in this color space, just like you can't in a polar coordinate space like HSL.

Pre-multiplied alpha should fix most of these problems (though you still need to be careful with fancy filters, to not end up with values outside the valid range).
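A minimal sketch of the problem and the fix, assuming RGBA tuples in [0,1]: interpolating halfway between an opaque red pixel and a fully transparent pixel whose (irrelevant) stored color is green. With straight alpha the invisible green leaks into the result; premultiplying first avoids it.

```python
red   = (1.0, 0.0, 0.0, 1.0)   # (r, g, b, a), opaque red
clear = (0.0, 1.0, 0.0, 0.0)   # fully transparent; its rgb should not matter

def lerp(p, q, t):
    """Componentwise linear interpolation between two tuples."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def premultiply(p):
    r, g, b, a = p
    return (r * a, g * a, b * a, a)

straight = lerp(red, clear, 0.5)
# (0.5, 0.5, 0.0, 0.5): the transparent pixel's green bleeds in as a fringe.

premult = lerp(premultiply(red), premultiply(clear), 0.5)
# (0.5, 0.0, 0.0, 0.5): half-transparent red, no green fringe.
print(straight, premult)
```

This is why filtering, mipmapping, and compositing pipelines generally prefer premultiplied alpha.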

I think your phrase "with subpixels" directly contradicts "exceedingly crisp squares". I'm actually somewhat curious what you think a subpixel is, if you think it can combine into an exceedingly crisp square.

For instance, this is the Nexus One screen under a microscope:


Each of those lights is a subpixel.

The bottom half of this image shows some other common LCD pixel layouts:


As you can see, subpixels can be square but usually aren't, and pixels can't be square because they're made of subpixels.

Out of curiosity, I took a picture of my monitor, and this is what it looks like:


These look very different from the microscope pictures because the light sources are "glowing", which I think is probably more accurate to how we actually perceive pixels. But as you can see, "glowing" makes them not square, but round!

The most common modern geometry is the fourth one ("LCD") in your second link, where each pixel is indeed perfectly square. That's what I was referring to.

The fact it is made of three rectangular subpixels in no way means "pixels can't be square".

And the "glowing" effect in your picture is an artifact of light bleed from your camera. The only time we perceive pixels as round rather than square is if we have eyesight problems that blur the image projected onto our retina.

I mean, with the fourth one, each pixel is only approximately square when white. A red, blue, or green pixel would be very rectangular.

That's what I mean. Because of the way subpixels combine into pixels, only some colors can be square.

The RGB subpixels of your monitor would be more apparent if you had a higher magnification. Compared to what I see under my jewelers loupes, the first image you posted looks close to my screen under 90x magnification. They display different levels of RGB to show different colors.

The little squares/rectangles mindset is more or less appropriate depending on the context. This paper is unfairly dismissive of the contexts where the little squares mindset is clearly better than the point samples or gaussians.

I dunno, I just looked closely at my OLED TV and it’s point lights (at least to the resolution of my own eyes), not tiny squares butting up to each other, so I think the lesson of treating your images as samples of a continuous function still applies.

The AMOLED displays in most higher end phones definitely do not have square pixels.

I think you mean Pentile subpixels. The pixels themselves are still square.

The pixels themselves are slanted, like //////, and not square.

> But today... aren't they, at least as a first-order approximation? (With some further adjustments to avoid antialiasing artifacts, desired sharpness, and so forth.)

It still depends on what you’re using them for. Image that will be rendered straight to a webpage? Yeah, it’s probably a fine first order approximation. But that’s not the only context that gets rendered on a modern screen.

Say you’re using it for 3-d scenes. Yeah you absolutely should pay attention to what the paper says about footprint and sampling and filtering because, when you render it onto the screen, all of the processing steps between pixels on a texture to pixels on a screen require awareness of pixel footprint.

>Image that will be rendered straight to a webpage? Yeah, it’s probably a fine

I don't know, I still think the concept would appear in the context of image scaling and aliasing.

I agree in the sense that image scaling definitely requires you to think of pixels as more than just squares, but IMO browsers and OSes have gotten good enough at it that as a web developer you mostly don’t have to think too hard about it other than to remember to use high enough resolutions.

Oh sure. At that point though, are you even thinking as far as "little squares"?

Fair point. I'm not really thinking of it as little squares, I'm more just thinking in terms of integer coordinates in a 2-d Cartesian space. But I do think I'm inherently abstracting the "screen" as a grid of square pixels because it's a serviceable abstraction that is substantially simpler than "here is the whole signal processing pipeline that sits between my <image> tag and some phone user's eyes."

I suppose you're right that what I'm describing is just the lack of conscious mental model for what the pixel represents, versus when I'm writing a shader, where I'm actively thinking through the mental model of a pixel.

A straightforward example of non-square pixels is the Raspberry Pi 7" LCD display. The viewable area is 155x86 mm and the resolution is 800x480; a naive plot of a circle on it is visibly non-circular because the horizontal and vertical pixel pitches differ.

No, there are two fundamental misunderstandings in your comment.

You posit an opposition between "[arrays of] point samples with a convolution" and "[arrays of] exceedingly crisp squares". This is incorrect. If the kernel you convolve your array of point samples with is an exceedingly crisp square, the result is an array of exceedingly crisp squares. Convolution with a square is just a special case. Smith explains this, perhaps not very clearly, in Fig. 3, p. 5.

Conceptualizing this as a convolution allows you not only to handle the special case you're saying is "a reasonable first-order approximation," but also to analyze what effects that display or sensor convolution is having on your image, and possibly to correct them. Moreover, it can handle some other kinds of image degradation as well that go beyond that first-order approximation, such as bokeh, diffraction, defocus, and nonuniformity within pixels, as long as it's the same for every pixel.

That's how you find out what those "further adjustments to avoid antialiasing artifacts, desired sharpness, and so forth" are.

And it's how you can go about analyzing what happens to the image when you, for example, resample to a different sampling grid, as Smith explains on p. 4. If you analyze different linear resampling algorithms using the theory of convolutions, you can predict precisely how good the results will be; if you do not use this theory, you will be puzzled and will not understand your results at all. Believe me, I know; I wandered in that desert for years. But you don't have to. Learn from my mistakes.

Second, audio samples are frequently handled exactly the same way as pixels in this sense, using a "zero-order hold". If you oversample by taking 64 consecutive samples from your ADC and summing them, or set the PWM duty cycle on your Arduino to a new value every 125 μs and leave it there until the next sample, or shift a new sample into the shift register connected to your R-2R DAC and leave it there until the next sample, you're doing exactly the same thing as trying to draw a little square for each pixel—just in one temporal dimension instead of two spatial dimensions. A zero-order hold is a perfectly reasonable thing to do with audio samples for both input and output, although it's not the only option, and it can produce audible artifacts. The math for analyzing this—and correcting it, if that's what you want—is exactly the same as the analogous math for displays or camera sensors made out of little squares.
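The zero-order hold described above can be written as a one-liner. This sketch (an illustrative toy, not from the thread) upsamples a short signal by holding each sample for a fixed number of output slots, which is exactly convolution of the sample impulses with a one-sample-wide rectangle, the 1-D analogue of drawing each pixel as a little square.

```python
def zero_order_hold(samples, factor):
    """Upsample by repeating each sample `factor` times (rect kernel)."""
    return [s for s in samples for _ in range(factor)]

print(zero_order_hold([0.0, 1.0, 0.5], 4))
# [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5]
```

The staircase output has the same spectral consequences (a sinc-shaped rolloff plus imaging artifacts) as rendering an image as abutting squares, which is why the same analysis and the same corrections apply in both domains.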

Far from being "far more misleading", the analogy between pixels and audio samples is deeply insightful and unlocks a wealth of mathematical tools that can be applied in both domains, as well as, for example, to linear control theory and GPS position estimation. The results this analogy leads you to are profound truths, not falsehoods as you seem to think. And some of them are very beautiful.

None of this has changed in 25 years (except the detail that now we have LCDs with little rectangles on them to complement the little squares of dye-sublimation printers that Smith mentions) and little of it has changed in 50 years.

I think we're agreeing in all our facts but simply disagreeing on how to most usefully interpret them.

You call convolution with a square a special case, I'm calling it a reasonable first-order approximation for modern displays.

And using a zero-order hold on audio samples isn't something most people frequently need to conceptualize. In fact, it's responsible for many people's great misconception that sampling rates above ~40 kHz result in noticeably higher-fidelity audio, because they think the DAC's output will be smoother. It's clear to me that you know better, but most people don't.

I'm in full agreement with you on all the math involved. It's simply that going around saying "pixels are point samples not squares" seems like it's going to be more misleading than helpful to 99% of people.

If someone wants to understand the basics of antialiasing, or reducing image resolution with bilinear interpolation, the conceptualization of square pixels works perfectly. More advanced antialiasing techniques are ultimately just subtle adjustments to that in terms of measurable output differences, which is why I stand by calling square pixels a reasonable "first-order approximation".

80+% of people would respond to a statement like "Pixels are little squares" with something like "我不会说英语" ("I don't speak English"), "أنا لا أتحدث الإنجليزية" ("I don't speak English"), or "मुझे अंग्रेजी नहीं आती" ("I don't know English"). Of the remaining 20%, 99% would have no idea what a "pixel" was supposed to be, perhaps thinking it's a Google product. But the fact that the garden is mostly bare brown earth today does not imply that sowing the seeds of knowledge is futile. (I have certainly explained the Dirac-comb sampled image model in Spanish on more than one occasion, and if I have not done so in Hindi or Mandarin, it is only because of the untilled wastelands of my own withered, stunted mind.)

> If someone wants to understand the basics of antialiasing, or reducing image resolution with bilinear interpolation, the conceptualization of square pixels works perfectly

As Smith explains in the paper and I explained in the comment above, it's easy to get extremely conspicuous artifacts if you design a downsampling algorithm with the little-squares model, although if you're increasing image resolution you can frequently get away with it.

If you don't understand the formalization of the imaging problem that Smith is laying out—and it's clear that you don't—you aren't in a position to opine on how useful that formalization is. Applying that formalization will show you whether it's useful or not, and you can't try that until you understand it. My opinion from my experience is that I wasted a lot of time on the kind of seat-of-the-pants approach you're advocating, and now that I understand the rudiments of sampling theory, many problems that were intractable before have become easy. Learn from my mistakes instead of urging other people to repeat them.

> You call convolution with a square a special case, I'm calling it a reasonable first-order approximation for modern displays.

You are positing another opposition that simply doesn't exist. Perhaps you don't know what the phrase "a special case" means? A mathematical operation can simultaneously be a special case of a more general mathematical operation and useful for something, for example approximating a given display. And that is the case here. Being "a special case" of a more general case just means that everything that is true of the more general case is also true of the special case. It's not a way to deprecate the special case. It's a way of pointing you at a deeper understanding of it.

(Again, though, it's a zero-order approximation. First-order approximations are bilinear sampling and similar things. I think you have a lot of work to do before you can be in full agreement with me on all the math involved.)

They are exactly like audio point samples.

Pixels are always interpolated, just like audio.

The fact that 1080 pixels exactly fit 1080 screen pixels doesn't give pixels a shape. And by the way, most modern display pixels are not squares. They come in all kinds of shapes. Triangles, for example.

As a pixel art artist, I certainly see pixels as little squares. As a 3D programmer working with 2D point-sample textures, I do not see pixels as little squares. So the answer to whether or not a pixel is a little square is, like so many of life's questions, not a simple yes or no, but rather, it depends.

> As a pixel art artist, I certainly see pixels as little squares.

Then you are ignorant about the history of pixel art. Which I am not judging btw. Just saying.

Pixel art from the golden years doesn't look right when displayed as little squares. It needs to be shown on a CRT or with an overlay emulating this look to do its creator's intentions justice.

"The Bitmap Brothers: Universe" (fantastic book btw.) went to great lengths to replicate the original rendering of the depicted pixel art. See e.g. [0].

[0] https://twitter.com/romvg/status/776081029882937344

Some pixel artists indeed intended their creations to be seen as little squares, even if CRTs of the 1980s stretched the squares into rectangles and fuzzed and scan-lined them. Right now there is a story on HN about a $660,000 copy of Super Mario Brothers. Look closely at Nintendo's original 1980s box art. Mario is made up of crisp little "pixel perfect" squares:


Look at other Nintendo pixel box art from the 1980s. Many of the earliest Nintendo games had this common box art style. It's all little squares:



I have an NES classic edition and it lets you select if you want the CRT effect applied or not. Sometimes I play with this effect applied (how I experienced it as a child) but other times I enjoy the "pixel perfect" mode.

I designed pixel fonts back in the 1990s and I was always unhappy about how CRTs fuzzed them. I loved the sharp little squares of LCDs much better.

This is a bit of a misunderstood overreaction.

It's true that if you were playing Super Mario on your TV, the normal experience was for it to be a bit blurry.

On the other hand, if you were playing Lemmings on an Apple monitor with Sony Trinitron technology, the pixels were exceptionally clear. Here's what Trinitron looked like:


I'd argue that playing Lemmings in an emulator, you want the pixels to be clear, without a CRT overlay -- that this would be closer to the artists' intentions.

Some early Nintendo games needed the "rectangular" pixels in order to display obvious circles (like images of a moon) as circles; other games needed square pixels.

You also see graphic design for icons and graphics being done on square-gridded paper for many early games and graphical workstations.

Funny to think we're nostalgic for something that didn't really exist.

Pixel art is a kind of fake aesthetic based on what NES emulators show games as, though. In real life they were made to go through various filters that blurred the image until it didn't look as much like little squares.

That's just one example, a game console where the video output was typically via composite video or even an RF modulator to a TV. Not so much with e.g. VGA mode 0x13 PC graphics on a relatively crisp computer display.

VGA mode 13h PC graphics on a relatively crisp computer CRT display still drew each of its 200 "scan lines" as a double fuzzy line, and a single pixel on it as a double fuzzy dot (more or less an isotropic Gaussian, despite what Smith says) or a double fuzzy dash. Superimposed on these fuzzy patterns you had the shadow-mask circular colored dots or the Trinitron colored rectangles. Sometimes you would also get echoes or "ghosting", if there was an impedance mismatch or two in the analog signal path.

Later graphics cards in the 1990s would sometimes emulate CGA 320×200 and mode 13h by using more scan lines, giving pixels that looked more like little squares. Or, well, rectangles.

> VGA mode 13h PC graphics on a relatively crisp computer CRT display still drew each of its 200 "scan lines" as a double fuzzy line, and a single pixel on it as a double fuzzy dot (more or less an isotropic Gaussian, despite what Smith says) or a double fuzzy dash. Superimposed on these fuzzy patterns you had the shadow-mask circular colored dots or the Trinitron colored rectangles.

Now we're veering back into technical details of its implementation. The question is: is "squares" (well, yes, rectangles) a useful model of what is visible to the viewer? To that end, the fact that each line was traced twice on the display is trivial. We have to concern ourselves with the visible result to answer that question.

I have a decent CRT VGA monitor here, connected to a VGA card in an old PC, and at lower resolutions the pixels are crisp and hard. The monitors are made for reading at a higher resolution after all. If I press my face at the screen I can see that yes, the pixels appear on a much more fine-grained grid of apertures, and yes, this and other factors somewhat fuzz their boundaries.

But for normal use at an arm's length, rectangles are a very good approximate model for some display types that were in use while VGA was still a thing games used, one that a skilled artist can exploit in an entirely different way than they'd exploit the characteristics of NTSC/PAL displays or an RGB display with a coarser dot pitch.

A rectangle is a pretty reasonable approximation of a fuzzy point. It's an even more reasonable approximation of two fuzzy parallel dashes.

I agree Math Blasters looked pretty rectangular. Are modern pixel artists inspired by PC games, though? I think they mostly played Nintendo games in emulation.

That's quite alarming. On many systems with TV output, including widely popular consoles, there were no pixels at all, just an analog line signal, most often modulated/composited/combined, and often low in quality because of cost-effective hardware and cables. Artists certainly looked at how images appeared on TV screens, not at raw values of the bitmap samples presented as squares, and they compensated for unwanted effects. It is slightly maddening to know that the lack of analog-output emulation in early console emulators, and a general lack of education, resulted in the modern faux-retro aesthetic, some kind of new-age belief that pixels should be big blocks. The Mario sprite was never intended to be seen as made of blocks, no matter how many times people use that scaled abomination.

You do have big flat pixels when you have a high-quality but low-physical-DPI output device. Say, a black-and-white Macintosh, or late-'90s computer monitors displaying '80s screen resolutions, or a panel of individual LEDs. On the other hand, halftones are also sharp and have low DPI, but have no square grid at all, nor do they correspond to the source data used in printing in any straightforward way.

To conclude, big flat rectangles are just a specific artistic choice, they are not universal at all.

>Mario sprite was never intended to be seen as made of blocks

It was depicted as made of blocks on most versions of the cover art:


And many Mario games were released for portable systems with sharp LCD screens, and they used the same pixel art techniques.

Oh god, they even turned it at arbitrary angle, like in some teenage Flash animation.

> Mario sprite was never intended to be seen as made of blocks

idk I still play on a 1990s CRT and it's pretty blocky

Plus there are 12 comments on HN mentioning this URL in 12 different discussions: https://ampie.app/url-context?url=alvyray.com/Memos/CG/Micro..., and it has been linked from this great article on alpha compositing by Bartosz Ciechanowski (you should really check out his blog if you haven't seen it): https://ciechanow.ski/alpha-compositing/

Also https://news.ycombinator.com/item?id=1472175 - June 2010 (20 comments)

It's more like a little fuzzy blob, typically with RGB sub-blobs.

This is where I take issue with the paper. The sub blobs are really important because different displays/focal plane arrays have different pixel arrangements that can be dramatically different. You can't assume it'll be a collection of three sub-elements (often there will be different ratios in a complex arrangement) or that the pixel is even being displayed as RGB (CMY?).

You have to know some things about your pixel arrangements in order to properly anti-alias, font render, or display camera images, otherwise the result can be pretty ugly.

Back when I was writing code for the Apple ][ (long-hand, on graph paper), I worked out graphics layouts by drawing my pixels four to a square on the graph paper, representing each pixel with a circle. I also never forgot that (a) the pixels were actually kind of rectangular and (b) in hires graphics mode, if you viewed things as 280x192, adjacent pixels were either •• and • or • and •• depending on the high bit of the byte that had the bitmap data. I think it might have been possible to get •◦• where one RGB-triple was split across bytes.

Not exactly right either. Before it's displayed, to some approximation, in some RGB array, it was some back buffer. That back buffer could have been attempting to render an infinitely small, crisp line or a line almost two pixels wide.

The point is that that kind of information is not encoded in the buffer. It's just a point sample of the output of the previous stage in the pipeline.

I think you're thinking of a pixel on a display? Because arguably that's just a rendering of a data set. The pixels the article is referring to are the data set, a set of point samples. In which case it has no shape - since it's a point - and the r, g, and b components are just different samples at the same point.

Indeed… and a sample is not a point along the audio waveform :)

Don't you have that backwards? A sample _is_ a point along a waveform.


A sample is a point along the band-limited waveform (otherwise you get aliasing). If you treat band-limiting and sampling as a single operation on the original waveform then it's integrating over neighboring points on the waveform with some kind of kernel, the shape of the kernel depends on the form of the band limiting.

A sample is a mathematical value akin to the handle of a Bezier curve, there to facilitate the reconstruction of a different waveform that may have little to do with the sample's apparent position at all. :)

(granted, this is specifically at high frequencies: at low enough frequencies, the sample might as well be the waveform within whatever constraints the word length puts on you)

On the digital wave, yes (that's the definition of a sample, or of the wave). On the analogical wave, no.

As analogical waves don't have a concept of samples, I'm not sure what your point is?

Digital samples have a representation in the analog wave.

But I'm not sure what your complaint is, really. Sampling is a concept by itself.

(In English the word is "analog" or "analogue.")

I can't help but think of the famous Humpty Dumpty quote.

> "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." "The question is," said Humpty Dumpty, "which is to be master—that's all."

The fact is that pixels don't have a well defined meaning. Some people use them as squares, some people use them as samples. Then you get into subpixel font rendering, where you quite explicitly acknowledge the screen as master, and you design your image for a desired result.

So yeah it's an ecosystem of people doing different things with different interpretations, and trying to mostly play nice with each other.

This is an important point indeed. Here is what helps me when thinking about pixels:

- pixels are point samples in 2D space
- their position is exact
- the position of the top-left point is (0,0)
- the position of the bottom-right point is (cols-1, rows-1)

This way all the math works (subsampling, affine or perspective warps, lens distortion, or even warping between an image and any neural-network layer). Failure to do that will cause subtle issues that degrade your performance. These become quite important when working with pixel-accurate methods (3D object tracking, object detection at tiny resolutions, and 1-1 mapping between OpenGL and a neural network). So I have to agree: pixels are not tiny squares, they are dots in 2D space.
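Under that convention, reading a value at a fractional position is just interpolation between the surrounding point samples. A minimal sketch (an illustrative toy, single-channel nested lists, edge clamping) of bilinear sampling with pixels treated as points at integer coordinates:

```python
def bilinear(img, x, y):
    """Sample image at fractional (x, y); pixels are points at integer coords,
    with (0, 0) at the top-left sample. Clamps at the right/bottom edge."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bot

img = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear(img, 0.0, 0.0))  # 0.0 -- exactly the stored sample
print(bilinear(img, 0.5, 0.5))  # 0.5 -- midpoint of the four samples
```

Any warp (affine, perspective, lens distortion) then reduces to computing a source (x, y) per destination pixel and calling a sampler like this.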

One of the weird experiences I had in college was working for the paper in the '90s. We had desktop publishing, but the photos for the paper were taken out back to a machine with a screen that copied them into solid black dots of various sizes. They were waxed onto printed newspaper pages and whisked off to the printer.

I often wondered how you would reproduce it digitally but apparently I was thinking about it wrong.

Halftone is the term https://en.m.wikipedia.org/wiki/Halftone
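A crude digital analogue of that screening process is ordered dithering: each gray value becomes a pattern of solid black/white dots whose local density matches the gray level. This sketch (an illustrative toy using a 2x2 Bayer threshold matrix; real print halftoning uses rotated screens of variable-size dots) shows the idea.

```python
# Thresholds for a 2x2 ordered dither (Bayer matrix, normalized to (0, 1)).
BAYER_2X2 = [[0.125, 0.625],
             [0.875, 0.375]]

def halftone(img):
    """Map each gray pixel in [0, 1] to 0 or 1 via the tiled threshold matrix."""
    return [[1 if img[y][x] > BAYER_2X2[y % 2][x % 2] else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]

gray = [[0.5] * 4 for _ in range(4)]  # uniform 50% gray
for row in halftone(gray):
    print(row)
# Prints a checkerboard: exactly half the dots are on, so from a distance
# the pattern reads as 50% gray.
```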

Pixels, at least when they come from cameras or need to be mapped to a 3D world, are still not square.


DLP is little squares

I wish the abstract actually summarized her points.

Alvy Ray Smith, co-founder of Pixar, is not a "her"

That's neat, but I stand by my point.
