Funny coincidence, I mentioned Potemkin Villages in a lecture discussing the Rasterization pipeline for a Computer Graphics class at CMU [1]. It is very interesting to look back at Potemkin Villages, as they can be seen as a very primitive form of "rendering." Abstractly, the designers of the villages worked with the limited materials that they had to produce the closest reproduction of a real city that they could. In this way, the final aesthetic of the Potemkin Villages is a function of its creators' limitations.
You can draw a direct analogy to Real-time rendering today. The "limited material" that we have is the GPU. Because the GPU is great at crunching matrices and other embarrassingly parallel problems, we have converged on Rasterization as the predominant method of rendering in real time. Rasterization was chosen because it is fundamentally based on two concepts: linear coordinate transformations (model->world->view->screen space) and triangle fill algorithms. The GPU can do both of these exceptionally well.
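To make those two concepts concrete, here's a minimal sketch (my own toy code, not any real graphics API): a chain of 4x4 matrix transforms with a perspective divide, followed by a half-space triangle fill over a small pixel grid. The identity matrices stand in for whatever model/world/view transforms you'd actually use.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform(vertex, matrices):
    """Push a model-space vertex through a chain of matrices
    (model->world->view->clip), then perspective-divide to get
    normalized device coordinates."""
    v = vertex + [1.0]                      # homogeneous coordinate
    for m in matrices:
        v = mat_vec(m, v)
    return [v[0] / v[3], v[1] / v[3]]       # NDC x, y

def edge(a, b, p):
    """Signed-area test: which side of edge a->b is point p on?"""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def fill_triangle(w, h, tri):
    """A pixel is covered if its center lies on the same side of all
    three edges -- the classic half-space fill test, which GPUs run
    for many pixels in parallel."""
    covered = set()
    for y in range(h):
        for x in range(w):
            p = (x + 0.5, y + 0.5)          # sample at pixel center
            s = [edge(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
            if all(v >= 0 for v in s) or all(v <= 0 for v in s):
                covered.add((x, y))
    return covered

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
# With identity transforms, "screen space" here is just model space.
ndc = [transform(v, [identity])
       for v in ([0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0])]
pixels = fill_triangle(8, 8, ndc)
```

Both halves are embarrassingly parallel: every vertex transform is an independent matrix multiply, and every pixel's coverage test is independent of every other pixel's, which is exactly the shape of work GPUs are built for.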
I think it's important and instructive to keep all of this in mind, because tomorrow we may be stuck with something other than the GPU, and the paradigm could shift completely. There's nothing fundamental about Rasterization that gives it better physical footing than other methods -- it is simply the best known-good approach we have with current hardware.
Whoa, Baader-Meinhof effect. I was talking to my gf about the ghost cities in China, and how they're like modern Potemkin villages, except the Chinese government wants to make the villages real. Fake-it-till-you-make-it villages.
That might be why it was posted. Someone mentioned the Baader-Meinhof phenomenon. But is there a word (other than meme) for the concept of a mass media reference obliquely triggering a resurgence in interest in a non-current topic? You see this often on reddit, especially in TIL posts.
[1] http://15462.courses.cs.cmu.edu/fall2019/lecture/opengl/slid...