You can draw a direct analogy to real-time rendering today. The "limited material" we have is the GPU. Because the GPU is great at crunching matrices and other embarrassingly parallel problems, we have converged on rasterization as the predominant method of rendering in real time. Rasterization won out because it rests on two operations: linear coordinate transformations (model->world->view->screen space) and triangle fill algorithms. The GPU does both exceptionally well.
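To make those two halves concrete, here's a minimal software-rasterizer sketch in C: a single 4x4 matrix multiply standing in for the model->world->view->screen chain, and an edge-function test doing the triangle fill. Everything here (`Vec4`, `Mat4`, `edge`, the toy matrix) is illustrative scaffolding I've made up for the sketch, not any real graphics API:

```c
/* Minimal software-rasterizer sketch: one matrix transform, one
   edge-function triangle fill, ASCII framebuffer output. All names
   are illustrative -- this is not any real graphics API. */
#include <stdio.h>

#define W 64
#define H 32

typedef struct { float v[4]; } Vec4;    /* homogeneous vertex */
typedef struct { float m[4][4]; } Mat4;

/* Half one: linear coordinate transformation. Each space change
   (model->world->view->screen) is just a 4x4 multiply like this --
   exactly the matrix crunching GPUs are built for. */
static Vec4 transform(Mat4 M, Vec4 p) {
    Vec4 r = {{0, 0, 0, 0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r.v[i] += M.m[i][j] * p.v[j];
    return r;
}

/* Half two: triangle fill. The edge function's sign says which side
   of the directed edge a->b the point (px,py) falls on; a point
   inside the triangle is on the same side of all three edges. */
static float edge(const Vec4 *a, const Vec4 *b, float px, float py) {
    return (b->v[0] - a->v[0]) * (py - a->v[1])
         - (b->v[1] - a->v[1]) * (px - a->v[0]);
}

int main(void) {
    /* A toy model->screen matrix: scale and translate [-1,1]^2 into
       the framebuffer, flipping y. Real pipelines chain several. */
    Mat4 M = {{{ W / 2.f,  0,         0, W / 2.f },
               { 0,        -(H/2.f),  0, H / 2.f },
               { 0,        0,         1, 0       },
               { 0,        0,         0, 1       }}};

    /* A triangle in model space. */
    Vec4 tri[3] = {{{ -0.8f, -0.6f, 0, 1 }},
                   {{  0.8f, -0.4f, 0, 1 }},
                   {{  0.0f,  0.9f, 0, 1 }}};
    for (int i = 0; i < 3; i++) tri[i] = transform(M, tri[i]);

    /* Fill: test every pixel against all three edges. A GPU runs
       this test across thousands of pixels in parallel. */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float px = x + 0.5f, py = y + 0.5f;
            float e0 = edge(&tri[0], &tri[1], px, py);
            float e1 = edge(&tri[1], &tri[2], px, py);
            float e2 = edge(&tri[2], &tri[0], px, py);
            int inside = (e0 >= 0 && e1 >= 0 && e2 >= 0)
                      || (e0 <= 0 && e1 <= 0 && e2 <= 0);
            putchar(inside ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```

Running it prints an ASCII triangle, but the point is the shape of the work: both halves reduce to independent multiply-adds per vertex and per pixel, which is precisely the embarrassingly parallel workload the GPU was built to chew through.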
I think it's important and instructive to keep all of this in mind, because tomorrow we may be stuck with something other than the GPU, and the paradigm could shift completely. There's nothing fundamental about rasterization that gives it better physical footing than other methods -- it is simply the best known-good solution we have on current hardware.