I have no clue what Hilbert-space memory organization could possibly be - arbitrarily deep hardware support for indirect addressing? - but it sounds simultaneously very cool and like an absolutely terrible idea.
the framebuffer had a recursive rasterizer which followed a Hilbert curve through memory, the thinking being that you bottom out the recursion instead of performing triangle clipping, which was really expensive for the hardware at the time.
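I don't know the actual hardware scheme, but the standard mapping from a pixel's (x, y) to its distance along the Hilbert curve - the order such a rasterizer would walk memory in - can be sketched in a few lines of Python (the classic iterative rotate-and-fold formulation):

```python
def xy_to_hilbert(n, x, y):
    """Map (x, y) in an n x n grid (n a power of two) to its
    distance along the Hilbert curve covering that grid."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the sub-curve lines up
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Walking cells in Hilbert order always moves to an adjacent cell,
# which is exactly the locality property the rasterizer exploits.
order = sorted(((x, y) for x in range(8) for y in range(8)),
               key=lambda p: xy_to_hilbert(8, *p))
```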
The problem was that when a polygon comes close to W=0, its unclipped coordinates after the perspective divide get humongous and you run out of interpolator precision. So, imagine you draw one polygon for the sky, another for the ground, and the damn things Z-fight each other!
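A toy model of why that happens (illustrative bit widths, not SGI's actual interpolator format): with a fixed-point interpolator, whatever integer bits the coordinate range eats are no longer available as fractional precision, so blowing coordinates up near W=0 destroys the depth separation between two surfaces.

```python
def quantize(v, int_bits, word_bits=32):
    """Toy fixed-point interpolator: int_bits cover the coordinate
    range, the remaining word bits hold the fraction."""
    step = 2.0 ** -(word_bits - int_bits)
    return round(v / step) * step

SEP = 1e-5   # absolute depth separation between "sky" and "ground"

# Clipped coordinates stay small: 2 integer bits leave 30 fractional
# bits, and the two depths remain distinct.
print(quantize(0.5, 2) != quantize(0.5 + SEP, 2))      # True

# Near W = 0 the unclipped value blows up to ~3.3e6, needing ~22
# integer bits; only 10 fractional bits remain and the same
# separation vanishes -> Z-fighting.
big = 1.0 / 3e-7
print(quantize(big, 22) == quantize(big + SEP, 22))    # True
```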
SGI even came out with an extension to "hint" to the driver whether you wanted fast or accurate clipping on the Octane. When set to fast, it was fast and wrong. When set to accurate, we did it on the CPU [1].
Nowadays all GPUs implement something similar (not necessarily a Hilbert curve, but Morton order or the like) to achieve a high rate of cache hits when spatially close pixels are accessed.
3D graphics would have terrible performance without that technique.
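Morton (Z-order) indexing is just bit interleaving, which is why hardware likes it. A minimal sketch (a toy, not any specific GPU's tiling format):

```python
def morton2(x, y, bits=16):
    """Interleave the bits of x and y into a Z-order (Morton) index."""
    d = 0
    for i in range(bits):
        d |= ((x >> i) & 1) << (2 * i)       # x bits -> even positions
        d |= ((y >> i) & 1) << (2 * i + 1)   # y bits -> odd positions
    return d

# Any aligned 2x2 pixel quad maps to 4 consecutive addresses, so the
# quad lands on one cache line instead of two far-apart scanlines.
quad = [morton2(x, y) for y in (2, 3) for x in (2, 3)]
print(quad)   # [12, 13, 14, 15]
```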