When rendering, it is common to texture 2D surfaces by sampling a 3D noise function, but the resulting 2D texture will in general not be band-limited, even if the 3D function is perfectly band-limited. This means that
the loss-of-detail vs. aliasing tradeoff cannot be solved simply by constructing a band-limited 3D function. To our knowledge this problem has never even been identified, much less addressed.
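The lower-dimensional analogue of this is easy to check numerically: by the projection-slice theorem, taking a spatial slice of a signal projects its spectrum, so a slice of a band-limited field picks up energy at frequencies below the band. Here is a small sketch in 2D→1D rather than 3D→2D (the grid size, band limits, and seed are my own illustrative choices):

```python
import numpy as np

# Build a 2D noise field that is strictly band-limited (energy only in an
# annulus kmin <= |k| <= kmax), then take a 1D slice through it and inspect
# the slice's spectrum.
N = 256
kmin, kmax = 51, 64

rng = np.random.default_rng(0)
white = rng.standard_normal((N, N))

# Radial band-pass mask in the frequency domain.
k = np.fft.fftfreq(N) * N                         # integer frequency indices
kr = np.hypot(*np.meshgrid(k, k, indexing="ij"))
mask = (kr >= kmin) & (kr <= kmax)

# Band-limited 2D noise (mask is radially symmetric, so the result is real).
field = np.fft.ifft2(np.fft.fft2(white) * mask).real

# Spectrum of a single row: slicing in space projects the 2D spectrum onto
# one axis, so the slice has substantial energy *below* kmin.
slice_spec = np.abs(np.fft.fft(field[0])) ** 2
low = slice_spec[(np.abs(k) > 0) & (np.abs(k) < kmin)].sum()
total = slice_spec[np.abs(k) > 0].sum()
print(f"fraction of slice energy below the band: {low / total:.2f}")
```

So even though the 2D field has no energy below kmin by construction, its 1D slice does: restricting a band-limited function to a lower-dimensional subset does not preserve the band limit, which is exactly the problem the post describes for 3D noise sampled on 2D surfaces.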
I think this is responsible for the questionable look of the screenshots of 3D and 4D simplex noise at the end of this PDF (sorry, couldn't find an HTML version):
Another practical problem:
A Perlin band near the Nyquist limit contains both frequencies that are low enough to be representable (i.e., they contain detail that should be in the image) and frequencies that are high enough to be unrepresentable (i.e., they can cause aliasing). Excluding the band
causes loss-of-detail artifacts, but including it causes aliasing artifacts. Balancing this tradeoff between loss of detail and aliasing has been a constant source of frustration for shader writers at Pixar and elsewhere. Because aliasing is usually considered more objectionable than
loss of detail in feature film production, bands are attenuated aggressively. An unfortunate consequence of this is that as you zoom into a scene the texture detail becomes visible later than the geometry detail, so the texture doesn’t appear to be tied to its geometry.
Instead, the texture appears to fade in unnaturally as if there were a haze that obscured only some aspects of the surface appearance and only when it was farther away.
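In practice that aggressive attenuation is the familiar octave-fading trick for fractal noise: each octave is faded out as its frequency approaches the Nyquist rate implied by the current filter width. A minimal sketch of the idea (the toy noise function and the linear fade schedule are my own illustrative choices, not Pixar's):

```python
import math

def value_noise_1d(x: float, seed: int = 0) -> float:
    """Toy 1D value noise: hash integer lattice points, smoothstep-interpolate."""
    def hash01(n: int) -> float:
        n = (n * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
        return (n & 0xFFFFFF) / 0xFFFFFF          # value in [0, 1]
    i, f = math.floor(x), x - math.floor(x)
    t = f * f * (3 - 2 * f)                       # smoothstep
    return (1 - t) * hash01(i) + t * hash01(i + 1)

def fade_weight(freq: float, filter_width: float, rolloff: float = 0.5) -> float:
    """Weight 1 well below the Nyquist rate 1/(2w), fading linearly to 0
    as the band frequency reaches it."""
    nyquist = 1.0 / (2.0 * filter_width)
    start = nyquist * (1.0 - rolloff)             # begin fading here
    if freq <= start:
        return 1.0
    if freq >= nyquist:
        return 0.0
    return (nyquist - freq) / (nyquist - start)

def fbm(x: float, filter_width: float, octaves: int = 8) -> float:
    """Sum of noise octaves, each attenuated by the fade schedule."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        w = fade_weight(freq, filter_width)
        if w == 0.0:
            break                                  # remaining bands would alias
        total += amp * w * value_noise_1d(x * freq)
        amp *= 0.5
        freq *= 2.0
    return total

# Up close (small filter width) many bands survive; far away, few do,
# which is why detail fades in later than the geometry as you zoom.
near = sum(fade_weight(2.0 ** o, 0.01) > 0 for o in range(8))
far = sum(fade_weight(2.0 ** o, 0.2) > 0 for o in range(8))
print(near, far)   # e.g. 6 surviving octaves near vs. 2 far
```

Because each band starts fading well before it actually aliases, frequencies that are still representable get suppressed too, and that is the loss-of-detail half of the tradeoff described above.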
Enter wavelet noise. It's heavier on the theoretical side, but the paper that presents it claims it's efficient and solves both of the problems above.