Why? VRAM has to be powered as long as you're scanning out of it. Any competent design is going to support powering down most of the GPU while keeping RAM alive, since otherwise an idle desktop would draw far more power than necessary.
GPUs will drop memory clocks dynamically, with at least one supported clock speed intended to be just fast enough to scan out the framebuffer. I haven't seen any indication that anybody is dynamically offlining VRAM capacity, though.
You can validate this yourself: if you have access to an A100 or H100, allocate a 30 GB tensor and do nothing with it; you'll see nvidia-smi's reported wattage go up by only a watt or so.
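A rough sketch of that experiment, assuming a CUDA-capable box with PyTorch and nvidia-smi installed (the function name and the 5-second settle time are my own choices, not anything standard):

```python
import subprocess
import time

try:
    import torch  # assumes PyTorch built with CUDA support
except ImportError:
    torch = None

def gpu_power_draw_watts(index=0):
    """Read instantaneous board power for one GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", f"--id={index}",
         "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

# ~30 GB of float32: 30 * 2**30 bytes at 4 bytes per element
n_elements = 30 * 2**30 // 4

if torch is not None and torch.cuda.is_available():
    before = gpu_power_draw_watts()
    buf = torch.empty(n_elements, dtype=torch.float32, device="cuda")
    time.sleep(5)  # give the power reading a moment to settle
    after = gpu_power_draw_watts()
    print(f"idle: {before:.1f} W, with ~30 GB allocated: {after:.1f} W")
```

The delta between the two readings should be tiny compared to the card's load power, which is the point: the DRAM is powered whether or not anything is allocated in it.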