It's a cool idea; as other posters have mentioned, there have been other projects mapping VRAM for swap, etc.
I'd personally be wary of putting anything too important into VRAM. About five years ago I did a bunch of work testing consumer GPU memory for reliability [1, 2]. Because until that time GPUs were primarily used for error-tolerant applications (graphics) storing only short-lived data (textures) in memory, there wasn't a whole lot of pressure to make the memory as reliable as that found on the main system board. We found that there was indeed a persistent, low level of memory errors that could be triggered depending on access pattern. I haven't followed up for recent generations, but the fact that the "professional" GPGPU boards both clock their memory slower and include hardware ECC is a possible cause for concern with leaving anything too important on the GPU for a long time.
There's code [3,4], too, but I haven't actively worked on it in a few years, so no guarantees on how well it runs nowadays...
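The real tools run pattern kernels on the GPU itself, but the core write/readback idea, stripped down to the host side, looks something like this (a minimal sketch assuming an existing OpenCL queue and buffer - not the actual memtestCL code):

    #include <CL/cl.h>
    #include <stdlib.h>

    /* Fill a VRAM buffer with a known bit pattern, read it back, and
       count mismatched words. Error checking omitted for brevity. */
    size_t count_vram_errors(cl_command_queue queue, cl_mem buf, size_t words)
    {
        unsigned int *out = malloc(words * sizeof *out);
        unsigned int *in  = malloc(words * sizeof *in);
        size_t i, errors = 0;

        for (i = 0; i < words; i++)
            out[i] = 0xAAAAAAAAu;  /* alternating-bit test pattern */

        clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, words * sizeof *out,
                             out, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, words * sizeof *in,
                            in, 0, NULL, NULL);

        for (i = 0; i < words; i++)
            if (in[i] != out[i])
                errors++;

        free(out);
        free(in);
        return errors;
    }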
While memory errors in textures would usually cause only visual artifacts (unless used for data), memory errors in executable code, shader programs, vertex data, and other types of data could easily cause more fatal problems.
GPUfs: Integrating a File System with GPUs. Mark Silberstein (UT Austin), Bryan Ford (Yale University), Idit Keidar (Technion), Emmett Witchel (UT Austin)
The concept is not new - I remember utilities back in the early 90s that let DOS use the VRAM on a VGA card (256KB available, but slightly less than 4KB actually needed in 80x25 text mode - and 256KB was a lot of memory in those days). There were some demos that used this to their advantage too.
Could anybody please explain to me why VRAM needs special treatment compared to regular system RAM in this use case? Assuming we can perform an allocation in VRAM (probably via the OpenCL API), why can't we use the tmpfs/ramfs code? Do I understand correctly that PCI maps VRAM to a certain memory region, making it accessible via regular CPU instructions? Is it because CPU caching is different, or because VRAM is uncacheable? Or is it something else?
VRAM is not mapped to the same memory space as your normal RAM and it is not directly accessible via regular CPU instructions. It's wholly owned by the GPU, and that's who the CPU has to talk to to use it.
This is in fact a (if not the) major limiting factor to expanded use of GPUs for general purpose calculations: you always have to copy input and results between video RAM and normal RAM.
Yes, it's owned by the GPU, but you can map it to the regular space. In fact, this is exactly how textures and other data gets loaded to the video card. See http://en.wikipedia.org/wiki/Memory-mapped_I/O
True -- you can use (in OpenCL) clEnqueueMapBuffer to get something that looks like memory-mapped IO, but the consistency guarantees are different from regular host-based MMIO. Specifically, if you map a GPU buffer for writes, there's no guarantee on what you'll get when you read that buffer until you unmap the region. (You can think of it as buffering up writes in host memory until you unmap the region, at which point it's DMAed over to the GPU.)
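In code, the only pattern the spec guarantees is map, write, unmap, and only then rely on the contents GPU-side. A minimal sketch (assuming an existing queue and buffer; error checking omitted):

    #include <CL/cl.h>
    #include <string.h>

    void upload_via_map(cl_command_queue queue, cl_mem buf,
                        const void *src, size_t size)
    {
        cl_int err;
        void *ptr = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                       0, size, 0, NULL, NULL, &err);
        memcpy(ptr, src, size);  /* may land in host-side staging memory */
        clEnqueueUnmapMemObject(queue, buf, ptr, 0, NULL, NULL);
        clFinish(queue);  /* only now is the data guaranteed visible GPU-side */
    }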
In other words, OpenCL supports this very limited buffer interface for compatibility reasons, i.e. this kind of MMIO is the lowest common denominator that has to be implemented by any GPU claiming OpenCL compatibility. That does not preclude most discrete desktop GPUs from mapping their whole internal VRAM onto the host's memory address space through the PCI bus. From what I understood after skimming several technical documents, this seems to be the common mechanism for a host to access VRAM on modern ATI and NVidia GPUs. It is, as far as I can tell, also the main reason behind the infamous 'memory hole' in 32-bit Windows OSes (the inability to use more than 2.5-3GB of RAM).
So, I guess, the correct answer to my initial question of why it's not possible to use tmpfs with VRAM is that it would require special memory allocation in VRAM. Meaning, a patch to the tmpfs code that properly allocates memory from a VRAM buffer would suffice, if we're willing to limit compatibility to 64-bit x86 with AMD/NVidia GPUs.
Your graphics drivers aren't written to be able to share resources with tmpfs. Going through OpenCL ensures that the graphics drivers know about and will respect any VRAM allocations.
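To make that concrete, a VRAM-backed store that plays nice with the driver could sit entirely on top of the OpenCL buffer API - something like this sketch (names are illustrative, not from any real driver; error checking omitted):

    #include <CL/cl.h>

    /* One OpenCL buffer treated as a flat block store, the way a
       VRAM-backed tmpfs/swap backend might use it. Because the buffer
       came from clCreateBuffer, the driver knows about and respects
       the allocation. */
    typedef struct {
        cl_command_queue queue;
        cl_mem           vram;  /* from clCreateBuffer(ctx, CL_MEM_READ_WRITE, size, NULL, &err) */
    } vram_store;

    /* Copy one "block" of data into VRAM at byte offset off. */
    cl_int vram_write(vram_store *s, size_t off, const void *src, size_t len)
    {
        return clEnqueueWriteBuffer(s->queue, s->vram, CL_TRUE,
                                    off, len, src, 0, NULL, NULL);
    }

    /* Copy it back out. */
    cl_int vram_read(vram_store *s, size_t off, void *dst, size_t len)
    {
        return clEnqueueReadBuffer(s->queue, s->vram, CL_TRUE,
                                   off, len, dst, 0, NULL, NULL);
    }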
Ideally, then, one should be able to use spare VRAM as second-level RAM - an area to page things out to before resorting to disk.
I've played a bit with the different memory compression tools on Linux - zram, zswap, and zcache - and they all behave in interesting ways on workloads whose active set is well over 2x available RAM. I tried compiling the Glasgow Haskell Compiler on small and extra-small instances of cloud services; I wager this would work for the GPU instances on EC2 to increase their capacity a little.
The transcendent memory model in Linux is interesting for exploring these ideas, and it's one of the things I really like about the kernel. However, the last time I played with it (kernel version ~3.10) I had some lockup issues where the kernel would take up almost all of the CPU cycles with zswap. That was kind of a nasty issue.
I've found that putting swap files on things that don't behave like filesystems can cause interesting behavior. In this case, all writes to the file would go through the VFS. I imagine there could be some curious issues if that write path does any allocation or a significant amount of mutation.
I would put more trust in something that got rid of the VFS layer and simply allowed VRAM to be used directly as a second level below RAM via the transcendent memory model.
This is interesting. That said, with 4GB to 8GB of RAM so common nowadays, a ramdisk, e.g. a tmpfs partition, is already quite useful. I've set Firefox and Google Chrome to use a 1GB tmpfs partition for their cache, and the performance improvement is clearly visible.
Since just about the first version of Netscape (that I can remember), the cache section of the settings dialog has had two settings: one for how much disk space to use and one for how much memory to use.
(I had to double check just now since you asked. Yep, still there.)
Is that what's called shared GPU memory? Can it be adjusted in WinNT/Linux? Some recent console game ports need/want 3+ GB of VRAM. Upgrading VRAM is impossible; upgrading RAM is easy and cheap(er).
[1] http://cs.stanford.edu/people/ihaque/papers/gpuser.pdf
[2] http://cs.stanford.edu/people/ihaque/talks/gpuser_lacss_oct_...
[3] https://github.com/ihaque/memtestG80
[4] https://github.com/ihaque/memtestCL