Utilizing video memory for system swap (archlinux.org)
38 points by DyslexicAtheist 8 days ago | 23 comments





Yellow Dog Linux 6.1 (and 6.2) for PlayStation 3 (256MB RAM, 256MB of video RAM) had a similar feature [1]: writes were fast (GB/s) but reads were slow (16 MB/s) because of the graphics bus configuration. It was better than swapping to a mechanical hard disk, but not better than swapping to an SSD (the PS3 has a SATA I -150MB/s- hard disk interface).

[1] http://us.fixstars.com/technologies/linux/support/solutions/...


Wow, GB/s vs MB/s is the worst read-vs-write disparity I've heard of yet, although I haven't studied the subject extensively (mostly because I don't know where to start).

I wonder what the average readback rate on modern GPUs is, and whether it has an impact on, e.g., on-GPU graphics compositing.


There are more cases, e.g. the AGP bus (the evolution of the original PCI bus for graphics, pre-PCI-Express). I don't know whether the limitation of the PS3 graphics chip (Nvidia RSX) comes from being derived from an AGP-based design or from some other reason.

>mkswap /dev/mtdblock0

That's how you destroy data. Don't assume mtdblock0 will always be the same thing.


What's the best way to get the correct device name? Parse /proc/mtd ?
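One option (a minimal sketch, assuming the phram/slram region was registered under the name "swap" via its module parameter) is to look up the MTD index by name instead of hard-coding it:

    # find the MTD index whose registered name is "swap"
    idx=$(sed -n 's/^mtd\([0-9]*\): .* "swap"$/\1/p' /proc/mtd)
    if [ -n "$idx" ]; then
        mkswap "/dev/mtdblock$idx" && swapon "/dev/mtdblock$idx"
    fi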

This makes me wonder why PCs still haven't trended towards the same Heterogeneous System Architecture the PS4 has. Instead of having 4GB DDR3 RAM and 4GB GDDR5 video RAM, it just has 8GB GDDR5 and everything runs on that. I mean, how many PCs these days have 8GB RAM + 2-6GB video RAM, with some of the system RAM sitting unused during gaming and most of the video RAM sitting unused during general use? Seems like a decent hardware optimization target.

GDDR generally has higher latency than DDR in return for higher bandwidth. The reason is that the minimum read/write size is much higher, so the overhead of addressing is lowered.

For graphics applications, this is a good tradeoff as they generally use large objects (e.g. textures) but for existing compute applications, the latency penalty is often too high.

The PS4 can presumably get away with this as applications will be specifically written for it, unlike PCs which have to support legacy software.


And, besides legacy software, to get benchmark numbers as high as possible.

For integrated graphics, that can happen.

Practically all discrete GPUs are connected through PCI-e, and serving memory requests through that would slow things to a crawl. PCI-e 4.0 with 16 lanes has 32 GB/s of bandwidth. A Geforce 1050 has more memory bandwidth than that (84 GB/s), and that GPU is not high end.
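For reference, the 32 GB/s figure roughly checks out (a back-of-the-envelope sketch, assuming 16 GT/s per lane and 128b/130b encoding for PCI-e 4.0):

    # 16 GT/s per lane, 128b/130b line code, 8 bits per byte, 16 lanes
    awk 'BEGIN { printf "%.1f GB/s per direction\n", 16e9 * 128/130 / 8 * 16 / 1e9 }'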


I think slram is deprecated in favor of phram, though I'm not really sure what the differences are besides giving a length rather than an end offset in the arguments to phram. The only information I can really find is this[0] post from 2003 which introduced phram. There wasn't any discussion back then, and there doesn't seem to have been any since.

[0] http://lists.infradead.org/pipermail/linux-mtd/2003-July/008...
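For comparison, the module parameters differ roughly like this (a sketch with made-up addresses; the real start address and size of the video RAM aperture come from lspci output):

    # slram takes start and end (or +length); phram takes start and length
    modprobe slram map=VRAM,0xd0000000,+0x8000000    # 128 MiB starting at 0xd0000000
    # roughly equivalent with phram:
    modprobe phram phram=VRAM,0xd0000000,0x8000000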


This might be kind of a naive question, but wouldn't it be more cost-effective to use an NVMe SSD? (Unless you happen to have a graphics card with plenty of RAM sitting in your machine that you're not going to utilize otherwise.)

Video RAM is not cheap. But often it sits unused. It's also one to two orders of magnitude faster than an SSD.
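If you want to sanity-check that on a given setup, a quick read-throughput test of the mapped device works (a sketch; /dev/mtdblock0 is just an example path, and dropping caches first keeps the page cache from skewing the result):

    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/mtdblock0 of=/dev/null bs=1M count=256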

But but but... if I do something so timing-sensitive that the improved swap-performance is going to matter, maybe I should just stuff my machine with enough RAM so it does not swap in the first place?

Don't get me wrong, it's cool that it's possible to use Video RAM as a swap device. But I don't see the practical value.


Maybe you just happen to want to do something which requires a lot of memory, and don't want to spend the money to upgrade RAM right now, or you're on a system where upgrading RAM is impossible, or you want to complete the task now without delaying it until you have installed more RAM.

My laptop has 8GB of RAM, enough for most use, but it proved way too small when trying to compile Chromium (where the linker would get OOM killed even with 8GB RAM + 8GB swap). If I was instead doing that on a system with <24GB of RAM but with a beefy GPU, using some of the graphics memory as swap instead of a relatively slow SATA 3 SSD would've been attractive.


Think about low-budget computing. Maybe all you have is a dumpster-computer, and a new set of DIMMs costs more than your monthly income.

There are people who use a dedicated GPU they pass through to a Windows VM to play games - so that's a way to still make use of the card when the VM is not running.

It's kind of an old hack (the page dates from 2007); it's not really the sort of thing you'd build a PC for but if you happen to be re-purposing one it might offer an advantage.


I have a rule of thumb on swap cost-effectiveness: use compressed RAM first; if that's not enough, buy more RAM, because swapping is too slow to be generally usable. There might be a special case where being that slow is OK, but then a cheap SATA SSD in combination with compressed RAM could be cost-effective.
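For the compressed-RAM route, a minimal zram setup looks something like this (a sketch; the 4G size and swap priority are arbitrary choices, and zstd only works if the kernel supports it):

    # create a RAM-backed compressed block device and swap onto it
    modprobe zram
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0    # higher priority than any disk-backed swap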

Might as well use regular RAM, which is better than an SSD, and you're back to square one.

IBM's mainframes were in a weird place in the 1990s, because they had a 31-bit virtual address space, i.e. 2 GiB, but the physical machines could be equipped with a lot more RAM. One way to utilize this surplus memory was to use it as a RAM-disk and turn it into a swap device.

It was a stopgap measure before their CPUs went 64-bit, but at the time it was kind of a clever solution.


Has anybody tried this on a modern system? Does it still work?

As for how safe this is: I guess one could store the data with some extra redundancy in case bits flip (e.g. assuming no ECC on the card).


