Hacker News

Wow, my browser is so much faster now



They should port SoftRAM to the web too, I could do with some additional memory.


SoftRAM wasn't as badly thought out as you might think. The idea was to use compression as a warm layer between "in-memory" and "on disk". That idea is actually implemented in Windows 10/11 as "memory compression", and exists in Linux (zswap) and macOS as well. The only problem was that SoftRAM was half-baked, and their compression algorithm was memcpy (i.e. nothing). Raymond Chen has a much longer write-up:

https://devblogs.microsoft.com/oldnewthing/20211111-00/?p=10...
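A toy sketch of the warm-layer idea (Python, purely illustrative; the page size and function names are mine): a page gets compressed into an in-RAM pool instead of going straight to disk, which only saves anything if the compressor actually compresses, unlike SoftRAM's memcpy.

```python
import zlib

PAGE_SIZE = 4096

def compress_page(page: bytes) -> bytes:
    """Warm tier: compress a cold page and keep it in RAM
    instead of writing it out to swap."""
    return zlib.compress(page)

def decompress_page(blob: bytes) -> bytes:
    """Fault handling: decompress on access, far faster than a disk read."""
    return zlib.decompress(blob)

# A low-entropy page, like much real heap data, compresses well,
# so several cold pages can share the RAM one used to occupy.
page = b"A" * PAGE_SIZE
blob = compress_page(page)
assert decompress_page(blob) == page
assert len(blob) < PAGE_SIZE
```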


fascinating read, and Raymond seems like an immensely productive person.


Not sure if equivalents are still recommended for Windows users, but I've seen zram recommended to Linux users quite a few times.

Someone also wrote a script to start killing processes before the system can hang when it runs out of RAM.

https://askubuntu.com/a/1018733
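The core of such an early-OOM watchdog is small. A rough sketch (Python; not the linked script, and the threshold and helper names are mine): poll MemAvailable from /proc/meminfo and, when it drops below a floor, SIGTERM the process with the largest resident set before the kernel starts thrashing.

```python
import os
import signal

def mem_available_kib(meminfo: str) -> int:
    """Parse the MemAvailable value (in KiB) out of /proc/meminfo text."""
    for line in meminfo.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    raise ValueError("MemAvailable not found")

def check_and_kill(threshold_kib: int = 100_000) -> None:
    """If available memory is below the threshold, SIGTERM the
    process with the largest RSS (simplified: scans /proc/*/status)."""
    with open("/proc/meminfo") as f:
        if mem_available_kib(f.read()) >= threshold_kib:
            return
    victim, biggest = None, 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        rss_kib = int(line.split()[1])
                        if rss_kib > biggest:
                            victim, biggest = int(pid), rss_kib
        except (OSError, ValueError):
            continue  # process exited mid-scan; skip it
    if victim is not None:
        os.kill(victim, signal.SIGTERM)
```

Run in a loop with a short sleep, this fires while the system is still responsive, rather than after it has already spent minutes thrashing.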


> Someone also wrote a script to start killing processes before the system can hang when it runs out of RAM.

Isn't this exactly what the OOMKiller does?


OOMKiller has a bunch of issues. Its heuristics don't apply well across the wide range of workloads Linux supports (mobile/Android? web server? database server? build server? desktop client? gaming machine?), each of which would require its own tuning. (More background at https://lwn.net/Kernel/Index/#Memory_management-Out-of-memor...)

That's why some orgs implemented their own solutions to avoid OOMKiller having to enter the picture, like Facebook's user-space oomd [1] or Android's LMKD [2]

[1] https://github.com/facebookincubator/oomd

[2] https://source.android.com/devices/tech/perf/lmkd


In my experience, by the time the OOMKiller actually comes into play, the system has already stalled for minutes if not more. This especially applies to headless servers; good luck trying to SSH into a machine that keeps trying to launch a service configured to consume too much RAM.



I had a bunch of problems with the OOM Killer on a server of mine. It seems to have been due to not having any swap partition. Linux seems to assume you do, and the OOM strategy behaves really poorly when you don't. It doesn't need to be large, either: the machine has 128 GB of RAM and the swap partition is 1 GB.


1 GB is a giant swap partition; Linux regularly ran with swap partitions of tens of MB in the '90s. The only reason to scale swap with RAM is if you want to fit coredumps (or suspend-to-disk) in swap.


Tens of MB was a lot in the '90s though. A mid-'90s consumer hard drive usually clocked in at a few hundred megabytes, and RAM could be in the dozens of MB.


Right, but swap size should scale with the disk's random access times, not with disk or RAM size.


Why is that?


zram shouldn't really be recommended anymore, IMHO. zswap is the more modern alternative — it does not require a separate virtual device, can resize its pool dynamically, is enabled by default on some distributions, and (IIRC) supports more compression algorithms (trade CPU time for a higher compression ratio or vice versa).

https://wiki.archlinux.org/title/Zswap
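zswap is switched on via the kernel command line (zswap.enabled=1) or at runtime through /sys/module/zswap/parameters. A tiny helper to check the boot-time setting (Python; the function name is mine, the parameter syntax is the kernel's):

```python
def zswap_enabled(cmdline: str) -> bool:
    """Check whether zswap was enabled on the kernel command line,
    e.g. the contents of /proc/cmdline. It can also be toggled at
    runtime via /sys/module/zswap/parameters/enabled."""
    for token in cmdline.split():
        if token.startswith("zswap.enabled="):
            return token.split("=", 1)[1] in ("1", "Y", "y")
    return False
```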


Applying memory compression to browsers is a promising idea: the browser heap is not especially high-entropy, and it's largely a white box from the browser runtime's point of view.




