The phrase "virtual memory" is old and still widely used, and it refers to... virtual memory. The memory addresses programs use are virtual addresses into virtual memory, and the kernel sets up lookup tables (page tables) so the CPU can resolve a virtual memory address into a physical memory address. Saying that "VM == swap" might be common, but it still seems completely wrong, and it makes people make ridiculous claims such as "unlike on the Mac, there is no virtual memory on the iPhone".
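The lookup-table part is the whole point: translation happens on every access, swap or no swap. A toy sketch of the mechanism (the real thing is a hardware page-table walk, not a dict lookup, and the 4 KiB page size is just the common case):

```python
# Toy model of virtual-to-physical translation: a virtual address is
# split into a virtual page number and an offset; only the page number
# is translated through the (per-process) page table.
PAGE_SIZE = 4096  # 2^12, a typical page size

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 42}

def translate(vaddr: int) -> int:
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # In a real system the MMU raises a page fault and the kernel
        # decides what to do (map a page, pull from swap, or kill you).
        raise MemoryError("page fault: virtual page %d not mapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4096 + 10))  # virtual page 1 -> frame 3: 3*4096 + 10
```

Note that nothing here mentions a swap file; backing some of those pages with disk is one optional use of the machinery, not its definition.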
If you search Google right now for “virtual memory” in articles geared toward everyday people, you will see that the common vernacular treats it as the same thing as the swap file. It’s referred to that way in Windows now, and I remember “turning off virtual memory” being a control panel option on the Mac under System 7 in 1991.
If someone said that the “army was decimated” would you think, knowing how language has evolved, that they meant the army was reduced by 10% or that it was left in ruins? Would you go on to explain to the ignorant plebe and start a sentence with “well actually...”?
When the term “virtual memory” came into the vernacular of everyday people, it was understood to mean swapping to disk.
It’s the same old descriptivism vs prescriptivism debate. Most of the time when someone lands on the prescriptivism side it comes off as pedantic.
Would you correct someone who said that they were buying a PC instead of a Mac and say “well actually a Mac is a personal computer, therefore it is also a PC”?
I think that truly turned off virtual memory (Mac OS 7 didn’t have separate address spaces, did it?)
I went down a rabbit hole trying to remember the technical details of VM back in the day:
[old man rant]
I remember having at the time an obscene 10MB of RAM in 1992 in my Mac LC II, to avoid using VM and to run SoftPC acceptably.
Then I had a whopping 24MB of RAM two years later for the PowerMac 6100/60 because it shared RAM with the DX2/66 DOS Card.
Then two years later I bought 32MB of RAM to attach directly to the DOS Card...
Using it to describe distributed systems comes to mind, and would be more sensible than its current, mistaken use.
Or it could only swap pages that contain form fields where the user has entered data as a last resort.
Well, they can be. That is, my iOS devices are set to have only thumbnails (plus most recently taken/viewed) on the device and dynamically load anything I want to look at (upscaling the thumbnail and then faulting in the full picture). The device memory is treated as a cache which can be flushed as needed.
It makes sense for Safari to do the same since most pages are reloadable (I wonder what it does with nocache-marked pages). And in particular, old tabs are unlikely to be looked at again. You can do some experiments: I'd bet forms are not purged either.
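That “device memory as a flushable cache” behaviour can be sketched roughly like this (all names here are hypothetical, and the real mechanism on iOS involves purgeable memory and the Photos backend, not a Python dict):

```python
# Sketch of treating local memory as a cache over re-fetchable data:
# thumbnails stay resident; full-resolution images are purgeable and
# get faulted back in from "the cloud" on demand.
class PhotoCache:
    def __init__(self, fetch_full):
        self.thumbs = {}            # photo_id -> thumbnail (always kept)
        self.fulls = {}             # photo_id -> full image (purgeable)
        self.fetch_full = fetch_full  # hypothetical cloud fetch callback

    def add(self, photo_id, thumb):
        self.thumbs[photo_id] = thumb

    def view(self, photo_id):
        # Show the upscaled thumbnail immediately, then fault in the
        # full image if it isn't resident.
        if photo_id not in self.fulls:
            self.fulls[photo_id] = self.fetch_full(photo_id)
        return self.fulls[photo_id]

    def purge(self):
        # Memory pressure: drop everything that can be re-fetched.
        self.fulls.clear()

cache = PhotoCache(fetch_full=lambda pid: f"full image {pid}")
cache.add("p1", "thumb p1")
cache.view("p1")   # faults in the full image
cache.purge()      # full images dropped, thumbnails survive
cache.view("p1")   # transparently re-faulted
```

The editor case is exactly the opposite: user edits are the one thing in this scheme that is *not* re-fetchable, so they can never live in the purgeable tier.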
Contrast that with people writing editors - the mere idea of tossing the user's edits randomly is simply absurd.
Edit: You could still swap out to "external", modular storage like an SD card, since killing that would be no big deal - just treat it as disposable. But no SD card slot on iPhones :-P
It seems Safari / iOS memory management isn't very clever. Sometimes Safari reloads pages I just left. Super annoying if you started filling out something there.
I don't understand why they don't just unload apps / pages on an LRU principle: whatever you do, do NOT unload an app / page the user was just in.
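The policy being asked for is simple enough to sketch, under the assumption that “just in” means “most recently used” (names are made up):

```python
from collections import OrderedDict

# Evict the least-recently-used tab/app under memory pressure, but
# never the one the user was just in.
class LruUnloader:
    def __init__(self):
        self.live = OrderedDict()  # tab_id -> state; rightmost = most recent

    def touch(self, tab_id, state=None):
        # Called whenever the user visits a tab/app.
        if tab_id in self.live:
            self.live.move_to_end(tab_id)
        else:
            self.live[tab_id] = state

    def unload_one(self):
        # Refuse to evict if only the current tab remains.
        if len(self.live) < 2:
            return None
        tab_id, _ = self.live.popitem(last=False)  # oldest entry
        return tab_id

u = LruUnloader()
for t in ["mail", "news", "form"]:
    u.touch(t)
u.touch("news")        # user revisits "news"
u.unload_one()         # evicts "mail" (oldest); "news" is safe
```

Whether iOS actually tracks recency this way I don't know; the complaint is that whatever heuristic it uses clearly isn't this one.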
I like a lot of things about the iPhone and iOS but this peculiarity ain't one of them. It feels very backwards.
EDIT: I also heard the storage is almost at the laptop SSD class (if not better). So I am not sure swapping memory to it here and there would amortise it that badly. But I don't know for sure so not claiming anything.
Sure this is technically possible, because we already know that it’s possible to suspend a virtual machine, but that doesn’t mean it’s a reasonable use of engineering resources.
It also fails to acknowledge the very first problem: iOS doesn’t use swap because of flash wear (or something?), but what you’re proposing essentially reinvents the same problem, this time in user space with a pile of expensive heap traversal added for fun.
The problem is how do I save page state such that in critically low memory situations I can recover that state?
On a full system you can always page to disk - that’s how they keep the appearance of everything running. The downside is that swapping is bad for Flash storage, and apparently more so for the low power stuff you get in phones.
Now your statement “saving all memory is surely more expensive than saving some” isn’t actually correct.
First you have to define what “some” is: the entire dom, the js heap, network resources, and additional worker threads. Those are the vast bulk of the memory already being used - there’s not much else that isn’t already either torn down before a tab is killed or shared across processes (like the text segment, etc).
So we don’t end up meaningfully reducing how much needs to be stored.
Taking us to the next step: how do we selectively store just this “reduced” memory?
The reality is you end up having to add a bunch of additional metadata to the browser’s data structures. That metadata consumes memory itself, which increases how often you hit the very event you’re trying to avoid. It also has to be maintained, providing a new source of bugs (which I would consider highly likely) and using more CPU time (though I suspect not by much).
Finally, let’s say you have all of the logic to allow you to serialise everything needed to bring the page back up, when do you do it?
Tabs (or rather the process rendering the tab) are killed when the system is under memory pressure. Serialising requires a bunch of memory - it needs the space for the serialised page (let’s be generous and say 50% of the existing memory usage), it also needs space for the serialisation state, which is likely to be a bunch of O(Number of heap objects) maps.
That means the first thing a web process does when given a memory pressure warning is allocate a huge amount of memory, which would presumably fail (it’s a memory pressure situation), and then not be able to continue, and then get killed.
Even if you could serialise straight to disk (bad because aforementioned swapping vs SSD) you still get everything killed because you can’t store the serialisation state itself to disk, it has to be resident. Then what happens is you encounter memory pressure and every browser process simultaneously starts trying to serialise their current page, so all allocate a pile of memory for their serialisation state, and again you’re in the position of having a bunch of memory allocation requests when you by definition have little memory left.
I regularly have dozens of tabs open and only reopening a tab that hasn’t been visited in some time results in a refresh.
It's generally true that memory issues seem to go away if you throw enough extra RAM at it.
Similarly, I recently upgraded from a GT120 (0.5GB) to an RX580 (8GB), and miraculously, my Mac no longer crashes when I switch spaces quickly.
I guess I assumed they did something with the bfcache, serialising page state to disk in newer iOS if needed, but perhaps my newer device just has more RAM than before?
So you think thrashing is nice? I find that difficult to believe.