It definitely should, but keeping your working set smaller can also make things faster (as seen with Chrome here). This depends very much on the workload and what's being done to the data in memory. My example was just that a raster image editing program probably doesn't require a huge memory footprint just to be able to edit images well, since a lot of the memory use typically isn't the image you're looking at but history and undo state, which is neither latency-critical nor frequently accessed.
Chrome's 32-bit address space / 4GB limit is different from having a 64-bit machine with a 48-bit address space and 4GB of RAM. In the latter case you can keep allocating past 4GB; it just gets slower as the pager starts swapping pages to and from disk. But in V8 with pointer compression, you just hit a brick wall.
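To make that concrete, here's a throwaway C sketch (assuming Linux and a 32-bit build, e.g. `gcc -m32`): it keeps asking malloc for 256MB chunks and stops when it gets NULL, which in a 32-bit process happens somewhere short of 4GB no matter how much RAM and swap the box has. Build it 64-bit and the loop keeps going; since it never touches the pages it mostly just reserves address space, and it's only once pages are actually committed that the machine starts paying in swap traffic.

```c
#include <stdio.h>
#include <stdlib.h>

/* Allocate 256MB chunks until malloc returns NULL. In a 32-bit process
 * this hits the address-space wall well before physical RAM matters;
 * in a 64-bit process it just keeps succeeding. */
int main(void) {
    const size_t chunk = 256u * 1024 * 1024;
    size_t total_mb = 0;
    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {
            printf("out of address space after %zu MB\n", total_mb);
            return 0;
        }
        total_mb += 256;
        printf("reserved %zu MB\n", total_mb);
    }
}
```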
To swap memory to disk, you still need address space to map it to. Say the editor has allocated 3.4GB and then creates another 600MB layer, which lands at roughly 0xD0000000. Now the address space is full: when it asks for yet another layer, the allocator returns NULL. It can't hand you a pointer to a 600MB region, because there's no address space left to put it in. Paging out the layer at 0x20000000 wouldn't help, because that doesn't magically free up the addresses 0x20000000-0x40000000. They would just refer to pages that currently live on disk, and still count as 'occupied' address space. The new allocation still needs an address range, and there is no room to put one.
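A rough userspace analogue on Linux, if you want to see this: `madvise(MADV_DONTNEED)` is about as close as you can get to "page this layer out" by hand. It lets the kernel drop the physical pages behind a mapping, but the virtual range stays reserved until you `munmap` it, so it does nothing to make room for a new allocation in a full address space. (Sketch only; error handling trimmed.)

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 600u * 1024 * 1024;   /* one ~600MB "layer" */
    char *layer = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (layer == MAP_FAILED) return 1;

    layer[0] = 42;                      /* commit at least one page */

    /* "Page it out": the kernel is now free to discard the physical pages. */
    madvise(layer, len, MADV_DONTNEED);

    /* But the addresses [layer, layer+len) are still occupied. A new
     * allocation has to go somewhere else, and if the address space is
     * already full it fails regardless of how much RAM is free. */
    printf("layer still mapped at %p\n", (void *)layer);

    munmap(layer, len);                 /* only this releases the address range */
    return 0;
}
```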
No slow degradation -- it won't page fault at all unless you otherwise fill the machine's RAM. So the image editor just falls over, presumably with an uncatchable OOM error, with no perceptible page-fault slowdown beforehand to warn you. It goes full speed into the brick wall. For your account to be accurate, V8 would have had to implement its own virtual address space, which it has not. Virtual addressing basically needs a hardware TLB to be fast, and V8's "TLB" here is just `mov eax, [whatever]; add rax, r13`. Anything more elaborate than that would have completely defeated the speed gains from locality.
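For reference, decompression under a pointer-compression scheme is just base-plus-offset arithmetic, roughly like the sketch below (made-up names, not V8's actual code). The point is that there's no per-access table that could remap an offset to somewhere outside the 4GB cage, so there's nothing that could be grown or paged without giving up the single-add fast path.

```c
#include <stdint.h>

/* The whole managed heap lives inside one 4GB "cage". A stored field is
 * a 32-bit offset from the cage base; turning it back into a pointer is
 * a single add, which is the whole trick and the whole limitation. */
static uintptr_t cage_base;  /* pinned in a register in the real engine */

static inline void *decompress(uint32_t compressed) {
    return (void *)(cage_base + compressed);
}

/* Every decompressed value is cage_base + [0, 2^32), so the managed heap
 * can never span more than 4GB of addresses, regardless of RAM or swap. */
```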
This doesn't account for nuances like whether ArrayBuffers are allocated elsewhere, with no pointer compression applied, but it's definitely true of ordinary objects. For a regular JS program to fill 4GB with normal web-app objects would be a miracle, and the image editors of the world can probably still work if the engine gives the big-allocation APIs full-size pointers.
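If that split exists, it presumably looks something like this (hypothetical layout, just to illustrate the escape hatch): the small header object stays in the cage and is reachable via compressed offsets, while the big backing store is a plain allocation addressed by a full 64-bit pointer.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical two-tier layout: ordinary objects live inside the 4GB
 * cage and reference each other with 32-bit offsets; large buffers
 * (the ArrayBuffer case) are ordinary allocations outside the cage,
 * reached through a full pointer stored in a small in-cage header. */
typedef struct {
    uint32_t map_offset;      /* compressed reference into the cage       */
    size_t   byte_length;
    void    *backing_store;   /* full 64-bit pointer, can live anywhere   */
} buffer_header;              /* the header itself still sits in the cage */
```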
Keeping your working set smaller doesn't mean you have to give up keeping everything 'in memory': you can carefully craft your memory access patterns to work well with the OS's virtual memory management.
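One way to do that is to let the OS do the heavy lifting: mmap the bulk data so everything stays addressable, and use madvise hints to keep only the region you're actually working on hot. A Linux-flavored sketch (hypothetical file name, assumes the file is comfortably larger than 64MB, and the hints are advisory only):

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a large file (say, serialized undo/history state) so the whole
 * thing is addressable, but let the kernel decide what stays resident.
 * Cold pages get evicted without the program lifting a finger. */
int main(void) {
    int fd = open("history.dat", O_RDONLY);    /* hypothetical data file */
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;
    size_t size = (size_t)st.st_size;

    char *data = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) return 1;

    size_t hot = 64u << 20;
    madvise(data, size, MADV_RANDOM);          /* cold bulk: skip readahead */
    madvise(data, hot, MADV_WILLNEED);         /* hot prefix: prefetch it   */

    /* ... index into data[] like ordinary memory; pages fault in on demand
     * and the ones you stop touching are cheap for the OS to reclaim ... */

    munmap(data, size);
    close(fd);
    return 0;
}
```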