The goal of virtual memory is to apportion physical memory to the things that need it most -- keep frequently used data in RAM, and page out things that aren't frequently used. This makes things go faster. When you disable swap, you constrain the VM's ability to do this -- now it must keep ALL anonymous memory in RAM, and it will page out file-backed memory instead, even if that file-backed memory is much hotter.
I came to this conclusion by following the kernel community, starting with the "why swap at all" flamewar on LKML. See this response from Nick Piggin http://marc.info/?t=108555368800003&r=1&w=2 who is a fairly prominent kernel developer. Nothing I've read from the horse's mouth has refuted this since then. This is true even on systems with gobs of memory.
You're worried about systems that grind to a halt under memory pressure, which is unquestionably a concern. The thing is, disabling swap doesn't fix this. As soon as you're paging out important file-backed pages (like libc), your system is going to grind to a halt anyway, and disabling swap can't prevent that (same point from a VM developer here http://marc.info/?l=linux-kernel&m=108557438107853&w=2). To really give your prod systems a safety net, you need to (a) lock important pages (like, say, sshd and libc) into RAM, or (b) constrain processes that hog memory with (e.g.) memory cgroups. IMHO cgroups/containers are the better solution.
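As a sketch of option (b): with cgroup v2 you can hard-cap a memory-hungry job so its reclaim pressure stays inside its own group. The group name "batch", the 2G limit, and $BATCH_PID are made up for illustration; this assumes root and a cgroup2 mount at /sys/fs/cgroup.

```shell
# Create a cgroup for the hungry job (hypothetical name "batch"):
mkdir /sys/fs/cgroup/batch

# Hard-cap its memory at 2 GiB -- past that it gets reclaimed or
# OOM-killed within its own group, instead of pushing hot pages
# (like libc) out of RAM system-wide:
echo 2G > /sys/fs/cgroup/batch/memory.max

# Move an already-running process into the group:
echo "$BATCH_PID" > /sys/fs/cgroup/batch/cgroup.procs
```

(Option (a) is what mlock()/mlockall() are for, if the daemon cooperates.)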
Ideal tuning would probably also reserve some decent amount of space for file caches and slab but I'm not aware of any setting that does that.
It's an entirely different story on laptops and development servers, where the workload varies widely, may contain large idle heap allocations worth swapping, and manually configuring memory usage isn't practical.
There is a school of thought that says server developers especially should just trust the operating system in this regard. One notable member of that school is phk, of FreeBSD and Varnish fame: https://www.varnish-cache.org/trac/wiki/ArchitectNotes
But there are cases where it clearly does more harm than good. I had a PostgreSQL database server with a lot of load on it. The server had loads of RAM -- more than what PostgreSQL had been configured to use plus the actual database size on disk. Even so, Linux one day decided to swap out parts of the database's memory, I assume because those pages were very rarely used and it decided something else would be more useful to have in memory. When queries came along that touched that part of the database, they had huge latency compared to what was expected.
Maybe I'm misremembering, and maybe there was some way of preventing that from happening while still having swap enabled on the server.
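For what it's worth, there are knobs that bias the kernel against exactly that behavior -- a sketch, not a tuned recommendation; the cgroup path below is illustrative and assumes cgroup v2:

```shell
# vm.swappiness biases reclaim: lower values make the kernel prefer
# dropping page cache over swapping out anonymous memory (default 60):
sysctl -w vm.swappiness=1

# Or, with cgroup v2, forbid swap for just the database's cgroup
# (the "postgres" path depends on how the service is actually grouped):
echo 0 > /sys/fs/cgroup/postgres/memory.swap.max
```

Neither fully pins the database's pages the way mlock() would, but both keep swap available for the rest of the system.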