Why not bring a node down, and then replace it with a node that has more RAM? Or have you already hit the ceiling on how much RAM you can put into a single node in your cluster?
I'd be very curious to know a bit about the character of your data, the size of your cluster, etc. (I've only run test clusters at this point, so hearing from someone doing production work would be informative.)
Replacement vs. addition is a situational trade-off, but ultimately the problem remains that you need to bring more RAM to the party.
My biggest RAM consumer stores historical data for a goods trading platform. Each trade is a unique key, with all the trade data being the value. Access speed is important, but not as critical as the other goodies I get from Riak (replication and automated rebalancing). Metadata is stored separately, but I hope to change that with Riak 1.0 secondary indexes.
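To make the model above concrete, here is a minimal plain-Python sketch (deliberately not using any Riak client API) of the layout being described: each trade ID is a unique key, the full trade record is the value, and a secondary index maps a metadata field back to the matching keys, which is roughly the lookup pattern Riak 1.0 secondary indexes enable. The field names (`commodity`, etc.) are illustrative assumptions, not anything from the platform described.

```python
# Illustrative in-memory sketch of the key/value + secondary-index layout.
# Not Riak code -- just the shape of the data model.

trades = {}              # primary store: trade_id -> full trade record
index_by_commodity = {}  # secondary index: commodity -> set of trade_ids

def put_trade(trade_id, record):
    """Store a trade under its unique key and update the secondary index."""
    trades[trade_id] = record
    index_by_commodity.setdefault(record["commodity"], set()).add(trade_id)

def trades_for_commodity(commodity):
    """Find trades via the index instead of scanning every key."""
    return [trades[tid] for tid in sorted(index_by_commodity.get(commodity, ()))]

put_trade("t-1001", {"commodity": "wheat", "qty": 500, "price": 7.25})
put_trade("t-1002", {"commodity": "corn",  "qty": 200, "price": 4.10})
put_trade("t-1003", {"commodity": "wheat", "qty": 100, "price": 7.30})

print(len(trades_for_commodity("wheat")))  # -> 2
```

The point of moving metadata into secondary indexes is exactly this: the index lives alongside the primary data, so a metadata query becomes an index lookup rather than a round-trip to a separate metadata store.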