The article demonstrates how efficiently a very large graph can be processed on a single machine, but the architecture has analogues that work nicely on large compute clusters too. It's simply a good way to process graphs.
The purpose seems to be presupposed, so I'm guessing it's some common use case I'm expected to already know about.
Another common example is clustering (community detection) of vertices, for example labeling users in a social graph by likely political affiliation or likely hobbies. These labels could later be used as features for advertising recommendations.
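For concreteness, here's a minimal sketch of label propagation, one common community-detection technique; the toy graph and the `label_propagation` helper are my own illustration, not code from the article:

```python
import random
from collections import Counter

def label_propagation(adj, iterations=10):
    """Toy label propagation: every vertex starts in its own community,
    then repeatedly adopts the most common label among its neighbors."""
    labels = {v: v for v in adj}
    for _ in range(iterations):
        order = list(adj)
        random.shuffle(order)  # visit vertices in a random order each pass
        for v in order:
            if adj[v]:
                counts = Counter(labels[u] for u in adj[v])
                labels[v] = counts.most_common(1)[0][0]
    return labels

# Two triangles joined by a single bridge edge (2-3); the vertices
# usually settle into two communities, though the result depends on
# the random visit order and tie-breaking.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(label_propagation(adj))
```

This kind of per-vertex "adopt your neighbors' majority label" update is exactly the sort of step that maps well both onto a single machine streaming over edges and onto a cluster framework.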
Could we handle Facebook's edge graph on a single machine?
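For a rough sense of scale, here's a back-of-envelope calculation; the user count, average friend count, and per-edge byte cost are ballpark assumptions of mine, not figures from the article:

```python
# Rough sizing of a Facebook-scale friendship graph.
users = 2_000_000_000              # assumed: ~2 billion users
avg_friends = 200                  # assumed: ~200 friendships per user
edges = users * avg_friends // 2   # each friendship counted once
bytes_per_edge = 8                 # assumed: two 4-byte vertex ids
total_bytes = edges * bytes_per_edge
print(f"{edges:.2e} edges, ~{total_bytes / 1e12:.1f} TB uncompressed")
# -> 2.00e+11 edges, ~1.6 TB
```

Under those assumptions you get on the order of 10^11 edges and a couple of terabytes uncompressed, which is big but plausibly within reach of a single machine with fast SSD storage, especially with adjacency-list compression.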
What exactly does the word "proportional" mean here?