>In a common application that means using disk storage is less efficient than storing it in a database on another server, since the database usually keeps almost everything in memory.
Depends on the usage pattern. Database servers generally have more memory than end-user machines, but not proportionally more: a database serving a million users won't have a million times more memory. (This matters primarily when each user stores unique or low-popularity data; otherwise the shared database has the advantage.) Moreover, data stored "on disk" on user machines will, with modern long-uptime, large-memory desktops (and before long, mobile devices), have a high probability of being cached and retrieved from memory rather than disk.
Thus, if the data is accessed a relatively short time after it is written (e.g., within a few hours), storing it locally on disk may be faster. (And storing it nominally on disk even though it's all cached may be preferable to storing it strictly in memory, either because some minority of users have insufficient memory to hold all the necessary data, or because the data should survive a loss of power or other program termination.)
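A minimal Python sketch of the page-cache effect described above: a file is written to local disk and read back twice, so the second read is very likely served from the OS page cache rather than the physical disk. The timings will vary by machine; this only illustrates the mechanism, it is not a rigorous benchmark.

```python
import os
import tempfile
import time

def timed_read(path, size):
    """Read up to `size` bytes from `path`; return (elapsed seconds, data)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read(size)
    return time.perf_counter() - start, data

# Write a 1 MiB payload "to disk" (the OS will also keep it in the page cache).
payload = os.urandom(1024 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

# The first read may touch the disk; the second almost certainly
# comes from memory, since the data was just written and read.
cold_time, _ = timed_read(path, len(payload))
warm_time, data = timed_read(path, len(payload))

assert data == payload  # nominally "on disk", yet served at memory speed
os.unlink(path)
```

On a typical long-uptime desktop the warm read completes in well under a millisecond, which is the point of the comment: "disk" storage often behaves like memory for recently written data.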
Edit: It's also worth pointing out that latency isn't the sole performance criterion. Local disk generally has more bandwidth than the network. If you're bandwidth-constrained at either endpoint (the database or the user device), or either is being charged by the byte for network access, that can be an important consideration.
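A back-of-envelope comparison of the bandwidth point: the link and disk speeds below are illustrative assumptions (a 100 Mbit/s network link versus a ~2 GB/s NVMe SSD sequential read), not measurements from any particular system.

```python
# Hypothetical numbers for illustration only.
payload_bytes = 1 * 1024**3        # 1 GiB payload

network_bytes_per_s = 100_000_000 / 8   # 100 Mbit/s link ≈ 12.5 MB/s
disk_bytes_per_s = 2 * 10**9            # ~2 GB/s NVMe sequential read

network_seconds = payload_bytes / network_bytes_per_s  # ≈ 86 s
disk_seconds = payload_bytes / disk_bytes_per_s        # ≈ 0.5 s
```

Under these assumptions the local read finishes two orders of magnitude sooner, before even counting per-byte network charges.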