IMHO, what's needed is something like a document database (in that it allows arbitrary schemas), but even more importantly, it needs system-level indexing. Right now, applications that deal with large numbers of small pieces of data each roll their own solution (e.g., an embedded database), because (1) most filesystems can't deliver adequate performance with very large numbers of small objects and (2) you can't maintain an app-independent index anyway. These applications then can't interact with each other without some app-specific API, which is often costly to develop against (at least compared with a common filesystem API) and which exposes only the capabilities the app developer sees fit to give you. And they often have very little incentive to expose much--their real incentives are usually to keep things proprietary and customers captive.
The absence of these two capabilities makes "the unix way," where "everything is a file," an unworkable data model for this class of application.
IMHO, the right data model is probably some combination of user-assigned keys (UUIDs or strings, it doesn't matter) and content-hash indexing, with versioning and conflict resolution similar to Git's (and CouchDB's).
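To make that concrete, here's a toy sketch of what I mean--a store that keeps both a user-key index and a content-hash index, with per-key version history. All names here (`VersionedStore`, `put`, `get_by_hash`) are made up for illustration; real conflict resolution (merge strategies, revision trees a la CouchDB) is left out:

```python
import hashlib
import json

class VersionedStore:
    """Toy content-addressed store with per-key version history,
    loosely in the spirit of Git/CouchDB. Illustrative only."""

    def __init__(self):
        self.objects = {}   # content hash -> serialized bytes
        self.history = {}   # user key -> list of content hashes (versions)

    def put(self, key, value):
        """Store a JSON-serializable value under a user-chosen key."""
        data = json.dumps(value, sort_keys=True).encode()
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data  # content-hash index: dedup comes for free
        self.history.setdefault(key, []).append(digest)
        return digest

    def get(self, key, version=-1):
        """Fetch a value by user key; older versions stay addressable."""
        digest = self.history[key][version]
        return json.loads(self.objects[digest])

    def get_by_hash(self, digest):
        """Any application can fetch by content hash--no app-specific API."""
        return json.loads(self.objects[digest])
```

The point of the two indexes: the user key gives you a filesystem-like name, while the content hash gives you a stable, app-independent identity that any program can resolve without going through the app that wrote the data.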