Now, I am not asking you to implement this peer type (although it would rock my world if you did), but would it be possible for someone to implement it themselves? In other words, will you be providing a 'peer API'?
This means you should expect more like 10-20GB of usable space per 100GB you commit; otherwise the cloud simply will not have enough capacity.
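Put another way, that ratio implies a redundancy factor of roughly 5-10x. A trivial sketch (the 5x-10x range is just the estimate from the comment above, not a measured figure):

```python
# Usable space under a given redundancy factor (replication/erasure overhead).
# The 5x and 10x factors are illustrative assumptions, not Wuala's real numbers.

def usable_space(committed_gb: float, redundancy_factor: float) -> float:
    """Space you can actually store when each byte is kept redundancy_factor times."""
    return committed_gb / redundancy_factor

print(usable_space(100, 5))   # 20.0 GB usable at 5x redundancy
print(usable_space(100, 10))  # 10.0 GB usable at 10x redundancy
```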
Then consider that even with many nodes holding your data, there is a decent chance all of them are offline at the same time; you need many, many nodes before those odds become small enough. The best solution, then, is to use the storage you committed as one of the nodes, so your data is always available to you. At that point it really becomes a cloud backup system rather than a cloud file system.
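The availability argument can be made concrete with a back-of-the-envelope calculation. This is a sketch under assumed numbers (independent nodes, a made-up per-node offline probability), not Wuala's actual parameters:

```python
# Back-of-the-envelope availability estimate for replicated storage.
# Assumes nodes are independent, each offline with probability p at any moment.
# p = 0.5 is an illustrative assumption (a home PC off half the time),
# not a real measurement of Wuala's peers.

def prob_all_offline(p: float, replicas: int) -> float:
    """Probability that every node holding a copy is offline at once."""
    return p ** replicas

p_offline = 0.5
for n in (2, 5, 10, 20):
    print(f"{n:2d} replicas -> P(data unavailable) = {prob_all_offline(p_offline, n):.7f}")
```

With p = 0.5, even 10 replicas leave about a 1-in-1000 chance your data is unreachable at any given moment, which is why keeping one always-on copy yourself changes the picture so much.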
You are talking about RAID5. However, RAID5 is useless if more than one disk goes offline at the same time.
RAID1/10 is most useful when there's a higher chance of multiple disks failing at once, or when the odds of multiple disks failing in your RAID5 array, while low, are unacceptable.
Of course, other factors come into play when comparing RAID0/5/10 (performance, rebuild time, usable capacity), but reliability is a large part of it.
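A rough way to see the trade-off is to compare the chance of data loss under independent disk failures. The per-disk failure probability below is an assumed illustration, not a real drive statistic:

```python
# Rough data-loss odds for RAID5 vs RAID10 under independent disk failures.
# p = 0.05 is an assumed per-disk failure probability for illustration only.
from math import comb

def raid5_loss(p: float, n: int) -> float:
    """RAID5 loses data when 2 or more of its n disks fail:
    P(loss) = 1 - P(0 failures) - P(exactly 1 failure)."""
    p0 = (1 - p) ** n
    p1 = comb(n, 1) * p * (1 - p) ** (n - 1)
    return 1 - p0 - p1

def raid10_loss(p: float, pairs: int) -> float:
    """RAID10 loses data when both disks in any mirror pair fail:
    P(loss) = 1 - P(no pair loses both disks)."""
    pair_ok = 1 - p ** 2
    return 1 - pair_ok ** pairs

p = 0.05
print(f"RAID5, 4 disks:  P(loss) = {raid5_loss(p, 4):.5f}")   # 2+ failures out of 4
print(f"RAID10, 4 disks: P(loss) = {raid10_loss(p, 2):.5f}")  # any mirror pair lost
```

With these assumed numbers the mirrored layout comes out ahead on loss odds, matching the point above, though it gives you only 2 disks of usable capacity versus RAID5's 3 out of 4.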
* not completely decentralized or open source. If Wuala goes out of business, your data may not be recoverable.
* web interface is lacking (poor folder navigation/listing)
* doesn't work without X on Linux
* even with X, not all features are available through the command line or API, though some are
* the interface that is provided is clunky
* the status messages leave me wondering where in the process an update is. If a piece of software can't reliably tell me where it is in a process, I can't trust that the process is happening the way I expect.
I suppose Linux definitely isn't their main market, and while the web interface is lacking at the moment, they are working on an overhaul.
Concerning it not being completely decentralized: I consider that a plus, since it ensures greater reliability.
That said, I have no good ideas for where it would be beneficial, apart from it being fun.