I thought this was common knowledge. It wouldn't make sense to store Pirated.Movie.DVDRiP.avi thousands of times if it's the same file for thousands of users. Files that hash to the same value get served from a single copy on DB's servers.
It's a pretty common optimization. They had a good plan and executed well, but let's not get carried away; other people were working on similar ideas. I'd bet that data stored in S3 is deduplicated in a similar manner.
Another plus of their setup is that the file's hash is calculated on your machine, so you're paying them while doing the hashing work on your own files, and a file only gets uploaded if they haven't seen that hash before.
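Roughly, the client-side check works like this (a minimal sketch in Python; the hash choice, the "known hashes" index, and the function names are my own stand-ins, not Dropbox's actual protocol):

    import hashlib

    def file_sha256(path, chunk_size=1 << 20):
        # Hash the file in chunks so large files never need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Stand-in for the server's index of content it already stores.
    known_hashes = set()

    def upload(path):
        digest = file_sha256(path)
        if digest in known_hashes:
            # Dedup hit: record a new reference to the existing blob,
            # no bytes are transferred.
            print(f"{path}: already stored as {digest[:12]}, skipping upload")
        else:
            # First time this content is seen: upload the bytes once.
            known_hashes.add(digest)
            print(f"{path}: uploading new content {digest[:12]}")

So the expensive work (hashing) happens on the user's CPU, and the provider only spends bandwidth and storage on content it hasn't seen before.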