Fortunately, Mike (Cutler) & Sean (Kamath) documented, in the USENIX presentation cited by newman314, many of the things I'm still under NDA about. I worked with both of them and they're extremely knowledgeable folks.
Now, switching gears to talk about that presentation. ;)
I'd call out a few things.
1) Note on slide 17 the call-out to _multiple sites_. Yup. You got it. There is a single view of the file store. At a practical level, to the end user, the sync time between sites is trivial to non-existent.
2) automount, automount, automount (Slide 25). The automount maps are stored in LDAP, which provides a convenient way to correlate and sync that data, and which is already integrated with the name service switch/NSS (man 5 nss). There's a short sketch of what that looks like just after this list.
3) Caching (Slides 22-24). This is where a lot of the magic happens. It's this cache tier that _presents_ everything as if it were a single file store. (A loose illustration of the general idea also follows below.)
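To make the automount-in-LDAP point from item 2 concrete, here's a minimal sketch using the common autofs LDAP schema. The map name, base DN, and server names are all made up for illustration; none of them come from the presentation:

    # /etc/nsswitch.conf -- tell NSS to look up automount maps in LDAP
    automount: files ldap

    # Illustrative LDIF for one indirect map with one key:
    dn: automountMapName=auto.proj,dc=example,dc=com
    objectClass: automountMap
    automountMapName: auto.proj

    dn: automountKey=toolbox,automountMapName=auto.proj,dc=example,dc=com
    objectClass: automount
    automountKey: toolbox
    automountInformation: -fstype=nfs,rw filer.site1.us.example.com:/export/proj/toolbox

With something like that in place, the first access to /proj/toolbox mounts it on demand, and changing the LDAP entry updates every client at once -- no per-host fstab edits.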
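For item 3, the slides don't spell out what the cache tier is built from, so don't read the following as their design. As a loose illustration of the general idea, Linux's FS-Cache can put a local disk cache in front of NFS:

    # Generic example only -- not necessarily what the presentation describes.
    # Requires the cachefilesd package (the FS-Cache userspace daemon).
    sudo systemctl enable --now cachefilesd
    sudo mount -t nfs -o fsc filer.example.com:/export/proj /proj
    # "fsc" asks the NFS client to cache file data on local disk, so
    # repeated reads are served locally instead of going over the WAN.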
One point I will concede (it's glossed over a bit in the slides) is that the use of DNS suffix searching is critical in a situation like this. It allows each site to use configuration management to push out the relevant config files (e.g. /etc/resolv.conf), which can then present a search order like "site1.us.example.com us.example.com example.com". Thus, when you do a query for the unqualified name "ldap" (or, even better, "IN SRV _ldap._tcp"), NSS hands it off to the resolver, which dutifully checks the following, in order, until it gets a positive match or fails: ldap.site1.us.example.com, ldap.us.example.com, ldap.example.com. This gives you a DNS-based fallback as well as the added benefit of always hitting the local site's servers first (instead of a remote site's). A sketch follows below.
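Here's a minimal sketch of that per-site resolver config and the lookup behavior it produces (domains and addresses are illustrative, not from the presentation):

    # /etc/resolv.conf -- pushed out per site by configuration management
    search site1.us.example.com us.example.com example.com
    nameserver 10.1.0.53
    nameserver 10.1.0.54

    # An unqualified lookup then walks the search list in order:
    #   ldap -> ldap.site1.us.example.com, ldap.us.example.com, ldap.example.com
    getent hosts ldap
    host -t SRV _ldap._tcp

Move a host to another site, push that site's resolv.conf, and the very same unqualified names start resolving to the local services.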
Also, make sure you check out the stats on Slide 28... 4 PB... in 2008... before they had to render a movie for each eye... (3D animation!)
If you can't, legally, nbd.