Unfortunately, the three-drive filesystem lasted two weeks before it became unmountable. The only thing that got it mounting again was finally running btrfsck. I was left with 57 unrecoverable errors and a lot of lost data.
I would not recommend running Btrfs in RAID5 or RAID6 just yet. If you want to use Btrfs, stick with mirroring (RAID1) and rebalance to RAID5/6 later, once those modes are more stable.
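For anyone unfamiliar with how that rebalance works: btrfs can convert allocation profiles in place with a balance filter. A sketch, assuming a filesystem mounted at /mnt (a placeholder path); many people keep metadata on raid1 even when converting data, since metadata loss is far more damaging:

```shell
# Convert the data profile from RAID1 to RAID5 in place,
# leaving metadata mirrored. /mnt is a hypothetical mount point.
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt

# Verify the resulting allocation profiles.
btrfs filesystem df /mnt
```

The balance rewrites every chunk, so on a large array it can take many hours and should be done with good backups in place.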
e: To anyone not up on btrfs, its features are closely tied to the kernel version it's used with. For example, RAID5/6 scrub, device replace, and the recovery and rebuild code were not available before kernel 3.19.
I also believe the only way to use the RAID5/6 modes before they were considered stable was to explicitly compile the kernel with them enabled. It wasn't something you could do accidentally.
I didn't have much data to submit. No kernel panics, no useful error messages, nothing beyond it saying it wouldn't mount. One could read the tea leaves from the filesystem as it sat, but such data spelunking could take a while on an 8TB partition, and I wanted to get the disks back into use.
I didn't notice the corruption until after I had unmounted it, so scrubbing it wasn't an option.
I recently got myself a new home server and decided to use FreeBSD and ZFS, so the question is settled for me for the moment. I still hope Btrfs gets there in the not-too-distant future, though.