
Developer Update on ZFSOnLinux bug - ryao
It looks like no data is actually lost. The regression caused us to lose hard links in some directories where new hard links were being made. That means we can get all of the files back, minus their names and directory paths. A tool integrated into the driver will let people repair affected systems: the missing files will go into a lost+found directory, and running it will also scan the system for damage.

There are a couple of caveats, though. Any snapshots containing damaged directories will need to be destroyed to restore the pool to pristine condition. If anyone cloned those snapshots, the clones will need to be destroyed too (but you can copy data off them first).

The tool will provide a list of things that require manual destruction by the system administrator. It is also possible to just leave things as they are; nothing bad should happen aside from an annoying message about the bad snapshots.

I cannot give an exact ETA on when the tool will be ready, but I can say soon. :)
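As a rough sketch of what that manual cleanup might look like (the pool, snapshot, and clone names below are hypothetical; the real tool will print the actual list), note that a clone has to be destroyed before its origin snapshot, since `zfs destroy` refuses to remove a snapshot that still has dependent clones. The commands are printed here as a dry-run plan rather than executed:

```shell
#!/bin/sh
# Sketch only: these dataset names are made up for illustration.
POOL="tank"
SNAP="$POOL/home@before-upgrade"   # a snapshot containing a damaged directory
CLONE="$POOL/home-clone"           # a clone made from that snapshot

# Salvage anything you still need from the clone first, then destroy the
# clone before its origin snapshot (ZFS will not destroy a snapshot that
# still has dependent clones).
printf 'cp -a /%s/data /safe/place\n' "$CLONE"
printf 'zfs destroy %s\n' "$CLONE"
printf 'zfs destroy %s\n' "$SNAP"
```

Dropping the `printf` wrappers would turn the plan into the real (and irreversible) destruction, so it is worth double-checking the list the tool emits before doing so.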
======
jlgaddis
Is the plan still for this to be included in 0.7.9?

Thank you again for your prompt response and hard work on this issue!

---

Also, I'll reply to another comment here in the hope that it is more likely to
be seen:

> _Brian suggested that we make it '.zfs/lost+found', which might be what we
> do._

+1. IMO, this is the best solution.

~~~
ryao
You are welcome.

If it is ready before 0.7.9, I would expect Brian to tag 0.7.9 just to get it
out to the community sooner. We have not discussed it, but that is both what I
expect him to do and what I intend to suggest to him if it comes to that.

I have some concerns about .zfs/lost+found. I'll need to see if we can do that
without crossing mountpoint boundaries. If we can, it could be what we do.
Otherwise, moving files across datasets would mean copying them, which could
cause problems should any enormous files be involved relative to the pool's
free space. I will see what makes the most sense.

------
spindle
Thank you!

