It's not worth it - who needs the PR? It's worthless when we recover zeroed disks frequently anyway :) If they're willing to meet the cost to recover it, I'd do it cheap for £600. We are UK based so I guess they won't.
I don't think you can draw the conclusion that overwriting once is perfectly fine based on their conclusions. At least not till:
- Someone has tried their disk (people not wanting to is different from giving it a shot :))
- Someone has tried a more real-life example (install an OS, then copy some files in, then dd it).
Considering the second one has been done... :)
They are quite welcome to clone any old OS drive onto a disk and wipe it the same way with dd. Then fly it over to us here and pay the £1000 (ish) cost to recover it. (Yes, I know that is a bit outrageous, but so is their "challenge".)
It has? Do you have any source/article you can link to? I'd like to read about it and how it's done, seems to me like you'd need a fair amount of black magic to do it!
As I explained elsewhere, we get wiped disks sent to us weekly. Some will have been blanked with 0's (perhaps one a month). I know of only a few that specifically have had dd used on them - but usually we don't know the story of the disks :) so it could be higher.
We have a SEM that produces an image for our analysts to rebuild with a variety of software packages (EnCase Enterprise is one example, and we have several pieces of kit from AccessData, plus scripts/programs written in-house).
With a zeroed disk you're looking at a minimum of £1000 upwards and at least a month's work (most of that time spent on the SEM and on one of our clusters processing the data).
Am I near the truth if I say that you analyze lots of residual bits to see how the drive usually manages to overwrite a one, and then use that to get a fuzzy logic version of the contents of the drive?
Yes, that kinda explains the process. It's rather complex and not something I am fully versed in (it not being my field, I process the data) but I will have a shot at explaining. It is possible to analyse the individual bits and predict what the byte was before by seeing what has "moved" (i.e. when you zero a byte or a cluster it simply moves all the 1 bits to zero).
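To give a feel for the idea, here's a toy sketch. This is purely illustrative - the real analysis works on SEM imagery, and the residue model here (a zeroed 1 leaving a slightly higher analog level than a zeroed 0) is entirely made up for the example:

```python
import random

def zero_pass(analog_bits, residue=0.1):
    # Toy model: zeroing a bit doesn't land it exactly at 0.0; a small
    # fraction of the previous level remains as "movement" residue.
    # (The residue value is invented for illustration.)
    return [level * residue for level in analog_bits]

def predict_previous(wiped, threshold=0.05):
    # If the residual level sits above the threshold, guess the bit was a 1.
    return [1 if level > threshold else 0 for level in wiped]

original = [random.randint(0, 1) for _ in range(64)]
wiped = zero_pass([float(b) for b in original])
guess = predict_previous(wiped)
print(guess == original)  # the toy model recovers every bit
```

In reality the signal is far noisier than this, which is why it takes statistical analysis over huge amounts of data rather than a simple threshold.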
The reason 3 passes defeats it (mostly) is that it deliberately makes sure every bit is moved at least once (for example by writing first FF and then 00 to it), followed by a final zeroing pass. Because you write the inverse of the first pass on the second run, it ensures every "pin" is moved. Then when you write 0's, anything that can be reconstructed is just the garbage from the second pass.
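The scheme described above (FF, then its inverse 00, then a final zeroing pass) can be sketched like this. This is just a demonstration against an in-memory "disk" - on a real drive you'd be writing to the raw block device, and the function and block size here are made up for the example:

```python
import io

def three_pass_wipe(dev, size, block=4096):
    # Pass 1: 0xFF. Pass 2: 0x00, the inverse, so every bit is forced
    # to move. Pass 3: final zeros. `dev` is any seekable file-like
    # object standing in for a raw device.
    for pattern in (b"\xff", b"\x00", b"\x00"):
        dev.seek(0)
        remaining = size
        while remaining:
            n = min(block, remaining)
            dev.write(pattern * n)
            remaining -= n
    dev.flush()

# Fake 4 KB "disk" holding old data, then wipe it.
disk = io.BytesIO(bytes(range(256)) * 16)
three_pass_wipe(disk, len(disk.getvalue()))
print(set(disk.getvalue()))  # only zero bytes remain
```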
Anyway; a 120GB disk will produce about 1TB of statistical data from the SEM process - which we can analyse. Once you get a handle on a few "known" files (like the OS ones) you can begin to rebuild unknown portions based on that data. Keyword recognition and file signatures help identify when we successfully recover something.
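The file-signature check is the easy part to illustrate. The magic numbers below are the real on-disk headers for those formats; the scanning code is a simplified sketch, not what any forensics package actually does:

```python
# Real on-disk signatures for a few common file types.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip",
}

def carve_signatures(data):
    # Scan a raw buffer for known headers - a crude version of the
    # "file signature" check used to confirm a successful recovery.
    hits = []
    for magic, name in SIGNATURES.items():
        pos = data.find(magic)
        while pos != -1:
            hits.append((pos, name))
            pos = data.find(magic, pos + 1)
    return sorted(hits)

blob = b"garbage" + b"\xff\xd8\xff" + b"..." + b"PK\x03\x04" + b"tail"
print(carve_signatures(blob))  # offsets and types of recovered headers
```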
You are talking about a week's processing on a 25-node cluster (100 cores).
I'm more or less wondering if a RAIDed system (with, say, 64 KB chunks of data) will make it impossible to recover the data.
I suspect that a RAID would foil it. For a start we would need to program in the facility to rebuild the RAID (and analyse based on chunks). I doubt it would work out.
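To see why the chunking matters, here's a minimal sketch of RAID 0 striping. The function names and chunk size are invented for the example, but it shows the core problem: each member disk only holds every Nth chunk, and rebuilding the stream needs the chunk size and disk order exactly right:

```python
def stripe(data, disks=2, chunk=8):
    # RAID 0: consecutive chunks alternate across member disks, so a
    # single disk holds only every Nth chunk of the original stream.
    out = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), chunk):
        out[(i // chunk) % disks].extend(data[i:i + chunk])
    return [bytes(d) for d in out]

def reassemble(members, chunk=8):
    # Rebuilding needs the chunk size and the disk order; get either
    # wrong and the recovered stream is scrambled.
    chunks = []
    rounds = max(len(m) for m in members) // chunk + 1
    for i in range(rounds):
        for m in members:
            chunks.append(m[i * chunk:(i + 1) * chunk])
    return b"".join(chunks)

data = b"The quick brown fox jumps over the lazy dog."
members = stripe(data)
assert reassemble(members) == data               # right parameters: rebuilt
print(reassemble(members, chunk=4) == data)      # wrong chunk size: scrambled
```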
We do quote a price for SEM RAID recovery, but it is in the tens of thousands - a.k.a. no thanks :D