
dd – Destroyer of Disks (2008) - opensourcedude
http://www.noah.org/wiki/Dd_-_Destroyer_of_Disks
======
DanBC
About drive wiping: You're probably better off using the ATA Secure Erase
command, which is very quick and covers the entire disk. dd and other tools
risk missing blocks that have been remapped as bad, for example.

He's right that a single overwrite with zeros is probably good enough to make
sure the data is gone, but it's probably not enough to persuade other people
that it's gone. A few passes of pseudo-random data are probably better if you
need to convince others that the data really is gone.

But if it's really important, drive wiping is what you do to protect the
drives until you get a chance to grind them.
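
For reference, a rough sketch of the Secure Erase route with hdparm, assuming
the drive is /dev/sdX and isn't in the "frozen" state (the protocol requires
setting a temporary password first):

    
    
        # check that the drive supports Secure Erase and is not frozen
        sudo hdparm -I /dev/sdX
        # set a temporary user password, required before erasing
        sudo hdparm --user-master u --security-set-pass p /dev/sdX
        # issue the erase; the password is cleared when it completes
        sudo hdparm --user-master u --security-erase p /dev/sdX
    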

~~~
drzaiusapelord
>but it's probably not enough to persuade other people that it's gone.

I believe there is a long-standing bounty for anyone who can retrieve useful
data from a drive that has been zeroed once. No one has managed it thus far.

A lot of the disk-wiping "culture" stems from a much earlier time when disk
technology was less reliable, especially with regard to writes. Peter Gutmann
himself says that the Gutmann method is long antiquated and only applied to
MFM/RLL-encoded disks from the 80s and early 90s.

Perhaps instead of humoring these people, we should be educating them. A
zeroed-out disk is a wiped disk until someone proves otherwise.

~~~
opejn
This reminds me of assumptions we used to take for granted about DRAM. We used
to assume that the contents are lost the moment you cut the power, but then
someone turned a can of cold air on a DIMM. We used to assume that bits are
completely independent of each other, but then someone discovered row hammer.
The latter is especially interesting because it only works on newer, denser
DIMM technology. Technology details change, and it's hard to predict what the
ramifications will be. A little extra caution isn't necessarily a bad thing.

~~~
drzaiusapelord
I agree, but redoing a wipe isn't extra caution, it's just literally repeating
the same thing. If that thing is wrong, you're not helping the situation, just
wasting time and resources.

Extra caution would be shredding the drive or some other non-wipe method. At
work for example, we zero out drives and then those drives get physically
destroyed by a vendor.

------
amelius
> Unfortunately, there is no command-line option to have `dd` print progress

How difficult could it be to write a dd replacement from scratch that includes
progress reporting? I mean, dd simply reads blocks of data from one file
descriptor and writes them to another.
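
(Newer GNU dd actually grew a status=progress option.) But as a sketch of how
little is needed, here's a rough shell version that copies in 1 MiB chunks and
reports progress. It assumes GNU dd (for status=none) and a block device
source, so blockdev can report the size:

    
    
        #!/bin/sh
        # dd-style copy with progress, 1 MiB at a time
        src=$1; dst=$2
        total=$(blockdev --getsize64 "$src")
        i=0
        while [ $((i * 1048576)) -lt "$total" ]; do
            dd if="$src" of="$dst" bs=1M count=1 skip=$i seek=$i \
                conv=notrunc status=none
            i=$((i + 1))
            done_bytes=$((i * 1048576))
            [ "$done_bytes" -gt "$total" ] && done_bytes=$total
            # \r redraws the same line with the running percentage
            printf '\r%3d%% copied' $((done_bytes * 100 / total)) >&2
        done
        echo >&2
    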

~~~
itchyouch
`pv` (pipe viewer) is usually very useful for tracking progress.

    
    
        dd if=/dev/zero count=10 bs=1M | pv > file.bin
    

The other way to see progress from `dd` is to send the dd process SIGUSR1:
kill -USR1 <dd pid>
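
With GNU dd, which prints its I/O statistics on SIGUSR1, you can turn that
into a crude progress meter (a small sketch; `pgrep -x dd` is assumed to match
only the copy you care about):

    
    
        # nudge dd every 10 s; dd prints stats to its own terminal (GNU dd only)
        watch -n 10 'kill -USR1 $(pgrep -x dd)'
    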

~~~
malcolmputer
> The other way to see progress from `dd` is to send the dd process SIGUSR1:
> kill -USR1 <dd pid>

Be careful with this on some distributions and builds of dd. Purely anecdotal
evidence, but in college I had a friend imaging a very large (5400 RPM) drive,
and about 10 hours into the process he lamented that he wished he could see
how far along it was.

I popped open a terminal, ps -A | grep dd, kill -USR1 $PID, and it just
exited. (Presumably a dd build without a SIGUSR1 handler; the default action
for USR1 is to terminate the process.)

He was rather pissed that I lost him 10 hours.

------
vog
Does the third command really work as intended?

    
    
        sudo cat ubuntu-14.04-desktop-amd64.dmg >> /dev/sda1
    

I believe this will attempt to write data after the end of the block device,
which almost by definition will fail.

However, I often do the following, which works pretty well:

    
    
        sudo cat ubuntu-14.04-desktop-amd64.dmg > /dev/sda1

~~~
Sir_Cmpwn
Better:

    
    
        cat ubuntu-14.04-desktop-amd64.dmg | sudo tee /dev/sda1 > /dev/null

~~~
laumars
I know you meant to use the pipe instead of redirection, but it might be worth
updating your comment for the benefit of others who are less command-line
literate :)

~~~
Sir_Cmpwn
Whoops, fixed.

------
nailer
It's actually 'copy and convert' but 'cc' was taken.

------
rsync
You can 'dd' from Unix to the cloud ... well, some clouds ...

    
    
      pg_dump -U postgres db | ssh user@rsync.net "dd of=db_dump"
    
      mysqldump -u mysql db | ssh user@rsync.net "dd of=db_dump"
    

... although these days, now that we support attic and borg[1], nobody does
things like this anymore.

[1]
[http://www.rsync.net/products/attic.html](http://www.rsync.net/products/attic.html)

~~~
kazinator
That has only one minor advantage compared to:

    
    
      mysqldump -u mysql db | ssh user@rsync.net "cat > db_dump"
    

Namely, the syntax is one character shorter. (But only because I used
whitespace around >).

With dd, you can control the transfer units (the size of the read and write
system calls that are performed), whereas cat chooses its own buffering.
However, this doesn't matter for regular files and block devices. The transfer
sizes only matter on raw devices where the block size must be observed, e.g.
traditional tape devices on Unix, where a short read or an oversized write
gets you truncation.
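
For instance, a hypothetical tape case (assuming a SCSI tape at /dev/nst0 that
expects 64 KiB records):

    
    
      # the block size matters here: each write() becomes one tape record
      tar cf - /data | ssh user@tapehost "dd of=/dev/nst0 bs=64k"
    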

------
SeldomSoup
> If you want to erase a drive fast then use the following command (where
> sdXXX is the device to erase):
    
    
        dd if=/dev/zero of=/dev/sdXXX bs=1048576
    

Question: is there a disadvantage to using a higher blocksize? Is the
read/write speed of the device the only real limit?

~~~
opejn
> is there a disadvantage to using a higher blocksize?

Maybe, depending on the details. Imagine reading 4 GB from one disk and then
writing it all to another, all at 1 MB/sec. If your block size is 4 GB, it'll
take 4000 seconds to read, then another 4000 seconds to write... and it will
also use 4 GB of memory.

If your block size is 1 MB instead, then the system has the opportunity to run
things in parallel, so it'll take 4001 seconds, because every read beyond the
first happens at the same time as a write.

And if your block size is 1 byte, then in theory the transfer would take
almost exactly 4000 seconds... except that now the system is running in
circles ferrying a single byte at a time, so your throughput drops to
something much less than 1 MB/sec.

In practice, a 1 MB block size works fine on modern systems, and there's not
much to be gained by fine-tuning.
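
A quick way to see the effect on your own machine: copy from /dev/zero to
/dev/null (so only the per-call overhead is measured) and compare the
throughput line GNU dd prints at the end:

    
    
        # copy 1 GiB at several block sizes and compare throughput
        for bs in 512 4096 65536 1048576; do
            echo "bs=$bs:"
            dd if=/dev/zero of=/dev/null bs=$bs \
                count=$((1073741824 / bs)) 2>&1 | tail -n 1
        done
    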

------
rdc12
It is worth noting that the shred program mentioned is more or less useless on
modern filesystems for a variety of reasons; the man page lists filesystems on
which it will fail to work correctly (btrfs, ext3, NFS).

It may well be that the only usable filesystem for it is FAT32 (and possibly
NTFS, not sure about that though).
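
Those caveats apply to shredding individual files through a filesystem,
though; pointed directly at a block device, shred writes in place and the
journaling/copy-on-write problem doesn't arise:

    
    
        # one pass of random data over the whole device, with progress output
        shred -v -n 1 /dev/sdX
    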

------
esaym
This usually messes stuff up pretty good:

    
    
      # note: -s returns 0 for a block device, so ask blockdev for the size
      perl -e '
        my $file = "/dev/sda";
        my $blocks = `blockdev --getsize64 $file` / 512;
        my $i = $blocks / 2;
        while (--$i > 0) {
          my $r = int rand($blocks);
          # seek= (not skip=) positions each write on the output device
          system("dd if=/dev/urandom of=$file seek=$r count=1");
        }'

------
cmurf
The "Unable to install GRUB" recommended fix of removing the GPT is wrong. The
proper thing to do is to create a 1 MiB partition with the BIOS Boot partition
type (gdisk internal code EF02); grub-install will then find it and install
core.img there automatically.
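
A sketch of that fix with sgdisk (assuming /dev/sdX is the disk grub-install
targets; the 0s ask sgdisk to use the first free partition slot and the
default start position):

    
    
        # create a 1 MiB BIOS Boot partition, then reinstall GRUB
        sudo sgdisk --new=0:0:+1M --typecode=0:ef02 /dev/sdX
        sudo grub-install /dev/sdX
    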

------
x0
I prefer `pv` (pipe viewer) for watching dd's progress

[http://linux.die.net/man/1/pv](http://linux.die.net/man/1/pv)

~~~
feld
CTRL+T on BSD platforms works brilliantly. I will never understand why Linux
refuses to adopt CTRL+T (SIGINFO).

------
shin_lao
The point of using random instead of zero is that it's harder to see which
parts have been overwritten and which parts haven't been.
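
That variant is just the wipe command from the article with a different
source, e.g.:

    
    
        dd if=/dev/urandom of=/dev/sdXXX bs=1048576
    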

~~~
BCM43
When is that useful?

~~~
_petronius
In the case of a partial erasure (eg, maybe someone disconnected the power
during the write, to stop the write from completing), I guess it would make it
harder to prove someone had tried to erase (and therefore possibly
hide/destroy) information.

------
fiatjaf
I don't believe this is the actual name. Are you serious?

