I'm helping a coworker recover some stuff off a failing drive. It's quite important to him, as his little girl is very sick and he has some pictures of her on it.
I'm hoping to get a clean bit-by-bit copy of the drive while I still have options. The hard drive is obviously failing (chunks are bad, and parts of the hard drive just don't show up), and I want to get a big rip/dump of everything on the drive (including deleted stuff) before the drive is retired, so that he can later pick through it and recover what might be salvageable.
Using System Rescue CD, I ran dcfldd on the drive and split the image into chunks that would fit on an attached FAT32 drive. (I'm fairly sure the command is right.) However, I'm getting the following error (below is a pic of my monitor):
http//
The dcfldd command I've been running has been going for over a week (/dev/sda is the screwed-up 160 GB IDE hdd), and if that's what it takes to get a clean dump of the raw data on the hard drive that I can later work on, then that's cool and I'll just leave it.
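For reference, the command was something along these lines (typed from memory, so the block size and output paths are only approximate; the 2 GB split is there to stay under the FAT32 4 GB file limit):

    # image /dev/sda in 2 GB pieces onto the FAT32 drive, zero-padding unreadable blocks
    dcfldd if=/dev/sda of=/mnt/fat32/sda_image.dd bs=4096 conv=noerror,sync \
           split=2G hash=md5 hashlog=/mnt/fat32/sda_image.md5 errlog=/mnt/fat32/sda_errors.txt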
But if I should cut my losses and do something else here, please let me know so I can act fast before the drive takes a turn for the worse.
IMHO, all that error really means is that dcfldd is encountering errors on sections of the hard drive, as you expected. You should give dd_rescue a shot.
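A minimal dd_rescue run would look something like the line below (the output path is just an example, and it's worth checking the man page since option letters have changed between versions):

    # -A writes zeros for unreadable blocks so offsets stay aligned, -l keeps a log of what happened
    dd_rescue -A -v -l /mnt/ntfs/sda_rescue.log /dev/sda /mnt/ntfs/sda_image.img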
Ronan
The approach I take with my own software is to build up an image file section by section. When an area has not been imaged yet, it is padded, so the image file always has the correct sector at the correct location. A good way to start is to image the main directory section, then read that image to see where other directories are; NTFS is best for this, as the $MFT is normally confined to a few locations. With this very sparse copy of the disk image, determine where the critical files are stored, and image just those areas.
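You can approximate this by hand with plain dd, writing each rescued region into the image file at the same offset it occupies on the disk (START and COUNT below are placeholders for sector numbers you work out from the directory data):

    # with bs=512, skip/seek are in sectors, so the data lands at its original offset;
    # noerror,sync zero-pads unreadable sectors and notrunc leaves the rest of the image alone
    dd if=/dev/sda of=/mnt/ntfs/sda_sparse.img bs=512 skip=START seek=START count=COUNT conv=noerror,sync,notrunc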
The more retries you do, the greater the chance of terminal disk failure.
Make sure you use an NTFS disk to store the image file; FAT32 has a 4 GB file size limit.
We are currently using dd_rhelp, which works in tandem with dd_rescue, on a bad CD that has the evidence we need right in the bad blocks. It has been running (grinding away) for about a month, but it is actually getting more data out of the bad areas than we could previously get for the same amount of time and effort. It controls dd_rescue by automatically jumping ahead to good areas and then returning to the spots that give it problems, and it has recovered about 100 MB of data we couldn't access before. It cuts down the overall time and effort needed to get good data out, and it logs everything for you.
You may want to give it a shot.
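The invocation is basically just source and destination, something like this for your disk (dd_rhelp is a wrapper script, so dd_rescue has to be installed and on your PATH; check its README for the exact syntax of your version):

    # dd_rhelp drives dd_rescue for you, skipping ahead past bad spots and coming back to them later
    dd_rhelp /dev/sda /mnt/ntfs/sda_image.img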
RB
I agree with mscotgrove
The way I would approach it is similar to the above.
You could also try imaging smaller chunks of the disk at a time. But be aware that if the head is damaged, attempting this may damage the drive even more.
For those with I/O errors I quietly, but strongly, suggest GNU's ddrescue.
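A typical two-pass run looks roughly like this (paths are just examples; the mapfile, called a logfile in older versions, is what lets you stop and resume safely):

    # first pass: grab everything that reads cleanly and skip over the bad areas
    ddrescue -n /dev/sda /mnt/ntfs/sda_image.img /mnt/ntfs/sda_image.map
    # second pass: go back and retry the troublesome areas a few times with direct access
    ddrescue -d -r3 /dev/sda /mnt/ntfs/sda_image.img /mnt/ntfs/sda_image.map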
Cheers!
farmerdude