I'm doing several recovery projects for friends with no money, and one thing I've noticed with the tools I'm using (e.g. ddrescue) is that they take a helluva long time.
e.g. a 50,000 B/s recovery rate works out to roughly 37 days of scanning on a 160 GB drive.
A friend just had a 900 GB drive crap out on him, and at the rate I'm seeing on the 160 GB drive, his would likely take over 200 days!
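For reference, the arithmetic behind those estimates (using decimal gigabytes) checks out in the shell:

```shell
# Days of scanning = drive size in bytes / rate in B/s / 86400 seconds per day.
echo $(( 160 * 1000 * 1000 * 1000 / 50000 / 86400 ))   # 160 GB -> 37 days
echo $(( 900 * 1000 * 1000 * 1000 / 50000 / 86400 ))   # 900 GB -> 208 days
```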
Any suggestions on what I can do to speed this up? These drives are normally fairly screwed up (but at least they spin), and I don't need to waste time with real forensic procedure, just whatever works for quickly pulling the data off for later analysis (e.g. mount the image as a loopback device and run a different scanning tool on it later).
Often the problem with an inaccessible drive is at a "higher level" than what actually needs a ddrescue approach.
I mean, it often happens that only some files need to be recovered, or that only a few "basic" filesystem structures have gone bad.
A try with TestDisk
http://www.cgsecurity.org/wiki/TestDisk
or its accompanying app PhotoRec, "targeted" at a given type of file, may be worthwhile.
The "old fashioned" advice of partitioning drives into "manageable" size volumes as a preventive measure is still valid, as I see it, since you can image the individual partitions to different drives and get some sort of parallelism if you have some spare, even oldish, machines around.
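As a sketch of that per-partition approach (device and file names below are hypothetical; assumes GNU ddrescue):

```shell
# Image each partition separately rather than the whole disk: the images
# stay a manageable size, each gets its own resumable log, and a badly
# damaged partition does not hold the others' data hostage.
ddrescue /dev/sdb1 part1.img part1.log
ddrescue /dev/sdb2 part2.img part2.log
# The log files let an interrupted run be resumed with the same command.
```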
jaclaz
A try with TestDisk
http://www.cgsecurity.org/wiki/TestDisk
or its accompanying app PhotoRec, "targeted" at a given type of file, may be worthwhile.
I'm planning on using one or both of those tools, but only after I make a good image. The drive is on the fritz, and once I get an image, then I can mount it as a loopback and scan it with whatever tool I want.
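Assuming GNU ddrescue and a hypothetical failing drive at /dev/sdb, that image-first workflow looks roughly like this (run as root):

```shell
# Pass 1: grab everything that reads easily; -n skips the slow splitting
# of bad areas. The log file records progress, so runs can be resumed.
ddrescue -n /dev/sdb disk.img disk.log

# Pass 2: go back for the damaged areas, retrying each bad sector 3 times.
ddrescue -r 3 /dev/sdb disk.img disk.log

# Then mount the image read-only over loopback and scan it at leisure.
mkdir -p /mnt/rescued
mount -o loop,ro disk.img /mnt/rescued
```

If the image holds a whole disk rather than a single partition, you'll need to mount it at the partition's offset instead (see losetup's -o option).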
I often use a Logicube device to duplicate disks, and some can be at less than 1% complete after 24 hours. My solution is to use the Logicube's USB port and build up a disk image section by section. Typically, for NTFS, start with the $MFT and then work out where your files should be. If you are only after certain files, a complete image may not be required.
The same approach can be achieved with straight USB caddies, but they can sometimes hang on failing disks.
The process may require a week, but 6 months is rather long.
I often use a Logicube device to duplicate disks, and some can be at less than 1% complete after 24 hours. My solution is to use the Logicube's USB port and build up a disk image section by section. Typically, for NTFS, start with the $MFT and then work out where your files should be. If you are only after certain files, a complete image may not be required.
The same approach can be achieved with straight USB caddies, but they can sometimes hang on failing disks.
The process may require a week, but 6 months is rather long.
Remember, these are horribly screwed-up drives. I'm not just making a copy, I'm making a copy with a dd variant (ddrescue) that at times is scanning certain sections at only 1,000-5,000 B/s.
Also, these are drives for a friend, so buying any sort of disk duplicating software is kind of hard to justify here.
scubascuba,
I think I have not made my point clear. Some disks are very slow to duplicate due to failing sectors, so you need to limit the area of duplication to just the critical files. There is no point duplicating system and program files. I don't know if ddrescue can be used with a file selection function, which would answer the question.
You may also find that the end of the disk will duplicate at normal speed.
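ddrescue has no file selection as such, but it can limit the rescue to a region of the disk with its -i (starting position) and -s (size) options, which gives a rough equivalent of the section-by-section approach. The offsets below are hypothetical:

```shell
# Rescue only the first 100 MiB of the drive, where the boot sector and
# (often) the start of the NTFS $MFT live. -i = input offset, -s = length.
ddrescue -i 0 -s 100Mi /dev/sdb mft-area.img mft-area.log

# Once the boot sector and $MFT tell you where the files you actually
# want sit on disk, make further targeted runs at those offsets.
ddrescue -i 3Gi -s 500Mi /dev/sdb files.img files.log
```

You still have to map file locations to block ranges yourself with a filesystem tool, but on a dying drive rescuing a few targeted ranges can save weeks.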
As a side note: besides ddrescue, there is also dd_rescue
http//
and dd_rhelp
http//
(Yes, Linux developers don't have much imagination when naming their apps ;) )
myrescue
http//
could be what you want to try
myrescue is a program to rescue the still-readable data from a damaged hard disk. It is similar in purpose to dd_rescue, but it tries to get out of damaged areas quickly, handling the not-yet-damaged parts of the disk first and returning to the bad areas later.
safecopy
http//
was recently released in version 1.0….
jaclaz
Please test and research applications before use.
GNU's ddrescue offers much more than the dd_rescue application. Research and testing will validate this.
For failing drives, ddrescue is a very good application to use. Time estimates are just that, estimates, and they will often vary as the process continues along.
Cheers!
farmerdude