HD rescue with dd and big block size (bs=16M)?

(@olathe) · New Member · Joined: 20 years ago · Posts: 1 · Topic starter

Dear subscribers

I occasionally copy hard disks with dd from one disk to another, and I have found that the copying time is considerably shortened by using a big block size, i.e. my dd command will look something like this:

dd if=/dev/hda of=/dev/hdb bs=16M conv=noerror,sync

My question concerns faulty source disks. When copying error-free disks, the command above works well and is much faster than the default block size of 512 bytes. But what happens if the source disk is faulty, i.e. one or more blocks are unreadable? Will the whole 16 MB block be padded with zeros even if only a single 512-byte sector is unreadable, or will dd pad only the unreadable bytes and transfer the valid data in the rest of the 16 MB batch? In other words, do I risk losing more data than necessary by using a big block size?

Any help on this matter would be greatly appreciated.

Kind regards, Ola Theander


   
(@farmerdude) · Estimable Member · Joined: 20 years ago · Posts: 242
 

As you typed your command:

dd if=/dev/hda of=/dev/hdb bs=16M conv=noerror,sync

Yes, you will lose a lot of data if read errors are encountered, potentially 16 MB per error. You are telling dd to continue past read errors (noerror) and to pad with NULs (the sync option), and "bs" is interpreted by dd as both the input and output block size. So if a read error falls at the start of a 16 MB block, you stand to lose the whole 16 MB: that block is the ibs and the obs, and it will be written out as NULs.
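
As an aside, not from the reply above but a hedged sketch of one mitigation with plain dd: conv=sync pads to the input block size (ibs), so you can keep ibs small while leaving obs large. Each read error then costs at most one 512-byte sector, at the price of giving up most of the read-side speed gain:

dd if=/dev/hda of=/dev/hdb ibs=512 obs=16M conv=noerror,sync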

This is why smart acquisition programs are necessary. They work with a hard and a soft block size: the larger (faster) block size is used until a read error is encountered, and the tool then falls back to the lowest common denominator, 512 bytes. That way you lose 512 bytes at a time, the minimum you could lose.
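
For illustration only (again, not part of the original reply), here is a minimal bash sketch of that hard/soft block-size fallback using nothing but dd; /dev/hda, /dev/hdb and the 16 MiB chunk size are placeholders:

#!/bin/bash
# Minimal sketch of the hard/soft block-size idea with plain dd.
SRC=/dev/hda                 # placeholder source disk
DST=/dev/hdb                 # placeholder target disk
BIG=$((16 * 1024 * 1024))    # 16 MiB "soft" block for the fast path
SMALL=512                    # 512-byte "hard" block for the fallback

# Number of 16 MiB chunks on the source, rounded up.
CHUNKS=$(( ( $(blockdev --getsize64 "$SRC") + BIG - 1 ) / BIG ))

for (( i = 0; i < CHUNKS; i++ )); do
    # Fast path: copy one 16 MiB chunk; dd exits non-zero on a read error.
    if ! dd if="$SRC" of="$DST" bs="$BIG" skip="$i" seek="$i" count=1 2>/dev/null; then
        # Fallback: redo just this chunk in 512-byte blocks, padding only
        # the sectors that actually fail to read.
        dd if="$SRC" of="$DST" bs="$SMALL" conv=noerror,sync \
           skip=$(( i * BIG / SMALL )) seek=$(( i * BIG / SMALL )) \
           count=$(( BIG / SMALL ))
    fi
done

(GNU ddrescue implements essentially this strategy, with a map file so interrupted runs can be resumed.)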

regards,

farmerdude


   
(@olddawg) · Estimable Member · Joined: 19 years ago · Posts: 108
 

And you could really be screwed if the bad guy had hacked the "corrupted cluster" profile of the disk so that lots of good clusters appeared to be bad. You could end up skipping 16 MB chunks quite often.


   