Hard Drive Issues -- a different approach

8 Posts
4 Users
0 Likes
924 Views
(@athulin)
Posts: 1156
Noble Member
Topic starter
 

In a recent thread Chiprafp asks about problems with 2TB+ drives connected to an unknown operating system through an unknown connection technique. I want to widen that question to a related area.

What techniques or solutions do forum readers use to verify that new ATA-to-USB/whatnot bridges work as they should, for whatever HDD size?

Myself, I've been hit by bad ATA-USB bridges, and I like to verify them before I use them. So far this has been done by connecting a separately verified HDD+USB bridge to a Unix computer, writing special sector data to each sector (essentially just the sector number in an otherwise zeroed sector), and verifying things by reading back the HDD contents and hashing them. That way I believe I get an upper bound on the disk size I know this particular bridge works for, and I know to be careful with HDDs that are larger.
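Roughly, the write pass looks like this (a sketch only, assuming Python on the Unix side, 512-byte sectors and a placeholder device path /dev/sdX; not a polished tool):

```python
import os
import struct

SECTOR_SIZE = 512            # assumed logical sector size
DEVICE = "/dev/sdX"          # hypothetical raw device path for the bridge under test

def write_lba_pattern(device=DEVICE, sector_size=SECTOR_SIZE):
    """Write each sector's own LBA (64-bit little-endian) into an otherwise
    zeroed sector, across the whole device (destructive!)."""
    with open(device, "r+b", buffering=0) as disk:
        total_bytes = disk.seek(0, os.SEEK_END)   # size as reported through this bridge
        sectors = total_bytes // sector_size
        disk.seek(0)
        for lba in range(sectors):
            disk.write(struct.pack("<Q", lba).ljust(sector_size, b"\x00"))
    return sectors
```

(Per-sector writes are slow; batching a few thousand sectors per write() call would speed this up considerably, but the idea is the same.)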

However, that does not test the device driver and similar platform-dependent layers, and strictly speaking I only get a validation that holds for the particular Unix platform I'm using for the test. It might be ported to Windows using a POSIX layer (Interix), but as I don't really use POSIX for acquisition, it's strictly speaking not that useful. For Windows-level acquisition I would really like to do the same thing using native Windows tools on the actual platform used.

While the CFTT Handbook can be useful in general, it tends to lag regarding HDD size related issues.

Are there any such tools? Or are there ways of using other tools (perhaps an HDD erase tool that produces predictable sector contents) to verify that there are no sector-addressing problems?

 
Posted : 07/09/2013 12:07 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

NOT what you actually asked for (exactly), but you can use *any* tool you normally use to wipe (00) the whole disk, calculate its hash once 00ed and compare it with the theoretical hash of a 00ed disk with exactly the same number of sectors through a tool like

http://www.forensicfocus.com/Forums/viewtopic/p=6560016/#6560016
http://www.forensicfocus.com/Forums/viewtopic/t=5077/postdays=0/postorder=asc/start=9/
http://www.edenprime.com/software/epAllZeroHashCalculator.htm
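The idea behind such a calculator is simple enough that it can be sketched in a few lines (Python and MD5 assumed here; the linked epAllZeroHashCalculator is the actual tool): hash the right number of zero bytes without ever reading a disk.

```python
import hashlib

def all_zero_hash(sector_count, sector_size=512, algo="md5", chunk_sectors=65536):
    """Theoretical hash of a disk consisting of sector_count all-zero sectors,
    computed without touching any disk at all."""
    h = hashlib.new(algo)
    chunk = bytes(sector_size * chunk_sectors)          # reusable block of zeros
    full, rest = divmod(sector_count, chunk_sectors)
    for _ in range(full):
        h.update(chunk)
    h.update(bytes(sector_size * rest))
    return h.hexdigest()

# e.g. a drive reporting 3907029168 sectors (a typical "2 TB" disk):
# print(all_zero_hash(3907029168))
```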

jaclaz

P.S. Just to keep everything as together as possible the referenced thread started by Chiprafp
is this one
http://www.forensicfocus.com/Forums/viewtopic/t=10967/

 
Posted : 07/09/2013 2:43 pm
(@athulin)
Posts: 1156
Noble Member
Topic starter
 

NOT what you actually asked for (exactly), but you can use *any* tool you normally use to wipe (00) the whole disk, calculate its hash once 00ed and compare it with the theoretical hash of a 00ed disk with exactly the same number of sectors through a tool like …

If I read that kind of disk through a USB bridge with an LBA translation deficiency (say, a 32-bit LBA with wrap-around, as that's where the 2 TB limit lies), I may read the right number of sectors, with the right contents, without knowing that I don't actually read the right sectors (this point also seems to be made in some of the links you provided).

I think that the lowest requirement is that a sequence of unique and predictable sectors should be written to the drive, followed by a read of all sectors, verifying that they indeed return the same sequence. And while a hash will verify that, its diagnostic utility is too small in case of a problem; a sector-by-sector read-and-verify seems somewhat better, as it can report the exact LBAs where problems are detected.
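Something along these lines is what I have in mind (a sketch only, assuming the write pass described earlier, 512-byte sectors and a placeholder device path):

```python
import os
import struct

def verify_lba_pattern(device="/dev/sdX", sector_size=512):
    """Read every sector back and return the exact LBAs whose contents do not
    match the pattern written earlier (the LBA itself in a zero-padded sector)."""
    bad_lbas = []
    with open(device, "rb", buffering=0) as disk:
        total_bytes = disk.seek(0, os.SEEK_END)
        disk.seek(0)
        for lba in range(total_bytes // sector_size):
            expected = struct.pack("<Q", lba).ljust(sector_size, b"\x00")
            if disk.read(sector_size) != expected:
                bad_lbas.append(lba)        # report the precise location of the fault
    return bad_lbas

# mismatches = verify_lba_pattern()
# print(len(mismatches), "bad sectors, first few:", mismatches[:10])
```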

The closest thing I have found so far is the diskwipe tool in the NIST FS-TST toolkit. Unfortunately it is designed to run on DOS only.

 
Posted : 07/09/2013 5:41 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

If I read that kind of disk through a USB bridge with an LBA translation deficiency (say, a 32-bit LBA with wrap-around, as that's where the 2 TB limit lies), I may read the right number of sectors, with the right contents, without knowing that I don't actually read the right sectors (this point also seems to be made in some of the links you provided).

I doubt that a wrap-around issue will provide you with the "right" number of sectors.

There is, however, another point that is, as I see it, worth some thought, which I hinted at in this post
http://www.forensicfocus.com/Forums/viewtopic/p=6560021/#6560021
namely the *need*, when verifying that a disk is fully functional, to write *something else* than all 00's, since an all-00 wipe may not be a "valid" way to recognize an issue in writing data: the "defective" or "sticky" area may well "already" contain 00's.

The use of 55AA as "magic bytes" in boot sectors/MBRs may (?) also have originated from that, even if indirectly, as
55 hex = 85 dec = 01010101 binary
and
AA hex = 170 dec = 10101010 binary

i.e. if you first fill a disk (or area/file/etc.) with 55's and then with AA's before zeroing it (or with 55AA's and then AA55's) and verify hashes, you will also have verified the correct capabilities of the target by flipping each and every bit in it.

jaclaz

 
Posted : 07/09/2013 11:13 pm
(@athulin)
Posts: 1156
Noble Member
Topic starter
 

… if you first fill a disk (or area/file/etc.) with 55's and then with AA's before zeroing it (or with 55AA's and then AA55's) and verify hashes, you will also have verified the correct capabilities of the target by flipping each and every bit in it.

True. However, that is about testing the disk for functionality. That may precede the testing of a USB/FW/whatever bridge. But at the moment I'm only looking at testing the bridge/the channel between the HDD and the software.

 
Posted : 07/09/2013 11:55 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

Yep, but it could be, if and when someone makes an "All55AAHashCalculator" or modifies the AllZeroHashCalculator to that effect, a way to combine the two (i.e. test the bridge while also verifying the hard disk).
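As a sketch of what such a modified calculator could compute (Python and MD5 assumed; this is not an existing tool, just the all-zero idea extended to an arbitrary repeating fill pattern):

```python
import hashlib

def pattern_disk_hash(sector_count, pattern=b"\x55", sector_size=512,
                      algo="md5", chunk_sectors=65536):
    """Theoretical hash of a disk of sector_count sectors, each filled with a
    repeating byte pattern such as 55, AA or 55AA."""
    if sector_size % len(pattern):
        raise ValueError("pattern length must divide the sector size")
    sector = pattern * (sector_size // len(pattern))
    h = hashlib.new(algo)
    chunk = sector * chunk_sectors                 # hash in large chunks for speed
    full, rest = divmod(sector_count, chunk_sectors)
    for _ in range(full):
        h.update(chunk)
    h.update(sector * rest)
    return h.hexdigest()

# pattern_disk_hash(3907029168)                    # all-55 disk
# pattern_disk_hash(3907029168, b"\xaa")           # all-AA disk
# pattern_disk_hash(3907029168, b"\x00")           # reduces to the all-zero case
```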

Quick (or maybe not so quick on a > 2 TB disk) experiment.

Try dsfo'ing the whole (zeroed) disk to NUL and compare the MD5 with what you get on your tested Unix/POSIX platform and with the result of the AllZeroHashCalculator.
dsfo is part of the DSFOK toolkit:
http://members.ozemail.com.au/~nulifetv/freezip/freeware/

and is AFAIK as "base Windows NT" as something can be; usage is hinted at (for a different scope) here:
http://reboot.pro/topic/5000-managing-mbrs-by-jaclaz-mbrbatch-release-001-alpha/?p=38197
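If a second opinion on the Windows side is wanted, the same comparison hash can also be computed with a few (assumed) lines of Python reading the raw \\.\PhysicalDriveN device, though dsfo remains the simpler single-binary option; the drive number below is illustrative only.

```python
import hashlib

def md5_of_physical_drive(path=r"\\.\PhysicalDrive1", chunk_size=1 << 22):
    """MD5 of a whole physical disk, read through the native Windows raw-device
    namespace (run from an elevated prompt)."""
    h = hashlib.md5()
    with open(path, "rb", buffering=0) as disk:
        while True:
            chunk = disk.read(chunk_size)     # 4 MiB, a multiple of the sector size
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# print(md5_of_physical_drive())   # compare with the Unix-side hash and the calculator's value
```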

jaclaz

 
Posted : 08/09/2013 3:48 am
(@mscotgrove)
Posts: 938
Prominent Member
 

I got caught out by a 4-year-old DELL PC. I put in a formatted 3TB drive. Windows saw it as 3TB, but the BIOS only saw it as approx. 700GB. Eventually it started 'self' data destruction.

 
Posted : 08/09/2013 12:26 pm
Passmark
(@passmark)
Posts: 376
Reputable Member
 

Are there any such tools? Or are there any ways of using other tools (perhaps a HDD erase tool that produces a predictable sector contents) to verify that there is no sector addressing problems?

For Windows, OSForensics has a function to test external drives. You can write a pattern (of your choosing) to every sector of the drive and then read all the sectors back, testing each byte for correctness. There is also the option of a short test of USB drives (~3 minutes) for when you have dozens of new drives to test, but not weeks of time.

 
Posted : 09/09/2013 5:34 am