
Hard Drive Issues -- a different approach


Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 1:07 am

In a recent thread Chiprafp asks about problems with 2TB+ drives connected to an unknown operating system through an unknown connection technique. I want to widen that question to a related area.

What techniques or solutions do forum readers use to verify that new ATA-to-USB (or similar) bridges work as they should, for whatever HDD size?

I've been hit by bad ATA-USB bridges myself, and like to verify them before I use them. So far I have done this by connecting a separately verified HDD+USB bridge to a Unix computer, writing special sector data to each sector (essentially just the sector number in an otherwise zeroed sector), and verifying things by reading back the HDD contents and hashing them. That way I believe I get an upper bound on the disk size this particular bridge is known to work with, and I know to be careful with HDDs that are larger.
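For what it's worth, the write-then-read-back scheme just described can be sketched in a few lines of Python. The `dev` handle and the choice of storing the LBA as a little-endian 64-bit integer at offset 0 are illustrative assumptions, not part of any particular tool:

```python
import hashlib
import struct

SECTOR_SIZE = 512

def pattern_sector(lba: int) -> bytes:
    """Build a sector that is all zeros except for its own LBA,
    stored as a little-endian 64-bit integer at offset 0."""
    return struct.pack("<Q", lba) + b"\x00" * (SECTOR_SIZE - 8)

def write_pattern(dev, total_sectors: int) -> str:
    """Write the unique per-sector pattern to the whole drive and
    return the SHA-256 of the full stream for later comparison."""
    h = hashlib.sha256()
    for lba in range(total_sectors):
        sector = pattern_sector(lba)
        dev.write(sector)
        h.update(sector)
    return h.hexdigest()

def verify_pattern(dev, total_sectors: int, expected: str) -> bool:
    """Read the drive back (through the bridge under test) and
    check that the hash matches what was written."""
    h = hashlib.sha256()
    for _ in range(total_sectors):
        h.update(dev.read(SECTOR_SIZE))
    return h.hexdigest() == expected
```

Writing would be done over the known-good channel and verification over the bridge under test (or vice versa), so a mismatch points at the bridge rather than the drive.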

However, that does not test device drivers and similar platform-dependent layers, and very strictly speaking I only get a validation that holds for the particular Unix platform I used for the test. It might be ported to Windows using a POSIX layer (Interix), but as I don't actually use POSIX for acquisition, that is, strictly speaking, not very useful. For Windows-level acquisition I would really like to do the same thing using native Windows tools on the actual platform used.

While the CFTT Handbook can be useful in general, it tends to lag regarding HDD size related issues.

Are there any such tools? Or are there ways of using other tools (perhaps an HDD erase tool that produces predictable sector contents) to verify that there are no sector addressing problems?  

athulin
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 3:43 am

NOT what you actually asked for (exactly), but you can use *any* tool you normally use to wipe (00) the whole disk, calculate its hash once 00'ed, and compare it with the theoretical hash of a 00'ed disk with exactly the same number of sectors, through a tool like:

www.forensicfocus.com/...6/#6560016
www.forensicfocus.com/...c/start=9/
www.edenprime.com/soft...ulator.htm
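(For reference, the "theoretical hash of a 00'ed disk" does not strictly need a dedicated tool; a minimal Python sketch, assuming MD5 and 512-byte sectors, would be:

```python
import hashlib

SECTOR_SIZE = 512

def zeroed_disk_md5(total_sectors: int) -> str:
    """MD5 of a disk consisting entirely of 00 bytes, computed in
    sector-sized chunks so no disk-sized buffer is ever allocated."""
    h = hashlib.md5()
    zero_sector = b"\x00" * SECTOR_SIZE
    for _ in range(total_sectors):
        h.update(zero_sector)
    return h.hexdigest()
```

The result would then be compared against the hash the imaging tool reports after reading the wiped disk back through the bridge.)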


jaclaz

P.S.: Just to keep everything together as much as possible, the thread started by Chiprafp referenced above is this one:
www.forensicfocus.com/...c/t=10967/
_________________
- In theory there is no difference between theory and practice, but in practice there is. - 

jaclaz
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 6:41 am

- jaclaz
NOT what you actually asked for (exactly), but you can use *any* tool you normally use to wipe (00) the whole disk, calculate its hash once 00'ed, and compare it with the theoretical hash of a 00'ed disk with exactly the same number of sectors, through a tool like: ...


If I read that kind of disk through a USB bridge with an LBA translation deficiency (say, 32-bit LBA with wrap-around, as that is where the 2TB limit sits), I may read the right number of sectors, with the right contents, without knowing that I don't actually read the right sectors (this point also seems to be made in some of the links you provided).

I think the minimum requirement is that a sequence of unique and predictable sectors be written to the drive, followed by a read of all sectors, verifying that they do indeed return the same sequence. And while a hash will verify that, its diagnostic utility is too small in case of a problem -- a sector-by-sector read-and-verify seems somewhat better, as it can report the exact LBAs where problems are detected.
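Such a diagnostic read-and-verify might look like this (Python sketch; it assumes each sector was previously written with its own LBA as a little-endian 64-bit integer at offset 0, and `dev` is a hypothetical handle for the drive as seen through the bridge):

```python
import struct

SECTOR_SIZE = 512

def find_bad_lbas(dev, total_sectors: int, limit: int = 10) -> list:
    """Read every sector and report (expected_lba, stored_lba) pairs
    where the stored sector number does not match the position --
    the signature of an address-translation (wrap-around) fault.
    Stops after `limit` mismatches to keep the report short."""
    bad = []
    for lba in range(total_sectors):
        data = dev.read(SECTOR_SIZE)
        (stored,) = struct.unpack_from("<Q", data)
        if stored != lba:
            bad.append((lba, stored))
            if len(bad) >= limit:
                break
    return bad
```

On a wrap-around fault the stored values would repeat from zero at the wrap point, which makes the exact boundary LBA immediately visible in the report.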

The closest thing I have found so far is the diskwipe tool in the NIST FS-TST toolkit. Unfortunately it is designed to run on DOS only.  

athulin
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 12:13 pm

- athulin

If I read that kind of disk through a USB bridge with an LBA translation deficiency (say, 32-bit LBA with wrap-around, as that is where the 2TB limit sits), I may read the right number of sectors, with the right contents, without knowing that I don't actually read the right sectors (this point also seems to be made in some of the links you provided).

I doubt that a wrap-around issue will provide you with the "right" number of sectors. :unsure:

There is, however, another point that is worth some thought as I see it, which I hinted at in this post:
www.forensicfocus.com/...1/#6560021
about the *need*, when verifying that a disk is fully functional, to write *something other than* all 00's: zeroing may not be a "valid" way to detect a problem in writing data, as the "defective" or "sticky" area may well *already* contain 00's.

The use of 55AA as the "magic bytes" in boot sectors/MBRs may also have originated from that, even if indirectly, as:
55 hex = 85 dec = 01010101 binary
and:
AA hex = 170 dec = 10101010 binary

i.e. if you first fill a disk (or area/file/etc.) with 55's and then with AA's before zeroing it (or with 55AA's and then AA55's) and verify the hashes, you will also have verified the correct capabilities of the target by flipping each and every bit in it.
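As a minimal sketch of one pass of that 55/AA/00 scheme (Python; `dev` is a hypothetical handle to the target and 512-byte sectors are assumed):

```python
import hashlib

SECTOR_SIZE = 512

def fill_and_hash(dev, total_sectors: int, byte: int) -> str:
    """Fill every sector with a single byte value and return the MD5
    the device should hash to after that pass.  Running the passes
    0x55, 0xAA, 0x00 in sequence flips every bit on the target at
    least once, so a stuck bit changes some read-back hash."""
    pattern = bytes([byte]) * SECTOR_SIZE
    h = hashlib.md5()
    dev.seek(0)
    for _ in range(total_sectors):
        dev.write(pattern)
        h.update(pattern)
    return h.hexdigest()
```

After each pass, the hash reported by the acquisition tool reading the device back is compared against the value returned here.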

jaclaz
_________________
- In theory there is no difference between theory and practice, but in practice there is. - 

jaclaz
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 12:55 pm

- jaclaz
... if you first fill a disk (or area/file/etc) with 55's and then with AA's before zeroing it (or 55AA's and then with AA55's) and verify hashes, you will have also verified the correct capabilities of the target by flipping each and every bit in it.


True. However, that is about testing the disk for functionality. That may precede the testing of a USB/FW/whatever bridge. But at the moment I'm only looking at testing the bridge/the channel between the HDD and the software.  

athulin
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sat Sep 07, 2013 4:48 pm

Yep, but if and when someone makes an "All55AAHashCalculator" (or modifies the AllZeroHashCalculator to that effect), it could be a way to combine the two, i.e. test the bridge while also verifying the hard disk.

A quick (or maybe not so quick, on a > 2 TB disk) experiment:

Try dsfo'ing the whole (zeroed) disk to NUL and compare the MD5 with what you get on your tested Unix/POSIX platform and with the result of the AllZeroHashCalculator.
dsfo is part of the DSFOK toolkit:
members.ozemail.com.au.../freeware/

and is, AFAIK, as "base Windows NT" as something can be; usage is hinted at (for a different scope) here:
reboot.pro/topic/5000-...a/?p=38197

jaclaz
_________________
- In theory there is no difference between theory and practice, but in practice there is. - 

jaclaz
Senior Member
 
 
  

Re: Hard Drive Issues -- a different approach

Post Posted: Sun Sep 08, 2013 1:26 am

I got caught out by a 4-year-old DELL PC. I put in a formatted 3TB drive. Windows saw it as 3TB, but the BIOS only saw it as approx. 700GB. Eventually it started 'self' data destruction.
_________________
Michael Cotgrove
www.cnwrecovery.com
cnwrecovery.blogspot.com/ 

mscotgrove
Senior Member
 
 