Verifying a hard drive is blank

26 Posts
11 Users
0 Reactions
8,658 Views
Passmark
(@passmark)
Reputable Member
Joined: 14 years ago
Posts: 376
 

If you want to verify a disk is all zeros, the best way is to read the entire disk and compare each byte read to zero. If you are already reading the disk in full, why bother with a checksum at all? It is just an unnecessary extra calculation and extra risk. There are a number of tools and scripts that read and verify data, including our own OSForensics tool.
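As a rough illustration of that read-and-compare approach (my own sketch, not OSForensics' actual implementation), a streaming all-zero check in Python might look like this:

```python
def is_all_zero(path, chunk_size=1 << 20):
    """Stream a device or image file and confirm every byte is 0x00."""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                return True  # reached end of data without finding a non-zero byte
            # bytes.count(0) runs at C speed; any non-zero byte fails the check
            if block.count(0) != len(block):
                return False
```

Pointed at a raw device (e.g. `is_all_zero("/dev/sdb")`, run with sufficient privileges) it reads the whole disk once, with no checksum involved.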

There are some examples on this page where different byte sequences can result in the same checksum value.
http://noahdavids.org/self_published/CRC_and_checksum.html
(how bad this is depends on the actual checksum algorithm used)

It is also related to the Birthday problem.
https://en.wikipedia.org/wiki/Birthday_problem
"…the expected number of N-bit hashes that can be generated before getting a collision is not 2^N, but rather only 2^(N/2)". Note: in this context a hash is the same as a checksum.

A typical checksum is 32 or 64 bits, meaning collisions are fairly frequent, especially at 32 bits. (Compare this with SHA-1, which is 160 bits long.)
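The birthday bound is easy to see empirically. This toy sketch (my own illustration: truncating SHA-256 to N bits to stand in for an N-bit checksum) counts how many distinct inputs it takes before two share a value:

```python
import hashlib

def trunc_hash(data, bits):
    """Hash `data` and keep only the low `bits` bits (a toy N-bit checksum)."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)

def inputs_until_collision(bits):
    """Feed distinct inputs until two of them share a truncated hash value."""
    seen = set()
    i = 0
    while True:
        v = trunc_hash(i.to_bytes(8, "big"), bits)
        if v in seen:
            return i + 1  # total number of inputs generated so far
        seen.add(v)
        i += 1
```

For `bits=16` a collision typically turns up after a few hundred inputs, i.e. on the order of 2^(16/2) = 256, nowhere near the 2^16 = 65,536 possible values.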

It also depends on what you are trying to prove. If you are just trying to detect a flipped bit due to a disk error, then CRC-64 should do fine. However, if you are trying to detect malicious data modification by a bad actor, CRC is pretty woeful: given a hard drive full of data, I could modify the last sector on the disk to produce whatever checksum I wanted (to make it look like a wiped disk).

Here is some code that modifies 4 bytes on the disk (or in a file) to produce a desired checksum:
https://stackoverflow.com/questions/9285898/reversing-crc32/13394385#13394385
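The linked answer appends 4 crafted bytes to hit any target CRC32. The same idea can be demonstrated at toy scale by brute force against a 16-bit CRC (an illustration of the principle only; the CRC-16/CCITT variant and the function names are my own choices, not taken from the linked code):

```python
def crc16(data, crc=0xFFFF):
    """CRC-16/CCITT-FALSE, computed bit by bit (toy stand-in for CRC32/64)."""
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def forge_suffix(data, target):
    """Brute-force a 2-byte suffix so that crc16(data + suffix) == target.

    Because the CRC register is 16 bits wide, a 16-bit suffix always
    suffices: the map from suffix to final CRC is a bijection.
    """
    base = crc16(data)  # CRC state after the real data
    for s in range(1 << 16):
        suffix = s.to_bytes(2, "big")
        if crc16(suffix, base) == target:  # continue CRC from that state
            return suffix
    return None
```

With a real 32-bit CRC the trick needs 4 appended bytes and a little GF(2) algebra instead of brute force, but the conclusion is the same: CRCs offer no resistance to deliberate manipulation.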


   
minime2k9
(@minime2k9)
Honorable Member
Joined: 14 years ago
Posts: 481
 

Although the OP is contacting me via PMs and email, I'll post an overview, as it may be useful to others struggling with the headache that is ISO 17025 (hereafter ISO).

The larger issue is why a hard disk needs to be wiped at all, or rather, why you need to verify that a hard disk is completely blank.

Some places do this before imaging to a drive; however, unless you are making a clone of a disk, imaging a disk to an image file on a disk that still holds deleted data will not affect the data in your image.
Under ISO, you verify that your imaging method works however you see fit. What we have seen from the assessors is that, where the instruction states that you image to a wiped disk, they want the wiping process verified as well. To many people this means verifying that every byte is 0x00. However, all you need to show is that the results are not affected.

The easiest way around this is to make the wiping process part of the overall verification: rather than validating "we image to a wiped disk", wipe the disk as part of the validation process.

If you take disks with known hashes as your test disks, image them to disks you previously wiped with whatever method you specify, and get the correct results from your tests, you have confidence in both processes without separate validation.
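That end-to-end check boils down to a hash comparison. A minimal sketch (my own illustration; the function names are mine, not from any ISO procedure):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream-hash a file or device so large images need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                return h.hexdigest()
            h.update(block)

def validate_image(image_path, known_hash):
    """End-to-end check: does the image of the test disk match its known hash?

    A match exercises both the wipe and the imaging step in one result.
    """
    return sha256_of(image_path) == known_hash
```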


   
(@Anonymous 6593)
Guest
Joined: 17 years ago
Posts: 1158
 

It's interesting to hear that the checksum will not always return 0. I'm interested to hear how, could you explain further?

No, I can't. Brainfart on my part – I probably had a 32-bit checksum stuck in my mind. Sorry for the confusion.

You still have to validate the software though, which could be a bit of a bother, as one of the subvalidations would be to verify that the sum is performed with 64 bits – or, expressed otherwise, with more than 32 bits. That takes some serious test data creation. For a stream-oriented algorithm, test data also needs to be streams.
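To illustrate what that test data creation involves, here is a sketch (my own construction, not a validated test set): a stream of 16,843,009 bytes of 0xFF plus one 0x01 byte sums to exactly 2^32, so a 32-bit byte-sum accumulator wraps to 0 on data that is plainly not blank, while a 64-bit accumulator does not:

```python
def byte_sum(stream_chunks, bits):
    """Sum every byte of a stream into a `bits`-wide accumulator (wraps)."""
    mask = (1 << bits) - 1
    total = 0
    for chunk in stream_chunks:
        total = (total + sum(chunk)) & mask
    return total

def wraparound_stream():
    """Yield 16,843,009 bytes of 0xFF followed by one 0x01 byte.

    16843009 = 0x01010101, and 0x01010101 * 0xFF = 0xFFFFFFFF,
    so the byte sum of the whole stream is exactly 2**32.
    """
    chunk = b"\xff" * 65536
    full, rest = divmod(16843009, 65536)
    for _ in range(full):
        yield chunk
    yield b"\xff" * rest + b"\x01"
```

A checksum tool that reports 0 on this stream is using (at most) a 32-bit accumulator; a correct 64-bit implementation reports 2^32.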

The approach suggested by Passmark strikes me as a better base, as you only need to validate that one single sector is checked for all-zero or checksummed or hashed or whatever method you choose. And possibly also verify that the first sector, middle sectors, and the last sector are all treated the same. Much simpler task.


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

Just in case, there is a "dedicated tool", here
http://www.edenprime.com/tools/epAllZeroHashCalculator.htm

The idea is that the disk, once wiped, is hashed with *whatever* tool is in use.

Then the AllZeroHash Calculator is used to compute the hash of a "theoretical" file/extent of the same length as the wiped device.

If the same device is re-used, the all zero hash needs to be calculated only once and can be physically written on the label of the disk.

Algorithms used are CRC32 (ITU-T V.42), MD5, SHA1 and SHA256.
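As a sketch of what such a calculator computes (my own illustration, not the tool's actual code), the hashes of a "theoretical" all-zero extent can be produced without ever writing the zeros to disk. Python's zlib.crc32 implements the same ITU-T V.42 polynomial:

```python
import hashlib
import zlib

def all_zero_digests(length, chunk_size=1 << 20):
    """Hash `length` zero bytes without materializing them on disk."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    crc = 0
    zeros = bytes(chunk_size)  # one reusable buffer of 0x00 bytes
    remaining = length
    while remaining:
        block = zeros if remaining >= chunk_size else bytes(remaining)
        for h in (md5, sha1, sha256):
            h.update(block)
        crc = zlib.crc32(block, crc)  # running CRC across chunks
        remaining -= len(block)
    return {"crc32": crc, "md5": md5.hexdigest(),
            "sha1": sha1.hexdigest(), "sha256": sha256.hexdigest()}
```

Compare the output against the hash your imaging tool reports for the wiped device of that length; a match means every byte was zero (to the strength of the chosen algorithm).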

jaclaz


   
UnallocatedClusters
(@unallocatedclusters)
Honorable Member
Joined: 13 years ago
Posts: 576
 

Two Comments

1) I believe the original idea behind drive wiping is to preemptively kill dead any arguments a criminal defense attorney or defendant may attempt to make such as "how do we know the contraband evidence was not already on the hard drive?"

By wiping a hard drive and having a log to prove the date the wipe occurred, one is then in a position to respond to the above spurious argument "the contraband evidence was not already on the hard drive because we wiped the hard drive and have a log to prove we wiped the hard drive".

2) I use OSForensics' drive wiping utility and like the fact that OSForensics stores the wipe log on the hard drive that OSForensics has just wiped.


   
minime2k9
(@minime2k9)
Honorable Member
Joined: 14 years ago
Posts: 481
 

Two Comments

1) I believe the original idea behind drive wiping is to preemptively kill dead any arguments a criminal defense attorney or defendant may attempt to make such as "how do we know the contraband evidence was not already on the hard drive?"

By wiping a hard drive and having a log to prove the date the wipe occurred, one is then in a position to respond to the above spurious argument "the contraband evidence was not already on the hard drive because we wiped the hard drive and have a log to prove we wiped the hard drive".

I despise this argument with a passion. Altering your procedure to placate someone who has no knowledge of the field is just plain wrong. I mean, we image to a server drive; we clearly don't wipe the server after we remove an image from it.
The answer to the question is "it can't be, because we image using a file format that contains only the data extracted from a hard drive and has verification checks to ensure the data has not changed". Anything else propagates the stupid myth that if you don't wipe the hard drive your evidence is contaminated.


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

I despise this argument with a passion.

And you are perfectly right; still, this is how it is going.

Previous discussion on the topic (inside an unrelated topic) starting here
https://www.forensicfocus.com/Forums/viewtopic/p=6559991/#6559991
where jhup summed it up nicely.

jaclaz


   
gungora
(@gungora)
Eminent Member
Joined: 8 years ago
Posts: 33
 

I despise this argument with a passion. Altering your procedure to placate someone who has no knowledge of the field is just plain wrong. I mean, we image to a server drive; we clearly don't wipe the server after we remove an image from it.

I am not a fan of that argument, either. Two other reasons why one might want to wipe & verify drives before use

1. In the process of wiping and verifying a drive, we are also exercising the drive. I have found quite a few problematic drives this way over the years, by comparing the time it takes to wipe a drive to that of others of the same specs, and by observing S.M.A.R.T. data after completion. I would much rather have the drive fail at this stage, than later during imaging, analysis, etc.

2. There is no guarantee that drives will come out of the factory without any data. If they do contain data, having that external data on the same medium as your forensic image is undesirable, to say the least.


   
minime2k9
(@minime2k9)
Honorable Member
Joined: 14 years ago
Posts: 481
 

I despise this argument with a passion.

And you are perfectly right; still, this is how it is going.

Previous discussion on the topic (inside an unrelated topic) starting here
https://www.forensicfocus.com/Forums/viewtopic/p=6559991/#6559991
where jhup summed it up nicely.

jaclaz

I remember that post and have some sympathy for the argument.
However, with the implementation of ISO 17025, as useless as it is, methods have to be validated.
This should, in theory, negate any argument about whether a disk was completely wiped or not, as the procedure as a whole has been validated.


   
(@randomaccess)
Reputable Member
Joined: 14 years ago
Posts: 385
 

I'm not 100% sure people are aware of what checksum64 is.

Yes, CRC and the like have a chance of collisions… we did some of this at university (a fun assignment: modifying destination IPs and then manipulating CRCs to match, but I digress).

My understanding of checksum64 is that it sums every byte; at the end the total should be 0, and if it isn't, your drive isn't wiped. The reason you use checksum64 is the size of today's drives.
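Assuming that description (a plain byte sum modulo 2^64; this is my reading, not an IACIS reference implementation), a minimal sketch would be:

```python
def checksum64(path, chunk_size=1 << 20):
    """Sum every byte on a device or image into a 64-bit accumulator.

    A fully wiped (all-0x00) drive sums to 0. Note the converse does not
    strictly hold: non-zero bytes could in principle sum to an exact
    multiple of 2**64, which is the collision concern raised earlier
    in this thread.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                return total & ((1 << 64) - 1)
            total += sum(block)
```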

In IACIS it's one of the things they teach with regard to verifying a drive has been wiped.


   